Industry trends

The new Google Lens filters are designed to make everyday tasks easier!

Photo: Unsplash

Google unveiled Google Lens, an artificial intelligence (AI) image recognition technology, in May 2017. Since then, Google Lens has become able to identify over 1 billion objects. Take a photo of a business card with Google Lens, and the phone number, address and email on it can be added directly to your contact list. Take a photo of a recipe, and the ingredients can be added to your shopping list automatically. Take a photo of an event date in a magazine, and it is instantly added to your calendar. Already attractive in its own right, Google Lens has recently been updated with new filters: Translate, Shopping and Dining. These new functions allow Google Lens to complete specific tasks independently within an augmented reality (AR) scene.
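Google has not published how Lens turns a business card photo into a contact, but the general shape of the task is clear: OCR the image, then pull structured fields out of the recognized text. The toy Python sketch below illustrates only that second, extraction step; the sample text and the regular expressions are illustrative assumptions, not Google's actual logic.

```python
# Illustrative sketch, NOT Google's implementation: once the text on a
# business card has been OCR'd, extracting the phone number and email
# is a structured-extraction step. A toy version using regex:
import re

ocr_text = """Jane Doe
Acme Corp
+1 415-555-0132
jane.doe@example.com"""  # hypothetical OCR output from a card photo

phone = re.search(r"\+?\d[\d\s().-]{7,}\d", ocr_text)
email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", ocr_text)

contact = {
    "phone": phone.group().strip() if phone else None,
    "email": email.group() if email else None,
}
print(contact)  # fields ready to hand off to a contacts app
```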

While Google Lens is certainly a smart app, it often needs other apps running alongside it to perform most of its functions. However, according to 9to5Google's teardown of Google Lens 9.61, the latest version, a new set of options named 'Filters' has been added to Google Lens: Translate, Shopping and Dining. With a new interface that looks similar to Huawei's HiVision, Google Lens now offers five shortcut buttons: 'Translate', 'Text', 'Auto', 'Shopping' and 'Dining'.

Photo: The five shortcut buttons: 'Translate', 'Text', 'Auto', 'Shopping' and 'Dining'.

The Translate filter can automatically detect a language and translate it, with no need to copy the text into Google Translate. The Dining filter combines an AR view similar to that of Google Maps AR with location information, using Google Lens's visual capabilities to help users find nearby restaurants and learn about their popular dishes. The Shopping filter presents a dedicated interface tailored to displaying product information. From what has been revealed so far, Google appears to want to offer distinct, in-depth functions rather than a one-size-fits-all mode for every need.

When these functions will be added to Google Lens is still uncertain. However, with this year's Google I/O scheduled for 7 May, there is a good chance they will be launched at the event. By adding these functions, Google likely wants its AI technology to identify specific kinds of things more accurately through dedicated recognition modes, making visual intelligence even more convenient.
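Google has not revealed how the Translate filter works internally, but a similar detect-and-translate pipeline can be approximated with Google's public Cloud Vision and Cloud Translation APIs: OCR the photo, then detect the source language and translate in a single call. The sketch below is purely conceptual; the file name "menu.jpg" and the English target language are assumptions, and this is not the code path Lens itself uses.

```python
# Conceptual sketch only: approximates the Translate filter's pipeline
# (OCR, then automatic language detection and translation) using Google's
# public Cloud APIs. Assumes google-cloud-vision and google-cloud-translate
# are installed and GOOGLE_APPLICATION_CREDENTIALS is set.
from google.cloud import vision
from google.cloud import translate_v2 as translate

vision_client = vision.ImageAnnotatorClient()
translate_client = translate.Client()

# Step 1: extract the text from the photo, as Lens does when you point
# the camera at a sign or a menu.
with open("menu.jpg", "rb") as f:  # hypothetical input image
    image = vision.Image(content=f.read())
ocr = vision_client.text_detection(image=image)
text = ocr.text_annotations[0].description if ocr.text_annotations else ""

# Step 2: detect the source language and translate in one call, with no
# copy-paste into a separate translation app.
result = translate_client.translate(text, target_language="en")
print(f"Detected language: {result['detectedSourceLanguage']}")
print(f"Translation: {result['translatedText']}")
```

The point of the sketch is the design choice the filter reflects: detection and translation are fused into one step behind the camera view, instead of being delegated to a second app.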
