Google is constantly striving to improve its search capabilities, and its latest updates continue to push the boundaries of what’s possible. In addition to a new gesture-powered search feature for Android devices, Google has now introduced an AI-powered addition to its visual search capabilities in Google Lens.
The new feature utilizes generative AI to provide users with answers to their visual queries. Starting today, users can simply point their camera at an object or upload a photo or screenshot to Lens, and then ask a question about what they’re seeing. The AI will use its vast knowledge to provide accurate and insightful results.
This feature is an enhancement of the existing multisearch capability in Lens, which allows users to search using both text and images simultaneously. While previous visual searches would simply lead users to other visual matches, the new AI-powered results offer deeper insights.
Google offers an example of where this feature could come in handy: learning more about a plant. A user can take a photo of the plant and ask, "When do I water this?" The AI will not only identify the plant but also provide information on how often it should be watered – for example, "every two weeks." This data is pulled from various sources on the web, including websites, product sites, and videos.
The AI-powered overviews for multisearch on Lens are now available to all users in the U.S. and are offered in English. Unlike some of Google’s other AI experiments which are only available through Google Labs, this feature can be accessed through the Lens camera icon in the Google search app for iOS or Android, or in the search box on an Android phone.
This new feature works alongside another recent update – a search gesture called Circle to Search. Users can simply circle, scribble on, or otherwise mark an item with this gesture and ask a question about it, launching a generative AI query. This is part of Google's effort to maintain the relevance of its search engine in the age of AI. While much of the current web is filled with SEO-optimized content, Circle to Search and the new AI-powered capabilities in Lens aim to improve search results by tapping into a vast web of knowledge, including many pages in Google's index, while presenting the results in a different format.
However, it's worth noting that relying on AI may not always produce accurate or relevant answers. The AI can only work with the information it has access to, and if the source material is lacking or incorrect, the answers may be inaccurate as well. To address this, Google's genAI products – such as the experimental Search Generative Experience (SGE) – cite their sources, giving users the opportunity to fact-check the results.
While SGE remains in Google Labs for now, the company stated that it will gradually roll out generative AI advancements more widely, as seen with the launch of the new AI overviews for multisearch in Lens. The gesture-based Circle to Search feature will be available starting January 31st.
Ultimately, these updates demonstrate Google’s ongoing dedication to improving its search engine with the help of AI technology. As the internet continues to evolve, it’s important for search engines to adapt and provide relevant, accurate information to users. With the launch of new features like Circle to Search and AI-powered overviews in Lens, Google is taking a step in the right direction towards achieving this goal.