Google is enhancing its search engine with advanced artificial intelligence capabilities, allowing users to ask questions about images using their voice. This update comes despite previous challenges with misleading information generated by AI. The changes were announced recently, marking a significant step in Google's ongoing effort to reinvent its search functionality.
This latest update builds on an AI-driven transformation that Google began in May. At that time, the company introduced "AI Overviews," which provided summarized answers to user queries at the top of the search results page. However, these summaries sparked concerns among content creators and publishers about a potential decline in traffic to their websites. Many feared that fewer users would click on links to their sites, which could negatively impact ad revenue—an essential source of income for digital news organizations.
In response to these concerns, Google has decided to integrate more links to external websites within its AI Overviews. According to recent studies, these AI-generated summaries have already affected visits to well-known publishers like The New York Times and tech review sites such as TomsGuide.com. Interestingly, the same analysis indicated that the citations within these overviews have been driving increased traffic to specialized sites like Bloomberg and the National Institutes of Health, suggesting a shift in user behavior.
By enhancing its search engine with more AI technology, Google reinforces its position in the industry, especially as the tech landscape undergoes significant changes. This evolution in Google’s search capabilities builds upon its seven-year-old Lens feature, which allows users to ask questions about objects in images. Currently, Lens processes over 20 billion queries monthly, with a significant portion coming from users aged 18 to 24. This younger demographic is crucial for Google, especially as it faces competition from emerging AI alternatives like ChatGPT and Perplexity, which position themselves as direct answer engines.
The updated Lens feature will enable users to ask questions in a conversational style while pointing their camera at something. For instance, users can capture video of moving objects, such as fish in an aquarium, and pose questions about them; the answers will appear as an AI Overview. Rajan Patel, Google's vice president of search engineering and a co-founder of the Lens feature, emphasized that the aim is to make search easier and more intuitive, allowing users to search from anywhere.
Despite the potential benefits of AI in enhancing the search experience, there are inherent risks. AI systems have sometimes produced inaccurate information, which could undermine the credibility of Google's search engine. Past incidents included odd suggestions, such as putting glue on pizza or eating rocks; Google attributed those errors to misinformation and to users deliberately trying to mislead the AI.
Confident in its improvements, Google plans to rely on AI to determine which information appears on the results page. Starting with recipe and meal idea queries on mobile devices, AI will organize the results into clusters, featuring photos, videos, and articles on the subject. This method aims to enhance the user experience by making it easier to find relevant information quickly.