Google is reportedly testing a new artificial intelligence (AI) feature for its Search platform, referred to as AI Mode. First mentioned in rumours dating back to December 2024, the feature aims to offer users a full-screen interface for complex and exploratory queries, with the AI responding in a conversational style and listing URLs for those who want to explore a topic further. This functionality is distinct from the AI Overviews that currently appear above search results on Google Search.
Google Search Could Get an AI Mode
A recent report from 9to5Google says the Mountain View-based tech giant is currently testing AI Mode internally, a practice known as “dogfooding.” The publication cites an internal email inviting Google employees to try out the feature.
The internal email describes AI Mode as a tool that enables “Search intelligently research[ing] for you,” organizing information into easily digestible summaries and linking to additional content available online. The report mentions that Google has provided example queries to illustrate the tool’s potential, such as, “How many boxes of spaghetti should I buy to feed 6 adults and 10 children, and have enough for seconds?”
The email also reveals that AI Mode is powered by a customised version of Gemini 2.0 equipped with advanced reasoning capabilities. A screenshot of the user interface was included in the email, though it is described as an early iteration that is not indicative of the final design. The feature is also expected to work on mobile devices.
AI Mode in Google Search
Photo Credit: 9to5Google
The screenshot indicates that AI Mode will be accessible alongside other search filters such as Images, Videos, and News. When users select it, a full-screen interface appears in which the Gemini-powered AI chatbot delivers conversational answers to their queries. The right side of the display lists the URLs from which the AI retrieved its information, allowing users to click through for more detail.
At the bottom of the interface is a text field where users can submit follow-up questions. On mobile, users will also be able to use the microphone for voice input. Thumbs-up and thumbs-down icons at the bottom let users rate the quality of the responses. Notably, Google has yet to formally announce this feature, and the timeline for its public rollout remains uncertain.