Google has unveiled multiple new features for its Gemini platform, including the introduction of the Gemini 2.5 models. The Mountain View-based company's updates don't stop there, however. At a recent TED Talk, Google previewed its upcoming artificial intelligence (AI) Glasses, showcasing their potential functionality. The company also indicated that further Gemini features may launch soon, focused on enhancing Gemini Live, its two-way, real-time voice conversation feature.
Google AI Glasses, New Gemini Features Teased
During the TED Talk presentation, Shahram Izadi, Vice President and General Manager of Android XR at Google, demonstrated the AI Glasses, an innovative addition to the company's product lineup. The new wearable may be inspired by Google Glass, the company's 2013 prototype that never became a mainstream product, but with the integration of Gemini's advanced capabilities, it aims to offer far greater functionality.
Google first signalled its plans for extended reality (XR) glasses in December 2024, when it revealed details about Android XR. The company noted, “Created in collaboration with Samsung, Android XR combines years of investment in AI, AR, and VR to bring helpful experiences to headsets and glasses.”
In the live demonstration, Izadi showcased glasses that resemble standard prescription eyewear but come with built-in camera sensors and speakers, along with a display that lets Gemini interact directly with the wearer. The demo illustrated the AI chatbot's ability to perceive the user's surroundings and respond in real time. For example, it could observe a crowd and instantly generate a haiku inspired by the facial expressions of those present.
Moreover, Izadi highlighted a memory feature in the AI Glasses. The capability, first introduced last year under Project Astra, allows Gemini to retain information about objects and visual details even after they leave the user's view. According to Google, this memory can last for up to 10 minutes.
In a separate interview with CBS's 60 Minutes, Demis Hassabis, CEO of Google DeepMind, suggested that this memory feature could soon come to Gemini Live. While Gemini Live with Video currently lets the AI view video feeds from the user's device, it cannot yet remember specific details. The AI Glasses are also expected to offer functionality beyond answering questions, such as facilitating online purchases.
Hassabis also mentioned that Gemini Live could deliver a personalized greeting when the feature is activated.