Google has unveiled an array of new features for its Gemini platform, including the launch of the Gemini 2.5 models, and the Mountain View-based company has more innovations on the horizon. During a live demonstration at a recent TED Talk, Google offered a glimpse of its upcoming artificial intelligence (AI) Glasses along with several of their capabilities. The company also teased additional Gemini enhancements expected to arrive shortly, focused on improving the functionality and overall experience of Gemini Live, its two-way, real-time voice interaction feature.
New AI Glasses and Gemini Features Introduced
In the TED Talk presentation, Shahram Izadi, Google’s Vice President and General Manager of Android XR, gave a live demonstration of the AI Glasses, a forthcoming product that pays homage to Google Glass, the company’s 2013 prototype that never found mainstream success. The tech firm is incorporating Gemini’s advanced features to enhance the device’s functionality.
Google first provided insight into its extended reality (XR) glasses back in December 2024 when it unveiled Android XR. The company described the platform as a collaboration with Samsung, combining extensive investments in AI, augmented reality (AR), and virtual reality (VR) to deliver enhanced experiences on headsets and glasses.
During the recent demonstration, Izadi showed glasses that resemble traditional prescription eyewear but are fitted with camera sensors and speakers. The glasses also include a small display through which Gemini operates and interacts with the user. In the demonstration, Google showed that the AI chatbot can observe the user’s field of view and respond to inquiries in real time. For example, Gemini was shown analyzing a crowd and instantly composing a haiku based on the expressions of the individuals present.
The demonstration also highlighted the AI Glasses’ memory feature, first unveiled as part of Project Astra last year. Gemini can recall objects and visual details even after they are no longer visible to the user or the camera, with this memory extending up to 10 minutes.
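Google has not detailed how this rolling memory works under the hood, but conceptually it behaves like a sliding-window store that retains observations for a fixed period and forgets anything older. The sketch below is a minimal, hypothetical illustration of that idea in Python; all names (RollingVisualMemory, Observation, remember, recall) are assumptions made for illustration and do not reflect Google’s actual implementation.

```python
from collections import deque
from dataclasses import dataclass
import time


@dataclass
class Observation:
    timestamp: float  # when the object was last seen (epoch seconds)
    label: str        # e.g. "hotel key card"
    detail: str       # extra visual context, e.g. "on the desk by the lamp"


class RollingVisualMemory:
    """Hypothetical sliding-window store: retains observations for a fixed period."""

    def __init__(self, retention_seconds: float = 600.0):  # 10 minutes, per the demo
        self.retention = retention_seconds
        self._buffer: deque[Observation] = deque()

    def remember(self, label: str, detail: str = "") -> None:
        """Record something the camera just saw."""
        self._buffer.append(Observation(time.time(), label, detail))
        self._evict_expired()

    def recall(self, query: str) -> Observation | None:
        """Return the most recent still-remembered observation matching the query."""
        self._evict_expired()
        for obs in reversed(self._buffer):
            if query.lower() in obs.label.lower():
                return obs
        return None  # fell outside the retention window, or was never seen

    def _evict_expired(self) -> None:
        """Drop anything older than the retention window."""
        cutoff = time.time() - self.retention
        while self._buffer and self._buffer[0].timestamp < cutoff:
            self._buffer.popleft()
```

In use, such an assistant might call memory.remember("hotel key card", "on the desk by the lamp") as objects pass through view, so that a later question like “Where did I leave my key card?” maps to memory.recall("key card"). The notable design point implied by the demo is eviction by age rather than by capacity: anything older than the window simply falls out of memory.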
Additionally, in an interview on CBS’ 60 Minutes, Google DeepMind CEO Demis Hassabis alluded to the possibility of extending the memory function to Gemini Live in the near future. Currently, while Gemini Live with Video can access a video feed from the user’s device, it does not possess memory capabilities. Moreover, the upcoming AI Glasses are expected to offer functionality beyond answering questions, including tasks such as facilitating online purchases.
Hassabis also mentioned that Gemini Live could greet users when they activate the feature, showcasing its potential for personalized interaction.