On Tuesday at the I/O 2025 event, Google unveiled a series of new features and updates focused on artificial intelligence (AI). The company outlined its overarching vision for AI development and the evolution of its existing product lineup. Demis Hassabis, Co-Founder and CEO of Google DeepMind, presented advancements in Project Astra, Project Mariner, and the Gemini Robotics initiative, alongside the company's ambitious goal of creating a universal AI assistant.
Project Astra and Project Mariner Introduced to Users
In a recent blog post, Hassabis elaborated on Google’s vision for a universal AI assistant, defined as “a more general and more useful kind of AI.” This assistant aims to grasp user context, proactively plan tasks, and perform actions across various devices. Significant strides have been made through new enhancements in both Project Astra and Project Mariner.
Project Astra focuses on enhancing real-time functionalities of the Gemini models. The initial set of features has been introduced through Gemini Live, which now has the ability to access a device’s camera and interpret on-screen content in real-time. The project has also upgraded voice output, introducing a more natural-sounding voice enabled by native audio generation, alongside enhanced memory and computer control capabilities.
During the I/O 2025 keynote, the improved Gemini Live showcased its ability to engage with users expressively, maintain conversation continuity despite interruptions, and multitask effectively in the background. Leveraging computer control, Gemini Live was shown to make business calls, scroll through documents, and perform web searches for information.
These cutting-edge features are currently undergoing testing and will ultimately be integrated into Gemini Live, AI Mode in Search, and the Live API. They are also expected to be implemented in new applications, including smart glasses.
Project Mariner, which was initiated in December 2024, focuses on developing autonomous capabilities within Gemini. Google has been experimenting with various prototypes for human-agent systems, including a browser-oriented AI agent capable of making restaurant reservations and scheduling appointments.
According to Google, Project Mariner now includes a system of agents that can complete up to 10 tasks simultaneously, assisting with activities such as online purchasing and research. These updated features are currently rolling out to Google AI Ultra subscribers in the United States.
Moreover, developers using the Gemini API will gain access to the new computer-use capabilities. DeepMind has indicated plans to bring these functionalities to additional products later this year.
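Google has not published implementation details for Mariner's multi-agent system, but the general pattern of running up to 10 agent tasks at once can be sketched with standard Python concurrency. The sketch below is purely illustrative: `run_agent` is a hypothetical placeholder for whatever call an agent would actually make, and the cap of 10 simply mirrors the figure Google cited.

```python
import asyncio

MAX_CONCURRENT_TASKS = 10  # mirrors the 10-task limit Google described

async def run_agent(task: str) -> str:
    # Hypothetical stand-in for a real agent action
    # (e.g., a browsing step or an API request).
    await asyncio.sleep(0.01)
    return f"done: {task}"

async def dispatch(tasks: list[str]) -> list[str]:
    # A semaphore bounds how many agent tasks run at once.
    sem = asyncio.Semaphore(MAX_CONCURRENT_TASKS)

    async def bounded(task: str) -> str:
        async with sem:
            return await run_agent(task)

    # gather preserves input order in its results.
    return await asyncio.gather(*(bounded(t) for t in tasks))

results = asyncio.run(dispatch([f"task-{i}" for i in range(12)]))
print(results[0])
```

With 12 tasks submitted, the semaphore ensures no more than 10 execute concurrently; the remaining two start as earlier ones finish.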
Gemini Robotics and Advanced World Models
During the keynote, Google also discussed the development of world models: foundation AI models with a deep understanding of real-world physics and spatial intelligence. These qualities make them well suited to training robots in simulated environments.
Google is applying Gemini 2.0 models within Gemini Robotics, a platform dedicated to training and developing both humanoid and non-humanoid robots. The platform is currently being tested with a select group of trusted testers.