Google is set to reveal new features and enhancements for its Gemini AI at the upcoming Google I/O 2025 developer conference. The announcement came through the company’s April newsletter, which was sent to Gemini Advanced subscribers. The Mountain View-based tech giant teased a “more personalized assistant” and new productivity tools aimed at improving the user experience. Subscribers have recently gained access to the Veo 2 model and new features within Gemini Live.
Exciting Developments for Gemini Expected at Google I/O 2025
According to a report from 9to5Google, the tech giant has told Gemini Advanced subscribers that they can anticipate new functionality for the AI chatbot, to be showcased during its annual developer conference.
In the newsletter, the company stated, “We’ll announce a wave of exciting updates that will allow you to experience a more personalised assistant, unlock enhanced productivity, and open up new possibilities for interacting with and leveraging Gemini.”
Although specific details were not disclosed, the promise of a “more personalized assistant” suggests that improvements are on the way for the Gemini assistant on Android devices. The AI-driven voice assistant will likely gain the ability to connect with additional applications and perform a broader range of tasks directly on devices.
On the productivity front, the new features may involve deeper integration of the chatbot with Google’s applications, as well as new functions within existing offerings. For instance, Google could bring a Canvas-like collaboration feature to Google Docs or let users generate visual representations of data in Google Sheets beyond traditional graphs and charts. Given the recent advancements in Imagen 3, the company appears well-equipped to deliver these capabilities.
The most intriguing aspect of the announcement centers on new ways of interacting with Gemini. Users can currently engage with the AI chatbot through text, images, videos, and voice, but its autonomous capabilities remain confined to the Deep Research tool. As Google I/O approaches, there is speculation that DeepMind’s Project Mariner could be introduced.
Project Mariner is a Gemini-powered AI agent that can carry out multiple tasks through the user’s browser. The prototype, currently in alpha testing, is natively multimodal and can understand text, code, images, forms, and other web elements.