Google is reportedly enhancing its Gemini Live feature by enabling it to connect with various applications. A recent report indicates that the tech giant is exploring a way for Gemini Live to execute specific app-related tasks without requiring user input. Gemini Live is designed for two-way, real-time voice interactions, letting users ask the AI chatbot questions and receive spoken responses that resemble human speech. Notably, Google recently introduced a feature that lets Gemini Live use the device’s camera to answer questions about the user’s surroundings.
Gemini Live to Reportedly Connect With Apps Soon
An article from Android Authority reports that Google might be developing new functionality that would let Gemini Live connect with various applications. The information surfaced during an Android application package (APK) teardown conducted by the publication, which identified related code strings in the beta version of the Google app for Android (version 16.17.38.sa.arm64).
One particularly intriguing code string reportedly reads “Extensions_on_Live_Phase_One.” Although Google has shifted from “extensions” to “apps” in the Gemini app’s branding, the older term may still be used internally or simply reflect legacy code. The word “Live” suggests the feature is intended for Gemini Live, as the company promotes similar functionality under that branding.
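The article does not describe how the teardown was carried out; APK teardowns typically rely on decompilation tools such as apktool. Purely as an illustrative sketch of the general idea, and not Android Authority’s actual workflow, the following Python snippet scans an APK’s compiled resources and bytecode for printable string runs that contain a given keyword. The file name, keyword, and function names below are hypothetical, and strings stored as UTF-16 inside resources.arsc would need a proper resource decoder to surface reliably.

    import re
    import sys
    import zipfile

    # Files inside an APK that usually hold string literals:
    # compiled resources (resources.arsc) and Dalvik bytecode (classes*.dex).
    CANDIDATE_FILES = re.compile(r"(resources\.arsc|classes\d*\.dex)$")

    # Runs of printable ASCII at least eight characters long; flag-style
    # identifiers such as "Extensions_on_Live_Phase_One" fit this pattern.
    ASCII_RUN = re.compile(rb"[ -~]{8,}")

    def scan_apk(apk_path, needle):
        """Return ASCII string runs inside the APK that contain the needle."""
        hits = []
        with zipfile.ZipFile(apk_path) as apk:
            for name in apk.namelist():
                if not CANDIDATE_FILES.search(name):
                    continue
                data = apk.read(name)
                for match in ASCII_RUN.finditer(data):
                    if needle in match.group():
                        hits.append(f"{name}: {match.group().decode('ascii')}")
        return hits

    if __name__ == "__main__":
        # Example (hypothetical file name):
        #   python scan_apk.py google_app_beta.apk Extensions_on_Live
        path, keyword = sys.argv[1], sys.argv[2]
        for hit in scan_apk(path, keyword.encode("ascii")):
            print(hit)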
The mention of “Phase One” could indicate that Google plans a gradual rollout of the app integration. The company took a similar approach when it added support for several first-party and third-party apps to the Gemini AI assistant over a period of months. However, this is the extent of the information currently available.
No specific details have emerged about how Gemini Live will interface with apps, whether it will work only with Google’s in-house applications or with third-party services as well, or what types of tasks it may handle. If the speculation proves accurate, the feature would align with a message Google recently sent to Gemini Advanced users.
In that message, the company hinted that it would reveal new AI features at Google I/O 2025, scheduled for May 20-21, which would “open up new possibilities for interacting with and leveraging Gemini.”