At its I/O developer conference, Google announced two new ways to use its AI-powered "Live" mode, which lets users search and ask questions about whatever is in their camera's view. The capability is coming to Google Search as part of the expanded AI Mode, and it will also arrive in the Gemini app on iOS, roughly a month after the feature became available on Android.
The camera-sharing capability first appeared at last year's Google I/O as part of the still-in-development Project Astra, before officially launching in Gemini Live on Android. It lets the AI chatbot interpret the camera feed so users can hold real-time conversations about their surroundings, such as asking for recipe ideas based on the ingredients visible in their kitchen.
Now, Google plans to bring the feature directly into Search's new AI Mode, alongside Google Lens. By tapping the "Live" icon, users can share their camera feed with Search and ask questions about what they're seeing.
The Live feature in Search is set to launch "later this summer," initially in beta for Labs testers. It's one of several additions coming to AI Mode, alongside a research-focused Deep Search option, an AI agent that can complete tasks on the web, and new shopping features.
The Gemini app on iOS is getting the same functionality, including the option to talk with Gemini about what's shown on the screen in addition to the camera feed. Camera and screen sharing launched in Gemini Live on the Pixel 9 and Galaxy S25 last month, and it expanded to all Android devices shortly after. Google initially planned to restrict the feature to its paid Gemini Advanced subscription, but it will be free on iOS, just as it is on Android.