Google is set to introduce a range of new capabilities to Gemini Live, its real-time AI assistant. Starting next week, Gemini Live will be equipped to highlight specific items directly on your screen while utilizing your device’s camera, enhancing its ability to assist users in locating objects.
For instance, if you are searching for a suitable tool among a collection, simply aim your smartphone’s camera at the tools, and Gemini Live will highlight the correct one on your display. This functionality will debut with the new Pixel 10 devices launching on August 28; visual guidance will reach other Android devices at the same time, with an iOS rollout to follow in the weeks ahead.
In addition, Google will implement new integrations that enhance Gemini Live’s ability to engage with various applications, such as Messages, Phone, and Clock. For instance, if a user is discussing directions with Gemini Live and notices they are running behind schedule, they can prompt the assistant to send a message. Users might say, “This route looks good. Now, send a message to Alex that I’m running about 10 minutes late,” and Gemini could then prepare a text for them.
Google is also rolling out an improved audio model for Gemini Live. The update is expected to make the assistant's speech noticeably more natural by better reproducing aspects of human delivery such as intonation, rhythm, and pitch. In upcoming versions, Gemini will adapt its tone to the subject matter, for instance adopting a soothing tone when discussing sensitive topics.
Users will also have the option to adjust the speed of Gemini’s speech, mirroring a feature recently introduced in ChatGPT’s voice mode. Furthermore, if prompted to narrate a story from a unique perspective, the chatbot may even adopt an accent, providing a more engaging experience.