On Wednesday, Google unveiled a variety of new models under its Gemini 2.0 artificial intelligence (AI) initiative. The technology firm, headquartered in Mountain View, is extending the reach of its Gemini 2.0 Flash Thinking Experimental AI model to mobile applications and its web client. Additionally, it is introducing an agentic variant of the AI model, designed to interact with specific applications. An experimental version of Gemini 2.0 Pro is being made available for paid subscribers, while a lite edition of the 2.0 Flash model, dubbed Flash-Lite, is set for public preview.
Google Launches Multiple Models in Gemini 2.0 Series
In a blog post, Google outlined the various models that will be available. Some will be accessible to free users of Gemini, while others will require a paid subscription or be reserved solely for developers.
The Gemini 2.0 Flash Thinking model stands out due to its emphasis on reasoning. The model, comparable to DeepSeek-R1 and OpenAI's o1, was initially introduced in December 2024 but was previously accessible only through Google's AI Studio.
The company is now making this model available to all users of the Gemini app and website, where it appears in the model selector at the top of the interface. Staff members at Gadgets 360 have not yet been able to test the model, but it is expected to roll out globally in the coming days. It remains unclear whether free users will face any rate limits when using the Thinking model.
Alongside this, Google is also launching an agentic version of the 2.0 Flash Thinking model. This iteration is capable of interacting with applications such as YouTube, Google Search, and Google Maps. Users may be able to ask Gemini to perform specific tasks within these platforms, though the full extent of its capabilities is still unknown.
For users subscribed to Gemini Advanced, Google is introducing an experimental Gemini 2.0 Pro, touted as the high-performance model of the series. This advanced model excels at solving complex problems and at tasks involving coding and mathematics. It boasts a context window of two million tokens and, via its application programming interface (API), can call tools including Google Search and code execution. The model will also be available through Google AI Studio and Vertex AI.
Developers will gain access to the Gemini 2.0 Flash-Lite model, which Google claims performs better than its 1.5 Flash predecessor while maintaining its speed and cost efficiency. This version offers a context window of one million tokens and supports multimodal input. Furthermore, Google is also rolling out the 2.0 Flash model to developers through the Gemini API. Currently, it handles text-based tasks, with plans to incorporate image generation and text-to-speech capabilities in future updates.
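For developers curious what calling these models through the Gemini API might involve, below is a minimal sketch of how a text-only request body for the API's generateContent-style REST endpoint could be assembled. The endpoint path and the "gemini-2.0-flash" model identifier are assumptions based on Google's published API conventions, not details confirmed in this article; an actual request would also require a valid API key.

```python
import json

# Hypothetical endpoint path; the exact version prefix and model name
# are assumptions and should be checked against Google's API docs.
API_ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-2.0-flash:generateContent"
)


def build_request_body(prompt: str) -> dict:
    """Assemble a minimal text-only request payload.

    The Gemini API expects a list of 'contents', each holding
    'parts' that carry the actual text of the prompt.
    """
    return {"contents": [{"parts": [{"text": prompt}]}]}


body = build_request_body("Summarize the Gemini 2.0 model lineup.")
print(json.dumps(body, indent=2))
```

In practice, a developer would POST this JSON body to the endpoint with their API key, or use Google's official client SDKs, which wrap this payload construction behind a simpler interface.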
Both the 2.0 Flash-Lite and the 2.0 Flash models will also be featured in Google AI Studio and Vertex AI.