
Google Unveils Gemini 2.0: The Future of AI Reasoning


On Thursday, Google unveiled a new artificial intelligence (AI) model in its Gemini 2.0 lineup, designed specifically for enhanced reasoning. Named Gemini 2.0 Flash Thinking, the large language model (LLM) extends inference time to allow for more thorough problem-solving on complex reasoning, mathematics, and coding tasks. The Mountain View-based tech giant asserts that the model still operates at a significantly faster rate than rival reasoning models, despite the increased processing duration.

Google Unveils New AI Model Emphasizing Reasoning

Jeff Dean, Chief Scientist at Google DeepMind, introduced the Gemini 2.0 Flash Thinking model in a post on X (formerly Twitter). He emphasized that the LLM is “trained to use thoughts to strengthen its reasoning.” The model is now accessible through Google AI Studio, and developers can utilize it via the Gemini API.

Gemini 2.0 Flash Thinking AI model

During testing conducted by Gadgets 360 staff members, the reasoning-oriented model was able to handle complex questions that trip up its predecessor, the Gemini 1.5 Flash model. Typical processing times recorded in these tests ranged from three to seven seconds, a notable improvement over OpenAI's o1 series of models, which can take more than 10 seconds to process a query.

The Gemini 2.0 Flash Thinking model also offers transparency into its thought process, letting users examine the steps it took to reach a solution. In tests, the LLM arrived at the correct answer in roughly eight out of ten instances; some errors are to be expected, as it remains an experimental model.

While Google did not disclose specifics of the model's architecture, it outlined the model's limitations in a developer-focused blog post. At present, Gemini 2.0 Flash Thinking has an input limit of 32,000 tokens and accepts text and images as inputs. It supports only text outputs, capped at 8,000 tokens, and the API does not include built-in tools for functions such as search or code execution.
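For developers, a call to the model through the Gemini API might look roughly like the Python sketch below. It uses Google's google-generativeai SDK; the model identifier "gemini-2.0-flash-thinking-exp" and the output cap shown are assumptions drawn from the limits described above, not confirmed configuration details from Google.

# Minimal sketch of querying the reasoning model via the Gemini API.
# The model ID "gemini-2.0-flash-thinking-exp" is an assumption; the output
# cap mirrors the 8,000-token text-output limit described above.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # API key obtained from Google AI Studio

model = genai.GenerativeModel(
    model_name="gemini-2.0-flash-thinking-exp",      # assumed identifier
    generation_config={"max_output_tokens": 8000},   # text-only output cap
)

# Inputs can be text or images (up to 32,000 tokens); no built-in tools such
# as search or code execution are available through this API.
response = model.generate_content(
    "A train leaves at 9:00 travelling 60 km/h; a second leaves at 9:30 at "
    "90 km/h. At what time does the second train catch up?"
)
print(response.text)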
