Google Gemini 2.0 Flash Thinking AI Model With Advanced Reasoning Capabilities Launched

The model features increased inference-time computation to enable advanced reasoning capabilities.


Photo Credit: Google

Google claims the AI model can solve complex reasoning and mathematics questions at high speed

Highlights
  • Gemini 2.0 Flash Thinking is an experimental AI model
  • It is available via Google AI Studio and Gemini API
  • Recently, OpenAI released the full version of its reasoning-focused o1 series

Google released a new artificial intelligence (AI) model in the Gemini 2.0 family on Thursday that is focused on advanced reasoning. Dubbed Gemini 2.0 Flash Thinking, the new large language model (LLM) increases inference-time computation, allowing the model to spend more time on a problem. The Mountain View-based tech giant claims that it can solve complex reasoning, mathematics, and coding tasks. Additionally, the LLM is said to perform these tasks at a higher speed, despite the increased processing time.

Google Releases New Reasoning-Focused AI Model

In a post on X (formerly known as Twitter), Jeff Dean, the Chief Scientist at Google DeepMind, introduced the Gemini 2.0 Flash Thinking AI model and highlighted that the LLM is “trained to use thoughts to strengthen its reasoning.” It is currently available in Google AI Studio, and developers can access it via the Gemini API.
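
For developers who want to try it programmatically, a minimal call through the Gemini API could look like the sketch below. This is an illustrative example only: it assumes the google-generativeai Python SDK and an experimental model identifier such as "gemini-2.0-flash-thinking-exp", which may differ from the exact name listed in Google AI Studio.

import google.generativeai as genai

# Configure the client with an API key generated in Google AI Studio
genai.configure(api_key="YOUR_API_KEY")

# Assumed experimental model ID; check AI Studio for the exact identifier
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

# Ask a multi-step reasoning question of the kind the model is aimed at
response = model.generate_content(
    "A train leaves a station at 3 pm travelling at 60 km/h. A second train "
    "leaves the same station at 4 pm at 90 km/h on a parallel track. "
    "At what time does the second train catch up with the first?"
)
print(response.text)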

Gemini 2.0 Flash Thinking AI model

Gadgets 360 staff members were able to test the AI model and found that the reasoning-focused Gemini model solves, with ease, complex questions that are too difficult for the Gemini 1.5 Flash model. In our testing, the typical processing time was between three and seven seconds, a significant improvement over OpenAI's o1 series, which can take upwards of 10 seconds to process a query.

Gemini 2.0 Flash Thinking also shows its thought process, letting users check how the AI model reached a result and the steps it took to get there. We found that the LLM was able to find the right solution eight out of 10 times. Since it is an experimental model, some mistakes are expected.

While Google did not reveal details about the AI model's architecture, it highlighted its limitations in a developer-focused blog post. Currently, Gemini 2.0 Flash Thinking has an input limit of 32,000 tokens and accepts only text and images as inputs. Output is limited to text and capped at 8,000 tokens. Further, the API does not come with built-in tool usage such as Search or code execution.
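
To illustrate how these limits might shape an API call, here is a hedged sketch that sends a text prompt alongside an image and caps the output at the documented 8,000 tokens. The model identifier and the image file name are assumptions for illustration; the numbers mirror the limits described in the blog post.

import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    model_name="gemini-2.0-flash-thinking-exp",      # assumed experimental model ID
    generation_config={"max_output_tokens": 8000},   # output is text only, capped at 8,000 tokens
)

# Inputs can mix text and images, but the combined prompt must fit within
# the 32,000-token input limit; built-in tools such as Search or code
# execution are not available for this model.
diagram = Image.open("geometry_problem.png")         # hypothetical local image file
response = model.generate_content(["Solve the problem shown in this figure.", diagram])
print(response.text)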



Akash Dutta
Akash Dutta is a Senior Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food.
