PANews reported on February 6 that, according to IT Home, Google has published a blog post announcing that all Gemini app users can now access the latest Gemini 2.0 Flash model, and that it has released the 2.0 Flash Thinking experimental reasoning model.

The 2.0 Flash model was first unveiled at the 2024 I/O conference and quickly became a popular choice among developers thanks to its low latency and high performance. The model is suited to large-scale, high-frequency tasks, supports a context window of up to 1 million tokens, and demonstrates strong multimodal reasoning capabilities. Gemini 2.0 Flash can interact with applications including YouTube, Google Search, and Google Maps, helping users discover and expand knowledge across a range of scenarios.
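As a rough illustration (not part of the original announcement), a developer might call the 2.0 Flash model through Google's generative AI Python SDK roughly as sketched below; the model identifier string, API key handling, and prompt are assumptions based on the publicly documented SDK rather than details from the report.

```python
# Hypothetical sketch: querying Gemini 2.0 Flash via the google-generativeai
# Python SDK. The model name string and API key handling are assumptions.
import os
import google.generativeai as genai

# Assumes an API key is available in the environment.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "gemini-2.0-flash" is the assumed identifier for the 2.0 Flash release.
model = genai.GenerativeModel("gemini-2.0-flash")

# A long document fits comfortably within the model's large context window.
response = model.generate_content("Summarize the key points of the following report: ...")
print(response.text)
```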

The Gemini 2.0 Flash Thinking model builds on the speed and performance of 2.0 Flash and is trained to break prompts down into a series of steps, strengthening its reasoning and producing better responses. The 2.0 Flash Thinking Experimental model shows its thinking process, letting users see why it responded in a particular way and what assumptions it made, and trace the model's reasoning logic. This transparency gives users a deeper understanding of the model's decision-making process.
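For comparison, a minimal sketch of requesting the Thinking experimental model with the same SDK is shown below; the "gemini-2.0-flash-thinking-exp" identifier is an assumption, and how the intermediate reasoning is surfaced may differ between the API and the Gemini app.

```python
# Hypothetical sketch: calling the 2.0 Flash Thinking experimental model. The
# model identifier is an assumption; in the Gemini app the step-by-step
# thinking is surfaced in the interface rather than through this response text.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

thinking_model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

# The model is trained to break the prompt into intermediate steps before answering.
response = thinking_model.generate_content(
    "A train leaves at 9:00 and travels 180 km at 60 km/h. When does it arrive?"
)
print(response.text)
```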