Google Unveils Gemini 3, Its Latest AI Model

Key Points
- Gemini 3 launches in two variants: Pro for everyday use and Deep Think for enhanced reasoning.
- The Gemini app receives a major redesign, adding a My Stuff folder and real‑time generative layouts.
- An AI agent can execute multi‑step tasks across Google services, initially for AI Ultra members.
- Search now employs Gemini 3 to produce visual, interactive answers instead of plain text.
- Multimodal upgrades include video analysis and a million‑token context window for long sessions.
- Gemini 3 aims to improve reasoning capabilities across Google’s consumer and developer platforms.

Google has introduced Gemini 3, a new AI model that powers the Gemini app, Search and developer tools. The rollout includes two variants—Gemini 3 Pro for everyday use and Gemini 3 Deep Think for enhanced reasoning. The Gemini app receives a major redesign with a My Stuff folder and an AI agent that can execute multi‑step tasks across Google services. Search now leverages Gemini 3 to generate visual, interactive answers. The model also adds multimodal strengths such as video analysis and a million‑token context window, positioning it as a significant step forward in Google’s AI ecosystem.
New Model Variants
Google’s Gemini 3 arrives in two variants. Gemini 3 Pro, the standard, full‑featured version, is immediately available in the Gemini app, Search and developer offerings. Gemini 3 Deep Think adds an enhanced reasoning mode and is currently being tested with Google AI Ultra subscribers. Both variants share the same core architecture but offer different levels of capability.
Gemini App Overhaul
The Gemini app undergoes one of its largest updates to date. A new navigation system and a My Stuff folder organize every piece of AI‑generated content in one place. The interface now builds generative layouts in real time, presenting answers as magazine‑style itineraries, visual diagrams, tables or custom‑coded simulations instead of plain text. An AI agent can act on a user’s behalf, carrying out dozens of steps across connected Google apps; it is initially available to AI Ultra members and will expand to more users over time.
Search Integration
Gemini 3 reshapes Google Search by handling complex queries with a generative UI. When a user asks a tough question, the model creates a visual layout that may include diagrams, interactive calculators or even models the user can manipulate directly. This approach replaces the traditional list of links with small app‑like experiences, while still providing source links for further exploration. Behind the scenes, the system routes the hardest queries to Gemini 3, improving the relevance and depth of answers.
Advanced Multimodal Abilities
Gemini 3 expands multimodal capabilities. It can analyze video to understand movement, timing and other details, enabling tasks like sports game analysis and training plan suggestions. The model also supports a million‑token context window, allowing it to retain extensive information across long sessions without losing coherence. These strengths make Gemini 3 adept at handling handwritten recipes, voice notes, and other mixed‑media inputs.
Strategic Positioning
Google positions Gemini 3 as a leap in reasoning rather than just raw size or speed. The company highlights its ability to merge diverse inputs—such as a handwritten recipe and a voice note—into a cohesive output like a cookbook. By integrating Gemini 3 across consumer products and developer tools, Google aims to deepen user engagement with its AI ecosystem while offering more powerful, interactive experiences.