Google adds interactive 3D models to Gemini AI, letting users tweak simulations in real time

Key Points
- Google’s Gemini AI now creates interactive 3D models and simulations on user request.
- The feature is available to Pro‑tier users via a “Show me the visualization” button.
- Users can rotate, zoom, and adjust variables with sliders in real time.
- Similar visual capabilities were recently added by Anthropic’s Claude and OpenAI’s ChatGPT.
- The upgrade aims to boost engagement and set Gemini apart in the AI chatbot market.

Google has upgraded its Gemini chatbot with a feature that creates interactive 3D models and simulations on demand. Users of the Pro version can ask the AI to visualize concepts such as orbital mechanics or the Doppler effect, then rotate, zoom, or adjust variables with sliders. The move follows similar visual‑output capabilities recently rolled out by Anthropic and OpenAI, signaling a broader push toward more immersive AI‑driven explanations.
Google rolled out a new capability for its Gemini AI that goes beyond static images and text. The chatbot now produces interactive three‑dimensional models and simulations that users can manipulate in real time. When a Pro‑tier user asks Gemini to illustrate something—say, a double pendulum or the Moon’s orbit around Earth—the system generates a rotatable model, complete with sliders and toggles that let the user change speed, hide elements, or pause the animation.
In a hands‑on test, the author prompted Gemini for a Moon‑Earth simulation. The AI responded with a 3D scene where the Moon could be spun around the planet, its orbital path could be hidden, and a speed slider let the user accelerate or decelerate the motion. Zoom and rotation controls worked smoothly, making the experience feel more like a lightweight physics lab than a typical chatbot exchange.
This upgrade arrives just weeks after rivals Anthropic and OpenAI introduced comparable visual tools. Anthropic’s Claude now appends charts, diagrams, and other interactive graphics to its answers, while OpenAI’s ChatGPT can generate visualizations for math and science topics. Until now, Gemini could only produce static images and text; the new 3D feature marks its first foray into dynamic, manipulable simulations.
Access to the functionality is limited to Gemini’s Pro model. Users select the Pro option in the prompt bar, pose a request such as “show me a double pendulum,” and then click the “Show me the visualization” button that appears beneath Gemini’s text response. The AI then renders the model and presents the interactive controls.
Google’s rollout suggests the company sees interactive visualizations as a way to deepen user engagement and differentiate Gemini in a crowded AI chatbot market. By letting users explore concepts hands‑on, Gemini moves closer to the kind of experiential learning tools traditionally reserved for specialized software.
Industry observers note that the race to embed visual output in conversational AI could reshape how educators, engineers, and casual users seek answers. As more platforms adopt real‑time graphics, the line between search, tutoring, and simulation blurs, opening new possibilities for both productivity and entertainment.