Google Gemini adds interactive visualizations to chat, rolling out to Pro users

Key Points
- Google Gemini now generates interactive visualizations on request.
- Users trigger the feature with prompts like "show me" or "help me visualize".
- Demo visualizations include a moon‑orbit model with speed control and a car‑engine animation with step‑by‑step view.
- The capability is limited to the Pro model and is not available for Education or Workspace accounts.
- Anthropic's Claude offers a similar feature and includes a save option, which Gemini currently lacks.
- Google has not commented on future enhancements or a save function.

Google has expanded its Gemini AI chat tool with a new feature that creates interactive visualizations instead of static images. Users can ask Gemini to "show me" or "help me visualize" a topic, then click a button to launch a dynamic simulation with sliders and adjustable views. The capability, demonstrated with a moon‑orbit model and a car‑engine animation, is now available worldwide for Pro‑model users, though it remains excluded from Education and Workspace accounts. Anthropic recently introduced a similar function for Claude, but Gemini currently lacks a save option for the visuals.
Google is turning its Gemini chat AI into a visual playground. When users ask the model to illustrate a concept, the system now offers an interactive simulation rather than a single picture. A button labeled "show me the visualization" appears, and a click launches a dynamic graphic that users can manipulate with sliders, speed controls and view adjustments.
Testing the feature revealed its range. A request to see how the moon orbits Earth produced a rotating model with a speed slider, letting the viewer speed up or slow down the moon's motion. A second prompt about a car engine generated an animated diagram in which users could pause the motion, step through each component, or switch the engine on and off. Both demos conveyed more depth than a static diagram could.
Google says the tool is meant for situations where a plain image falls short. Prompts such as "show me" or "help me visualize" cue the system to build the interactive asset. The visualizations appear only when the Pro version of Gemini is in use, and the rollout is global. The feature does not, however, extend to Education or Workspace accounts at this time.
Anthropic introduced a comparable capability for its Claude model in March, and reviewers noted impressive results. Gemini’s version differs in that it currently lacks a way to save the generated visualizations for later use, a feature Claude offers. Google has not commented on whether a save function is planned.
The addition aligns with a broader push to make generative AI tools more multimodal, capable of handling text, images, audio and now interactive graphics. By embedding these simulations directly in the chat flow, Google hopes to reduce the need for users to hunt for external diagrams or videos. The move also underscores the competitive race among AI developers to enrich conversational assistants with richer, more actionable outputs.
While the feature is limited to Pro subscribers, the global availability signals Google’s confidence that interactive visualizations will become a standard expectation for AI chat interfaces. As other platforms experiment with similar tools, the industry may soon see a shift from static illustration to real‑time, user‑controlled visual explanations.