Google’s Gemini App Generates Disallowed Images of Historical Violence

Key Points
- The Verge tested Google’s Nano Banana Pro, powered by Gemini, for image generation.
- The model produced images of historically violent events without apparent filters.
- Google’s policy explicitly forbids violent or hateful content, as well as content involving real‑world figures.
- Generated images came in both cartoon and photorealistic styles, sometimes with dates added.
- The ease of creation raises concerns about misuse for disinformation.
- Google did not immediately respond to a request for comment on the findings.

A test of Google’s Gemini‑powered Nano Banana Pro image generator revealed that the tool will create depictions of historically violent events, including the September 11 attack on the Twin Towers, the JFK assassination site at Dealey Plaza, and Tiananmen Square, despite Google’s policy prohibiting violent or hateful content and content involving real‑world figures. The Verge found the app offered no resistance to these requests, and Google did not immediately respond to a request for comment.
Testing the Gemini‑Powered Nano Banana Pro
The Verge examined the free tier of Google’s Nano Banana Pro, which runs on the Gemini model, by prompting it to generate images of well‑known historical tragedies. Prompts included an airplane flying into the Twin Towers, a second shooter at Dealey Plaza, the White House on fire, and the Tiananmen Square massacre. The model complied without apparent filtering, producing both cartoonish and photorealistic versions and, in some cases, adding contextual details such as dates.
Policy Versus Practice
Google’s public policy for the Gemini app states that the service is meant to be “maximally helpful while avoiding outputs that could cause real‑world harm or offense,” and it explicitly prohibits sexually explicit, violent, or hateful content, along with content involving real‑world figures. The test, however, demonstrated that these guardrails are not consistently enforced. The generated images omitted graphic gore but still depicted disallowed historical events, raising concerns about potential misuse for disinformation.
Potential for Abuse
The ease with which the model produced these images suggests that bad actors could use the tool to create misleading visual content for social media and other platforms. The Verge noted that the lack of resistance to such prompts could accelerate the spread of false narratives, especially when the images are presented as authentic historical documentation.
Google’s Response
When approached for comment, Google did not immediately respond to The Verge’s inquiry, leaving the company’s stance on the observed shortcomings unclear.
Implications for AI Moderation
This incident highlights the challenge of aligning AI capabilities with content moderation policies. While some competing services require more convoluted prompting to bypass their restrictions, Nano Banana Pro complied with simple, direct requests, illustrating a gap between stated policy and real‑world behavior and underscoring the need for more robust enforcement mechanisms.