Developer Claims to Crack Google’s AI Watermark, Company Says Tool Falls Short

Key Points
- Developer Aloshdenny claims to have reverse‑engineered Google’s SynthID watermark.
- Method uses 200 pure black or white Gemini‑generated images and signal processing.
- Open‑source code posted on GitHub; detailed explanation on Medium.
- Google spokesperson says the tool cannot systematically remove SynthID.
- SynthID remains a near‑invisible, robust watermark across Google’s AI products.
A software developer using the handle Aloshdenny says he has reverse‑engineered Google DeepMind’s SynthID system, allowing him to strip or embed the near‑invisible watermarks that tag AI‑generated images. The open‑source method, posted on GitHub and detailed in a Medium post, relies on generating 200 pure‑black or pure‑white images with Gemini and applying signal‑processing tricks. Google disputes the claim, stating the tool cannot systematically remove SynthID and that the watermark remains robust. The back‑and‑forth highlights the ongoing tug‑of‑war over AI‑generated content attribution.
A software developer who goes by the online name Aloshdenny says he has peeled back the layers of Google DeepMind’s SynthID watermarking system, exposing a way to both erase and insert the hidden tags that identify AI‑generated images. The claim, posted on GitHub and explained in a candid Medium essay, hinges on a simple premise: generate 200 completely black or white images with Google’s Gemini model, crank up contrast and saturation, then denoise to reveal the watermark’s pattern.
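That extraction idea can be illustrated with a short, hypothetical sketch. This is not the code from Aloshdenny’s GitHub repository; the function names, gain value, and box-filter denoiser here are assumptions made purely for illustration, using NumPy and Pillow.

```python
# Hypothetical sketch of the extraction step described above, not the
# code posted on GitHub. Assumes NumPy and Pillow are installed.
import numpy as np
from PIL import Image

def extract_residual(path, flat_value):
    """Load a nominally flat (pure black or pure white) image and return the
    faint per-pixel deviation from that flat value."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return img - flat_value  # flat_value is 0.0 for black frames, 1.0 for white

def boost_and_denoise(residual, gain=50.0):
    """Amplify the tiny residual (a crude contrast boost) and smooth
    single-pixel noise with a simple 3x3 box filter per channel."""
    boosted = np.clip(0.5 + gain * residual, 0.0, 1.0)
    pad = np.pad(boosted, ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w = boosted.shape[:2]
    return sum(pad[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0
```

On a perfectly flat frame, anything that survives the boost-and-smooth pass is, by construction, structure the generator added on top of the requested solid color.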
Aloshdenny’s write‑up, peppered with self‑deprecating humor about “way too much free time” and a brief mention of weed, walks readers through a three‑step process: extract the faint pattern from each flat image, average the patterns, then attenuate the result in new images. Averaging across the image set yields the magnitude and phase of the watermark signal at each frequency bin per color channel. He then hunts for those frequencies in new images and partially removes them at the exact phase angle at which they were originally inserted, as sketched below. The result, he admits, does not wipe the watermark clean; instead, it confuses the decoder enough that it gives up.
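In the same illustrative spirit, the averaging and partial-removal steps might look something like the following, assuming the watermark is treated as a fixed pattern in the 2‑D frequency domain of each color channel. The FFT formulation and the strength and threshold parameters are assumptions for this sketch, not details taken from the write-up.

```python
# Hypothetical sketch of the averaging and attenuation steps, again an
# illustration rather than the posted method.
import numpy as np

def average_watermark_spectrum(residuals):
    """Average the complex 2-D spectra of many extracted residuals: the
    watermark's magnitude and phase per frequency bin (per channel) survive
    the average, while uncorrelated noise tends to cancel out."""
    return np.mean([np.fft.fft2(r, axes=(0, 1)) for r in residuals], axis=0)

def attenuate_watermark(image, wm_spectrum, strength=0.5, threshold=1e-3):
    """Partially subtract the averaged watermark spectrum from a new image at
    the same phase angle at which it was found, then return to pixel space."""
    spec = np.fft.fft2(image, axes=(0, 1))
    mask = np.abs(wm_spectrum) > threshold   # touch only bins where the watermark shows up
    spec -= strength * wm_spectrum * mask
    return np.real(np.fft.ifft2(spec, axes=(0, 1)))
```

Subtracting only a fraction of the estimated signal, rather than all of it, matches the author’s admission: the goal is to push the decoder below its detection threshold, not to scrub the watermark entirely.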
SynthID, Google’s near‑invisible watermark, embeds a signal directly into the pixel data at the moment an image is created. Designed to survive typical post‑processing, the watermark tags content from Google’s AI suite—including models dubbed Nano Banana and Veo 3—as well as AI‑generated creator clones on YouTube. Google touts SynthID as a deterrent, raising the cost of misuse by making removal technically demanding.
Google’s response was swift. Spokesperson Myriam Khan told The Verge that it “is incorrect to say this tool can systematically remove SynthID watermarks.” She reaffirmed that SynthID remains “a robust, effective watermarking tool for AI‑generated content.” The company’s stance underscores that while the watermark can be muddied, it has not been fully broken.
The exchange underscores a broader tension in the AI community. Engineers and researchers are eager to test the limits of attribution technologies, while companies like Google seek to protect the integrity of their tools against malicious actors. Aloshdenny himself praised SynthID’s engineering, noting that the fact he could only “confuse the decoder” speaks to the system’s strength. He also cautioned that the method is technically complex and unlikely to be wielded by casual script kiddies.
As AI‑generated media proliferates, the ability to trace origin stories becomes increasingly important for platforms, publishers, and regulators. Watermarking solutions such as SynthID aim to embed provenance without degrading visual quality, offering a middle ground between transparency and user experience. Whether the latest reverse‑engineering effort will prompt Google to tweak its algorithm remains to be seen, but the dialogue itself signals that the battle over AI content attribution is far from settled.