Developer Sparks Fears After Claiming Google's AI Images Can Be Misidentified
A new claim about breaking Google’s AI watermark system is raising a bigger issue: how anyone will verify what’s real online.
A developer says they reverse-engineered parts of Google DeepMind’s SynthID, but the company is pushing back hard on that claim.
According to The Verge, the developer used around 200 AI-generated images and signal analysis to interfere with SynthID's watermark detection. Google responded that the watermark "cannot be systematically removed" and that the system remains reliable.
That leaves a critical gap. If detection can be confused—even without full removal—it opens the door to images that appear authentic but aren’t, or real images that get flagged incorrectly.
“It is incorrect to say this tool can systematically remove SynthID watermarks,” a Google spokesperson told The Verge.
The stakes go beyond tech companies. SynthID was built to help identify AI-generated images by embedding invisible signals into pixels that survive common edits like cropping or compression.
But SynthID's documentation acknowledges that those signals can weaken under heavy manipulation, meaning detection is probabilistic, not guaranteed.
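To see why detection produces a confidence score rather than a yes/no answer, consider a toy model. This is only a sketch of the general idea of pixel-level watermarking, not SynthID's actual method, which Google has not published in full: a faint pseudo-random pattern is added to the pixels, and "detection" measures how much of that pattern survives. Edits that overwrite pixel information shrink the measurement rather than cleanly removing the mark.

```python
# Toy model of a pixel-space watermark (NOT Google's SynthID algorithm).
# The detector estimates how strongly a secret pattern is present; heavy
# edits lower that estimate, which is why detection is probabilistic.
import numpy as np

rng = np.random.default_rng(0)
N = 256
pattern = rng.standard_normal((N, N))   # secret, zero-mean key pattern

def embed(image, strength=8.0):
    """Add a faint copy of the secret pattern to the pixel values."""
    return image + strength * pattern

def detect(image):
    """Estimate embedded strength by correlating pixels with the secret pattern."""
    return float(np.sum(image * pattern) / np.sum(pattern ** 2))

image = rng.uniform(0, 255, (N, N))     # stand-in for an ordinary photo
marked = embed(image)

# A heavy edit, modeled here as a 50/50 blend with an unrelated image,
# roughly halves the surviving watermark signal instead of removing it.
other = rng.uniform(0, 255, (N, N))
blended = 0.5 * marked + 0.5 * other

print(round(detect(image), 2), round(detect(marked), 2), round(detect(blended), 2))
```

In this sketch the unmarked image scores near zero, the watermarked image scores near the embedding strength, and the heavily edited copy lands somewhere in between, so any fixed detection threshold trades false negatives against false positives.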
For workers, freelancers, and everyday users, that creates risk. A fake image could pass as real in hiring, news, or legal situations, while authentic work could be dismissed as AI-generated.
The next phase will likely involve competing systems—watermarks, metadata, and verification tools—but none are foolproof yet.
For now, the internet is entering a phase where seeing is no longer believing.