Developer Claim Fuels Concern Real Photos Could Be Flagged as AI
A new claim that Google’s AI watermark system can be bypassed is raising a bigger fear: what happens when nobody can tell what’s real anymore.
According to The Verge, a developer claims to have partially reverse-engineered Google DeepMind’s SynthID system and shown ways its detection can be confused. Google disputes the claim, saying the watermark “cannot be systematically removed.”
SynthID works by embedding an imperceptible signal directly into an image’s pixels, designed to survive common edits like cropping and compression. That’s supposed to keep AI-generated images traceable even after they’re shared or altered.
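Google hasn’t published how SynthID actually does this, so the sketch below is only a generic illustration of the underlying idea: a minimal spread-spectrum watermark in Python, assuming numpy. The function names, the ±1 noise pattern, the key, and the correlation test are all stand-ins, not SynthID’s real design.

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a faint key-derived pseudo-random pattern to the pixel values.

    A textbook spread-spectrum sketch: the pattern is too weak to see,
    but its presence can later be tested statistically with the same key.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)  # +/-1 noise tied to the key
    marked = image.astype(np.float64) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Correlate the image with the key's pattern.

    A score near `strength` suggests the watermark is present;
    a score near zero suggests it is not.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    centered = image.astype(np.float64) - image.mean()
    return float((centered * pattern).mean())
```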
But researchers and the system’s documentation point to limits: heavy edits or targeted manipulation can weaken detection, meaning some altered images may slip through.
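The toy scheme above gives a rough sense of why that happens. The snippet below reuses its two functions and the numpy import, and leans on scipy’s Gaussian blur as a crude stand-in for “heavy editing”; real attacks on real watermarks are more targeted than this.

```python
from scipy.ndimage import gaussian_filter  # blur as a crude stand-in for heavy editing

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # placeholder "image"
marked = embed_watermark(photo, key=42)

light_edit = gaussian_filter(marked, sigma=0.5)  # mild smoothing
heavy_edit = gaussian_filter(marked, sigma=4.0)  # aggressive smoothing

for label, img in [("unmarked", photo), ("marked", marked),
                   ("light edit", light_edit), ("heavy edit", heavy_edit)]:
    print(f"{label:>10}: {detect_watermark(img, key=42):+.3f}")
```

On this setup the marked image scores well above the unmarked baseline, the lightly edited copy still scores clearly, but the heavily blurred copy becomes statistically indistinguishable from an unmarked image. That ambiguous middle is exactly the gray zone described next.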
That creates a growing gray zone where real images could be flagged as AI—and fake ones could appear authentic.
The result isn’t just a tech issue. It’s a trust problem affecting workers, creators, and anyone relying on images online.
And right now, there’s no single system that can reliably prove what’s real.