GitHub / Hacker News · Research · 2026-04-10

A Solo Researcher Cracked Google's AI Watermark — SynthID Defeated With Spectral Analysis

Independent researcher Alosh Denny reverse-engineered Google's SynthID invisible watermarking system — embedded in every Gemini-generated image — using only signal processing, achieving 90% detection accuracy and a bypass that cuts the watermark's phase coherence by 91%. The project raises sharp questions about AI watermarking as a provenance mechanism.


## What Happened

Alosh Denny, an independent researcher with no access to Google's proprietary SynthID encoder or decoder, successfully reverse-engineered the invisible watermark embedded in every image generated by Google Gemini. Using spectral analysis and signal processing alone, the [reverse-SynthID project](https://github.com/aloshdenny/reverse-SynthID) achieved:

- **90% detection accuracy** — reliably identifying whether an image was Gemini-generated
- **Multi-resolution spectral bypass (V3)** — a 75% carrier energy drop and a 91% phase coherence drop at 43+ dB PSNR, meaning the watermark signal can be substantially degraded while preserving image quality
- **An ongoing crowdsourcing effort** to expand the SynthID codebook using "pure black / pure white" Gemini images
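An invisible watermark has to perturb the image somewhere, and periodic carriers tend to leave statistical traces in the frequency domain. As a minimal illustration of why spectral detection is plausible — not the project's actual method; the function names, bin count, and threshold here are all invented for the sketch — a detector might compare an image's radial band-energy profile against a reference profile built from known-clean images:

```python
import numpy as np

def radial_energy_profile(img: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Average FFT magnitude in concentric frequency bands (a crude spectral fingerprint)."""
    # Centered 2-D magnitude spectrum of a grayscale image
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(np.float64))))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radius of each frequency bin from the spectrum center, normalized to [0, 1]
    r = np.hypot(yy - h / 2, xx - w / 2)
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    profile = np.array([spec[bins == b].mean() for b in range(n_bins)])
    return profile / profile.sum()  # normalize so different image sizes compare

def looks_watermarked(img: np.ndarray, reference: np.ndarray,
                      threshold: float = 0.02) -> bool:
    """Flag an image whose band-energy profile deviates from a clean reference."""
    return float(np.abs(radial_energy_profile(img) - reference).sum()) > threshold
```

A real system would need many reference images, multiple resolutions, and phase statistics rather than magnitude alone — but the core observation is that no proprietary decoder is required to measure these deviations.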

The project landed on the Hacker News front page today and is generating significant discussion about the implications for AI content provenance.

## Why It Matters

SynthID was announced with considerable fanfare as Google's solution to the AI-generated content problem. The pitch: invisible watermarks that survive cropping, compression, and basic editing, detectable by Google's infrastructure but invisible to the human eye. It became a cornerstone of DeepMind's responsible AI narrative.

The fact that it can be significantly degraded — and detected — through pure signal processing by a solo researcher in weeks raises uncomfortable questions. If watermark bypass is accessible to anyone with signal processing skills and a GitHub repo, the policy frameworks being built on top of AI watermarking as a trust mechanism may be substantially weaker than assumed.

## The Broader Watermarking Problem

The SynthID reverse-engineering follows a pattern that has repeated across DRM, steganography, and digital fingerprinting for decades: anything embedded in the signal domain can eventually be found and manipulated in the signal domain. The question was never whether SynthID could theoretically be broken, but how quickly and how easily.

The answer, apparently, is "by one person, in weeks, with open-source tools."
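The same frequency-domain view that enables detection enables degradation. A generic sketch of the idea — hypothetical, and not Denny's actual pipeline — is to attenuate FFT coefficients in a suspect band, invert the transform, and confirm via PSNR that visible quality survives:

```python
import numpy as np

def attenuate_band(img: np.ndarray, lo: float, hi: float,
                   factor: float = 0.25) -> np.ndarray:
    """Scale down FFT coefficients whose normalized radius falls in [lo, hi)."""
    spec = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
    spec[(r >= lo) & (r < hi)] *= factor          # suppress the suspect band
    out = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
    return np.clip(out, 0.0, 1.0)                 # back to valid pixel range

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = float(np.mean((a - b) ** 2))
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

The asymmetry is the point: the watermark's energy is concentrated in a narrow region of signal space, so removing it costs little in overall fidelity — which is how the reported bypass can degrade the carrier heavily while staying above 43 dB PSNR.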

Google has not publicly responded to the research. The project is actively being extended — Denny is crowdsourcing Gemini image samples to build a more complete spectral codebook.

## Panel Takes

### The Builder — Developer Perspective

This is an important data point for anyone building content provenance pipelines. If your trust model depends on SynthID surviving adversarial conditions, it doesn't. The methodology is clean — spectral analysis, no proprietary access — which means the bypass is repeatable and improvable by anyone. Plan accordingly.

### The Skeptic — Reality Check

The research is technically interesting but the "AI watermarking is broken" narrative may be overstated. Detection (did Google make this?) and bypass (make it look like Google didn't) are different threats, and neither fully invalidates watermarking for casual-misuse prevention. The real problem is that policy is moving faster than the technology can support — this is a forcing function to be more honest about limitations.

### The Futurist — Big Picture

This accelerates the shift from watermarking to cryptographic provenance — C2PA, signed manifests, model-level attestation. Signal-domain watermarks were always a stopgap. The real infrastructure for AI content trust is cryptographic signing at generation time, not steganographic hiding. Expect pressure on Google, Adobe, and the C2PA coalition to move faster on those standards.
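To make the contrast with watermarking concrete, here is a toy sketch of manifest-style provenance: a signature binds the claims to a hash of the pixels, so verification fails if either is altered. Real C2PA manifests use X.509 certificates and COSE signatures embedded in the file; this stand-in uses a shared-key HMAC and invented field names purely to illustrate the verification logic:

```python
import hashlib
import hmac
import json

def make_manifest(image_bytes: bytes, generator: str, key: bytes) -> dict:
    """Bind provenance claims to the exact image content with a keyed signature."""
    claims = {
        "generator": generator,
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims,
            "signature": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify_manifest(image_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """Reject if the pixels no longer match or the claims were tampered with."""
    claims = manifest["claims"]
    if claims["content_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # image edited after signing
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

Unlike a signal-domain watermark, there is nothing hidden to find: security rests on the key, not on obscurity — though the trade-off is that a signature does not survive re-encoding or cropping the way a watermark is designed to.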