World Affairs

Google’s AI Detection Flip-Flops on Doctored White House Photo

Last updated: January 25, 2026 12:17 am
A screenshot of the initial response from Gemini, Google's AI chatbot, stating that the crying image contained forensic markers indicating the image had been manipulated with Google’s generative AI tools, taken on Jan. 22, 2026. Screenshot: The Intercept


Contents
  • Initial SynthID Results
  • A Striking Reversal
  • Bullshit Detector?

When the official White House X account posted an image depicting activist Nekima Levy Armstrong in tears during her arrest, there were telltale signs that the image had been altered.

Less than an hour before, Homeland Security Secretary Kristi Noem had posted a photo of the exact same scene, but in Noem’s version Levy Armstrong appeared composed, not crying in the least.

Seeking to determine if the White House version of the photo had been altered using artificial intelligence tools, we turned to Google’s SynthID — a detection mechanism that Google claims is able to discern whether an image or video was generated using Google’s own AI. We followed Google’s instructions and used its AI chatbot, Gemini, to see if the image contained SynthID forensic markers.

The results were clear: The White House image had been manipulated with Google’s AI. We published a story about it.

After we published the article, however, subsequent attempts to authenticate the image with SynthID through Gemini produced different outcomes.

In our second test, Gemini concluded that the image of Levy Armstrong crying was actually authentic. (The White House doesn’t even dispute that the image was doctored. In response to questions about its X post, a spokesperson said, “The memes will continue.”)

In our third test, SynthID determined that the image was not made with Google’s AI, directly contradicting its first response.

At a time when AI-manipulated photos and videos are becoming inescapable, these inconsistent responses raise serious questions about whether SynthID can reliably tell fact from fiction.


Initial SynthID Results

Google describes SynthID as a digital watermarking system. It embeds invisible markers into images, audio, text, or video created using Google’s tools, which it can later detect, establishing whether a piece of online content was made with Google’s AI.

“The watermarks are embedded across Google’s generative AI consumer products, and are imperceptible to humans — but can be detected by SynthID’s technology,” says a page on the site for DeepMind, Google’s AI division.

Google presents SynthID as having what is known in digital watermarking as “robustness”: the company claims the watermarks remain detectable even if an image undergoes modifications such as cropping or compression. An image manipulated with Google’s AI should therefore contain detectable watermarks even after being saved multiple times or posted on social media.
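SynthID’s actual watermarking scheme is proprietary and far more sophisticated than anything shown here, but the general idea of an imperceptible, machine-detectable mark can be illustrated with a deliberately naive sketch. Everything in this example is hypothetical: it hides a made-up 8-bit signature in the least significant bits of grayscale pixel values, which changes each pixel’s brightness by at most 1.

```python
# Toy illustration of invisible watermarking. This is NOT how SynthID
# works; it only demonstrates marks that are imperceptible to humans
# but detectable by software, and why robustness is hard to achieve.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature


def embed(pixels: list[int], bits: list[int]) -> list[int]:
    """Overwrite the least significant bit of the first len(bits) pixels."""
    marked = pixels[:]
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & ~1) | b  # brightness shifts by at most 1
    return marked


def detect(pixels: list[int], bits: list[int]) -> bool:
    """Check whether the signature is present in the pixel LSBs."""
    return [p & 1 for p in pixels[: len(bits)]] == bits


image = [200, 201, 199, 198, 202, 200, 197, 203, 205, 204]
marked = embed(image, WATERMARK)

assert detect(marked, WATERMARK)  # the hidden mark is found
assert max(abs(a - b) for a, b in zip(image, marked)) <= 1  # imperceptible

# Unlike a "robust" watermark of the kind Google claims, this naive mark
# is destroyed by even mild processing, e.g. anything that perturbs
# pixel values the way lossy compression does:
degraded = [p + 1 for p in marked]
assert not detect(degraded, WATERMARK)
```

The contrast in the last lines is the point: a robust scheme must survive exactly the kinds of transformations that wipe out this toy mark.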

Google steers those who want to use SynthID toward its Gemini AI chatbot, which they can prompt with questions about the authenticity of digital content.

“Want to check if an image or video was generated, or edited, by Google AI? Ask Gemini,” the SynthID landing page says.

We decided to do just that.

We saved the image file that the official White House account posted on X, bearing the filename G_R3H10WcAATYht.jfif, and uploaded it to Gemini. We asked whether SynthID detected the image had been generated with Google’s AI.

To test SynthID’s claims of robustness, we also uploaded a version of the image that we had further cropped and re-encoded, which we named imgtest2.jpg.
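The article does not say which tool was used to prepare the cropped, re-encoded copy; a file like that can be produced with a few lines of Python using the Pillow imaging library. This is a sketch of that preparation step under our own assumptions (the crop fraction, JPEG quality, and function name are all ours), not the reporters’ actual procedure.

```python
from io import BytesIO

from PIL import Image  # Pillow: third-party imaging library


def crop_and_reencode(data: bytes, quality: int = 85) -> bytes:
    """Crop the central region of an image and re-encode it as JPEG.

    Cropping and lossy re-encoding are the kinds of transformations a
    "robust" watermark such as SynthID claims to survive.
    """
    img = Image.open(BytesIO(data)).convert("RGB")
    w, h = img.size
    # Keep the middle 80% of the frame in each dimension.
    box = (w // 10, h // 10, w - w // 10, h - h // 10)
    cropped = img.crop(box)
    out = BytesIO()
    cropped.save(out, format="JPEG", quality=quality)  # lossy re-encode
    return out.getvalue()


# Example on a synthetic 100x100 test image (a real test would read the
# downloaded image file instead).
src = BytesIO()
Image.new("RGB", (100, 100), "gray").save(src, format="JPEG")
result = crop_and_reencode(src.getvalue())
```

The output bytes could then be written to a file such as imgtest2.jpg and uploaded for detection.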

Finally, we uploaded a copy of the photo where Levy Armstrong was not crying, as previously posted by Noem. (In the above screenshot, Gemini refers to Noem’s photo as signal-2026-01-22-122805_002.jpeg because we downloaded it from the Signal messaging app).

“I’ve analyzed the images you provided,” wrote Gemini. “Based on the results from SynthID, all or part of the first two images were likely generated or modified with Google AI.”

“Technical markers within the files imgtest2.jpg and G_R3H10WcAATYht.jfif indicate the use of Google’s generative AI tools to alter the subject’s appearance,” the bot wrote. It also identified the version of the image posted by Noem as appearing to “be the original photograph.”

With confirmation from Google that its SynthID system had detected hidden forensic watermarks in the image, we reported in our story that the White House had posted an image that had been doctored with Google’s AI.

This wasn’t the only evidence the White House image wasn’t real; Levy Armstrong’s attorney told us that he was at the scene during the arrest and that she was not at all crying. The White House also openly described the image as a meme.


A Striking Reversal

A few hours after our story published, Google told us that they “don’t think we have an official comment to add.” A few minutes after that, a spokesperson for the company got back to us and said they could not replicate the result we got. They asked us for the exact files we uploaded. We provided them.

The Google spokesperson then asked, “Were you able to replicate it again just now?”

We ran the analysis again, asking Gemini whether SynthID detected that the image had been manipulated with AI. This time, Gemini failed to reference SynthID at all, despite the fact that we had followed Google’s instructions and explicitly asked the chatbot to use the detection tool by name. Gemini now claimed that the White House image was instead “an authentic photograph.”

It was a striking reversal considering Gemini previously said that the image contained technical markers indicating the use of Google’s generative AI. Gemini also said, “This version shows her looking stoic as she is being escorted by a federal agent” — despite our question addressing the version of the image depicting Levy Armstrong in tears.

A screenshot of Gemini’s second response, this time stating that the same image it had previously said SynthID detected as doctored with AI was in fact an authentic photograph, taken on Jan. 22, 2026. Screenshot: The Intercept

Less than an hour later, we ran the analysis one more time, prompting Gemini to yet again use SynthID to check whether the image had been manipulated with Google’s AI. Unlike the second attempt, Gemini invoked SynthID as instructed. This time, however, it said, “Based on an analysis using SynthID, this image was not made with Google AI, though the tool cannot determine if other AI products were used.”

A screenshot of Gemini’s third response, this time stating that SynthID had determined the image was not made with Google AI after all, despite earlier saying SynthID found it had been generated with Google’s AI, taken on Jan. 22, 2026. Screenshot: The Intercept

Google did not answer repeated questions about this discrepancy. In response to inquiries, the spokesperson continued to ask us to share the specific phrasing of the prompt that resulted in Gemini recognizing a SynthID marker in the White House image.

We didn’t store that language, but told Google it was a straightforward prompt asking Gemini to check whether SynthID detected the image as being generated with Google’s AI. We provided Google with information about our prompt and the files we used so the company could check its records of our queries in its Gemini and SynthID logs.

“We’re trying to understand the discrepancy,” said Katelin Jabbari, a manager of corporate communications at Google. Jabbari repeatedly asked if we could replicate the initial results, as “none of us here have been able to.”

After further back and forth following subsequent inquiries, Jabbari said, “Sorry, don’t have anything for you.”

Bullshit Detector?

Aside from Google’s proprietary tool, there is no easy way for users to test whether an image contains a SynthID watermark. That makes it difficult in this case to determine whether Google’s system initially detected the presence of a SynthID watermark in an image without one, or if subsequent tests missed a SynthID watermark in an image that actually contains one.

As AI becomes increasingly pervasive, the industry is trying to put behind it a long history of being what researchers call a “bullshit generator.”

Supporters of the technology argue that tools capable of detecting whether something is AI-generated will play a critical role in establishing common truth amid the coming flood of media generated or manipulated by AI. They point to successes such as a recent case in which SynthID debunked an arrest photo of Venezuelan President Nicolas Maduro flanked by federal agents as an AI-generated image. The Google tool said the photo was bullshit.

If AI-detection technology fails to produce consistent responses, though, there’s reason to wonder who will call bullshit on the bullshit detector.
