Google Gemini Improves AI-Generated Image Verification

Redazione RHC: 23 November 2025 09:37

Google has expanded its Gemini artificial intelligence service, adding a tool to the app and web version that checks images for signs of automatic generation. The feature is a logical step: visual content is increasingly created with AI models, and demand for ways to distinguish real images from synthetic ones keeps growing.

The new detector is based on SynthID, a system of digital watermarks invisible to the human eye that Google introduced in 2023. The marks are embedded in images produced by Google's generators and persist even after resizing or partial processing. For this reason, the check only works reliably on content created with Google's own models.
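
As a rough illustration of how such a check could be driven programmatically, here is a minimal sketch that sends an image to Gemini through the google-genai Python SDK and asks whether it was created or edited with Google AI. The article describes the feature in the Gemini app and web version, so exposing the same SynthID verification through the API, the model name used below, and the prompt are all assumptions made purely for illustration.

```python
# Minimal sketch, assuming the SynthID check can be triggered by asking a
# Gemini model about an uploaded image via the google-genai SDK.
# Model name and prompt are illustrative assumptions, not a documented feature.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

with open("suspect_image.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model; the consumer app may use another
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Was this image created or edited with Google AI? "
        "Check for a SynthID watermark and explain your answer.",
    ],
)
print(response.text)
```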

If a photo carries no embedded watermark, the tool cannot reliably determine whether it was created by AI. Tests with content produced by other models confirm this limitation: Gemini can sometimes make guesses based on small visual cues, but that cannot be considered a definitive verdict.

SynthID has been released as open source, and Google has partnered with companies such as Hugging Face and Nvidia, but most generators take different approaches. ChatGPT, for example, relies on the C2PA metadata standard backed by Microsoft, Adobe, Meta, and others. Google has announced plans to add C2PA support in order to extend detection beyond its own ecosystem.
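
To make the C2PA approach more concrete, the following heuristic sketch (an illustration written for this context, not a tool shipped by Google or OpenAI) scans a JPEG for the APP11 segments in which C2PA manifests travel as JUMBF boxes labelled "c2pa". Finding that label only indicates that a manifest is present; it says nothing about whether the manifest is valid or has been tampered with.

```python
# Heuristic sketch: detect the presence of a C2PA manifest in a JPEG file.
# C2PA provenance data is carried in APP11 (0xFFEB) segments as JUMBF boxes
# whose superbox is labelled "c2pa"; we simply scan the segments for that label.
import struct

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):  # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost sync with the marker structure
        marker = data[i + 1]
        if marker == 0xDA:  # start of scan: compressed image data follows
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with C2PA JUMBF label
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest("photo.jpg"))
```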

Even this update is no guarantee, however. This summer, researchers at the University of Waterloo developed a method called UnMarker that can strip watermarks, including SynthID, from AI-generated images in just a few minutes on an Nvidia A100 GPU. Google's DeepMind team reached similar conclusions, noting that C2PA metadata is even less robust in some scenarios.

At the same time, the company unveiled an updated version of its image generation system, called Nano Banana Pro. The model is built on Gemini 3 Pro and is designed to reproduce text within an image more accurately, historically a weak point of visual AI.

The algorithm can now generate infographics and other material where caption legibility matters, and content creation has become noticeably faster. Generated images still carry the visible Gemini icon and the invisible SynthID watermark.
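
For readers who want to try the generation side, here is a minimal sketch of calling the image model through the Gemini API and saving the result. The model identifier used for Nano Banana Pro and the exact response layout are assumptions made for illustration; consult Google's current documentation for the real names.

```python
# Minimal sketch, assuming Nano Banana Pro is reachable through the Gemini API.
# The model ID below is an assumption; substitute the ID Google documents.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-3-pro-image-preview",  # assumed ID for Nano Banana Pro
    contents="An infographic with three clearly labelled steps explaining "
             "how invisible watermarks survive resizing and cropping.",
)

# Image-generation models return the picture as inline binary data alongside
# any text parts; write the first image part to disk.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None) is not None:
        with open("infographic.png", "wb") as out:
            out.write(part.inline_data.data)
        break
```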

In one test, an illustration was created with Nano Banana Pro specifically for demonstration purposes, and an attempt was then made to scrub the SynthID marks from it. Even after the traces had been removed, the system still recognized the image as generated.

Gemini's new functionality therefore helps identify some of the images created with Google's tools, but it is not universally applicable. Removing or distorting the embedded traces is still possible, which means that tools for verifying the origin of digital content remain only one of several aids for navigating the flood of synthetic graphics.

Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
