Red Hot Cyber
Cybersecurity is about sharing. Recognize the risk, combat it, share your experiences, and encourage others to do better than you.
Google Gemini Improves AI-Generated Image Verification

23 November 2025 09:37

Google has expanded its Gemini artificial intelligence service by adding a tool, in both the app and the web version, that checks images for signs of automatic generation. The feature is a logical step: visual content is increasingly created with AI models, and demand for ways to distinguish real images from synthetic ones is growing.

The new detector is based on SynthID, a system of digital watermarks invisible to the human eye that Google introduced in 2023. The markers are embedded in images created by Google's generators and persist even after resizing or partial editing. For this reason, the check works only on content created with Google's own tools.
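SynthID's actual watermarking scheme is proprietary and far more sophisticated than anything shown here. Purely as a conceptual contrast, the sketch below implements the simplest possible invisible watermark, hiding one bit per byte in the least significant bits of raw pixel data. Unlike SynthID, such a naive mark would not survive resizing or re-encoding, which is precisely why robustness to processing is the notable claim. The function names are illustrative, not part of any real API.

```python
def embed_lsb(pixels: bytearray, bits: str) -> bytearray:
    """Naive invisible watermark: hide one bit per byte in the LSB.

    Illustrative only -- SynthID's proprietary scheme is far more robust;
    this toy mark would be destroyed by resizing or lossy re-encoding.
    """
    out = bytearray(pixels)  # work on a copy, leave the input untouched
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(b)  # clear the LSB, then set it
    return out

def extract_lsb(pixels: bytes, n_bits: int) -> str:
    """Read the hidden bit string back from the first n_bits bytes."""
    return "".join(str(p & 1) for p in pixels[:n_bits])
```

Because each byte changes by at most 1, the altered pixels are visually indistinguishable from the originals, which is the "invisible to the human eye" property the article describes.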

If an image does not carry the embedded watermark, the tool cannot reliably determine whether it was created by AI. Tests on content produced by other models confirm this limitation: Gemini can sometimes make guesses based on small visual cues, but that cannot be considered a definitive check.

SynthID is open source, and Google has partnered with companies such as Hugging Face and Nvidia, but most generators take different approaches. ChatGPT, for example, uses the C2PA metadata standard, backed by Microsoft, Adobe, Meta, and others. Google has announced plans to add C2PA support to extend detection beyond its own ecosystem.
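Unlike SynthID's in-pixel watermark, C2PA provenance travels as metadata: in JPEG files the signed manifest is carried in JUMBF boxes inside APP11 segments. As a rough sketch of what detection involves at the file level, the function below walks a JPEG's marker segments and looks for a "c2pa" label in APP11 payloads. This only flags the presence of a manifest; real verification requires cryptographically validating the signature with a C2PA toolkit, which this sketch does not attempt.

```python
import struct

def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristic check for a C2PA manifest in a JPEG byte stream.

    Walks the marker segments and looks for a 'c2pa' label inside
    APP11 (0xFFEB) payloads, where C2PA stores its JUMBF boxes.
    Presence only -- this does not validate the signed manifest.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":            # must start with SOI
        return False
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        if jpeg_bytes[pos] != 0xFF:
            break                                # lost marker sync; give up
        marker = jpeg_bytes[pos + 1]
        if marker == 0xDA:                       # SOS: compressed data follows
            break
        # segment length is big-endian and includes its own two bytes
        (length,) = struct.unpack(">H", jpeg_bytes[pos + 2 : pos + 4])
        payload = jpeg_bytes[pos + 4 : pos + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:
            return True
        pos += 2 + length
    return False
```

The key design contrast with watermarking: metadata like this is trivially stripped by any tool that rewrites the file without copying APP segments, which is one reason the article notes C2PA can be even less robust than SynthID in some scenarios.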

Even this update does not guarantee reliability: this summer, researchers at the University of Waterloo presented a method called UnMarker that can strip watermarks from AI-generated images, including SynthID, in just a few minutes on an Nvidia A100 GPU. Google's DeepMind team reached similar conclusions, noting that C2PA metadata is even less robust in some scenarios.

At the same time, the company unveiled an updated version of its image-generation system, called Nano Banana Pro. The model is based on Gemini 3 Pro and is designed to render text within an image more accurately, a long-standing weakness of visual AI.

The algorithm can now generate infographics and other materials where readability of captions is important. Content creation speed has also increased significantly. Images still contain the visible Gemini icon and invisible SynthID tags.

In one test, Nano Banana Pro created an illustration specifically for demonstration purposes, and an attempt was then made to strip the SynthID watermark from it. Even after the visible traces were removed, the system still recognized the image as generated.

Gemini's new functionality therefore helps identify some of the images created with Google's tools, but it is not universally applicable. Embedded markers can still be removed or distorted, which means provenance-verification tools remain just one way of navigating the growing stream of synthetic graphics.


The editorial staff of Red Hot Cyber is composed of IT and cybersecurity professionals, supported by a network of qualified sources who also operate confidentially. The team works daily to analyze, verify, and publish news, insights, and reports on cybersecurity, technology, and digital threats, with a particular focus on the accuracy of information and the protection of sources. The information published is derived from direct research, field experience, and exclusive contributions from national and international operational contexts.