May 22 - At the Google I/O 2025 developer conference, Google introduced a response to deepfakes and disinformation, launching SynthID Detector, a new tool for identifying AI-generated media through digital watermarking.

1AI cites a Google blog post noting a 550% surge in deepfake videos between 2019 and 2024, along with a significant increase in AI-generated content on social media.
SynthID Detector is currently rolling out to early testers, and journalists, researchers, and developers can join a waitlist. The tool identifies AI-generated media by scanning content for invisible digital watermarks. Whether the input is an image, video, audio, or text, it can pinpoint the exact section where the watermark is located.
For example, in audio, it can point out specific clips where the watermark appears, and in photos, it highlights areas where the watermark may be embedded. Even if the content is shared or undergoes multiple transformations, the watermark can still be detected, Google says.
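To make the idea concrete, below is a minimal sketch of how an invisible, key-based watermark can be embedded in an image and then detected block by block, so the detector can point to the regions where the mark survives an edit. This is only an illustrative spread-spectrum-style toy in Python, not Google's actual SynthID technique (which is proprietary and far more robust); the key, block size, and strength values are arbitrary assumptions made for the example.

# Toy illustration of invisible watermarking and localized detection.
# NOT SynthID's real algorithm; a simplified spread-spectrum sketch only.
import numpy as np

KEY = 42          # secret key shared by embedder and detector (assumed)
BLOCK = 32        # watermark is embedded and checked per 32x32 pixel block
STRENGTH = 4.0    # perturbation amplitude, small relative to 0-255 pixels

def _pattern(shape):
    """Pseudorandom +/-1 pattern derived from the secret key."""
    rng = np.random.default_rng(KEY)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image):
    """Add a faint key-derived pattern to a grayscale float image."""
    return np.clip(image + STRENGTH * _pattern(image.shape), 0, 255)

def detect(image, threshold=0.5):
    """Return (row, col) of blocks whose correlation with the key pattern
    exceeds the threshold, i.e. regions where the watermark is present."""
    pattern = _pattern(image.shape)
    hits = []
    h, w = image.shape
    for r in range(0, h - BLOCK + 1, BLOCK):
        for c in range(0, w - BLOCK + 1, BLOCK):
            block = image[r:r + BLOCK, c:c + BLOCK]
            pat = pattern[r:r + BLOCK, c:c + BLOCK]
            # Remove the local mean, then correlate with the expected pattern.
            score = float(np.mean((block - block.mean()) * pat)) / STRENGTH
            if score > threshold:
                hits.append((r, c))
    return hits

if __name__ == "__main__":
    cols = np.linspace(0.0, 255.0, 128)
    original = np.tile(cols, (128, 1))    # smooth grayscale gradient test image
    marked = embed(original)
    edited = marked.copy()
    edited[:64, :] = original[:64, :]     # simulate an edit that wipes the top half
    print("blocks flagged in watermarked image:", len(detect(marked)))  # typically all 16
    print("blocks flagged after the edit:", len(detect(edited)))        # typically 8, the untouched half

The sketch shows why localized reporting is possible: each block is scored independently, so a detector can highlight which parts of the content still carry the mark even after cropping or other transformations.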
Google plans to build SynthID watermarking into its generation tools: the video model Veo 3, the image model Imagen 4, and the music model Lyria 2. Google will also work with NVIDIA to apply the watermarking technology to the NVIDIA Cosmos model, and with service providers such as GetReal Security to verify the watermarks.
Google also notes that SynthID is not foolproof: the watermark can be bypassed, particularly in text or when images undergo extreme modifications.