Google Puts Watermark on AI Images

In an attempt to combat the rise of misinformation, Google’s AI arm DeepMind is trialing SynthID, a digital watermark for spotting images made by artificial intelligence. It works by embedding changes to individual pixels in an image, so the watermark is visible to computers but not to the human eye. Still, DeepMind says it is not “foolproof against extreme image manipulation”.
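The general idea can be illustrated with a toy example. The snippet below hides a bit pattern in the least-significant bits (LSBs) of pixel values, a classic way of making changes that software can read back but the eye cannot. Note that this is not SynthID's actual method, which is proprietary and designed to survive edits; naive LSB marks are fragile and would not survive resizing or recompression, so treat this purely as a sketch of the concept.

```python
# Toy illustration of pixel-level watermarking: hide a repeating bit
# pattern in the least-significant bit of each pixel value.
# NOT SynthID's method -- just a demonstration of machine-readable,
# human-invisible pixel changes.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels, mark=WATERMARK):
    """Overwrite the LSB of each pixel with the repeating watermark bits."""
    return [(p & ~1) | mark[i % len(mark)] for i, p in enumerate(pixels)]

def detect(pixels, mark=WATERMARK):
    """Return the fraction of pixels whose LSB matches the expected bit."""
    hits = sum((p & 1) == mark[i % len(mark)] for i, p in enumerate(pixels))
    return hits / len(pixels)

# A tiny "image" as a flat list of 8-bit grayscale values.
image = [120, 57, 200, 33, 91, 148, 16, 255, 62, 77, 180, 5]
marked = embed(image)

# Changing only the LSB shifts each pixel by at most 1 -- invisible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))

print(detect(marked))  # → 1.0: every pixel carries the expected bit
print(detect(image))   # roughly 0.5 on an unmarked image, i.e. chance level
```

A detector like SynthID's runs an analogous check at scale: it looks for a statistical signature across the whole image rather than an exact per-pixel match, which is what lets it tolerate cropping and color changes.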

Conventional watermarks, typically a logo or text overlaid on an image, aren’t enough for identifying AI-generated images because they can easily be edited out.

According to BBC News, Google’s system creates an effectively invisible watermark that will let people use its software to find out instantly whether a picture is real or made by a machine. For now, however, the system applies only to images generated by Google’s own image generator, Imagen.

Pushmeet Kohli, head of research at DeepMind, stated that the firm’s software can still identify the presence of the watermark even after the image is cropped or edited.

“You can change the color, you can change the contrast, you can even resize it… [and DeepMind] will still be able to see that it is AI-generated,” he said. He nevertheless reiterated that this is an “experimental launch” of the system, and that the company needs people to use it in order to improve it further.

In July of 2023, Google was one of seven leading AI companies (including Microsoft and Amazon) to sign an agreement to ensure the safe development and use of AI, including ensuring that people can spot computer-made images by implementing watermarks.

Claire Leibowicz from the campaign group Partnership on AI stated that there needs to be more coordination between businesses and that standardization is needed in the field. She explained that so many different methods are currently being pursued that the industry needs to monitor their impact and get better reporting on which ones are working and to what end.

Watermarking is not limited to still images: Meta has also published a research paper on its unreleased video generator, “Make-A-Video”, saying that watermarks will be added to generated videos to meet similar demands for transparency over AI-generated work.