Study Unveils the Problems of Watermarking AI-Generated Content

Image credit: Pixabay

The internet is increasingly saturated with text, images, and other media created entirely by artificial intelligence. This content, often referred to as AIGC (AI-generated content), can easily be mistaken for content created by humans.

The growing use of generative AI has raised many questions related to intellectual property and copyright. In response, many companies and developers, unhappy with the widespread commercial use of content generated by their models, have introduced watermarks to help manage AIGC.

According to Techxplore, watermarks are patterns or identifying marks placed on images, videos, or logos to clarify who created them and holds their copyright. While watermarks have been widely used for many years, their effectiveness in regulating the use of AIGC has not yet been established.

Researchers at Nanyang Technological University, Chongqing University, and Zhejiang University recently conducted a study exploring the effectiveness of watermarking as a means of preventing the unwanted and uncredited dissemination of AIGC.

As part of their study, the researchers outlined a computational strategy for erasing or forging the watermarks in images generated by AI models. An attacker following this strategy would first collect watermarked images from a target AI company, application, or content-generating service; next, they would use a publicly available denoising model to 'purify' the collected images; finally, they would train a generative adversarial network (GAN) on the purified data. Once trained, this GAN-based model could successfully remove or forge watermarks.
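To make that pipeline concrete, here is a minimal PyTorch sketch of how such a GAN could be set up. Everything in it, including the network architectures, loss weights, and tensor shapes, is an illustrative assumption rather than the researchers' actual implementation: the denoised ("purified") images act as the target distribution, and a generator is trained adversarially to map watermarked images into it.

```python
# Illustrative sketch only: architectures and hyperparameters are assumptions,
# not the implementation from the study described above.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a watermarked image to an approximately watermark-free image."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores how much an image resembles a purified (watermark-free) sample."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, g_opt, d_opt, watermarked, purified):
    """One adversarial update on a batch of (watermarked, purified) pairs."""
    bce = nn.BCEWithLogitsLoss()
    real = torch.ones(watermarked.size(0), 1)
    fake = torch.zeros(watermarked.size(0), 1)

    # Discriminator: purified images are "real", generator output is "fake".
    d_opt.zero_grad()
    d_loss = bce(disc(purified), real) + bce(disc(gen(watermarked).detach()), fake)
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator while staying close to the input image
    # (an L1 penalty preserves the content so only the watermark is pushed out).
    g_opt.zero_grad()
    cleaned = gen(watermarked)
    g_loss = bce(disc(cleaned), real) + 0.1 * nn.functional.l1_loss(cleaned, watermarked)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))
    # Random tensors stand in for images scraped from a target service and
    # their denoised ("purified") counterparts, scaled to [-1, 1].
    watermarked = torch.rand(4, 3, 64, 64) * 2 - 1
    purified = torch.rand(4, 3, 64, 64) * 2 - 1
    print(train_step(gen, disc, g_opt, d_opt, watermarked, purified))
```

A forging attack would plausibly reverse the roles in the same setup, training the generator to impose the learned watermark pattern on clean images rather than remove it, though the study's exact formulation may differ.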

Their work highlights the vulnerabilities, and consequent impracticality, of using watermarking to enforce copyright over AIGC. It may inspire companies and developers specializing in generative AI to devise more advanced watermarking approaches better suited to preventing the unauthorized dissemination of AIGC.