As part of the ongoing effort to combat the flood of false AI-generated information and deep-fake images, leading companies such as Google, Meta, and OpenAI have agreed to add watermarks to their AI-generated content, the White House reported.
The companies committed to the White House to implement safeguards such as watermarking AI-generated content to help make the technology safer, to thoroughly test systems before releasing them, to share information about how to reduce risks, and to invest in cybersecurity.
Ever since artificial intelligence apps like ChatGPT became available to the public and took the world by storm, lawmakers worldwide have been considering how to manage the dangers the emerging technology poses to national security and the economy.
As we reported in the past, EU lawmakers are working on draft rules under which systems like ChatGPT would have to disclose AI-generated content and help distinguish so-called deep-fake images from real ones. The US Congress is currently considering a bill that would require political ads to disclose whether AI was used to create any of their content.
According to Cybernews, the companies committed to developing a system to “watermark” all forms of AI-generated content, from text, images, and audio to video, in order to inform users when the technology has been used.
This watermark, embedded in the content itself by technical means, should make it easier for users to spot deep-fake images or audio depicting events that never occurred. It is not yet clear how the watermark will remain evident once the content is shared.
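The companies have not published how their watermarks work. Purely to illustrate the general idea of embedding an invisible marker in content, here is a minimal sketch of one classic technique, least-significant-bit (LSB) embedding in raw pixel data; the function names and the carrier data are hypothetical, and real AI watermarks are far more robust than this.

```python
def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least-significant bits of `pixels` (illustrative only)."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite lowest bit, visually invisible
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read back `length` bytes hidden by embed_watermark."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8:(b + 1) * 8]))
        for b in range(length)
    )

carrier = bytes(range(256)) * 4            # stand-in for image pixel data
marked = embed_watermark(carrier, b"AI")
print(extract_watermark(marked, 2))        # the hidden marker survives in the copy
```

A scheme this simple is destroyed by re-encoding or resizing, which is exactly the open question the article notes: making the watermark survive sharing.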
These companies have also pledged to focus on protecting users’ privacy as AI develops, and on ensuring that the technology is free of bias and is not used to discriminate against vulnerable groups.