OpenAI has announced that it will introduce new tools to combat disinformation, timed ahead of the major elections scheduled in several of the world's leading countries.
It is clear by now that AI's great assistance and technological advances come at the heavy price of flooding the internet with disinformation. With elections looming in countries such as the US, India, and the UK, OpenAI announced that it will not allow its technology (including the chatbot ChatGPT and the image generator DALL-E 3) to be used for political campaigns.
According to Techxplore, OpenAI said in a blog post that it wants to make sure its technology is not used in a way that could undermine the democratic process. “We’re still working to understand how effective our tools might be for personalized persuasion. Until we know more, we don’t allow people to build applications for political campaigning and lobbying.”
While fears over election disinformation are nothing new, the sheer availability of AI text and image generators has greatly increased the threat, especially since users cannot easily distinguish fake or manipulated content from authentic material.
OpenAI's planned tools would attach reliable attribution to text generated by ChatGPT and give users the ability to detect whether an image was created using DALL-E 3.
The company stated that it intends to implement the digital credentials of C2PA (the Coalition for Content Provenance and Authenticity), an approach that encodes details about the content’s provenance using cryptography. Members of the coalition include Microsoft, Sony, Adobe, Nikon and Canon, and it aims to improve methods for identifying and tracing digital content.
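The core idea behind such provenance credentials can be illustrated with a short sketch. Note that this is a simplified illustration only: the real C2PA standard uses CBOR-encoded manifests, X.509 certificates, and standardized assertion schemas, none of which appear here. The `SECRET_KEY`, function names, and the `"generator"` field are all hypothetical, chosen just to show how a signed manifest can bind origin metadata to a content hash so that later tampering is detectable.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for this sketch; a real C2PA signer would use
# an X.509 certificate and asymmetric signatures instead of an HMAC.
SECRET_KEY = b"demo-signing-key"

def attach_provenance(content: bytes, tool_name: str) -> dict:
    """Bundle a content hash and origin metadata, then sign the bundle."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": tool_name,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Recompute the signature and check the content hash still matches."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...image bytes..."
manifest = attach_provenance(image, "DALL-E 3")
print(verify_provenance(image, manifest))            # True: untampered content
print(verify_provenance(b"edited bytes", manifest))  # False: content was modified
```

The design point the sketch captures is that the provenance claim travels with the content and is cryptographically bound to it, so anyone holding the verification key can detect either a forged manifest or an altered image.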
OpenAI further specified that when asked procedural questions about US elections, ChatGPT will direct users to authoritative websites.
Furthermore, the company added that DALL-E 3 has “guardrails” that prevent users from generating images of real people, including political candidates.