ChatGPT – Could AI Lead to Swarm of Fake Information?

Many of us have already heard about ChatGPT, one of the first AI chatbots to be accessible to a wide public audience. ChatGPT has been praised for its innovative technology and detailed responses. However, more and more instances of misuse have been reported, and many people are concerned that this new technology will be used to spread misinformation.

Unlike previous AI assistants such as Siri or Alexa, ChatGPT doesn't search the internet to answer your questions. ChatGPT is a language model, meaning the bot has analyzed vast amounts of text and generates an answer based on the statistical patterns it learned from that data. From answering simple questions, to recommending books, and even writing full-length essays, it is unsurprising that this new AI-based tool is at the center of a heated debate on ethics and disingenuous information.
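The core idea of "generating text from learned patterns" can be illustrated with a toy sketch. The bigram model below is only a minimal teaching example and is not how ChatGPT actually works: real large language models use neural networks trained on billions of documents, but the principle of predicting the next word from statistics learned during training is the same.

```python
import random

def train_bigrams(text):
    """Learn which word tends to follow which from a tiny 'training corpus'."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    random.seed(seed)  # fixed seed so the sketch is reproducible
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break  # no known continuation for this word
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the model reads text and the model writes text"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output is fluent-looking word sequences drawn purely from patterns in the training data, with no lookup of facts and no notion of truth. This is why, at a much larger scale, such models can produce convincing text that is nonetheless wrong or misleading.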

According to a report by Georgetown University, USA, "there are also possible negative applications of generative language models, or 'language models' for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor's interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor."

"For society, these developments bring a new set of concerns: the prospect of highly scalable—and perhaps even highly persuasive—campaigns by those seeking to covertly influence public opinion," the report adds.

Prepared to dive into the world of futuristic technology? Attend INNOTECH 2023, the international convention and exhibition for cyber, HLS and innovation, at Expo Tel Aviv on March 29th-30th.

Interested in sponsoring / a display booth at the 2023 INNOTECH exhibition? Click here for details!