The Power of Persuasion: AI Uses Personal Information to Change Your Mind

A new study from EPFL demonstrates the persuasive power of LLMs, finding that participants debating GPT-4 (which had access to their personal information) were more likely to change their opinion compared to those debating humans.

Associate Professor Robert West, head of the Data Science Lab in the School of Computer and Communication Sciences, commented on the findings, pointing to the very real fear of unknowingly debating a non-human entity: “The danger is superhuman, like chatbots that create tailor-made, convincing arguments to push false or misleading narratives online.”

Earlier work revealed that while LLMs can be at least as persuasive as humans, giving a model personalized information about the person it is talking to (their age, gender, ethnicity, education level, employment status, and political affiliation) substantially improves its performance. West adds that this type of information is only a sliver of what an AI model could learn about a person from the internet.

According to Techxplore, in a previous study the researchers recruited 820 people for a controlled experiment in which each participant was randomly assigned a debate topic in one of four conditions: debating another human with or without personal information, or debating a chatbot with or without personal information. The resulting article (“On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial”) showed that participants who debated the chatbot with access to their personal information had 81.7% higher odds of increased agreement with their opponents compared to participants who debated humans. Even without personalization, GPT-4 still outperformed humans, though the effect was far smaller.
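To make the headline statistic concrete: “81.7% higher odds” means an odds ratio of about 1.82 between the two conditions. Below is a minimal sketch of how such a ratio is computed, using made-up counts purely for illustration (these are not the study’s actual data):

```python
# Hypothetical counts, for illustration only -- NOT the study's data.
# Each condition: how many participants increased agreement vs. did not.
agree_ai, total_ai = 60, 100        # debated GPT-4 with personal info
agree_human, total_human = 45, 100  # debated a human

# Odds = successes / failures (not a probability).
odds_ai = agree_ai / (total_ai - agree_ai)              # 60/40 = 1.50
odds_human = agree_human / (total_human - agree_human)  # 45/55 ~= 0.82

odds_ratio = odds_ai / odds_human
print(f"odds ratio: {odds_ratio:.2f}")
# An odds ratio of ~1.82 would correspond to "82% higher odds".
```

Note that higher *odds* are not the same as a proportionally higher *probability* of agreement; odds ratios overstate the difference in raw percentages when the outcome is common.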

It seems that, after being trained on countless social media posts, books, and psychology papers about persuasion, the models learned from online patterns which ways of framing an argument are most likely to persuade.

West concluded: “LLMs have shown signs that they can reason about themselves, so given that we are able to interrogate them, I can imagine that we could ask a model to explain its choices and why it is saying a precise thing to a particular person with particular properties. There’s a lot to be explored here because the models may be doing things that we don’t even know about yet in terms of persuasiveness, cobbled together from many different parts of the knowledge that they have.”