Can Chatbots Guess Your Private Data?

Researchers at ETH Zurich are concerned about the ability of chatbots to infer private details about users from seemingly innocent texts.

The researchers found that large language models (LLMs) can infer “a wide range of personal attributes,” such as a person’s sex, income, and location, from nothing more than text scraped from social media sites.

Robin Staab, a doctoral student at ETH Zurich who contributed to the report “Beyond Memorization: Violating Privacy via Inference with Large Language Models,” said that LLMs bypass the best efforts of chatbot developers to ensure user privacy and maintain ethics standards. As models train on massive amounts of unprotected online data, their ability to deduce personal details is troubling.

He explains: “By scraping the entirety of a user’s online posts and feeding them to a pre-trained LLM, malicious actors can infer private information never intended to be disclosed by the users.”
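To make the mechanism concrete, the following is a minimal sketch of the kind of pipeline Staab describes, assuming a generic chat-completion API; the model name, prompt wording, and example posts are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch: scraped posts are concatenated into a single prompt
# and a pre-trained LLM is asked to infer personal attributes from them.
# The posts, prompt wording, and model name are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Hypothetical posts collected from one user's public profile.
posts = [
    "there's this awful intersection on my commute, I always get stuck there",
    "finally saved up enough for that dress I've been wanting",
    "can't believe that show was on while I was still in high school",
]

prompt = (
    "Below are social media posts written by one person. "
    "Infer the author's likely location, sex, and age range, and explain "
    "which cues in the text support each guess.\n\n"
    + "\n".join(f"- {p}" for p in posts)
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable pre-trained LLM; the paper evaluated several
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is not any particular API: no fine-tuning or special tooling is required, only an off-the-shelf model and a few lines of glue code.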

According to Techxplore, such information could be used to target users with political campaigns or advertising from parties that know more about them than they would like, or, worse, to help criminals or stalkers learn the identities of potential victims.

An example provided by Techxplore shows a case where a Reddit user complained about their commute to work (from which the chatbot was able to infer their location), mentioned buying certain items (which revealed their sex), and commented on a TV show that aired when they were in high school (disclosing their age).

The researchers also found that chatbots can detect language characteristics that reveal a great deal about a person, with region-specific slang and phrasing enabling them to pinpoint a user’s location or identity. In one example, the chatbot narrowed a user’s location down to three possible regions where a phrase they used was popular.
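As a rough illustration of that narrowing step, a prompt along the following lines could ask a model to rank candidate regions for a region-specific expression; the phrase, model name, and prompt wording here are assumptions for the sketch, not taken from the paper.

```python
# Hypothetical dialect-based location narrowing: ask the model which
# regions a distinctive expression points to. The phrase is illustrative.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

phrase = "hook turn"  # region-specific traffic term, used here for illustration

geo_prompt = (
    f"A user's posts include the expression '{phrase}'. "
    "List up to three regions where this expression is commonly used, "
    "ordered from most to least likely, with a one-line justification each."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": geo_prompt}],
)
print(response.choices[0].message.content)
```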

Moreover, the researchers voiced great concern about the potential for malicious chatbots to lead seemingly innocent conversations that steer users toward revealing personal information.