ChatGPT Plugins Vulnerable to Threat Actors

Salt Labs, the research arm of API security company Salt Security, has revealed that the plugins that let ChatGPT interact with external programs and services contain vulnerabilities that could be exploited in a cyberattack.

The company’s research team uncovered three flaws: one within ChatGPT itself, one in PluginLab (a framework used to build ChatGPT plugins), and one in the OAuth flow used to authorize interactions between applications. While such plugins are extremely useful, the researchers explain, they involve sharing data with third parties, which cybercriminals can exploit.

“As more organizations leverage this type of technology, attackers too are pivoting their efforts, finding ways to exploit these tools and subsequently gain access to sensitive data,” said Yaniv Balmas, vice president of research at Salt Security, adding: “Our recent vulnerability discoveries within ChatGPT illustrate the importance of protecting the plugins within such technology to ensure that attackers cannot access critical business assets and execute account takeovers.”

According to Cybernews, the ChatGPT flaw arose in the flow where the AI model redirects users to a plugin’s website to obtain a security access code. Salt Labs researchers discovered that an attacker could abuse this step to send a victim a code approval for a malicious plugin, causing the attacker’s credentials to be installed on the victim’s account automatically.
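The standard defense against this class of code-injection attack is to bind each OAuth authorization code to the browser session that started the login flow, so a code planted by an attacker is rejected. Below is a minimal Python sketch of that pattern; the endpoint URL and session structure are hypothetical illustrations, not Salt Labs’ actual proof of concept:

```python
import hmac
import secrets

def new_login_url(session: dict) -> str:
    # Bind the OAuth flow to this session with an unguessable state value.
    session["oauth_state"] = secrets.token_urlsafe(32)
    # Hypothetical authorization endpoint, for illustration only.
    return ("https://auth.example-plugin.test/authorize"
            f"?state={session['oauth_state']}")

def handle_callback(session: dict, params: dict) -> str:
    # Reject any authorization code whose state does not match the one
    # issued to this session -- a code planted by an attacker fails here.
    expected = session.pop("oauth_state", None)
    received = params.get("state", "")
    if not expected or not hmac.compare_digest(expected, received):
        raise PermissionError("OAuth state mismatch: possible code injection")
    return params["code"]
```

With this check in place, a link crafted by an attacker carries the wrong (or no) state value and the callback refuses to complete the installation.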

The second vulnerability was found in PluginLab. Salt Labs researchers discovered that the site did not properly authenticate user accounts, allowing a potential attacker to substitute another user’s ID and obtain a code representing that victim, enabling account takeover on the plugin.
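The fix for this class of broken authentication is to derive the target identity from the authenticated session rather than trusting a caller-supplied parameter. A minimal sketch of that check follows; the function and parameter names are hypothetical, not PluginLab’s actual API:

```python
def issue_plugin_code(authenticated_user_id: str, requested_member_id: str) -> str:
    """Mint an access code only for the identity the session has proven.

    The flaw class described above comes from honoring a caller-supplied
    member ID; here any mismatch with the session identity is rejected.
    """
    if requested_member_id != authenticated_user_id:
        raise PermissionError("requested member ID does not match authenticated user")
    return f"code-for-{authenticated_user_id}"
```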

The third issue affected several plugins and concerned OAuth redirection: a threat actor could manipulate the redirect by sending a malicious link to an unsuspecting user.

None of the plugins highlighted by Salt Labs verify redirect URLs, so using them would have left a victim open to credential theft, paving the way for account takeover by an attacker.

Salt Labs reportedly disclosed the vulnerabilities to OpenAI, which has since fixed them.