Researchers warn developers about vulnerabilities in ChatGPT plugins

Updated June 26, 2024

Salt Labs researchers urge OpenAI to clarify its documentation and developers to be more aware of potential security vulnerabilities when working with ChatGPT.

ChatGPT plugins were introduced in March 2023 as an experimental way to connect OpenAI's ChatGPT large language model to third-party applications. Plugin developers could use them to call external APIs for data such as stock prices and sports scores, or to connect to other services to perform actions such as booking airline tickets or ordering food. ChatGPT plugins are currently being supplanted by custom GPTs, which were introduced in November 2023.

However, some of the early ChatGPT plugins remain in use during the transition to custom GPTs, report security researchers at Salt Labs, a division of Salt Security that specializes in API protection. In June 2023, before custom GPTs were introduced, Salt Labs researchers discovered a flaw in the OAuth authentication flow for ChatGPT plugins that could allow attackers to install a malicious plugin on a user's account and gain access to sensitive data. The researchers also found that attackers could manipulate OAuth redirects through third-party ChatGPT plugins to steal credentials and take over user accounts connected to ChatGPT, including accounts on GitHub.
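The article does not detail the specific flaws, but OAuth redirect manipulation as a class typically hinges on an authorization server validating the `redirect_uri` too loosely, so that tokens or codes are sent to an attacker-controlled host. A minimal, hypothetical sketch (the URIs and function names below are illustrative, not from the Salt Labs report) of how a prefix check fails where an exact allowlist match succeeds:

```python
# Hypothetical allowlist of redirect URIs a plugin registered with the
# authorization server (illustrative domain, not a real plugin endpoint).
ALLOWED_REDIRECTS = {"https://plugin.example.com/oauth/callback"}

def naive_is_allowed(redirect_uri: str) -> bool:
    # Flawed check: prefix matching accepts any URI that merely *starts*
    # with the registered origin, including attacker-owned domains.
    return redirect_uri.startswith("https://plugin.example.com")

def strict_is_allowed(redirect_uri: str) -> bool:
    # Safer check: the redirect URI must exactly match a registered value.
    return redirect_uri in ALLOWED_REDIRECTS

# An attacker-controlled domain crafted to pass the prefix check.
attacker_uri = "https://plugin.example.com.evil.net/steal"

print(naive_is_allowed(attacker_uri))   # True  -> code/token leaks to attacker
print(strict_is_allowed(attacker_uri))  # False -> request rejected
```

Exact string matching of registered redirect URIs is the mitigation recommended by the OAuth 2.0 security guidance, which is why loose validation in a plugin's authorization flow is enough to enable credential theft of the kind the researchers describe.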

Salt Labs also notified two third-party plugin providers, PluginLab.ai and Kesem AI, which patched their vulnerabilities. The researchers said they have seen no evidence that the vulnerabilities were exploited.

"It's theoretically possible, but very theoretical," Yaniv Balmas, principal investigator at Salt Labs, told TechTarget Editorial this week. "It's hard for us as security researchers to keep track of all the developments in generative AI, and that statement holds true for attackers as well, but will they get to that point? It's not a question of 'if' but 'when'."

The researchers also acknowledged that custom GPTs are better at alerting users to potential risks associated with connecting to third-party applications, and that OpenAI seems intent on abandoning ChatGPT plugins in favor of custom GPTs. But the actions of custom GPTs are "plugin-based," according to OpenAI documentation, and Salt Security found vulnerabilities in that framework as well.

According to Balmas, Salt Labs researchers plan to follow the normal responsible disclosure process for these vulnerabilities to give OpenAI time to fix the problems, and have so far declined to provide further details.

"We can't say anything about it other than that [custom GPTs] are better protected than plugins, but definitely not airtight," he said. "The impact is very similar to what we're showing in these vulnerabilities."

According to industry analysts, the vulnerabilities disclosed by Salt Labs could have serious consequences if poorly configured ChatGPT plugins gain access to critical applications, especially code repositories on GitHub.

"This leaves developers vulnerable to attack because services such as ChatGPT plugins can interact with your sensitive data. ... This unfettered access for adversaries gives them the opportunity to wreak serious havoc," - Tom Tiemann, analyst at Enterprise Strategy Group.

"This is noteworthy because it leaves developers vulnerable to attack because services like ChatGPT plugins can interact with your sensitive data," said Todd Tiemann, an analyst at TechTarget's Enterprise Strategy Group (ESG). "Depending on the plugin, this could also authorize access to your personal GitHub or Google Drive accounts. This unrestricted access for adversaries gives them the opportunity to wreak some serious havoc."

One analyst, however, said such vulnerabilities are to be expected with any new technology when developers try to release a minimum viable product.

"The OAuth redirect manipulation is real and needs to be fixed, but it's kind of a well-researched problem, and I understand that with every new technology there has to be a period of testing the waters," said Daniel Kennedy, an analyst at 451 Research, a unit of S&P Global. "There will be vulnerabilities and then they will be fixed - there will be talk at Black Hat about 'all these old tricks work on the new platform'."
