Critical vulnerabilities in ChatGPT plugin expose sensitive data

Updated June 09, 2024

Three vulnerabilities in ChatGPT's extension features open the possibility of unauthorized, zero-click access to user accounts and third-party services, including sensitive repositories on platforms such as GitHub.

ChatGPT plugins, along with the custom GPTs that developers publish, extend the AI model's ability to interact with external services by granting OpenAI's popular generative AI chatbot the access and permissions needed to perform tasks on third-party sites such as GitHub and Google Drive.

Salt Labs researchers have discovered three critical vulnerabilities in ChatGPT's plugin ecosystem. The first occurs during the installation of new plugins, when ChatGPT redirects users to the plugin's website to approve a code. By exploiting this step, attackers could trick users into approving a malicious code, resulting in the automatic installation of an unauthorized plugin and possible subsequent account compromise.
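
Salt Labs has not published proof-of-concept code, but the described flaw maps onto a familiar OAuth-style pattern: an approval code is accepted without being bound to the session that requested it. The sketch below is a minimal, hypothetical illustration in Python; the endpoint, function names, and session store are assumptions, not OpenAI's actual implementation.

```python
import secrets

# Hypothetical, simplified model of a plugin-approval flow; names, URLs,
# and the session store are illustrative assumptions.
SESSIONS: dict[str, str] = {}  # session_id -> expected one-time state

def start_install(session_id: str) -> str:
    """Begin installation: bind a one-time state value to this session
    and embed it in the redirect to the plugin's approval page."""
    state = secrets.token_urlsafe(32)
    SESSIONS[session_id] = state
    return f"https://plugin.example/approve?state={state}"

def finish_install(session_id: str, returned_state: str, approval_code: str) -> bool:
    """Complete installation only if the returned state matches the one
    issued to this session. Skipping this check is what lets an attacker
    trick a victim into approving the attacker's own code."""
    expected = SESSIONS.pop(session_id, None)
    if expected is None or not secrets.compare_digest(expected, returned_state):
        raise PermissionError("state mismatch: possible forged plugin approval")
    # Only now is it safe to exchange approval_code for plugin credentials.
    return True
```

Because each approval is tied to a per-session state value, an approval code planted by an attacker fails the comparison and is discarded.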

Second, PluginLab, a framework for developing ChatGPT plugins, lacks proper user authentication, allowing attackers to impersonate users and hijack their accounts, as demonstrated with "AskTheCode," a plugin that connects ChatGPT to GitHub.
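
The report does not reproduce PluginLab's code, but the bug class, an endpoint that trusts a client-supplied account identifier instead of verifying it against the authenticated session, is well known. A minimal sketch, with entirely hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Session:
    member_id: str

# Illustrative session store: token -> authenticated session
SESSIONS = {"tok-alice": Session(member_id="alice")}

def issue_auth_code_vulnerable(token: str, member_id: str) -> str:
    # BUG (the reported class of flaw): member_id comes straight from the
    # request, so any caller can name a victim's ID and receive a code
    # tied to the victim's linked GitHub account.
    return f"auth-code-for-{member_id}"

def issue_auth_code_fixed(token: str, member_id: str) -> str:
    # FIX: authenticate the caller, then confirm they actually own the
    # account they are requesting a code for.
    session = SESSIONS.get(token)
    if session is None or session.member_id != member_id:
        raise PermissionError("caller does not own this member account")
    return f"auth-code-for-{member_id}"
```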

Finally, Salt researchers found that several plugins are susceptible to OAuth redirect manipulation: because they do not validate the redirect target, attackers can craft links that substitute a malicious URL and steal users' credentials, further facilitating account takeover.
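
No exploit code was published for this finding either, but the standard defense against redirect manipulation is to validate the redirect target against an exact, pre-registered allowlist before sending any token to it. A short, hypothetical sketch:

```python
from urllib.parse import urlparse

# Assumed, illustrative allowlist; real plugins would register their
# exact callback URLs with the OAuth provider.
ALLOWED_REDIRECTS = {"https://plugin.example/oauth/callback"}

def validate_redirect(redirect_uri: str) -> str:
    """Reject any redirect target that is not an exact, pre-registered
    HTTPS callback, so a crafted link cannot divert the OAuth token
    to an attacker-controlled server."""
    parsed = urlparse(redirect_uri)
    if parsed.scheme != "https" or redirect_uri not in ALLOWED_REDIRECTS:
        raise ValueError(f"unregistered redirect_uri: {redirect_uri!r}")
    return redirect_uri
```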

The report notes that the problems have already been fixed and that there is no evidence the vulnerabilities were exploited in the wild; even so, users should update their apps as soon as possible.

GenAI security issues jeopardize a vast ecosystem

Yaniv Balmas, vice president of research at Salt Security, says the problems discovered by the research team could put hundreds of thousands of users and organizations at risk.

"Security leaders in any organization need to better understand the risks, so they should review what plugins and GPTs their company uses and what third-party accounts are at risk through those plugins and GPTs," he says. "As a starting point, we would suggest doing a security review of their code."

Balmas recommends that plugin and GPT developers gain a better understanding of the inner workings of the GenAI ecosystem, including its security measures and the ways they can be used and abused. That includes knowing what data is sent to GenAI platforms and what permissions are granted to the platform or to connected third-party plugins, such as access to Google Drive or GitHub.

Balmas notes that Salt's research team only tested a small percentage of this ecosystem, and says the findings point to a larger risk associated with other GenAI platforms, as well as many existing and future GenAI plugins.

Balmas also says OpenAI should put more emphasis on security in its developer documentation to help mitigate risks.

The security risks of GenAI plugins are likely to increase

Sarah Jones, cyber threat research analyst at Critical Start, agrees that Salt Labs' findings indicate a broader security risk associated with GenAI plugins.

"As GenAI becomes more integrated with workflows, vulnerabilities in plug-ins can give attackers access to sensitive data or functionality across platforms," she says.

This, she adds, underscores the need for robust security standards and regular audits of both GenAI platforms and their connected ecosystems as attackers begin hunting for flaws in these platforms.

Darren Guccione, CEO and co-founder of Keeper Security, says these vulnerabilities serve as a "stark reminder" of the security risks inherent in third-party applications and should prompt organizations to beef up their defenses.

"As organizations rush to use AI to gain competitive advantage and improve operational efficiency, the need to rapidly deploy these solutions should not take precedence over security assessments and employee training," he says.

He adds that the proliferation of AI-enabled applications has also created software supply chain security challenges, requiring organizations to adapt their security controls and data management policies.

He notes that employees are increasingly entering sensitive data into AI tools, including intellectual property, financial data, and business strategies, and that unauthorized access by attackers could wreak havoc on an organization.

"An account takeover attack that jeopardizes an employee's GitHub account or other sensitive accounts can have equally damaging consequences," he warns.
