OpenAI, a leading artificial intelligence (AI) research lab, today announced the launch of a bug bounty program to address the growing cybersecurity risks posed by powerful language models such as its own ChatGPT.
The program, run in partnership with the crowdsourced cybersecurity firm Bugcrowd, invites independent researchers to report vulnerabilities in OpenAI’s systems in exchange for financial rewards ranging from $200 to $20,000, depending on severity. OpenAI said the program is part of its “commitment to developing secure and advanced AI.”
Concerns have emerged in recent months about vulnerabilities in AI systems that can generate synthetic text, images and other media. Researchers found a 135% increase in AI-assisted social engineering attacks from January to February, coinciding with the adoption of ChatGPT, according to AI cybersecurity firm Darktrace.
While the OpenAI announcement was welcomed by some experts, others said a bug bounty program is unlikely to fully address the wide range of cybersecurity risks posed by increasingly sophisticated AI technologies.
The scope of the program is limited to vulnerabilities that could directly affect OpenAI’s systems and partners. It does not appear to address broader concerns about malicious uses of the technology, such as impersonation, synthetic media or automated hacking tools. OpenAI did not immediately respond to a request for comment.
A bug bounty program with limited scope
The bug bounty program comes amid a spate of security issues, including the emergence of GPT-4 jailbreaks that allow users to develop instructions for hacking computers, and researchers discovering workarounds that enable “non-technical” users to create malware and phishing emails.
It also comes after a security researcher known as Rez0 allegedly used an exploit to hack ChatGPT’s API and uncover over 80 secret plugins.
Given these controversies, the launch of a bug bounty platform offers OpenAI the opportunity to address vulnerabilities in its product ecosystem while positioning itself as an organization acting in good faith to confront the security threats introduced by generative AI.
Unfortunately, OpenAI’s bug bounty program is very limited in the scope of threats it addresses. For example, the program’s official page notes: “Issues related to the content of model prompts and responses are strictly out of scope and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service.”
Examples of security issues considered out of scope include jailbreaks and safety bypasses, getting the model to say bad things, getting the model to write malicious code, and getting the model to tell you how to do bad things.
In that sense, OpenAI’s bug bounty program may help the organization improve its own security posture, but it does little to address the security risks that generative AI and GPT-4 pose to society at large.