Microsoft-backed OpenAI has launched a bug bounty program, inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help the company identify and address vulnerabilities in its generative artificial intelligence systems.
“We are excited to build on our coordinated disclosure commitments by offering incentives for qualifying vulnerability information,” OpenAI said in its blog post on Tuesday.
Based on the severity and impact of the reported vulnerability, OpenAI will pay cash rewards ranging from $200 for low-severity findings up to $20,000 for exceptional discoveries.
The company has partnered with Bugcrowd, a bug bounty platform, to manage the submission and reward process.
The OpenAI bug bounty program covers API targets, ChatGPT, third-party corporate targets, OpenAI API keys, and the OpenAI research organization.
The API targets include the OpenAI API and the public cloud resources and infrastructure involved in serving it, such as cloud storage accounts (e.g., Azure data blobs) and cloud compute servers (e.g., Azure virtual machines).
For ChatGPT, the scope includes ChatGPT Plus, logins, subscriptions, OpenAI-created plugins (e.g., browsing, code interpreter), plugins that users create themselves, and all other functionality.
Also included in the scope of the program is confidential OpenAI corporate information that may be exposed through third parties such as Google Workspace, Asana, Trello, Jira, Monday.com, Notion, Confluence, Evernote, Intercom, Hubspot, Zendesk, Salesforce, Stripe, Airbase, Navan, Tableau, Mode, Charthop, and Looker, Bugcrowd said.
Issues related to the content of model prompts and responses are strictly out of scope and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service. Even model hallucinations are listed as out of scope by OpenAI.
“Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed,” OpenAI said.
Examples of issues that are out of scope include jailbreaks or safety bypasses, getting the model to say bad things, getting the model to tell you how to do bad things, and getting the model to write malicious code.
Model hallucinations, as described in the program, refer to situations where a user gets the model to pretend to do bad things, to pretend to give answers to secrets, or to pretend to be a computer and execute code.
Once a vulnerability is discovered, it must be reported through OpenAI’s Bugcrowd program, and the details must be kept confidential until OpenAI’s security team authorizes their release. OpenAI said it aims to provide that authorization within 90 days of receiving a report.
The announcement of the bug bounty program comes just weeks after ChatGPT suffered a security incident. Last month, the company revealed that a bug in an open source Redis client library had caused a ChatGPT outage and data leak in which some users could see other users’ personal information and chat queries.
Chat queries and personal information, including subscriber names, email addresses, payment addresses, and partial credit card details, belonging to approximately 1.2% of ChatGPT Plus subscribers were exposed, the company acknowledged.
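To illustrate how a client library bug can leak one user’s data to another, here is a minimal, hypothetical Python sketch of this class of bug, not the actual redis-py code: an asyncio request cancelled after its command is sent but before its reply is read leaves that reply unread on the shared connection, so the next request on the same connection picks it up. The names FakeConnection, fake_server, and get are invented for illustration.

```python
import asyncio


class FakeConnection:
    """Toy stand-in for a pooled connection: one shared request pipe and
    one shared response pipe, with no tracking of which caller owns
    which in-flight reply (hypothetical, for illustration only)."""

    def __init__(self) -> None:
        self.requests: asyncio.Queue = asyncio.Queue()
        self.responses: asyncio.Queue = asyncio.Queue()

    async def send(self, key: str) -> None:
        await self.requests.put(key)

    async def recv(self) -> str:
        return await self.responses.get()


async def fake_server(conn: FakeConnection) -> None:
    """Echo a 'cached value' back for each request, after some latency."""
    while True:
        key = await conn.requests.get()
        await asyncio.sleep(0.05)  # simulated server-side work
        await conn.responses.put(f"cached-data-for:{key}")


async def get(conn: FakeConnection, key: str) -> str:
    await conn.send(key)
    # If this task is cancelled *here*, the reply for `key` is never
    # read: the connection goes back to the pool in a corrupted state.
    return await conn.recv()


async def main() -> None:
    conn = FakeConnection()
    server = asyncio.create_task(fake_server(conn))

    # User A's request is cancelled after the command was sent but
    # before the response arrived.
    task_a = asyncio.create_task(get(conn, "user_a:session"))
    await asyncio.sleep(0.01)  # send() has run; recv() is still waiting
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass

    # User B reuses the same connection and reads user A's stale reply.
    print(await get(conn, "user_b:session"))


asyncio.run(main())
```

Running the sketch prints cached-data-for:user_a:session for user B’s request: the same kind of cross-user mix-up, in miniature, that the incident described.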
ChatGPT was launched by OpenAI in November 2022 and attracted over one million users within its first five days.
However, ChatGPT is increasingly facing competition. On Monday, Alibaba Cloud announced the launch of a new large language model, called Tongyi Qianwen, which it will roll out as a ChatGPT-style front end to all its business applications.
Tongyi Qianwen will support both English and Chinese inputs and is being rolled out in a beta test for customers in China.
Another Chinese internet services and AI giant, Baidu, announced its own Chinese-language ChatGPT alternative, Ernie Bot, last month. In its initial phase, 650 business partners will have access to the bot, and the company hopes to improve it based on their feedback.