OpenAI has launched the GPT-5.5 Bio Bug Bounty programme, inviting cybersecurity researchers, biosecurity experts and AI red teamers to test its latest model for biological safety weaknesses as the company moves to strengthen safeguards against emerging misuse risks.
Researchers Asked to Probe for Universal Jailbreaks
The central challenge of the programme is to find a universal jailbreak for GPT-5.5. In this context, a jailbreak is a prompt designed to bypass the model’s built-in safety filters and ethical guardrails. Participants must craft a single prompt that gets the model to successfully answer all five questions of a strict biosafety challenge.
The attack must be carried out from a clean chat session and without triggering automated moderation warnings or backend alerts. The testing environment for this specific bounty is restricted to GPT-5.5 running within Codex Desktop, and the stated purpose is to identify critical vulnerabilities and logic flaws before they can be exploited by malicious actors.
Rewards, Timeline and Access Controls
The programme offers a top reward of $25,000 to the first researcher who successfully answers all five biosafety questions using a single prompt. Smaller discretionary awards may be granted for partial results that still provide useful threat intelligence.
Applications opened on April 23, 2026, and will be accepted on a rolling basis until June 22, 2026. The active testing phase is scheduled to begin on April 28, 2026, and conclude on July 27, 2026. OpenAI is sending direct invitations to a vetted list of trusted bio red teamers while also reviewing new applications submitted through its official portal.
Strict Confidentiality Requirements Apply
Access to the Bio Bug Bounty programme is tightly restricted because of the sensitivity of biological threat intelligence. Applicants must provide their full name, organisational affiliation and relevant technical experience in either AI security or biology. Accepted researchers must have an active ChatGPT account and sign a strict non-disclosure agreement before joining the testing platform.
This legal framework bars public disclosure of testing data, including engineered prompts, model completions, security findings and direct communications with the OpenAI engineering team. The programme is presented as a bio-specific initiative operating alongside the company’s broader security and threat research work, while researchers focused on more traditional software vulnerabilities or other AI logic flaws are directed to existing safety and security bug bounty programmes.
About the author – Rehan Khan is a law student and legal journalist with a keen interest in cybercrime, digital fraud, and emerging technology laws. He writes on the intersection of law, cybersecurity, and online safety, focusing on developments that impact individuals and institutions in India.