OpenAI has backed Illinois Senate Bill 3444, a proposal that would shield frontier AI developers from liability in extreme harm cases if they did not act intentionally or recklessly and published safety and transparency reports.

OpenAI Supports Bill to Protect AI Firms From Major Lawsuits

The420 Correspondent

OpenAI has backed an Illinois state bill that would limit the liability of artificial intelligence developers in cases where AI systems are used to cause mass harm or major property damage, according to a Wired report. The proposed legislation, SB 3444, would shield AI labs from liability in incidents involving the death or serious injury of 100 or more people, or at least $1 billion in property damage (about ₹9,300 crore), provided the company has not acted intentionally or recklessly and has published safety and transparency reports.


How the Illinois Bill Defines “Critical Harms”

The bill describes “critical harms” as including the use of AI to create chemical, biological, radiological or nuclear weapons, as well as situations in which an AI system independently engages in conduct that would be criminal if carried out by a human. Under the proposal, companies behind such systems, including those developing tools like ChatGPT, would not be held liable if they meet the conditions set out in the legislation. It applies to “frontier models,” defined as systems trained using more than $100 million (roughly ₹930 crore) in computational resources.

What OpenAI Said in Support of the Proposal

In a statement quoted by Wired, OpenAI spokesperson Jamie Radice said the company supported approaches that focus on reducing the risk of serious harm from the most advanced AI systems while still allowing the technology to reach people and businesses in Illinois. Radice also said such proposals help avoid a patchwork of state-by-state rules and move toward clearer national standards.

In testimony supporting the measure, OpenAI’s Caitlin Niedermeyer argued for a broader federal approach to AI regulation. She said the company believed the central goal of frontier regulation should be the safe deployment of advanced models in a way that also preserves United States leadership in innovation. The report said this marks a shift from OpenAI’s earlier posture, when it had largely opposed measures that could increase liability for AI developers.

Criticism and the Wider Liability Debate

The proposal has drawn criticism from opponents who argue that AI companies should not be shielded from liability. Wired quoted Scott Wisor of the Secure AI Project as saying that polling in Illinois showed strong public opposition to exempting AI companies from liability.

The debate comes amid broader unresolved questions over who should bear responsibility when AI systems are linked to serious harm. While the Illinois bill deals with catastrophic outcomes, the report noted that some companies have already faced legal action in smaller cases involving harm arising from interactions with AI systems.
