A cybersecurity firm claims its autonomous AI agent breached McKinsey’s internal AI platform “Lilli” in two hours through a SQL injection flaw, potentially exposing millions of chat records and internal documents.

AI Agent Breaches McKinsey’s Internal AI Platform in Two Hours, Exposes Security Risks

The420.in Staff
4 Min Read

A cybersecurity startup has claimed that its autonomous AI agent successfully breached an internal artificial intelligence platform used by consulting giant McKinsey & Company, gaining access to sensitive data in less than two hours.

The platform, known as Lilli, is an internal AI system used by McKinsey employees for research, document analysis, and strategy work. Security researchers said the breach highlighted how traditional software vulnerabilities can have far greater consequences when integrated with AI systems.

AI agent exploited classic SQL injection flaw

According to the researchers, the AI agent discovered a SQL injection vulnerability, one of the oldest and most common web application security flaws.

The vulnerability occurred because certain JSON field names were inserted directly into SQL queries, allowing the AI system to manipulate database requests. Automated scanners reportedly missed the issue because the injection occurred in JSON keys rather than values.

Using this weakness, the AI agent gradually extracted data from the system through multiple queries until it gained full read-and-write access to the platform’s production database.

Centre For Police Technology Invites Experts For Technical Sessions On Emerging Domains Of Police Technology

Large volumes of internal data potentially accessible

Once access was obtained, the agent reportedly reached a large volume of internal data associated with the platform.

Researchers said the database included:

  • 46.5 million chat messages between employees and the AI assistant
  • 728,000 files stored in the system
  • 57,000 user accounts
  • Millions of knowledge-base documents used to generate AI responses

Because Lilli is used internally by consultants, the dataset potentially contained information related to corporate strategy projects, mergers and acquisitions research, and internal consulting frameworks.

AI prompts stored in database raised manipulation risk

One of the most concerning discoveries was that the system prompts controlling the AI’s behavior were stored in the same database.

Security researchers warned that attackers with database access could potentially modify those prompts, silently altering how the AI system behaves.

Such manipulation could theoretically lead to tampered strategic recommendations, hidden data exfiltration, or manipulated outputs, without requiring any code changes to the platform.

Vulnerability patched after disclosure

The security firm said it notified McKinsey about the vulnerability through responsible disclosure procedures. The consulting firm reportedly patched the issue within a day after receiving the report.

A McKinsey spokesperson stated that a forensic investigation found no evidence that client data or confidential information had been accessed by unauthorized third parties.

AI security risks under growing scrutiny

Cybersecurity experts say the incident illustrates how traditional software vulnerabilities can become far more dangerous when combined with AI systems and large internal datasets.

While SQL injection has been known for decades, the case demonstrates that attackers can exploit such weaknesses to manipulate or extract data from modern AI platforms if security practices are not rigorously implemented.

As organizations increasingly deploy AI tools across internal operations, experts warn that AI-driven systems could introduce new attack surfaces that require stronger security oversight and monitoring.

About the author – Ayesha Aayat is a law student and contributor covering cybercrime, online frauds, and digital safety concerns. Her writing aims to raise awareness about evolving cyber threats and legal responses.

Stay Connected