New Delhi: Artificial intelligence is proving to be a powerful tool not only for innovation but also for cybercrime. According to a recent report, an unidentified hacker allegedly used the AI chatbot Claude to breach the networks of several Mexican government agencies and steal nearly 150GB of sensitive data. The incident has sparked major concerns over the security of AI tools and the risks of their misuse.
How the Hacker Misled AI
According to the report, the hacker communicated with Claude in Spanish and convinced the chatbot that they were working under a “bug bounty program” to test vulnerabilities in government systems.
Believing this explanation, the AI reportedly provided guidance on identifying weaknesses in government websites, generating scripts, and automating parts of the data-extraction process.
Cybersecurity researchers monitoring hacker forums later noticed public conversations and technical clues pointing to a breach of Mexican government systems.
Claim of Data Leak Affecting Millions
The stolen data reportedly includes records of around 190 million taxpayers, voter-related information, identification documents of government employees, and civil registry data.
Reports suggest that the cyberattack began in December and continued for nearly a month.
Several Government Institutions Targeted
Researchers claim the hacker targeted multiple major government institutions in Mexico, including:
- Federal Tax Authority
- National Electoral Institute
- Government systems of the states of Jalisco, Michoacán, and Tamaulipas
- Mexico City Civil Registry
- Monterrey Water Supply Agency
However, several of these agencies have denied any major data breach, stating that their security systems remain secure.
ChatGPT Also Used
The report says that when Claude did not provide enough information, the hacker also turned to OpenAI’s ChatGPT.
The hacker allegedly asked questions about moving laterally through networks, identifying possible credentials, and estimating the risk of being detected.
OpenAI responded that the accounts involved had been identified and banned for violating its usage policies.
Anthropic Suspends Accounts
Anthropic, the company behind Claude, said it has closed the accounts involved after an investigation.
The company stated that lessons from such incidents are being used to improve the safety of its AI models. Its newer model, Claude Opus 4.6, includes additional safety features designed to prevent misuse.
Exploiting Weaknesses in AI Chatbots
Experts say that weaknesses in AI chatbots' safeguards are increasingly being exploited by cybercriminals. A large-scale leak of citizens' and government employees' identity information could enable further cybercrime, identity theft, and espionage.
AI Making Cyberattacks More Dangerous
According to reports, AI-driven cyberattacks increased by 89% in 2025. A 2026 CrowdStrike cybersecurity report claims that hackers can now infiltrate systems in an average of 29 minutes with AI assistance.
Currently, about 1 in every 6 data-theft cases involves AI tools, and AI has also made phishing emails and cyberattacks more sophisticated and harder to detect.
Expert Opinion
Cybersecurity expert and former IPS officer Professor Triveni Singh said that while AI technology is benefiting many sectors, its misuse is growing rapidly.
He explained that cybercriminals are now using AI to make hacking faster, more automated, and more effective. Tasks that previously took days can now be completed within minutes using AI assistance.
He warned that if governments and technology companies fail to strengthen AI security standards, future cyberattacks could occur on an even larger scale.
Experts believe this incident is a serious warning for the digital world: as AI technology evolves rapidly, cybercriminals are also discovering new ways to exploit it.
