Cybernews researchers found nearly 345,000 stolen credit card records exposed online after an AI-generated coding flaw left the illicit carding platform Jerry’s Store unsecured. The leak included active cards, CVVs, expiry dates and billing details.

AI Coding Flaw at Jerry’s Store Exposes 345,000 Stolen Card Records

The420 Correspondent

New Delhi | The rapidly growing dependence on Artificial Intelligence (AI)-based coding tools is now emerging as a serious cybersecurity concern. In a major international cyber investigation, researchers have revealed that a fraudulent online platform used for trading and testing stolen credit cards accidentally exposed sensitive data linked to nearly 345,000 payment cards on the open internet. Preliminary findings suggest the leak was caused by a security flaw in AI-generated code, ultimately exposing the operations of the cybercriminal network itself.

According to the Cybernews research team, the platform, known as “Jerry’s Store,” had allegedly been operating as an underground marketplace where stolen payment cards were tested and sold. Investigators found that an unsecured server linked to the platform exposed highly sensitive information, including cardholders’ names, card numbers, CVVs, expiry dates and billing addresses, with no password protection or authentication layer of any kind. The discovery was made during a cyber investigation conducted in April 2026, triggering concern among global cybersecurity experts.


Researchers found that the operators of the platform had used “Cursor,” an AI coding assistant, to build server infrastructure and internal monitoring systems. According to the investigation, the criminals reportedly asked the AI tool to create a statistics dashboard for managing card inventories and transactions. However, the AI-generated setup allegedly created a web directory structure without access controls or authentication. This oversight left the database openly accessible online, exposing hundreds of thousands of stolen payment card records to anyone who could locate the server.
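The class of flaw described above, a web directory served with no access controls, is typically visible from the outside: common web servers generate recognizable auto-index pages when directory browsing is left on. As a purely illustrative sketch (not the actual Jerry’s Store setup), an auditor could flag such a response by checking for those telltale markers; the marker strings below are assumptions drawn from common server defaults, not from the investigation:

```python
# Illustrative sketch only: heuristics for spotting an exposed directory
# listing in an HTTP response body. The marker strings are emitted by
# common servers (Apache / nginx autoindex pages, Python's http.server,
# IIS directory browsing) and are assumptions, not investigation details.

LISTING_MARKERS = (
    "<title>index of",           # Apache and nginx autoindex pages
    "<title>directory listing",  # Python's built-in http.server
    "[to parent directory]",     # IIS directory browsing
)

def looks_like_open_listing(html: str) -> bool:
    """Return True if the response body resembles an auto-generated
    directory index, i.e. files are browsable without authentication."""
    body = html.lower()
    return any(marker in body for marker in LISTING_MARKERS)

# A response resembling nginx's autoindex output would be flagged:
sample = "<html><head><title>Index of /data/</title></head></html>"
print(looks_like_open_listing(sample))  # True
```

A check like this is the kind of basic pre-deployment audit that, according to the report, never happened before the AI-generated dashboard went live.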

The leaked database reportedly contained nearly 200,000 cards already marked as “invalid,” while approximately 145,000 cards were still active and usable. Cybersecurity analysts estimate that valid stolen credit cards are commonly sold on dark web marketplaces for anywhere between $7 and $18 each. Based on those estimates, the exposed database could be worth millions of dollars in illegal underground markets. Experts warned that such stolen financial data is highly valuable to cybercriminals because it can be used for fraudulent online purchases, identity theft and unauthorized financial transactions.
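Taking the article’s own figures, roughly 145,000 still-active cards at the reported dark-web price of $7 to $18 per card, the back-of-envelope valuation works out as follows:

```python
# Back-of-envelope estimate using only the figures cited in the article.
active_cards = 145_000
price_low, price_high = 7, 18  # reported dark-web price per valid card, USD

low_estimate = active_cards * price_low
high_estimate = active_cards * price_high
print(f"${low_estimate:,} to ${high_estimate:,}")  # $1,015,000 to $2,610,000
```

That range, roughly one to two-and-a-half million dollars for the active cards alone, is the basis for the "worth millions" estimate above.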

Investigators also uncovered how cybercriminals verify whether stolen cards remain active before selling them. According to the report, the operators allegedly used legitimate e-commerce and online service platforms including Amazon, Grubhub, Temu, Lyft and Sam’s Club to run small test transactions. If a payment succeeded, the card was marked as “valid” and later sold at higher prices on dark web networks. Security experts noted that these low-value transactions often blend into billions of regular digital payments processed daily, making them difficult for banks and payment companies to detect immediately.
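Detection of this “card testing” pattern generally depends on velocity rules rather than the size of any single transaction. The sketch below is entirely illustrative, with made-up thresholds and no relation to any real bank’s fraud engine: it flags a card that accrues several tiny authorizations at distinct merchants within a short window, which is the signature of the testing behavior described above:

```python
from collections import defaultdict

# Illustrative velocity rule with assumed thresholds: flag a card that
# racks up several small authorizations at distinct merchants within a
# short window -- the classic "card testing" signature.
SMALL_AMOUNT = 2.00       # USD; assumed cutoff for a "test" charge
MAX_WINDOW_SECONDS = 600  # assumed 10-minute window
MIN_MERCHANTS = 3         # assumed distinct-merchant trigger

def flag_card_testing(auths):
    """auths: iterable of (card_id, merchant, amount_usd, unix_ts).
    Returns the set of card_ids matching the test-charge pattern."""
    by_card = defaultdict(list)
    for card, merchant, amount, ts in auths:
        if amount <= SMALL_AMOUNT:
            by_card[card].append((ts, merchant))
    flagged = set()
    for card, events in by_card.items():
        events.sort()
        for i, (start_ts, _) in enumerate(events):
            merchants = {m for t, m in events[i:]
                         if t - start_ts <= MAX_WINDOW_SECONDS}
            if len(merchants) >= MIN_MERCHANTS:
                flagged.add(card)
                break
    return flagged

auths = [
    ("card1", "merchantA", 1.00, 0),
    ("card1", "merchantB", 0.99, 120),
    ("card1", "merchantC", 1.50, 300),
    ("card2", "merchantA", 45.00, 50),
]
print(flag_card_testing(auths))  # {'card1'}
```

The difficulty the experts describe is visible even in this toy rule: each flagged charge looks like an ordinary small purchase on its own, so only correlation across merchants and time reveals the pattern.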

Renowned cybercrime expert and former IPS officer Prof. Triveni Singh said AI-powered automation is rapidly becoming a new weapon for cybercriminals. According to him, “Earlier, creating such cyber fraud infrastructure required experienced hackers and advanced technical expertise. Now, AI tools have significantly lowered the barrier for cybercrime. Even individuals with limited technical knowledge can build sophisticated fraud platforms using AI assistance. The biggest risk arises when AI-generated code is deployed directly onto live servers without proper security audits.”

Technology experts say AI coding assistants can significantly accelerate software development, but a lack of human supervision and security testing may create serious vulnerabilities. Several recent studies have also shown that AI-generated code frequently contains security weaknesses, incorrect permission settings and data exposure risks if not properly reviewed by experienced developers.

Cybersecurity specialists have advised consumers to regularly monitor bank accounts and credit card statements, enable SMS and email transaction alerts and immediately block cards if suspicious activity is detected. Experts have also urged companies and software developers to carry out strict security audits, penetration testing and manual verification before deploying AI-generated systems online. They believe such safeguards are essential to prevent future large-scale data leaks and financial cybercrime incidents driven by insecure AI-generated infrastructure.
