New Delhi: The rapid adoption of artificial intelligence (AI) in financial services is triggering an unexpected and growing risk—fraudsters are now using the same advanced technologies to outsmart the systems designed to stop them. A new global forecast has highlighted what experts are calling a “fraud paradox,” where AI is simultaneously strengthening and weakening the financial ecosystem.
According to the report, the scale of the problem is already massive. Global consumer losses from fraud crossed $12.5 billion in 2024, while nearly 60% of businesses reported an increase in fraud-related losses between 2024 and 2025. At the same time, AI-driven prevention systems averted an estimated $19 billion in losses in 2025, underscoring both the power and the limits of the technology.
At the heart of the concern is what the report terms “machine-to-machine mayhem.” As financial institutions deploy AI agents capable of making autonomous decisions—such as approving transactions or managing accounts—fraudsters are deploying similar AI systems to carry out attacks at scale. These automated fraud systems can execute thousands of transactions simultaneously, far beyond human capability.
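To make the scale problem concrete, the sketch below shows a simple velocity check of the kind fraud-prevention teams commonly use to flag machine-speed activity on an account. The window size, threshold, and field names are illustrative assumptions for this sketch, not details taken from the report.

```python
# Illustrative only: a minimal velocity check that flags bursts of
# transactions arriving faster than a human could plausibly initiate them.
# Thresholds and field names are assumptions, not figures from the report.
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(seconds=60)   # look-back window (assumed)
MAX_TXNS_PER_WINDOW = 20         # human-plausible ceiling (assumed)

class VelocityMonitor:
    def __init__(self):
        self._recent = {}  # account_id -> deque of recent timestamps

    def record(self, account_id: str, ts: datetime) -> bool:
        """Record a transaction; return True if the burst looks automated."""
        q = self._recent.setdefault(account_id, deque())
        q.append(ts)
        # Drop timestamps that have fallen outside the look-back window.
        while q and ts - q[0] > WINDOW:
            q.popleft()
        return len(q) > MAX_TXNS_PER_WINDOW

# Example: 100 transactions in roughly two seconds trips the flag.
monitor = VelocityMonitor()
start = datetime(2025, 1, 1, 12, 0, 0)
flags = [monitor.record("acct-42", start + timedelta(milliseconds=20 * i))
         for i in range(100)]
print(any(flags))  # True
```

A rule this simple is only a first line of defence, but it illustrates why attack volume itself becomes a detectable signal when fraud is run by software rather than people.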
Experts say the biggest challenge lies in accountability. When an AI agent initiates a fraudulent transaction, it becomes unclear who is responsible—the user, the institution, or the technology provider. This grey area is expected to trigger major regulatory debates in 2026 as AI adoption accelerates.
Deepfake hiring and identity infiltration risks rise
Another alarming trend flagged in the report is the rise of deepfake-enabled job fraud. Cybercriminals are increasingly using AI-generated identities, including fake video interviews and synthetic resumes, to infiltrate companies. Once hired, these individuals can gain access to sensitive financial systems and data, creating internal security risks.
This tactic has already been observed globally, with organised groups using fake digital identities to secure remote jobs, particularly in the tech and financial sectors. Experts warn that as remote work continues, verifying identity will become significantly more complex.
Clone websites and emotionally intelligent scam bots
The report also highlights how AI is making traditional scams more sophisticated. Fraudsters are now able to create highly convincing replicas of legitimate banking and financial websites. Even after these fake sites are taken down, new versions quickly emerge, making it difficult for enforcement teams to keep up.
In parallel, AI-powered scam bots are becoming more advanced. Unlike earlier scripted scams, these bots can now hold realistic conversations, build emotional connections, and manipulate victims over time. This has significantly increased the success rate of fraud schemes such as romance scams and emergency-based financial requests.
Smart homes emerge as new attack surface
With the growing use of connected devices, cybercriminals are also targeting smart home ecosystems. Devices such as virtual assistants, smart locks, and connected appliances can be exploited to gather personal data or monitor user behaviour. This information can then be used to plan targeted financial frauds.
Experts believe that as financial transactions become more integrated with everyday digital ecosystems, the attack surface for cybercriminals will continue to expand.
Financial institutions struggle with regulation and data challenges
While AI adoption is accelerating, financial institutions are facing significant operational hurdles. According to the report, 84% of decision-makers consider AI critical to their business strategy, and 89% say it will play a key role in lending processes. However, regulatory uncertainty remains a major concern, with 73% of institutions worried about compliance frameworks.
Data quality has emerged as another critical challenge. Nearly 65% of organisations say they lack AI-ready data, which directly impacts the accuracy and reliability of fraud detection systems. Poor data quality can lead to both false positives and missed fraud cases, weakening overall security.
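The trade-off described above can be illustrated with two standard detection metrics, precision and recall. The sketch below is a minimal example with hypothetical counts, chosen only to show how degraded data pushes both false alerts and missed fraud upward; none of the numbers come from the report.

```python
# Illustrative only: how false positives and missed fraud (false negatives)
# translate into precision and recall for a detection system.
# All counts below are hypothetical, not figures from the report.

def precision_recall(true_pos: int, false_pos: int, false_neg: int) -> tuple[float, float]:
    precision = true_pos / (true_pos + false_pos)   # share of alerts that are real fraud
    recall = true_pos / (true_pos + false_neg)      # share of real fraud that gets caught
    return precision, recall

# A system trained on clean, AI-ready data (assumed counts).
print(precision_recall(true_pos=900, false_pos=100, false_neg=100))   # (0.9, 0.9)

# The same system fed poor-quality data: noisier features mean more
# false alerts and more missed cases (assumed counts).
print(precision_recall(true_pos=700, false_pos=400, false_neg=300))   # (~0.64, 0.7)
```

Falling precision erodes customer trust through blocked legitimate transactions, while falling recall means more fraud slips through, which is why data readiness sits at the centre of the report's operational concerns.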
Additionally, compliance processes remain largely manual in many institutions, increasing operational burden. Reports suggest that over 60% of organisations still rely on manual systems for regulatory compliance, even as AI systems grow more complex.
Industry at a tipping point
Experts believe 2026 could mark a turning point where the financial sector will need to rethink its approach to AI governance. While AI remains a powerful tool in detecting and preventing fraud, its misuse by cybercriminals is evolving just as quickly.
The report underscores a clear message: technology alone cannot solve the fraud problem. Stronger regulations, better data infrastructure, and continuous monitoring will be essential to keep pace with increasingly intelligent threats.
As AI continues to reshape finance, the battle between innovation and exploitation is intensifying—making vigilance, adaptability, and accountability more critical than ever.