ChatGPT, Grok, and the Rise of Scams You Never Saw Coming

Swagta Nath

As platforms like ChatGPT (now powered by GPT-4o), Grok 3, and Midjourney continue to grow in capability and reach, cyber risk specialists are raising alarms about the increasing misuse of generative AI. From sophisticated fraud to political manipulation, these tools are enabling a new wave of cybercrime that particularly threatens vulnerable regions such as South Asia.

Madhu Srinivas, Chief Risk Officer at Signzy, a global RegTech firm focused on AI-based compliance and risk solutions, has warned of a disturbing trend: the use of generative AI not just for document forgery but for creating hyper-realistic visuals that are being deployed in scams, online abuse, and disinformation campaigns.

Everyday Users in the Crosshairs

“The most dangerous aspect is how ordinary users are being targeted,” Srinivas explained in an exclusive conversation. “We’re witnessing cases where personal images are scraped and transformed into deepfakes, emotionally manipulative content is circulated through social media, and phishing attempts are made more convincing using AI-generated visuals.”

According to him, victims often don’t realize they’ve been manipulated until significant harm has already been done. Non-consensual content, AI-enhanced fraud, and deepfake extortion cases are all rising at an alarming rate.

Top 5 Crimes Driven by AI Imagery

Srinivas outlined five major misuse scenarios involving AI-generated visuals:

  1. Deepfake CEO Scams: Criminals impersonate top executives through video or email to trick employees into making unauthorized payments or sharing confidential data.

  2. Sextortion Threats: Offenders digitally alter personal photos into explicit content to blackmail victims, particularly targeting women and minors.

  3. Political Manipulation: Fake visuals of protests or violence are created to mislead voters and incite tension during elections.

  4. Biometric Spoofing: AI-generated faces and irises are now being used to fool facial recognition systems, compromising financial and national security infrastructure.

  5. Marketplace Scams: Fraudsters use AI-generated headshots to create fake profiles on platforms like Airbnb and dating apps, often as part of money-laundering or identity theft schemes.

Biometric Integrity Under Threat

The ability of generative AI to convincingly replicate facial and iris patterns is especially concerning for the banking sector, border security, and public surveillance systems. Srinivas warned that the widespread use of synthetic faces challenges the very foundation of biometric authentication systems.
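
To make the risk concrete, here is a minimal sketch of embedding-only face matching using the open-source face_recognition library (the file names are hypothetical placeholders, not part of Srinivas’s remarks). The check compares face embeddings against a distance tolerance and nothing else, so a sufficiently photorealistic synthetic face that lands within tolerance authenticates exactly like a live user:

```python
# A minimal sketch of embedding-only face matching, assuming the
# open-source face_recognition library; file names are hypothetical.
import face_recognition

# Load the enrolled reference photo and the image presented at login.
enrolled = face_recognition.load_image_file("enrolled_user.jpg")
candidate = face_recognition.load_image_file("login_attempt.jpg")

# Encode each detected face as a 128-dimensional embedding.
enrolled_encoding = face_recognition.face_encodings(enrolled)[0]
candidate_encoding = face_recognition.face_encodings(candidate)[0]

# compare_faces measures only embedding distance against a tolerance.
# It never asks whether a live person is present, so an AI-generated
# face that falls within tolerance passes like a real one.
match = face_recognition.compare_faces(
    [enrolled_encoding], candidate_encoding, tolerance=0.6
)
print("Authenticated:", match[0])
```

This is why production systems layer liveness signals (blink detection, depth sensing, challenge-response prompts) on top of embedding similarity: distance alone cannot distinguish a synthetic face from a living one.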

South Asia’s Unique Vulnerability

Regions like South Asia are particularly susceptible, he noted, due to the rapid spread of information on platforms such as WhatsApp and Telegram, combined with political polarization and low media literacy.

“A single AI-generated photo of a fabricated political event can go viral within minutes, creating public confusion or unrest long before the truth emerges,” he said.

He also pointed to the growing use of deepfakes in cyber extortion schemes, especially against journalists, human rights activists, and women. Some of these campaigns, he claimed, may be driven by foreign actors looking to destabilize social harmony.

Are Platforms Doing Enough?

While AI developers such as OpenAI and xAI have introduced safeguards like watermarking and content moderation, Srinivas believes the protections are not keeping pace with the technology. “Anyone with basic access can generate fake human faces or IDs. The guardrails just aren’t strong enough yet,” he said.

What Needs to Change

To combat the emerging threats, Srinivas proposed a set of key solutions:

  • Mandatory watermarking and metadata tagging on all AI-generated media (see the tagging-and-verification sketch after this list).

  • Tiered access to sensitive prompts based on risk.

  • Open-source tools for public verification of images.

  • Transparent abuse reporting systems across platforms.

  • Cross-sector cooperation with regulators and law enforcement to create global standards for detecting and managing synthetic content.
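
As a concrete illustration of the first and third items above, the sketch below embeds a provenance tag in a PNG’s metadata using the Pillow library and reads it back for verification. It is a minimal stand-in for real provenance standards such as C2PA, not Srinivas’s proposal itself: the tag names and file paths are assumptions, and plain text chunks can be stripped or forged, which is why production schemes sign this data cryptographically.

```python
# A minimal sketch of metadata tagging and verification for AI-generated
# images, using Pillow. Tag names and file paths are hypothetical; real
# provenance standards (e.g. C2PA) cryptographically sign this data.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed provenance text chunks into a PNG's metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical tag name
    metadata.add_text("generator", generator)
    image.save(dst_path, pnginfo=metadata)

def read_provenance(path: str) -> dict:
    """Return any text chunks found; an absent tag means 'unverified',
    not 'authentic', since plain chunks are trivially strippable."""
    return dict(Image.open(path).text)

tag_as_ai_generated("render.png", "render_tagged.png", "example-model-v1")
print(read_provenance("render_tagged.png"))
```

The design point implicit in Srinivas’s recommendations is that verification should be cheap and public, while tamper resistance comes from cryptographic signatures rather than from the tag itself.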

Call to Action: Equip Society to Resist

He also emphasized the urgent need for various sectors to adapt:

  • Law enforcement should enhance digital forensic capabilities and update legal frameworks to tackle synthetic media crimes.

  • Journalists must treat visual verification as seriously as traditional fact-checking.

  • Educators should integrate AI literacy and visual critical thinking into school curricula to prepare students for a world where seeing is no longer believing.

Srinivas concluded with a call for collective resilience in the face of digital deception: “Truth is under threat, but it’s not defenseless. With the right tools, partnerships, and awareness, we can safeguard society from misinformation and restore trust in what we see.”
