AI-generated faces and voices aren’t just internet toys anymore. They’re being used to steal money, swing opinions, and muddy the record in ways that are hard to unwind. In 2024, a finance employee in Hong Kong joined a video call and “saw” their CFO and colleagues; the entire meeting was a deepfake, and $25 million vanished. In the US, regulators moved after an AI-cloned “Biden” voice told voters to stay home—federal enforcers deemed AI voices in robocalls illegal and even proposed fines tied to that incident.
Let’s break it down: what attackers are doing, what’s actually working to stop them, and where the law and platforms are landing—globally and in India.
What attackers do with deepfakes
- High-value fraud. Video-conference deepfakes (faces, voices, backgrounds) are now realistic enough to pass casual verification, enabling wire-fraud and invoice-fraud playbooks at scale.
- Political manipulation. The 2024–2025 election cycles worldwide saw AI-generated clips and audio circulate widely; India was not spared: viral deepfakes and AI-assisted content surfaced throughout the 2024 general election, prompting investigations and warnings.
- Financial-sector pressure. The US Treasury's FinCEN has warned banks that deepfakes are now turning up in account-opening, KYC, and payments fraud, and told them what to watch for.
Counter-technology: what works (and what doesn’t)
Detection is improving—but brittle.
AI detectors look for artifacts in pixels, acoustics, and motion. They help, but two hard truths persist: (1) detectors often fail to generalize to new generation models, and (2) invisible watermarks can be stripped by modern diffusion-based "regeneration" attacks. Translation: there is no silver bullet.
Provenance > guessing authenticity.
Instead of guessing "is this fake?", provenance attaches a record of who made a piece of media and how. The C2PA standard (backed by Adobe, Microsoft, OpenAI, camera makers, newsrooms, and others) encodes tamper-evident "Content Credentials" in media files. Adoption accelerated through 2024–2025, and Content Credentials are now showing up in creative tools and even camera hardware.
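A minimal sketch of what checking provenance can look like in practice, assuming the open-source c2patool CLI from the Content Authenticity Initiative is installed and on PATH; the exact invocation and output format can differ between versions, so treat this as illustrative rather than a definitive integration.

```python
# Sketch: inspect a file's C2PA Content Credentials by shelling out to the
# c2patool CLI. Assumes c2patool is installed; invocation may vary by version.
import json
import subprocess
import sys

def read_content_credentials(path: str) -> dict | None:
    """Return the C2PA manifest report for `path`, or None if none is found."""
    result = subprocess.run(
        ["c2patool", path],          # default mode prints the manifest report
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # No manifest present, or the tool could not parse the file.
        print(result.stderr.strip(), file=sys.stderr)
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        # Some versions print a human-readable report instead of JSON.
        print(result.stdout)
        return None

if __name__ == "__main__":
    report = read_content_credentials(sys.argv[1])
    if report:
        print(json.dumps(report, indent=2))
```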
Watermarking is becoming ubiquitous—within limits.
Google's SynthID now embeds invisible marks across images, audio, video, and even text, with a public SynthID Detector rolling out to help verify content from Google's own models. It is real progress for traceability, but remember the brittleness point above: combine watermarking with provenance and policy; don't use it instead of them.
Platforms are labeling more AI content.
- YouTube requires creators to disclose realistic synthetic/altered media, and shows labels—especially on sensitive topics like news and elections.
- Meta expanded "Made with AI" labels across images, audio, and video starting in May 2024, and now shows broader "AI Info" labels.
- TikTok began auto-labeling AI-generated content, not just relying on user disclosure. That’s a big shift.
Law & policy: where regulators are landing
- European Union. The EU AI Act imposes transparency duties: deepfake content must be clearly disclosed, and users must be told when they’re interacting with an AI system. These requirements are now locked in and phasing in.
- United States. The FCC has ruled that AI-generated voices in robocalls are illegal under existing robocall rules (the Telephone Consumer Protection Act) and has started enforcement, including proposed fines tied to election-related voice fakes.
- India. Government advisories to platforms (late 2023 and March 2024) pressed for labeling, due diligence, and stronger moderation under the IT Rules; CERT-In followed with a November 2024 advisory on deepfake threats and countermeasures. In January 2025, the Election Commission of India moved to require labels on AI-generated election content. Expect continued use of the IT Act and the DPDP Act to prosecute harms under existing offenses (impersonation, cheating, and so on).
A practical playbook for organizations
- Assume video and voice can lie. For any payment, data access, or HR action requested over video or voice, require an out-of-band check on a second channel (call back on a known number, or use a pre-shared code word). Make this a written policy. Cases like the Hong Kong heist show why.
- Tighten CFO/AP flows. Require dual approvals for unusual amounts, flag recent vendor-detail changes, and apply 24-hour holds to first-time beneficiaries (a rough rule-gate sketch follows this list).
- Upgrade KYC and auth. Use liveness checks and challenge-response prompts for voice. Treat “voice only” as weak identity.
- Adopt provenance on your own media. Turn on C2PA Content Credentials in creative tools; publish with credentials intact; keep originals hashed and archived for dispute resolution (see the hashing sketch after this list).
- Use detection, but validate. Run suspect clips through multiple detectors (image, audio, text). Track false-positive and false-negative rates, and don't let a "clean" detector result override human escalation; research shows detectors and watermarks can be evaded (a triage sketch follows this list).
- SOC runbooks. Pre-write response steps: preserve URLs/files, capture hashes/metadata, verify with the purported speaker, and coordinate legal/PR before takedowns or public statements.
- Comms discipline. Educate executives and spokespeople: never approve payments or share sensitive data based solely on a call or clip, even if it "looks and sounds" right.
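To make the CFO/AP controls concrete, here is a minimal, illustrative rule gate in Python. The thresholds, field names, and data shapes are hypothetical assumptions, not a real payments API; the point is that the checks above can be encoded as hard rules rather than left to judgment on a convincing call.

```python
# Illustrative payment-approval gate: dual approval for unusual amounts,
# holds on first-time beneficiaries, and flags on recent vendor-detail changes.
# Thresholds and field names are hypothetical placeholders.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

DUAL_APPROVAL_THRESHOLD = 50_000          # example value: tune to your risk appetite
FIRST_TIME_BENEFICIARY_HOLD = timedelta(hours=24)

@dataclass
class PaymentRequest:
    beneficiary_id: str
    amount: float
    requested_at: datetime
    beneficiary_first_seen: datetime | None      # None = never paid before
    vendor_details_changed_at: datetime | None
    approvals: list[str] = field(default_factory=list)

def gate(payment: PaymentRequest) -> list[str]:
    """Return a list of blocking reasons; an empty list means the payment may proceed."""
    reasons = []
    if payment.amount >= DUAL_APPROVAL_THRESHOLD and len(set(payment.approvals)) < 2:
        reasons.append("unusual amount: requires two distinct approvers")
    if payment.beneficiary_first_seen is None:
        reasons.append(f"first-time beneficiary: hold for {FIRST_TIME_BENEFICIARY_HOLD}")
    if (payment.vendor_details_changed_at
            and payment.requested_at - payment.vendor_details_changed_at < timedelta(days=7)):
        reasons.append("vendor details changed recently: verify on a known callback number")
    return reasons
```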
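For "keep originals hashed and archived" (and the hash/metadata step in the SOC runbook), a standard-library-only sketch; the archive layout and manifest fields are assumptions, not a prescribed scheme.

```python
# Sketch: hash published originals (or captured evidence) and record a small
# manifest for later dispute resolution. Standard library only; layout and
# manifest fields are illustrative assumptions.
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def archive_original(src: Path, archive_dir: Path, source_url: str | None = None) -> Path:
    """Copy `src` into the archive and write a sidecar manifest with its SHA-256."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    dest = archive_dir / f"{digest}{src.suffix}"        # content-addressed filename
    shutil.copy2(src, dest)
    manifest = {
        "original_name": src.name,
        "sha256": digest,
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "source_url": source_url,                       # for SOC evidence capture, if any
    }
    (archive_dir / (dest.name + ".json")).write_text(json.dumps(manifest, indent=2))
    return dest
```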
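And for the detection bullet, a sketch of fan-out triage in which a unanimous "clean" verdict never closes a case a human escalated; the detector callables and the threshold are hypothetical placeholders for whatever commercial or in-house tools you actually run.

```python
# Sketch: fan a suspect clip out to several detectors and decide on escalation.
# Detector callables and the threshold are hypothetical placeholders.
from typing import Callable

Detector = Callable[[bytes], float]     # returns estimated probability-of-fake in [0, 1]

def triage(clip: bytes, detectors: dict[str, Detector],
           human_reported: bool = False, suspicious: float = 0.5) -> dict:
    """Collect every detector's score and decide whether a human must review."""
    scores = {name: detect(clip) for name, detect in detectors.items()}
    any_flag = any(score >= suspicious for score in scores.values())
    return {
        "scores": scores,
        # A unanimous "clean" result never closes a case a person escalated.
        "escalate_to_human": any_flag or human_reported,
        "note": "detector output is advisory; track false-positive/negative rates per tool",
    }
```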
What this really means
The deepfake problem won’t be “solved.” It will be managed: provenance by default, platform labeling at scale, legal guardrails where the harm meets existing offenses, and tighter enterprise processes that assume eyes and ears can be fooled. The tech stack (C2PA + watermarking + detectors) is necessary—but it’s your human and process controls that stop the money moving and the rumor from becoming “fact.”