As artificial intelligence spreads deeper into daily life, from wearable health devices to automated security systems, experts warn that the same technologies designed to protect users and streamline services are rapidly reshaping the playbook of cybercrime. What is emerging, they say, is not just a rise in digital attacks, but a qualitative shift in how identity, trust and even the human body itself are exploited.
When the Body Becomes a Data Source
Smartwatches, fitness rings, AI-powered wearables and even sensor-laden mattresses now collect a continuous stream of intimate information: location data, heart rates, sleep cycles and stress patterns. Once marketed as tools of convenience and wellness, these devices are increasingly viewed by cybersecurity researchers as lucrative targets.
Konstantin Levinzon, co-founder and chief executive of Planet VPN, says the vulnerability often lies not in the hardware itself but in the ecosystems surrounding it. If attackers gain access to a device or its connected cloud services, data can be quietly siphoned off, sometimes for months, through insecure apps, poorly protected servers or Bluetooth exploits.
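The exposure Levinzon describes often comes down to something mundane: a companion app's cloud endpoint that serves account data without verifying a session token. The Python sketch below is a minimal self-audit along those lines, assuming a hypothetical wearable API URL; all it checks is whether the server actually rejects a request that carries no credentials.

```python
# Hypothetical self-audit: does the wearable's cloud API reject requests
# that carry no credentials? The URL below is illustrative only.
import urllib.error
import urllib.request

API_URL = "https://api.example-wearable.com/v1/users/me/health"  # placeholder

def rejects_unauthenticated(url: str) -> bool:
    """Return True if the endpoint refuses a request with no auth token."""
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=10):
            # A 200 response with no credentials means the data is exposed.
            return False
    except urllib.error.HTTPError as err:
        # 401/403 is the expected, healthy behaviour.
        return err.code in (401, 403)

if __name__ == "__main__":
    verdict = "rejects anonymous requests" if rejects_unauthenticated(API_URL) \
        else "SERVES DATA WITHOUT AUTH"
    print(f"{API_URL}: {verdict}")
```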
This form of intrusion, which some analysts describe as “digital body snatching,” marks a shift from traditional data theft. Unlike stolen passwords or credit card numbers, biometric and health data cannot be reset. Once exposed, it becomes a permanent liability, capable of being reused for surveillance, blackmail or sophisticated social engineering schemes.
Deepfakes and the Erosion of Trust
Alongside the harvesting of bodily data, advances in generative AI are accelerating another threat: hyper-realistic deepfakes. Artificially generated videos, images and audio—often indistinguishable from authentic recordings—are increasingly used to impersonate individuals, bypass identity checks and manipulate institutions.
The FBI has warned that criminals are already deploying fabricated kidnapping images and videos to extort victims. Levinzon argues that the more consequential risk lies ahead, as video generators such as OpenAI’s Sora demonstrate how easily convincing footage can be produced at scale.
Banks and financial institutions are expected to respond by tightening verification systems, particularly those reliant on video or facial recognition. That may mean additional steps for customers seeking to confirm their identity—friction that reflects a broader breakdown in digital trust. In some cases, insurers have begun offering coverage against reputational damage caused by deepfake attacks, an acknowledgment that the threat has moved from hypothetical to actuarial.
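One concrete form that added friction can take is an out-of-band one-time code, generated independently of any video or facial check. The sketch below implements time-based one-time passwords (RFC 6238) with only the Python standard library; the base32 secret in the example is illustrative, not tied to any real account.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code for a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # illustrative demo secret
```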
The Rise of Autonomous Attacks
Until recently, artificial intelligence functioned largely as an accelerant for cybercriminals, helping automate phishing campaigns, optimize ransomware deployment or scan networks for weaknesses. That balance is now shifting.
Researchers point to the emergence of “agentic” AI systems capable of acting with minimal human oversight. In one documented case cited by Anthropic, a hacking campaign reportedly carried out as much as 80 to 90 percent of its operations autonomously, using AI tools to probe systems, exploit vulnerabilities and adapt tactics in real time.
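From the defender's side, the telltale signature of such campaigns is machine-speed behaviour rather than any single exploit. The sketch below is a rough heuristic, assuming access logs have already been parsed into (timestamp, client IP, path) tuples and using illustrative thresholds, that flags clients touching an implausible number of distinct paths inside a short window.

```python
# Heuristic sketch: flag clients that probe many distinct paths at machine speed.
# Assumes records are already parsed into (timestamp_seconds, client_ip, path).
from collections import defaultdict

WINDOW_SECONDS = 60      # sliding window length (illustrative)
MAX_DISTINCT_PATHS = 50  # more unique paths than this per window looks automated

def flag_probers(records):
    """Yield client IPs whose distinct-path rate exceeds the threshold."""
    by_ip = defaultdict(list)
    for ts, ip, path in sorted(records):
        by_ip[ip].append((ts, path))
    for ip, hits in by_ip.items():
        start = 0
        for end in range(len(hits)):
            # Shrink the window so it spans at most WINDOW_SECONDS.
            while hits[end][0] - hits[start][0] > WINDOW_SECONDS:
                start += 1
            distinct = {path for _, path in hits[start:end + 1]}
            if len(distinct) > MAX_DISTINCT_PATHS:
                yield ip
                break

if __name__ == "__main__":
    fake = [(i, "203.0.113.7", f"/admin/{i}") for i in range(120)]
    fake += [(i * 5, "198.51.100.2", "/index.html") for i in range(20)]
    print(list(flag_probers(fake)))  # -> ['203.0.113.7']
```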
As homes, workplaces and critical infrastructure become increasingly managed by AI-driven systems, Levinzon warns that any undiscovered security flaw becomes a potential entry point. Zero-day vulnerabilities, unknown even to vendors, can be identified and exploited without a human attacker ever touching a keyboard. Many experts expect the first fully autonomous cyberattacks to materialize within the next year.
A Record Year for Breaches, a Warning for 2026
The broader backdrop to these predictions is a year of unprecedented digital exposure. According to Cyble’s Global Cybersecurity Report 2025, nearly 15,000 data breach incidents were reported worldwide, including what researchers described as the largest password leak in history—16 billion credentials tied to platforms such as Apple, Google, Facebook and Telegram.
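For individual users, one practical response to a credential dump of that scale is to check whether a password already circulates in known breach corpora. The sketch below queries the public Have I Been Pwned range API, which relies on k-anonymity: only the first five characters of the password's SHA-1 hash ever leave the machine.

```python
# Check a password against the Have I Been Pwned "Pwned Passwords" range API.
# Only the first 5 hex characters of the SHA-1 hash are sent (k-anonymity).
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times the password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-self-check"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(breach_count("password123"))  # a heavily breached example password
```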
Major corporations and government institutions proved equally vulnerable. The U.S. Congressional Budget Office was recently hacked, while companies including Qantas, Oracle, Volvo and SK Telecom faced breaches that exposed millions of customer records or disrupted operations. In South Korea, more than 120,000 cameras were compromised in incidents involving so-called “sexploitation” footage.
Experts at Planet VPN argue that these events are a prelude rather than an outlier. As AI continues to amplify both defensive and offensive capabilities, the coming year is expected to bring more targeted attacks on medical data, more convincing identity fraud and greater reliance on automated systems by criminals. The risks, they say, are growing not just in volume but in intimacy, touching bodies, identities and institutions in ways that are increasingly difficult to disentangle.