Critical AI Vulnerability Hits ServiceNow, Risking Enterprise Security

The420.in Staff

A major security flaw in the IT service management platform ServiceNow has been identified, raising alarms across the cybersecurity ecosystem. The issue, described by some experts as among the most severe AI-related vulnerabilities ever discovered in enterprise software, involves weaknesses in the platform's AI-driven features that could allow attackers to gain unauthorised access, impersonate users, and potentially compromise connected systems and data.

ServiceNow is widely used by Fortune 500 companies for critical functions including IT support, human resources workflows, security responses, and customer service operations. Because of its deep integration into core business processes, any exploit of its platform — especially its AI capabilities — could have wide-ranging impacts on organisational security and data integrity.


What the Vulnerability Was and How It Worked

The security flaw stemmed from the way ServiceNow’s AI functions — particularly the “Virtual Agent” chatbot and its agentic AI capabilities — were implemented. These AI agents were designed to help users automate tasks and interact with the ServiceNow platform using natural language. However, researchers from the SaaS security firm AppOmni found that a combination of weak authentication and improper access controls meant attackers could exploit the system.

In the vulnerable configuration, a universal credential string used for third-party authentication was the same across all instances. Coupled with a lax authentication check that essentially validated only a user’s email address (without requiring a password or multifactor authentication), this opened the door for attackers to impersonate legitimate users — including administrators — if they also knew the tenant’s ServiceNow URL.
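The class of weakness described above can be sketched in a few lines. This is an illustrative example only, assuming a simplified login flow; the names, the credential string, and both functions are hypothetical and do not reflect ServiceNow's actual code or API:

```python
# Hypothetical illustration of the anti-pattern described above: an
# authentication check that accepts any request presenting a known email
# address plus a credential shared across ALL instances, with no per-user
# secret. Nothing here is ServiceNow code; names are invented.

UNIVERSAL_CREDENTIAL = "shared-across-all-instances"  # same everywhere: the flaw


def weak_authenticate(email: str, credential: str, known_emails: set) -> bool:
    """Vulnerable: anyone who knows a valid email address and the shared
    credential is treated as that user -- including an administrator."""
    return credential == UNIVERSAL_CREDENTIAL and email in known_emails


def strong_authenticate(email: str, password_ok: bool, mfa_ok: bool,
                        known_emails: set) -> bool:
    """Safer: identity requires a per-user secret plus a second factor,
    so a leaked shared string alone proves nothing."""
    return email in known_emails and password_ok and mfa_ok
```

The contrast makes the attack path concrete: with the weak check, an attacker needs only a guessable email and the universal string; with the strong check, the shared credential on its own is worthless.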

Once inside, a threat actor could leverage the AI agent to perform powerful operations. In a demonstration, the researchers were able to use the compromised access to create a new account with administrator privileges within the platform. That level of access could allow attackers not only to control the ServiceNow instance itself but also to abuse its integrations with other enterprise systems such as Salesforce, Microsoft platforms, and security tools that depend on ServiceNow for workflow automation.

ServiceNow’s Response and Fixes

ServiceNow acknowledged the report of the vulnerability and moved to fix the issue in late October 2025. The vendor rotated the previously universal credential and updated the affected AI agent code to eliminate the insecure behavior. Security patches were applied to the majority of hosted instances, and updates were also shared with ServiceNow partners and customers managing their own on-premises deployments.

The affected components included versions of the Now Assist AI Agents software and the Virtual Agent API. Updated versions of these components are available — for the AI Agents, versions 5.1.18 and above (or 5.2.19 and above), and for the Virtual Agent API, versions 3.15.2 and above (or 4.0.4 and above). Customers are strongly advised to apply these updates as soon as possible to mitigate risk.
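Because each component has two patched release branches, a simple "greater than X" comparison is not enough. The following sketch shows one way an administrator might check an installed version against the branch minimums listed above; the helper itself is hypothetical and not a ServiceNow-provided tool:

```python
# Branch-aware check of an installed version against the minimum patched
# releases named in the advisory. The branch/patch pairs mirror the
# article; the function is an illustrative helper, not vendor tooling.

def parse(version: str) -> tuple:
    """Turn '5.1.18' into (5, 1, 18) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))


# component -> {(major, minor) branch: minimum patched version in that branch}
PATCHED = {
    "Now Assist AI Agents": {(5, 1): (5, 1, 18), (5, 2): (5, 2, 19)},
    "Virtual Agent API":    {(3, 15): (3, 15, 2), (4, 0): (4, 0, 4)},
}


def is_patched(component: str, version: str) -> bool:
    v = parse(version)
    branches = PATCHED[component]
    if v[:2] in branches:
        return v >= branches[v[:2]]
    # Releases newer than any listed branch are assumed to carry the fix.
    return v > max(branches.values())
```

For example, `is_patched("Now Assist AI Agents", "5.1.17")` returns `False`, flagging an instance that still needs the update, while `"5.2.19"` passes.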

ServiceNow has stated that it has not seen evidence of active exploitation of this particular flaw in the wild, but cybersecurity experts stress that just because exploitation has not been observed publicly does not mean threat actors have not probed or abused the vulnerability in less visible ways. Prompt patching and configuration reviews are urged to shore up defences.

Why This Matters for Enterprises

The ServiceNow vulnerability underscores a broader cybersecurity challenge: AI and automation features can significantly increase risk surfaces if not secured effectively. Traditional authentication methods and access controls were not originally designed to manage the complexity and autonomy of agent-based AI tools. When these tools are given too much operational authority — such as the ability to create accounts or modify records — the impact of a single exploited flaw can be dramatic.

Security practitioners warn that organisations must treat AI workflows with the same rigor as core infrastructure: implementing strict identity governance, limiting privileges, subjecting AI agents to threat modelling before deployment, and ensuring ongoing monitoring of anomalous AI-driven actions.

For companies that rely on ServiceNow as a central backbone of operations, this incident is a stark reminder that integrated platforms — especially those enhanced with AI — need layered protections. Beyond patching the software itself, organisations should assess connected systems, audit integrations, and ensure that AI agents cannot trigger highly privileged actions without proper oversight.

About the author – Rehan Khan is a law student and legal journalist with a keen interest in cybercrime, digital fraud, and emerging technology laws. He writes on the intersection of law, cybersecurity, and online safety, focusing on developments that impact individuals and institutions in India.
