As companies race to deploy autonomous AI agents to cut costs and speed decisions, new research suggests that the same tools promising efficiency may also introduce quiet but profound security risks especially when control is handed to software built by non-experts.
A Travel Agent That Went Rogue
The experiment was designed to look ordinary. Inside Microsoft’s Copilot Studio, a no-code platform that allows businesses to build AI agents without traditional software development, researchers created a virtual travel assistant. Its job was routine: manage bookings, update itineraries and handle customer requests tasks increasingly entrusted to automated systems across industries.
But according to new findings by Tenable, a global cybersecurity firm, that ordinary setup concealed a serious vulnerability. Using a well-known attack technique called prompt injection, Tenable researchers were able to manipulate the instructions guiding the AI agent, effectively overriding its safeguards. The agent, which had been programmed to verify customer identities before making changes or sharing information, was coerced into leaking full payment card details and altering a booking to charge €0 providing free travel services without authorization.
The demonstration, conducted in a controlled environment, underscored how an AI agent designed for convenience could be repurposed for fraud with little resistance once its internal guardrails were bypassed.
The Hidden Power of Permissions
At the heart of the issue, Tenable says, is access. The AI agent in the Copilot Studio test was granted broad “edit” permissions so it could perform legitimate tasks, such as changing travel dates or updating customer records. Those same permissions, however, proved sufficient to manipulate pricing and payment flows once the agent’s instructions were compromised.
For non-technical users building agents on no-code platforms, these permission levels are often invisible or poorly understood. Researchers warn that this opacity increases the likelihood of misconfiguration, allowing AI systems to operate with far more authority than their creators intend.
“AI agent builders like Copilot Studio democratise the creation of powerful tools,” said Keren Katz, senior group manager of AI security product and research at Tenable. “But they also democratise the ability to execute financial fraud. That power can easily turn into a real, tangible security risk.”
A Growing Enterprise Risk
The findings arrive as enterprises rapidly embrace AI automation. Platforms like Copilot Studio promise to remove technical barriers, enabling business teams not just engineers to deploy autonomous agents across customer service, finance and operations. But security experts say that ease of use can mask deep structural risks.
Tenable warned that organizations adopting no-code AI tools often underestimate their security implications. Excessive permissions granted for convenience can be exploited by attackers to access sensitive systems and data, potentially leading to breaches involving personal and financial information, regulatory exposure under privacy laws, direct revenue loss through fraudulent transactions, and long-term reputational damage.
The research adds to broader concerns about enterprise AI deployment, particularly as autonomous agents are entrusted with sensitive business functions that once required human oversight.
Calls for Stronger AI Governance
In response, Tenable has urged organizations to adopt strict AI governance frameworks before deploying autonomous agents. Among its recommendations: limit AI agents’ access to the bare minimum required for their roles, map all systems and data an agent can reach before deployment, and actively monitor AI behavior for anomalies or misuse.
The researchers emphasized that the problem is not unique to one platform but reflects a systemic challenge facing enterprise AI. As autonomous systems gain authority, the traditional boundaries between software error, misconfiguration and malicious exploitation blur.
