The ServiceNow AI breach: Why agentic access requires layered defense
Gergely Danyi
•
Jan 15, 2026
The recent ServiceNow vulnerability discovered by AppOmni's Aaron Costello, dubbed "the most severe AI-driven vulnerability uncovered to date," exposes a critical blind spot in how we secure agentic AI systems. While the exploit chain involved authentication bypasses, the real danger lay in what happened next: an attacker weaponized ServiceNow's "Now Assist" AI agent (specifically, a prebuilt agent with the ability to "create data anywhere in ServiceNow") to grant themselves persistent admin access.
This wasn't a case of clever prompt injection or AI hallucination. The AI agent functioned exactly as designed. The problem was authorization: the agent had been granted overly broad permissions with no guardrails to prevent abuse. As Costello notes, "AI agents should be very narrowly scoped in terms of what they can do."
But what does "narrowly scoped" mean in practice for autonomous agents that need to dynamically access resources?
Traditional applications have clear authorization boundaries: the frontend controls what actions users can take. But with agentic AI, the LLM decides which tools to call and with what inputs based on natural language requests. This creates two critical authorization surfaces that need protection:

1. Tool access: which agents and tools a given user can discover and invoke in the first place.
2. Data access: what data an invoked agent can read or write on that user's behalf.
ServiceNow's breach demonstrates what happens when we fail at both levels. The attacker discovered that a powerful "create data anywhere" agent existed and could be invoked—no filtering of available tools based on user context. Then, once invoked, that agent had blanket write access across the entire platform—no data-level restrictions.
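To make the first failure concrete, here is a minimal sketch of filtering the tool catalog an MCP server advertises based on the caller's role, so a powerful "create data anywhere" agent is never even discoverable by unauthorized users. All names and roles below are illustrative assumptions, not ServiceNow's or P0's actual API.

```python
# Hypothetical sketch: advertise MCP tools to a user only if their role
# meets each tool's minimum requirement. Roles, tool names, and the
# ranking scheme are invented for illustration.

ALL_TOOLS = {
    "read_incident": {"min_role": "analyst"},
    "update_record": {"min_role": "admin"},
    "create_any_record": {"min_role": "admin"},  # the dangerous, over-broad agent
}

ROLE_RANK = {"viewer": 0, "analyst": 1, "admin": 2}

def visible_tools(user_role: str) -> list[str]:
    """Return only the tools this role is allowed to see and invoke."""
    rank = ROLE_RANK.get(user_role, -1)  # unknown roles see nothing
    return [
        name for name, meta in ALL_TOOLS.items()
        if rank >= ROLE_RANK[meta["min_role"]]
    ]

print(visible_tools("analyst"))  # ['read_incident']
```

With this kind of filtering in place, the attack's first step (discovering that a "create data anywhere" agent exists and can be invoked) fails for any caller who does not already hold the required role.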
P0's Authz Control Plane addresses both access surfaces with enforcement at the Model Context Protocol (MCP) layer. P0's authorization decisions are not just binary: besides allow and deny, a third outcome routes the request to a human-in-the-loop approver. An inventory management system illustrates how this works in practice.
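A minimal sketch of what a three-outcome decision could look like for a hypothetical inventory management tool, assuming invented tool names, roles, and thresholds (this is not P0's actual rule language):

```python
# Hypothetical three-outcome authorization for an MCP tool call:
# ALLOW, DENY, or REQUIRE_APPROVAL (a human approver must sign off).
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    REQUIRE_APPROVAL = "require_approval"

def authorize(user_role: str, tool: str, quantity: int = 0) -> Decision:
    """Decide whether a user may invoke an inventory tool via MCP."""
    if tool == "check_stock":
        return Decision.ALLOW                 # low-risk read: always allowed
    if tool == "adjust_inventory":
        if user_role != "inventory_manager":
            return Decision.DENY              # wrong role: hard stop
        if quantity > 100:
            return Decision.REQUIRE_APPROVAL  # large write: human in the loop
        return Decision.ALLOW                 # small, in-role write
    return Decision.DENY                      # unknown tools denied by default

print(authorize("inventory_manager", "adjust_inventory", quantity=500).value)
# require_approval
```

The design choice worth noting is the default-deny fallthrough and the approval tier: routine reads proceed autonomously, but an agent attempting a high-impact write pauses until a named human approver signs off, which is exactly the guardrail the ServiceNow agent lacked.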
Costello's recommendation that "organizations need to ensure that AI agents are not given the ability to perform powerful actions" requires practical mechanisms for enforcement. This means:

1. Exposing to each user only the agents and tools their role justifies, rather than a platform-wide catalog.
2. Constraining what data an invoked agent can read or write, instead of granting blanket access.
3. Routing high-risk actions to a human approver before they execute.
The ServiceNow breach is a wake-up call. As we deploy more autonomous agents with access to critical business systems, we need authorization architectures designed specifically for the agentic paradigm, not retrofitted from traditional security models. The stakes are too high to treat AI agents as just another API client.
Get a demo of P0 Security, the next-gen PAM platform built for every identity.