
The ServiceNow AI breach: Why agentic access requires layered defense

Gergely Danyi

Jan 15, 2026


The recent ServiceNow vulnerability discovered by AppOmni's Aaron Costello, dubbed "the most severe AI-driven vulnerability uncovered to date," exposes a critical blind spot in how we secure agentic AI systems. While the exploit chain involved authentication bypasses, the real danger lay in what happened next: an attacker weaponized ServiceNow's "Now Assist" AI agent - specifically, a prebuilt agent with the ability to "create data anywhere in ServiceNow" - to grant themselves persistent admin access.                              

This wasn't a case of clever prompt injection or AI hallucination. The AI agent functioned exactly as designed. The problem was authorization: the agent had been granted overly broad permissions with no guardrails to prevent abuse. As Costello notes, "AI agents should be very narrowly scoped in terms of what they can do."    

But what does "narrowly scoped" mean in practice for autonomous agents that need to dynamically access resources?    

The authorization challenge for agentic systems

Traditional applications have clear authorization boundaries: application code, not the user, determines which actions can be invoked. But with agentic AI, the LLM decides which tools to call, and with what inputs, based on natural-language requests. This creates two critical authorization surfaces that need protection (sketched in code after the list):

  1. Tool-level access: Which capabilities should the agent even know about?
  2. Data-level access: What underlying resources can those tools touch?
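
To make these two surfaces concrete, here is a minimal TypeScript sketch of the checks an enforcement layer could expose. The Decision type and AgentAuthz interface are illustrative names for this post, not any vendor's actual API:

```typescript
// Illustrative shapes only; names and signatures are hypothetical.
type Decision = "allow" | "deny" | "requires_approval";

interface Tool {
  name: string;
  description: string;
}

interface AgentAuthz {
  // Surface 1 (tool-level): which tools should this user's agent even see?
  filterTools(userId: string, tools: Tool[]): Tool[];

  // Surface 2 (data-level): may this specific call touch these resources?
  evaluate(userId: string, tool: string, resources: string[]): Promise<Decision>;
}
```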

ServiceNow's breach demonstrates what happens when we fail at both levels. The attacker discovered that a powerful "create data anywhere" agent existed and could be invoked—no filtering of available tools based on user context. Then, once invoked, that agent had blanket write access across the entire platform—no data-level restrictions.    

Defense in depth through layered authorization controls    

P0's Authz Control Plane addresses both access surfaces with enforcement at the Model Context Protocol (MCP) layer. P0's authorization decisions are not binary, either: a third outcome routes the decision to a human-in-the-loop approver. Using an inventory management system as an example, here is how this works in practice (a code sketch follows the list):

  1. Tool-level enforcement: The agent can use basic inventory query tools without approval, but specialized tools like demand forecasting require human authorization. For instance, if the agent attempts to use the forecast_can_fulfill tool, P0's /evaluate endpoint blocks the call and indicates that access can be requested. The agent then submits a request on the user's behalf, routing it to the appropriate approver.
  2. Data-level enforcement: Even after gaining tool-level access, the MCP server consults P0 again before executing the underlying SQL query. An example rule: inventory tables are automatically allowed, but the demand forecast table requires human approval. This prevents a broadly scoped tool from becoming a backdoor to sensitive data.
  3. Human-in-the-loop enforcement: The approver receives detailed context through both the P0 UI and Slack notifications, showing exactly which tables and queries would be touched, so they can make an informed decision about the access grant.
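
The flow above can be sketched as a guarded MCP tool handler. Everything here is an assumption for illustration: the /evaluate URL, the request and response shapes, and the forecastCanFulfill and runForecastQuery helpers are hypothetical stand-ins, not P0's documented API:

```typescript
// Hypothetical response shape for a P0-style /evaluate call.
type EvalResult =
  | { decision: "allow" }
  | { decision: "deny" }
  | { decision: "requires_approval" };

// Ask the authorization service whether this user may run this tool
// against these resources. URL and payload are illustrative.
async function evaluate(
  user: string,
  tool: string,
  resources: string[],
): Promise<EvalResult> {
  const res = await fetch("https://p0.example.com/evaluate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ user, tool, resources }),
  });
  return (await res.json()) as EvalResult;
}

// Hypothetical downstream query; stands in for the real SQL execution.
async function runForecastQuery(sku: string): Promise<unknown> {
  return { sku, canFulfill: true };
}

async function forecastCanFulfill(user: string, sku: string) {
  // Check 1 (tool-level): may this user invoke the tool at all?
  const toolCheck = await evaluate(user, "forecast_can_fulfill", []);
  if (toolCheck.decision === "deny") throw new Error("tool access denied");
  if (toolCheck.decision === "requires_approval") {
    // In the flow described above, the agent files an access request on
    // the user's behalf; here we just surface the pending state.
    return { status: "pending_approval", scope: "tool" };
  }

  // Check 2 (data-level): may this call touch the demand forecast table?
  const dataCheck = await evaluate(user, "forecast_can_fulfill", [
    "demand_forecast",
  ]);
  if (dataCheck.decision !== "allow") {
    return { status: "pending_approval", scope: "table:demand_forecast" };
  }

  return runForecastQuery(sku);
}
```

Placing both checks in the MCP server, rather than in the agent's prompt, means a confused or compromised model cannot simply skip them.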

Implementing just-enough-privilege and Just-in-Time access    

Costello's recommendation that "organizations need to ensure that AI agents are not given the ability to perform powerful actions" requires practical mechanisms for enforcement. This means:    

  • Dynamic tool filtering based on user roles and context, rather than shipping every tool to every agent (see the filtering sketch after this list)
  • Request-time authorization checks at both the tool invocation and data access layers
  • Just-in-Time access with human approval workflows for sensitive operations
  • Detailed audit trails showing what the agent accessed, on whose behalf (the end user), and why
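
As a sketch of the first point, tool filtering can be as simple as ranking roles and hiding any tool above the caller's rank. The catalog, role names, and toolsFor helper below are hypothetical, chosen to echo the inventory example above:

```typescript
// Hypothetical tool catalog with the minimum role each tool requires.
interface Tool {
  name: string;
  minRole: "analyst" | "planner" | "admin";
}

const CATALOG: Tool[] = [
  { name: "query_inventory", minRole: "analyst" },
  { name: "forecast_can_fulfill", minRole: "planner" },
  { name: "create_record", minRole: "admin" },
];

const RANK = { analyst: 0, planner: 1, admin: 2 } as const;

// Only expose tools at or below the caller's role. The agent never
// learns that higher-privilege tools exist, shrinking the discoverable
// attack surface before any call is even attempted.
function toolsFor(role: keyof typeof RANK): Tool[] {
  return CATALOG.filter((t) => RANK[role] >= RANK[t.minRole]);
}

console.log(toolsFor("analyst").map((t) => t.name)); // ["query_inventory"]
```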

The ServiceNow breach is a wake-up call. As we deploy more autonomous agents with access to critical business systems, we need authorization architectures designed specifically for the agentic paradigm, not retrofitted from traditional security models. The stakes are too high to treat AI agents as just another API client.

Struggling to control production access in hybrid or multi-cloud environments?

Get a demo of P0 Security, the next-gen PAM platform built for every identity.