
ServiceNow AI Agents Exposed: Attackers Trick Them Into Leaking Data

Security researchers from AppOmni have discovered a serious vulnerability in ServiceNow’s Now Assist AI agent system. By exploiting how agents can discover and talk to each other, attackers can use “second-order” prompts to make one agent secretly coax another into performing dangerous actions — like copying or leaking sensitive data, changing records, or escalating permissions.

The issue isn’t a bug in the AI itself, but rather how some default settings are configured. For example, Now Assist agents are often grouped into the same “team” by default and allowed to find each other, which gives a malicious prompt room to redirect a harmless task into something harmful using an agent that has more privileges.

Because of these aggressive default settings, an attacker can hide a malicious prompt inside otherwise normal content. A relatively harmless agent may pick up that hidden prompt, then “recruit” a more powerful agent to do whatever the attacker wants — all under the radar. This could happen even if the system’s built-in AI prompt protections are turned on.

Another key risk: when an agent carries out a task, it runs with the privileges of the user who initiated the interaction, not of the attacker who planted the prompt. A hidden instruction therefore inherits whatever permissions the legitimate user holds, and any resulting actions are attributed to that user rather than to the prompt’s actual author.

AppOmni recommends several defenses to reduce risk:

  • Use supervised execution mode for agents with high-level access
  • Turn off the “autonomous override” setting that lets agents execute privileged tasks without human approval
  • Separate (segment) agent responsibilities across different teams
  • Watch agent behavior closely for any signs of unusual or suspicious actions
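The checklist above can be expressed as a simple self-audit. This is a hedged sketch: the setting names are illustrative placeholders for whatever your deployment actually exposes, not confirmed ServiceNow property keys.

```python
# Minimal audit sketch: compare a deployment's settings against the
# hardening baseline recommended above. Key names are hypothetical.

RECOMMENDED = {
    "supervised_mode_for_privileged_agents": True,   # human-in-the-loop execution
    "autonomous_override_enabled": False,            # the override should be off
    "agents_segmented_by_team": True,                # separate duties across teams
    "agent_activity_monitoring": True,               # watch for unusual actions
}

def audit(config: dict) -> list[str]:
    """Return the settings that deviate from the recommended baseline."""
    return [key for key, want in RECOMMENDED.items() if config.get(key) != want]

# Example: a deployment that left the autonomous override switched on.
findings = audit({
    "supervised_mode_for_privileged_agents": True,
    "autonomous_override_enabled": True,
    "agents_segmented_by_team": True,
    "agent_activity_monitoring": True,
})
```

A real check would read these values from the platform’s configuration tables rather than a hardcoded dictionary, but the comparison logic is the same.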

ServiceNow has responded, confirming that the behavior is intended under certain configurations. They’ve updated their documentation to make these risks more visible. But as companies integrate AI agents into key workflows, this research highlights how critical it is to double-check default configs and protect agent-to-agent communication.
