Home News ServiceNow AI Agents Exposed: Attackers Trick Them Into Leaking Data
News

ServiceNow AI Agents Exposed: Attackers Trick Them Into Leaking Data

Security researchers from AppOmni have discovered a serious vulnerability in ServiceNow’s Now Assist AI agent system. By exploiting how agents can discover and talk to each other, attackers can use “second-order” prompts to make one agent secretly coax another into performing dangerous actions — like copying or leaking sensitive data, changing records, or escalating permissions.

The issue isn’t a bug in the AI itself, but rather how some default settings are configured. For example, Now Assist agents are often grouped into the same “team” by default and allowed to find each other, which gives a malicious prompt room to redirect a harmless task into something harmful using an agent that has more privileges.

Because of these aggressive default settings, an attacker can hide a malicious prompt inside otherwise normal content. A relatively harmless agent may pick up that hidden prompt, then “recruit” a more powerful agent to do whatever the attacker wants — all under the radar. This could happen even if the system’s built-in AI prompt protections are turned on.

Another key risk: when an agent fulfills a task, it runs with the privileges of the user who initiated the interaction, not the attacker. That means if a user gave a command, the agent operates with their permissions, not the ones associated with the malicious prompt’s creator.

AppOmni recommends several defenses to reduce risk:

  • Use supervised execution mode for agents with high-level access
  • Turn off the “autonomous override” setting that lets agents carry out privileged tasks autonomously
  • Separate (segment) agent responsibilities across different teams
  • Watch agent behavior closely for any signs of unusual or suspicious actions

ServiceNow has responded, confirming that the behavior is intended under certain configurations. They’ve updated their documentation to make these risks more visible. But as companies integrate AI agents into key workflows, this research highlights how critical it is to double-check default configs and protect agent-to-agent communication.

Leave a comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Related Articles

News

WormGPT-4 and KawaiiGPT Fuel Rise of AI-Driven Cybercrime

Cybercriminals are increasingly turning to “dark” large language models (LLMs) such as...

News

Brazilian Crypto Holders Targeted via WhatsApp by Malware Worm

Cybercriminals are targeting crypto holders in Brazil using a malicious campaign on...

News

Radzarat Trojan Masquerades as PDF Converter on Android

A new Android Trojan called Radzarat is deceiving users by posing as...

News

Sophisticated macOS Infostealer Hits Newer Apple Silicon Devices

Researchers have discovered a new, highly-sophisticated macOS malware called DigitStealer that masquerades...