Back to all threats

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

The Hacker News

Summary

Malicious actors can exploit vulnerabilities in ServiceNow's Now Assist AI platform through second-order prompt injection attacks, allowing unauthorized actions and potential data exfiltration. This issue highlights significant security risks associated with default configurations in generative AI systems.

Impact

ServiceNow's Now Assist generative artificial intelligence platform

In the Wild

Unknown

Timeline

Newly disclosed

Remediation

Review and adjust default configurations in ServiceNow's Now Assist to prevent prompt injection attacks. Implement security best practices for generative AI systems.