AIjacking Threat Exposed: How Hackers Hijacked Microsoft’s Copilot Agent Without a Single Click

 

Imagine this — a customer service AI agent receives an email and, within seconds, secretly extracts your entire customer database and sends it to a hacker. No clicks, no downloads, no alerts.
Security researchers recently showcased this chilling scenario with a Microsoft Copilot Studio agent. The exploit worked through prompt injection, a manipulation technique where attackers hide malicious instructions in ordinary-looking text inputs.
As companies rush to integrate AI agents into customer service, analytics, and software development, they’re opening up new risks that traditional cybersecurity tools can’t fully protect against. For developers and data teams, understanding AIjacking — the hijacking of AI systems through deceptive prompts — has become crucial.
In simple terms, AIjacking occurs when attackers use natural language to trick AI systems into executing commands that bypass their programmed restrictions. These malicious prompts can be buried in anything the AI reads — an email, a chat message, a document — and the system can’t reliably tell the difference between a real instruction and a hidden attack.
Unlike conventional hacks that exploit software bugs, AIjacking leverages the very nature of large language models. These models follow contextual language instruction

[…]
Content was cut in order to protect the source.Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents

Read the original article: