Microsoft has recently patched a vulnerability in the Microsoft 365 Copilot that might have resulted in theft of sensitive user information prior to the fix. The said data might have been extracted from your system by using a technique called ASCII smuggling. A security researcher named Johann Rehberger explained that ASCII smuggling is a novel technique that uses special unicode characters that mirror ASCII but aren't actually visible anywhere in the user interface. According to a report by The Hacker News, the "Attacker can have the LLM render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!"


Rehberger added, "Enterprises should evaluate their risk tolerance and exposure to prevent data leaks from Copilots (formerly Power Virtual Agents), and enable Data Loss Prevention and other security controls accordingly to control creation and publication of Copilots."


This attack utilises a number of attack methods to form a reliable exploit chain. This attack follows this procedure:



  • Trigger prompt injection via malicious content concealed in a document shared on the chat

  • Using a prompt injection payload to instruct Copilot to search for more emails and documents

  • Leveraging ASCII smuggling to entice the user into clicking on a link to exfiltrate valuable data to a third-party server


What Do We Know


The result of the attack was that sensitive information from emails, including multi-factor authentication (MFA) codes, could be sent to a server controlled by the attacker. Microsoft has since resolved these issues following a responsible disclosure in January 2024.


Zenity (free software and a cross-platform program) outlined that the attack methods enable cybercriminals to carry out retrieval-augmented generation (RAG) poisoning and indirect prompt injection, leading to remote code execution attacks. These attacks can potentially grant full control over Microsoft Copilot and other AI applications. In a possible attack scenario, an external hacker with the ability to execute code could manipulate Copilot to deliver phishing pages to users.


One of the more innovative attack techniques involves using the AI for spear-phishing. Known as LOLCopilot, this red-teaming strategy allows an attacker with access to a victim's email account to send phishing messages that mimic the victim's own communication style.


This development follows the demonstration of proof-of-concept (PoC) attacks against Microsoft's Copilot system, which have shown how responses can be manipulated, private data exfiltrated, and security measures bypassed. This underscores the ongoing need to monitor risks associated with artificial intelligence (AI) tools.


Microsoft has also recognised that Copilot bots publicly accessible through Microsoft Copilot Studio, which lack authentication safeguards, could potentially be exploited by threat actors. If attackers have prior knowledge of the Copilot's name or URL, they could use this access to extract sensitive information.