Details have emerged about a vulnerability now fixed in Microsoft 365 Copilot that could allow the theft of sensitive user information using a technique called ASCII smuggling.
Microsoft 365 Copilot suffered from a curious vulnerability called ASCII smuggling; here’s what it is
“ASCII smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface,” said security researcher Johann Rehberger, who added: “This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!”
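The mechanism Rehberger describes can be illustrated with a short sketch. Unicode “Tag” characters (U+E0020 through U+E007E) mirror printable ASCII one-for-one but are not rendered in most user interfaces, so text shifted into that block becomes invisible while remaining machine-readable. The function names below are illustrative, not taken from the actual exploit:

```python
# Sketch of ASCII smuggling: shift printable ASCII into the invisible
# Unicode Tags block (U+E0020-U+E007E), which most UIs do not render.

def smuggle(text: str) -> str:
    """Map each printable ASCII character to its invisible Tag counterpart."""
    return "".join(chr(0xE0000 + ord(c)) for c in text if 0x20 <= ord(c) <= 0x7E)

def reveal(hidden: str) -> str:
    """Shift Tag characters back down to readable ASCII."""
    return "".join(chr(ord(c) - 0xE0000) for c in hidden if 0xE0020 <= ord(c) <= 0xE007E)

secret = "MFA code: 123456"
hidden = smuggle(secret)
# `hidden` has the same number of characters as `secret`, but none are visible
# when displayed in a Tag-unaware interface.
print(reveal(hidden))  # prints "MFA code: 123456"
```

A user who copies or clicks text containing such characters carries the hidden payload along without ever seeing it, which is what makes the technique effective for staging exfiltration.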
The entire attack combines a number of attack methods to create a reliable exploit chain. This includes the following steps:
- Trigger what is known in jargon as prompt injection via malicious content hidden in a document shared in the chat
- Use a prompt injection payload to instruct Copilot to search for other emails and documents
- Exploit ASCII smuggling to lure the user into clicking on a link and transferring valuable data to a third-party server
The end result of the attack was that sensitive data in emails, including multi-factor authentication (MFA) codes, could be transmitted to a server controlled by the attacker. Microsoft addressed the issues following a responsible disclosure in January 2024.
Other data exfiltration issues Microsoft 365 Copilot suffered from
The development comes as proof-of-concept (PoC) attacks have been demonstrated against Microsoft’s Copilot system (and by extension Microsoft 365 Copilot) to manipulate responses, exfiltrate private data and bypass security protections, once again highlighting the need to monitor risks in artificial intelligence (AI) tools.
The methods, detailed by Zenity, allow cybercriminals to perform retrieval-augmented generation (RAG) poisoning and indirect prompt injection, leading to remote code execution attacks that can fully take over Microsoft Copilot and other AI applications. In a hypothetical attack scenario, an external hacker with code execution capabilities could trick Microsoft 365 Copilot into serving users phishing pages.
This Microsoft 365 Copilot Exploit Is Just the Tip of the Iceberg
Perhaps one of the most innovative attacks is the ability to turn AI into a spear-phishing machine: the red-teaming technique, called LOLCopilot, allows an attacker with access to a victim’s email account to send phishing messages that mimic the style of the compromised user.
Microsoft has also acknowledged that publicly exposed Copilot bots built using Microsoft Copilot Studio and lacking authentication protections could be a conduit for cybercriminals to extract sensitive information, provided they have prior knowledge of the Copilot name or URL.
“Organizations should assess their risk tolerance and exposure to prevent data leaks from Copilots (formerly Power Virtual Agents), and enable data loss prevention and other security controls accordingly to control the creation and publishing of Copilots,” Rehberger said.
New technologies, old methods
Despite the rapid advances in technology, user deception techniques remain surprisingly similar over time, and while today we hear about artificial intelligence models, ASCII smuggling, and prompt injection, the goal remains the same: exploit the user’s trust and ignorance to obtain sensitive information.
Phishing scams, response manipulation, and data exfiltration are all modern versions of old tricks used to deceive the user. Technologies evolve, but human vulnerabilities and social engineering tactics continue to be the preferred entry point for attackers.