A new cyber attack technique, dubbed LLMjacking by the Sysdig Threat Research Team, has recently come to light.
In short, cybersecurity researchers have discovered a new type of attack that uses stolen cloud credentials to target large language model services hosted in the cloud (LLM, Large Language Model, hence the name LLMjacking). The cyber criminals behind it launch the attack with the aim of selling the stolen access to other cyber criminals.
Details of the new LLMjacking attack: what cybersecurity researchers say
“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers“, said security researcher Alessandro Brucato of the LLMjacking attack. “In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted“.
The intrusion path used to carry out the scheme involves breaching a system running a vulnerable version of the Laravel Framework (for example, via the vulnerability CVE-2021-3129), followed by the theft of Amazon Web Services (AWS) credentials used to access the LLM services.
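CVE-2021-3129 affects the facade/ignition package bundled with Laravel, with versions prior to 2.5.2 vulnerable per the public advisory. A minimal sketch of the kind of version check a defender (or attacker) might run, purely for illustration and not an actual scanner:

```python
# Minimal sketch: flag a facade/ignition version as vulnerable to
# CVE-2021-3129. Assumption: releases before 2.5.2 are affected, per
# the public advisory; this is an illustrative check, not a scanner.

def parse_version(version: str) -> tuple[int, ...]:
    """Turn '2.5.1' into (2, 5, 1) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))

PATCHED = parse_version("2.5.2")

def is_vulnerable(ignition_version: str) -> bool:
    """True if the installed Ignition version predates the patched release."""
    return parse_version(ignition_version) < PATCHED

print(is_vulnerable("2.5.1"))  # True: predates the fix
print(is_vulnerable("2.5.2"))  # False: patched release
```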
The LLMjacking attack and open-source Python tooling
Among the tools used is an open-source Python script that checks and validates keys for various LLM offerings (commonly marketed as “artificial intelligence“), including Anthropic, AWS Bedrock, Google Cloud Vertex AI, Mistral, and OpenAI.
“No legitimate LLM queries were actually run during the verification phase“, explained researcher Alessandro Brucato, again regarding the LLMjacking attack. “Instead, just enough was done to figure out what the credentials were capable of and what their limitations were“.
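A key checker of this kind does not need to run a full, billable completion: a cheap probe request is enough to tell a working key from a dead one. The sketch below is hypothetical, not the actual tool; the status-code mapping is an assumption for illustration:

```python
# Hypothetical sketch of the verification logic such a key checker might
# use: rather than running a real (billable) LLM query, a lightweight probe
# is sent and only the HTTP status code is inspected. The mapping below is
# an illustrative assumption, not taken from the actual tool.

def classify_key(status_code: int) -> str:
    """Map a probe's HTTP status to a rough credential state."""
    if status_code == 200:
        return "valid"            # key accepted, service reachable
    if status_code in (401, 403):
        return "invalid"          # bad key or missing permissions
    if status_code == 429:
        return "rate-limited"     # key works but quota is exhausted
    return "unknown"

print(classify_key(200))  # valid
print(classify_key(403))  # invalid
print(classify_key(429))  # rate-limited
```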
The key checker also integrates with another open-source tool called oai-reverse-proxy, which acts as a reverse proxy server for LLM APIs, indicating that the cybercriminals are likely selling access to the compromised accounts without actually exposing the underlying credentials: essentially a hijack, hence the name “LLMjacking“.
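The point of such a proxy is that the buyer never sees the stolen key: their request carries a proxy-issued token, which the proxy swaps for the real credential before forwarding upstream. A minimal sketch of that substitution, with hypothetical names rather than oai-reverse-proxy's actual code:

```python
# Hypothetical sketch of how a reverse proxy can hide the upstream
# credential: the client authenticates with a proxy-issued token, and the
# proxy replaces it with the real (stolen) API key before forwarding the
# request. Names and structure are illustrative, not from oai-reverse-proxy.

UPSTREAM_KEY = "sk-real-upstream-key"        # never revealed to clients
PROXY_TOKENS = {"proxy-token-abc"}           # tokens sold to "customers"

def rewrite_headers(headers: dict) -> dict:
    """Swap a valid proxy token for the upstream key, or reject."""
    token = headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in PROXY_TOKENS:
        raise PermissionError("unknown proxy token")
    forwarded = dict(headers)
    forwarded["Authorization"] = f"Bearer {UPSTREAM_KEY}"
    return forwarded

out = rewrite_headers({"Authorization": "Bearer proxy-token-abc"})
print(out["Authorization"])  # Bearer sk-real-upstream-key
```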
“If the attackers were gathering an inventory of useful credentials and wanted to sell access to the available LLM models, a reverse proxy like this could allow them to monetize their efforts“, Brucato said.
With this attack it is also possible to “manipulate” various settings while executing prompts
Furthermore, attackers have been observed querying logging settings, likely in an attempt to avoid detection when using the compromised credentials to run their prompts.
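On AWS Bedrock, for instance, model-invocation logging can be queried via the GetModelInvocationLoggingConfiguration API, which is the kind of check an attacker would make before leaving a trail. The helper below inspects a response of that general shape; the field names are an assumption for illustration, and a real check would go through boto3's bedrock client:

```python
# Illustrative sketch: decide whether Bedrock model-invocation logging is
# enabled, given a response shaped like the one returned by the
# GetModelInvocationLoggingConfiguration API. The field names here are an
# assumption for illustration; a real check would use boto3.

def invocation_logging_enabled(response: dict) -> bool:
    """True if any logging destination (CloudWatch or S3) is configured."""
    config = response.get("loggingConfig") or {}
    return bool(config.get("cloudWatchConfig") or config.get("s3Config"))

# Logging disabled: no loggingConfig at all
print(invocation_logging_enabled({}))  # False

# Logging enabled: a CloudWatch destination is present
print(invocation_logging_enabled(
    {"loggingConfig": {"cloudWatchConfig": {"logGroupName": "bedrock-logs"}}}
))  # True
```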
The development represents a departure from attacks that focus on prompt injection and model poisoning: instead, it allows attackers to monetize their access to LLMs while the cloud account owner foots the bill without their knowledge or consent.
A very profitable attack
Sysdig stated that an attack of this type could rack up over $46,000 per day in LLM consumption costs for the victim. As noted above, this is effectively a “hijacking” not only of personal data but also of wallets; in this specific case not necessarily cryptocurrency wallets, but accounts holding “real” money.
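The $46,000/day figure is plausible back-of-the-envelope arithmetic: per-token prices are small, but an attacker hammering the API with maximum-size requests multiplies them quickly. An illustrative calculation follows; the prices and request sizes are assumptions chosen to show the order of magnitude, not any provider's actual list prices:

```python
# Back-of-the-envelope cost estimate for sustained abuse of an LLM API.
# The per-token prices and request sizes below are illustrative
# assumptions, not current list prices for any provider.

PRICE_PER_1K_INPUT = 0.008    # USD per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.024   # USD per 1,000 output tokens (assumed)

def daily_cost(requests_per_minute: int,
               input_tokens: int, output_tokens: int) -> float:
    """USD per day for a constant stream of requests of the given size."""
    per_request = (input_tokens / 1000) * PRICE_PER_1K_INPUT \
                + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return per_request * requests_per_minute * 60 * 24

# e.g. 36 maximum-size requests per minute, around the clock
print(round(daily_cost(36, 100_000, 4_000)))  # 46449
```

At these assumed rates, a steady 36 large requests per minute already lands in the same ballpark as the figure Sysdig cites.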
“Using LLM services can be expensive, depending on the model and the number of tokens supplied to it“, said researcher Alessandro Brucato. “By maxing out quota limits, attackers can even block the compromised organization from using the models legitimately, disrupting business operations“.
Organizations are advised to enable detailed logging and monitor cloud logs for suspicious or unauthorized activity, as well as to ensure effective vulnerability management processes are in place to prevent initial access.
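A first-pass detection along those lines can be as simple as scanning cloud audit logs for model-invocation events from unexpected identities. A minimal sketch over CloudTrail-style records, where the field names and the allow-list are illustrative assumptions:

```python
# Minimal sketch of log monitoring for suspicious LLM usage: flag Bedrock
# InvokeModel events whose calling identity is not on an allow-list.
# Field names mirror CloudTrail's general shape but are assumptions here.

ALLOWED_PRINCIPALS = {"arn:aws:iam::111122223333:role/app-llm-role"}

def suspicious_invocations(events: list[dict]) -> list[dict]:
    """Return model-invocation events from principals we don't recognize."""
    return [
        e for e in events
        if e.get("eventName") == "InvokeModel"
        and e.get("userIdentity", {}).get("arn") not in ALLOWED_PRINCIPALS
    ]

logs = [
    {"eventName": "InvokeModel",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:role/app-llm-role"}},
    {"eventName": "InvokeModel",
     "userIdentity": {"arn": "arn:aws:iam::999988887777:user/unknown"}},
]
print(len(suspicious_invocations(logs)))  # 1
```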