New research has found that Artificial Intelligence (AI) providers such as Hugging Face are susceptible to two critical risks that could allow cybercriminals to escalate privileges (i.e., gain administrator-level permissions within the AI-as-a-service platform), gain cross-tenant access to other customers' models, and even take control of continuous integration and continuous delivery (CI/CD) pipelines.
AI-as-a-service and its dangers: what you need to know
“Malicious models represent a major risk to AI systems, especially for AI-as-a-service providers, because potential attackers may leverage these models to perform cross-tenant attacks,” said Wiz researchers Shir Tamari and Sagi Tzadik, who added: “The potential impact is devastating, as attackers may be able to access the millions of private AI models and apps stored within AI-as-a-service providers.”
The development comes as machine learning pipelines have emerged as a brand-new supply chain attack vector, with repositories like Hugging Face becoming an attractive target for staging adversarial attacks designed to glean sensitive information and access target environments.
The threats are two-pronged, arising from the takeover of the shared inference infrastructure and of the shared CI/CD pipeline: the first makes it possible to run untrusted models uploaded to the service in pickle format, while the second allows an attacker to take over the CI/CD pipeline and carry out a supply chain attack.
The cloud security company's findings show that it is possible to breach the service running custom models by uploading a rogue model and using container escape techniques to break out of one's own tenant and compromise the entire service, effectively allowing threat actors to gain cross-tenant access to other customers' models stored and run on Hugging Face.
“Hugging Face will still let the user infer the uploaded Pickle-based model on the platform's infrastructure, even when it is deemed dangerous,” the researchers clarified.
This essentially allows an attacker to craft a PyTorch (Pickle) model that executes arbitrary code upon loading and chain it with misconfigurations in the Amazon Elastic Kubernetes Service (EKS) to gain elevated privileges and move laterally within the cluster.
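For readers unfamiliar with why pickle-based models are dangerous, the sketch below is a generic illustration (not the payload used in the research) of how Python's pickle protocol runs attacker-chosen code the moment a file is deserialized; the command here is deliberately harmless.

```python
import os
import pickle


class MaliciousStub:
    """Illustrative only: pickle calls __reduce__ to decide how to rebuild
    the object, so whatever callable it returns is executed on load."""

    def __reduce__(self):
        # A harmless command stands in for an attacker's real payload
        # (reverse shell, credential theft, etc.).
        return (os.system, ("echo code executed during unpickling",))


# The "model" file an attacker would upload to a model hub.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousStub(), f)

# Anything that unpickles the file (legacy pickle-based PyTorch checkpoints
# behave the same way) triggers the payload immediately.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```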
“The secrets we obtained could have had a significant impact on the platform if they had fallen into the hands of a malicious actor,” the researchers said. “Secrets within shared environments may often lead to cross-tenant access and sensitive data leakage,” a risk that the shared nature of AI-as-a-service platforms makes all the easier to realize.
To mitigate the issue, it is recommended to enable IMDSv2 with a hop limit so as to prevent pods from accessing the Instance Metadata Service (IMDS) and obtaining the role of a node within the cluster.
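As a rough sketch of that mitigation, the boto3 snippet below enforces IMDSv2 with a hop limit of 1 on a single instance; the instance ID is a placeholder, and in an EKS setup the same settings would normally be baked into the node group's launch template.

```python
import boto3

# Hypothetical instance ID; apply to every node (or via the launch template).
INSTANCE_ID = "i-0123456789abcdef0"

ec2 = boto3.client("ec2")

# Require IMDSv2 session tokens and cap the hop limit at 1 so that pods
# sitting behind an extra network hop cannot reach the Instance Metadata
# Service and steal the node's IAM role credentials.
ec2.modify_instance_metadata_options(
    InstanceId=INSTANCE_ID,
    HttpTokens="required",        # IMDSv2 only
    HttpPutResponseHopLimit=1,    # token unusable beyond the host itself
    HttpEndpoint="enabled",
)
```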
The research also found that it is possible to achieve remote code execution via a specially crafted Dockerfile when running an application on the Hugging Face Spaces service, and to use it to pull and push (i.e., overwrite) all the images available in an internal container registry.
Hugging Face, in a coordinated disclosure, said it has addressed all the identified issues and is also urging users to employ models only from trusted sources, enable multi-factor authentication (MFA), and refrain from using pickle files in production environments.
“This research demonstrates that using untrusted AI models (especially Pickle-based ones) could result in serious security consequences,” the researchers said. “Furthermore, if you intend to let users utilize untrusted AI models in your environment, it is extremely important to ensure that they are running in a sandboxed environment.”
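As a minimal sketch of those recommendations, the snippet below shows two alternative ways of loading weights without executing arbitrary pickle payloads; the file names are placeholders, and the `weights_only=True` flag assumes a reasonably recent PyTorch release.

```python
import torch
from safetensors.torch import load_file

# Option 1: load weights stored in the safetensors format, which contains
# raw tensors and metadata only -- there is no code to execute on load.
state_dict = load_file("model.safetensors")

# Option 2: if a pickle-based checkpoint is unavoidable, restrict the
# unpickler to plain tensors and containers instead of arbitrary objects,
# which rejects payloads like the __reduce__ trick shown earlier.
state_dict = torch.load("model.bin", weights_only=True)
```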
The disclosure follows separate research by Lasso Security showing that it is possible for generative AI models like OpenAI ChatGPT and Google Gemini to distribute malicious (and non-existent) code packages to unsuspecting software developers.
In other words, the idea is to find the name of a non-existent package that the models recommend and publish a trojanized package under that name to propagate the malware; this phenomenon of AI package hallucinations highlights the need to exercise caution when relying on large language models (LLMs) for coding solutions.
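One simple precaution, sketched below with a hypothetical package name, is to check whether an LLM-suggested dependency actually exists on PyPI before installing it; note that a name that does exist still deserves vetting, since an attacker may already have squatted it.

```python
import requests


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI project.

    A 404 from the JSON API is a strong hint that an LLM hallucinated the
    dependency -- or that the name is still free for an attacker to squat.
    """
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200


# Hypothetical package name suggested by a chatbot.
suggested = "huggingface-cli-tools"
if not package_exists_on_pypi(suggested):
    print(f"'{suggested}' is not on PyPI -- do not pip install it blindly.")
```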
AI company Anthropic has also detailed a new method called “many-shot jailbreaking” that can be used to bypass the safety protections built into LLMs and produce responses to potentially harmful queries by exploiting the model's context window.
“The ability to input increasingly large amounts of information has obvious advantages for LLM users, but it also comes with risks: vulnerabilities to jailbreaks that exploit the longer context window,” the company said earlier this week.
The technique, simply put, involves introducing a large number of fake dialogues between a human and an AI assistant within a single prompt in an attempt to “steer model behavior” and get the LLM to answer queries it otherwise wouldn't (e.g., “How do I build a bomb?”).
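Structurally, a many-shot prompt is nothing more than one very long input packed with fabricated conversation turns that all follow the same pattern; the sketch below illustrates that shape with deliberately harmless content rather than a working jailbreak.

```python
# Structural sketch of a "many-shot" prompt: many fabricated human/assistant
# exchanges stuffed into a single input, nudging the model to continue the
# same pattern on the final, real question. Content here is benign.
faux_turns = [
    ("What's the capital of France?", "Arr, 'tis Paris, matey!"),
    ("How do I boil an egg?", "Arr, drop it in bubblin' water fer ten minutes!"),
    # ... in the described attack, hundreds of such turns fill the
    # model's long context window.
]

prompt_parts = []
for question, answer in faux_turns * 100:  # repeat to inflate the context
    prompt_parts.append(f"Human: {question}\nAssistant: {answer}")

prompt_parts.append("Human: Describe the weather today.\nAssistant:")
many_shot_prompt = "\n\n".join(prompt_parts)
```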