Security researchers have published a detailed report on a now-fixed security vulnerability affecting the open-source artificial intelligence (AI) infrastructure platform Ollama that could be exploited to achieve remote code execution.
Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was fixed in version 0.1.34, released on May 7, 2024.
The critical AI flaw on Ollama
Ollama is a platform for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices.
The underlying issue is a case of insufficient input validation resulting in a path traversal vulnerability that an attacker could exploit to overwrite arbitrary files on the server and ultimately achieve remote code execution.
Successful exploitation requires the attacker to send specially crafted HTTP requests to the Ollama API server.
In particular, the attack takes advantage of the “/api/pull” API endpoint (which is used to download a model from the official registry or from a private repository) to supply a malicious model manifest file containing a path traversal payload in the digest field.
This issue could be abused not only to corrupt arbitrary files on the system, but also to achieve remote code execution: by overwriting a configuration file (“/etc/ld.so.preload”) associated with the dynamic linker (known as “ld.so”), an attacker can register a malicious shared library that is loaded before any program runs.
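The bug class behind this is straightforward to illustrate. The sketch below (in Python, purely for illustration; Ollama itself is written in Go, and the storage path and digest format here are assumptions, not Ollama's actual code) contrasts a naive path join, where an attacker-controlled digest containing “../” sequences escapes the blobs directory, with a hardened version that accepts only a strictly formatted digest:

```python
import os
import re

# Hypothetical blob storage directory, for illustration only.
BLOBS_DIR = "/var/lib/ollama/blobs"

def blob_path_naive(digest: str) -> str:
    # Vulnerable pattern (the CVE-2024-37032 bug class): the digest
    # taken from the manifest is joined into a filesystem path without
    # validation, so "../" sequences traverse out of BLOBS_DIR.
    return os.path.normpath(os.path.join(BLOBS_DIR, digest))

# Strict format check: "sha256:" followed by exactly 64 hex characters.
DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def blob_path_safe(digest: str) -> str:
    # Hardened pattern: reject anything that is not a well-formed digest
    # before it ever touches a filesystem path.
    if not DIGEST_RE.fullmatch(digest):
        raise ValueError(f"malformed digest: {digest!r}")
    return os.path.join(BLOBS_DIR, digest)

malicious = "../../../../etc/ld.so.preload"
print(blob_path_naive(malicious))  # resolves outside BLOBS_DIR
```

With the naive join, the traversal payload resolves to the dynamic linker's preload file; the hardened version raises an error instead.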
The risks researchers identified in Ollama deployments
While the risk of remote code execution is greatly reduced in default Linux installations, where the API server binds only to localhost, the same is not true of Docker installations, in which the API server is publicly exposed.
“This issue is extremely severe in Docker installations, as the server runs with root privileges and listens on 0.0.0.0 by default, which allows remote exploitation of this vulnerability,” said security researcher Sagi Tzadik.
Complicating matters further is Ollama’s inherent lack of authentication, which allows an attacker to exploit a publicly accessible server to steal or tamper with AI models and compromise self-hosted AI inference servers.
This means such services must be protected using middleware such as reverse proxies with authentication; Wiz said it has identified over 1,000 exposed Ollama instances hosting numerous AI models without any protection.
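One common way to apply the reverse-proxy advice is to keep Ollama bound to localhost and put an authenticating proxy in front of it. A minimal nginx sketch follows (11434 is Ollama's default port; the hostname, certificate paths, and htpasswd location are illustrative assumptions):

```nginx
server {
    listen 443 ssl;
    server_name ollama.example.com;              # hypothetical hostname

    ssl_certificate     /etc/nginx/certs/ollama.crt;   # assumed cert paths
    ssl_certificate_key /etc/nginx/certs/ollama.key;

    location / {
        auth_basic           "Ollama API";
        auth_basic_user_file /etc/nginx/.htpasswd;     # create with htpasswd
        proxy_pass           http://127.0.0.1:11434;   # Ollama, localhost only
        proxy_set_header     Host $host;
    }
}
```

With this in place, the API is reachable only over TLS with valid credentials, rather than being exposed unauthenticated on 0.0.0.0.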
“CVE-2024-37032 is an easy-to-exploit remote code execution vulnerability affecting modern AI infrastructure,” Tzadik said. “Although the code is relatively new and written in modern programming languages, classic vulnerabilities such as path traversal remain a problem.”
A happy ending
The development comes as AI security firm Protect AI has warned of more than 60 security flaws affecting various open-source AI/ML tools, including critical issues that could lead to information disclosure, access to sensitive resources, privilege escalation, and complete system takeover.
The most severe of these vulnerabilities is CVE-2024-22476 (CVSS score: 10.0), a SQL injection vulnerability in Intel Neural Compressor software that could allow attackers to download arbitrary files from the host system; it was addressed in Neural Compressor version 2.5.0.