THREATINT

CVE
PUBLISHED

CVE-2024-34359

llama-cpp-python vulnerable to Remote Code Execution by Server-Side Template Injection in Model Metadata

Assigner: GitHub_M
Reserved: 2024-05-02
Published: 2024-05-10
Updated: 2024-06-06

Description

llama-cpp-python provides the Python bindings for llama.cpp. `llama-cpp-python` relies on the `Llama` class in `llama.py` to load `.gguf` llama.cpp models. The `__init__` constructor of `Llama` takes several parameters that configure how the model is loaded and run. Besides NUMA, LoRA settings, tokenizer loading, and hardware settings, `__init__` also reads the chat template from the targeted `.gguf` file's metadata and passes it to `llama_chat_format.Jinja2ChatFormatter.to_chat_handler()` to construct `self.chat_handler` for the model. However, `Jinja2ChatFormatter` parses the chat template from the metadata with a sandbox-less `jinja2.Environment`, and the template is later rendered in `__call__` to construct the prompt for each interaction. This allows Jinja2 server-side template injection, which leads to remote code execution via a carefully constructed payload.
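The core issue — rendering attacker-controlled template text with an unsandboxed `jinja2.Environment` — can be sketched as follows. The payload and variable names here are illustrative, not taken from the advisory; Jinja2's `ImmutableSandboxedEnvironment` is the kind of hardened replacement the linked fix commit points toward.

```python
import jinja2
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Illustrative SSTI probe: walks from the template's `self` object into
# Python internals via attribute access. A real payload embedded in .gguf
# chat-template metadata would continue from here toward code execution.
PAYLOAD = "{{ self.__init__.__globals__ }}"

# Vulnerable pattern: a plain Environment imposes no attribute restrictions,
# so the probe leaks module globals (including __builtins__).
leaked = jinja2.Environment().from_string(PAYLOAD).render()
print("__builtins__" in leaked)

# Hardened pattern: the sandboxed environment refuses unsafe (underscore-
# prefixed) attribute access and raises SecurityError instead of rendering.
safe_env = ImmutableSandboxedEnvironment()
try:
    safe_env.from_string(PAYLOAD).render()
    blocked = False
except jinja2.exceptions.SecurityError:
    blocked = True
print(blocked)
```

The sandbox does not make arbitrary templates safe to evaluate in general; it only restricts the attribute and method access a template can perform, which is what defeats this class of payload.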



CRITICAL: 9.7 (CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H)

Problem types

CWE-76: Improper Neutralization of Equivalent Special Elements

Product status

llama-cpp-python >= 0.2.30, <= 0.2.71: affected
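Given the affected range above, a minimal stdlib-only version check could look like the sketch below. It assumes plain `X.Y.Z` version strings and handles nothing fancier (no pre-release or local version segments).

```python
# Affected range from the advisory: >= 0.2.30, <= 0.2.71 (inclusive).
AFFECTED_LOW = (0, 2, 30)
AFFECTED_HIGH = (0, 2, 71)

def is_affected(version: str) -> bool:
    """Naive check; valid only for plain X.Y.Z version strings."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return AFFECTED_LOW <= parts <= AFFECTED_HIGH

print(is_affected("0.2.55"))  # True
print(is_affected("0.2.72"))  # False: outside the affected range
```

In practice you would feed this the installed version, e.g. from `importlib.metadata.version(...)`, or use a full version parser such as the `packaging` library instead of the naive tuple comparison.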

References

https://github.com/abetlen/llama-cpp-python/security/advisories/GHSA-56xg-wfcc-g829

https://github.com/abetlen/llama-cpp-python/commit/b454f40a9a1787b2b5659cd2cb00819d983185df

cve.org CVE-2024-34359

nvd.nist.gov CVE-2024-34359

© Copyright 2024 THREATINT. Made in Cyprus.