A Python command injection vulnerability exists in the `SagemakerLLM` class's `complete()` method within `./private_gpt/components/llm/custom/sagemaker.py` of the imartinez/privategpt application, versions up to and including 0.3.0. The vulnerability arises due to the use of the `eval()` function to parse a string received from a remote AWS SageMaker LLM endpoint into a dictionary. This method of parsing is unsafe as it can execute arbitrary Python code contained within the response. An attacker can exploit this vulnerability by manipulating the response from the AWS SageMaker LLM endpoint to include malicious Python code, leading to potential execution of arbitrary commands on the system hosting the application. The issue is fixed in version 0.6.0.
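The core of the flaw, and the general shape of the remediation, can be illustrated with a minimal sketch. The function and variable names below are illustrative assumptions, not the exact identifiers from privateGPT's sagemaker.py; the point is the contrast between parsing the endpoint response as code versus as data.

```python
import json

# Hypothetical illustration of the vulnerable pattern (names are assumptions):
# the raw bytes returned by the SageMaker endpoint are handed to eval(), which
# executes any Python expression the response happens to contain.
def parse_response_unsafe(response_body: bytes) -> dict:
    # A malicious or compromised endpoint could return, e.g.:
    #   b"__import__('os').system('id')"
    # and eval() would run it on the host.
    return eval(response_body.decode("utf-8"))

# Safer pattern in the spirit of the fix: treat the response strictly as data.
# json.loads() only accepts JSON and raises on anything else, so no code in
# the response can execute.
def parse_response_safe(response_body: bytes) -> dict:
    parsed = json.loads(response_body.decode("utf-8"))
    if not isinstance(parsed, dict):
        raise ValueError("unexpected response shape from SageMaker endpoint")
    return parsed
```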
Reserved 2024-04-30 | Published 2024-11-14 | Updated 2024-11-18 | Assigner @huntr_ai
CWE-78: Improper Neutralization of Special Elements used in an OS Command
huntr.com/bounties/1d1e8f06-ec45-4b17-ae24-b83a41304c15
github.com/...ommit/86368c61760c9cee5d977131d23ad2a3e063cbe9