LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. Prior to 1.83.0, the /config/update endpoint does not enforce admin role authorization, so any authenticated user can modify the proxy configuration and environment variables. Through this endpoint an attacker can register custom pass-through endpoint handlers pointing to attacker-controlled Python code (remote code execution), read arbitrary server files by setting UI_LOGO_PATH and fetching them via /get_image, and take over privileged accounts by overwriting the UI_USERNAME and UI_PASSWORD environment variables. Fixed in v1.83.0.
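A minimal FastAPI-style sketch of the authorization gap; the route shape matches the advisory, but every identifier here (User, get_current_user, the role strings) is a hypothetical stand-in, not LiteLLM's actual code:

```python
from dataclasses import dataclass

from fastapi import Depends, FastAPI, HTTPException

app = FastAPI()

@dataclass
class User:
    name: str
    role: str  # e.g. "proxy_admin" vs. a non-admin role

async def get_current_user() -> User:
    # Stand-in for the proxy's existing API-key authentication, which
    # pre-1.83.0 was the only gate in front of /config/update.
    return User(name="alice", role="internal_user")

@app.post("/config/update")
async def update_config(payload: dict, user: User = Depends(get_current_user)):
    # The fix adds a role gate like this before any change is applied;
    # without it, any authenticated user could rewrite the configuration.
    if user.role != "proxy_admin":
        raise HTTPException(status_code=403, detail="admin role required")
    return {"status": "updated"}  # actual config mutation elided
```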
LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. Prior to 1.83.0, when JWT authentication is enabled (enable_jwt_auth: true), the OIDC userinfo cache uses token[:20] as the cache key. A JWT's first segment is its base64url-encoded header, which is identical for every token signed with the same algorithm, so an unauthenticated attacker can craft a token whose first 20 characters match a legitimate user's cached token. On a cache hit, the attacker inherits the legitimate user's identity and permissions. This configuration option is not enabled by default, so only deployments with JWT/OIDC authentication enabled are affected. Fixed in v1.83.0.
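The collision is easy to see from the token structure. A small illustration (not LiteLLM code) showing that every RS256 token shares the same first 20 characters:

```python
import base64
import json

def jwt_header_segment(alg: str) -> str:
    # Compact JSON header produced by standard JWT signing libraries.
    header = json.dumps({"alg": alg, "typ": "JWT"}, separators=(",", ":"))
    return base64.urlsafe_b64encode(header.encode()).rstrip(b"=").decode()

segment = jwt_header_segment("RS256")
print(segment)       # eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9
print(segment[:20])  # eyJhbGciOiJSUzI1NiIs -- shared by every RS256 token,
                     # so a cache keyed on token[:20] collides across users
```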
In berriai/litellm before version 1.44.12, the `litellm/litellm_core_utils/litellm_logging.py` file contains a vulnerability where the API key masking code masks only the first 5 characters of the key, leaving the remainder of the secret exposed in the logs. The issue was reported against v1.44.9.
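A paraphrased sketch of the flaw; the helper names are hypothetical, not LiteLLM's actual code, but they show why masking the prefix rather than the suffix leaks the secret:

```python
def mask_api_key(key: str) -> str:
    # Flawed pattern described by the advisory: only the first 5
    # characters are hidden, so the bulk of the secret reaches the logs.
    return "*" * 5 + key[5:]

def mask_api_key_fixed(key: str) -> str:
    # Conventional masking keeps a short prefix and hides the remainder.
    return key[:5] + "*" * (len(key) - 5)

print(mask_api_key("sk-1234567890abcdef"))        # *****34567890abcdef
print(mask_api_key_fixed("sk-1234567890abcdef"))  # sk-12**************
```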
A Denial of Service (DoS) vulnerability exists in berriai/litellm version v1.44.5. It can be triggered by appending characters, such as dashes (-), to the end of a multipart boundary in an HTTP request. The server processes each appended character individually, leading to excessive resource consumption and rendering the service unavailable. The issue requires no authentication or user interaction and impacts all users of the service.
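A hedged sketch of the malformed request shape, assuming a locally running proxy; the endpoint path, boundary, and padding length are illustrative only:

```python
import requests

boundary = "x" * 8
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="a.txt"\r\n'
    "Content-Type: text/plain\r\n"
    "\r\n"
    "payload\r\n"
    # Instead of the well-formed terminator ("--" + boundary + "--"), a
    # long run of dashes follows the boundary; per the advisory, the
    # server grinds through each appended character.
    + f"--{boundary}" + "-" * 1_000_000
)
requests.post(
    "http://localhost:4000/audio/transcriptions",  # any multipart endpoint
    headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    data=body.encode(),
    timeout=10,
)
```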
BerriAI/litellm version 1.40.12 contains a vulnerability that allows remote code execution. The issue exists in the handling of the 'post_call_rules' configuration, where a callback function can be added. The configured value is split at the final '.', the last part is treated as the function name, and the remaining part, with a '.py' extension appended, is imported as the module. This allows an attacker to register a system function such as 'os.system' as a callback, enabling execution of arbitrary commands whenever a chat response is processed.
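A paraphrased sketch of the described resolution logic (not the exact LiteLLM source), showing why a dotted value such as 'os.system' resolves to a live function:

```python
import importlib

def load_post_call_rule(value: str):
    # Paraphrasing the described behavior: split at the last ".", treat
    # the tail as the function name, and import the head as a module
    # (the original locates the module by appending a ".py" extension).
    module_name, function_name = value.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, function_name)

rule = load_post_call_rule("os.system")  # resolves to os.system
# Each processed chat response is then passed to this callable, i.e.
# rule("<attacker-influenced text>") runs an arbitrary shell command.
```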
A code injection vulnerability exists in the berriai/litellm application, version 1.34.6, due to the use of unvalidated input in an eval call within the secret management system. Exploitation requires a valid Google KMS configuration, so only deployments using the Google KMS feature are affected. By setting the `UI_LOGO_PATH` variable to a remote server address, an attacker can cause the `get_image` function to download a malicious Google KMS configuration file into `cached_logo.jpg`. With that file in place, assigning malicious code to the `SAVE_CONFIG_TO_DB` environment variable causes it to be evaluated, leading to arbitrary code execution and full system control.
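A paraphrased sketch of the sink (not the exact LiteLLM source); passing an environment variable through eval turns a configuration flag into code execution:

```python
import os

# Vulnerable pattern described by the advisory: the value is evaluated,
# so "__import__('os').system('id')" in SAVE_CONFIG_TO_DB runs a command.
save_config_to_db = eval(os.environ.get("SAVE_CONFIG_TO_DB", "False"))

# Safer: parse the expected boolean explicitly instead of evaluating it.
save_config_to_db = os.environ.get("SAVE_CONFIG_TO_DB", "false").lower() in (
    "1", "true", "yes",
)
```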