vLLM is an inference and serving engine for large language models (LLMs). Versions from 0.1.0 up to, but not including, 0.10.1.1 contain a Denial of Service (DoS) vulnerability that can be triggered by sending a single HTTP GET request with an extremely large header to an HTTP endpoint. This results in server memory exhaustion, potentially leading to a crash or unresponsiveness. The attack does not require authentication, making it exploitable by any remote user. This vulnerability is fixed in 0.10.1.1.
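The general mitigation for this class of issue is to bound how much header data the server's HTTP/1.1 parser will buffer for a single request before rejecting it. The sketch below illustrates that cap at the level of the h11 parser (one of the HTTP/1.1 parsers uvicorn-based servers can use); it is not vLLM's own code, and the 16 KiB limit, the `/v1/models` path, and the `X-Filler` header are purely illustrative assumptions.

```python
# Standalone sketch (not vLLM's fix itself) of capping how much header data
# an HTTP/1.1 parser will buffer before rejecting a request.
# Assumes the h11 package is installed; the 16 KiB cap is illustrative.
import h11

# Server-side parser that refuses to keep buffering an incomplete request
# (request line plus headers that never terminate) beyond 16 KiB.
conn = h11.Connection(our_role=h11.SERVER, max_incomplete_event_size=16 * 1024)

# Simulate a client sending a request with an extremely large header and
# never finishing the header block.
oversized_request = (
    b"GET /v1/models HTTP/1.1\r\n"
    b"Host: example\r\n"
    b"X-Filler: " + b"A" * (64 * 1024)
)
conn.receive_data(oversized_request)

try:
    conn.next_event()
except h11.RemoteProtocolError as exc:
    # The parser gives up instead of buffering arbitrarily large headers,
    # which is the behavior a bounded server configuration relies on.
    print(f"rejected oversized header block: {exc}")
```

In a real deployment the same effect is usually achieved by configuring the serving stack, or a reverse proxy in front of it, to limit request header size, rather than by driving the parser directly.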
| Package (Ecosystem) | Introduced | Fixed | Limit |
|---|---|---|---|
| vllm (PyPI) | 0.1.0 | 0.10.1.1 | N/A |
CVSS Metrics