CVE-2026-44223

Published: May 12, 2026 Last Modified: May 12, 2026
MEDIUM 6.5
Attack Vector: Network
Attack Complexity: Low
Privileges Required: Low
User Interaction: None
Scope: Unchanged
Confidentiality: None
Integrity: None
Availability: High

Description


vLLM is an inference and serving engine for large language models (LLMs). In versions prior to 0.20.0, the extract_hidden_states speculative decoding proposer in vLLM returns a tensor with an incorrect shape after the first decode step, causing a RuntimeError that crashes the EngineCore process. The crash is triggered when any request in the batch uses a sampling penalty parameter (repetition_penalty, frequency_penalty, or presence_penalty). A single request with one such parameter (e.g., 'repetition_penalty': 1.1) is sufficient to crash the server. This vulnerability is fixed in 0.20.0.
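The failure mode described above can be illustrated with a minimal, hypothetical sketch. This is not vLLM's actual code; the class, method names, and shapes are illustrative stand-ins for a proposer that keeps returning a tensor sized for the first decode step even after the expected row count changes (as it does when penalty parameters alter the batch's token layout), so a downstream shape check raises RuntimeError:

```python
class HiddenStatesProposer:
    """Hypothetical stand-in for a speculative-decoding proposer.

    Bug: it caches the row count from the first decode step and
    keeps emitting output with that stale shape on later steps.
    """

    def __init__(self, hidden_size: int):
        self.hidden_size = hidden_size
        self._cached_rows = None

    def propose(self, num_tokens: int):
        if self._cached_rows is None:
            self._cached_rows = num_tokens  # BUG: frozen after step 1
        # Returns a (rows x hidden_size) matrix with the stale row count.
        return [[0.0] * self.hidden_size for _ in range(self._cached_rows)]


def decode_step(proposer, num_tokens: int):
    hidden = proposer.propose(num_tokens)
    if len(hidden) != num_tokens:
        # In the real engine an unhandled error like this takes down
        # the whole serving process, not just the offending request.
        raise RuntimeError(
            f"hidden states shape mismatch: expected {num_tokens} rows, "
            f"got {len(hidden)}"
        )
    return hidden


proposer = HiddenStatesProposer(hidden_size=8)
decode_step(proposer, num_tokens=4)      # first step: shapes agree
try:
    decode_step(proposer, num_tokens=6)  # later step: stale shape, crash
except RuntimeError as e:
    print("crash:", e)
```

The denial-of-service impact follows from the error being raised inside the engine process rather than in per-request handling: one malformed proposal ends service for every in-flight request.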

CWE-131

Incorrect Calculation of Buffer Size

Draft
Common Consequences
Security Scopes Affected:
Integrity Availability Confidentiality
Potential Impacts:
DoS: Crash, Exit, or Restart; Execute Unauthorized Code or Commands; Read Memory; Modify Memory
Applicable Platforms
Languages: C, C++, Memory-Unsafe
CWE-704

Incorrect Type Conversion or Cast

Incomplete
Common Consequences
Security Scopes Affected:
Other
Potential Impacts:
Other
Applicable Platforms
Languages: C, C++, Memory-Unsafe, Not Language-Specific
https://github.com/vllm-project/vllm/pull/38610
https://github.com/vllm-project/vllm/security/advisories/GHSA-83vm-p52w-f9pw