CVE-2026-44222

Published: May 12, 2026 Last Modified: May 12, 2026
MEDIUM 6.5
Attack Vector: network
Attack Complexity: low
Privileges Required: low
User Interaction: none
Scope: unchanged
Confidentiality: none
Integrity: none
Availability: high

Description

vLLM is an inference and serving engine for large language models (LLMs). From version 0.6.1 up to (but not including) 0.20.0, there is a token injection vulnerability in vLLM's multimodal processing: unauthenticated, text-only prompts that spell out special placeholder tokens are interpreted as control tokens. Image and video placeholder sequences supplied without matching media data cause vLLM to index into empty grids during input-position computation, raising an unhandled IndexError and terminating the worker or degrading availability. Multimodal paths that rely on image_grid_thw/video_grid_thw are affected. This vulnerability is fixed in 0.20.0.
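The failure mode described above can be sketched as follows. This is a minimal, hypothetical illustration, not vLLM's actual code: the placeholder token name, function names, and grid representation are assumptions. It shows how a text-only prompt spelling a placeholder token, with no matching image data, leads to indexing into an empty grid list and an unhandled IndexError, and how validating the placeholder/data counts up front avoids the crash.

```python
# Hypothetical placeholder token (assumption for illustration; the real
# token names vary by model).
IMAGE_PLACEHOLDER = "<|image_pad|>"

def compute_input_positions(prompt_tokens, image_grid_thw):
    """Assign a position span to each token; placeholders consume one
    (t, h, w) grid entry each. Mirrors the unguarded pattern that crashes."""
    positions = []
    image_idx = 0
    for tok in prompt_tokens:
        if tok == IMAGE_PLACEHOLDER:
            # If the prompt merely spells the placeholder text and no image
            # was supplied, image_grid_thw is empty and this indexing raises
            # an unhandled IndexError, killing the worker.
            t, h, w = image_grid_thw[image_idx]
            positions.append(t * h * w)
            image_idx += 1
        else:
            positions.append(1)
    return positions

def compute_input_positions_safe(prompt_tokens, image_grid_thw):
    """Guarded variant: reject the request when placeholder count does not
    match the number of supplied image grids, instead of crashing."""
    n_placeholders = sum(tok == IMAGE_PLACEHOLDER for tok in prompt_tokens)
    if n_placeholders != len(image_grid_thw):
        raise ValueError("placeholder count does not match supplied images")
    return compute_input_positions(prompt_tokens, image_grid_thw)
```

With one image grid supplied, `compute_input_positions(["hi", "<|image_pad|>"], [(2, 3, 4)])` returns normally; with the grid list empty, the unguarded version raises IndexError while the guarded one rejects the request with a recoverable ValueError.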

CWE-129: Improper Validation of Array Index

CWE Status: Draft
Common Consequences
Security Scopes Affected:
Integrity Availability Confidentiality
Potential Impacts:
DoS: Crash, Exit, or Restart; Modify Memory; Read Memory; Execute Unauthorized Code or Commands
Applicable Platforms
Languages: C, C++, Not Language-Specific
View CWE Details
https://github.com/vllm-project/vllm/issues/32656
https://github.com/vllm-project/vllm/security/advisories/GHSA-hpv8-x276-m59f