CVE-2026-33298

Published: Mar 24, 2026 Last Modified: Mar 24, 2026
HIGH 7.8
Attack Vector: local
Attack Complexity: low
Privileges Required: none
User Interaction: required
Scope: unchanged
Confidentiality: high
Integrity: high
Availability: high

Description


llama.cpp provides inference of several LLM models in C/C++. Prior to b7824, an integer overflow vulnerability in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. The overflow causes `ggml_nbytes` to return a far smaller size than is actually required (e.g., 4 MB instead of exabytes), leading to a heap-based buffer overflow when the application subsequently processes the tensor. This memory corruption can potentially lead to Remote Code Execution (RCE). Version b7824 contains a fix.
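As a minimal sketch of the underlying CWE-190 issue (hypothetical code, not taken from llama.cpp), the following C program shows how an unchecked product of attacker-controlled tensor dimensions can wrap around a 64-bit integer, so the computed byte count comes out as a few megabytes even though the true size is on the order of exabytes; a heap buffer sized from the wrapped value is then too small for the tensor data that follows.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical size computation, analogous in spirit to multiplying GGUF
 * tensor dimensions without overflow checks. Not llama.cpp code. */
static uint64_t tensor_nbytes(uint64_t ne0, uint64_t ne1, uint64_t type_size) {
    /* Unchecked multiplication: a crafted product wraps modulo 2^64. */
    return ne0 * ne1 * type_size;
}

int main(void) {
    /* Attacker-chosen dimensions: the true size is 2^64 + 2^22 bytes
     * (~16 EiB), but the 64-bit product wraps to 2^22 = 4 MiB. */
    uint64_t ne0 = (1ULL << 62) + (1ULL << 20);
    uint64_t ne1 = 1;
    uint64_t type_size = 4; /* e.g. 4-byte elements */

    uint64_t nbytes = tensor_nbytes(ne0, ne1, type_size);
    printf("computed nbytes = %llu\n", (unsigned long long)nbytes);

    /* A buffer allocated with the wrapped value passes naive size checks,
     * and copying the real tensor payload into it overflows the heap. */
    return 0;
}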

CWE-122: Heap-based Buffer Overflow (Draft)

Common Consequences
Security Scopes Affected: Availability, Integrity, Confidentiality, Access Control, Other
Potential Impacts: DoS: Crash, Exit, or Restart; DoS: Resource Consumption (CPU); DoS: Resource Consumption (Memory); Execute Unauthorized Code or Commands; Bypass Protection Mechanism; Modify Memory; Other
Applicable Platforms
Languages: C, C++, Memory-Unsafe
CWE-190: Integer Overflow or Wraparound (Stable)

Common Consequences
Security Scopes Affected: Availability, Integrity, Confidentiality, Access Control, Other
Potential Impacts: DoS: Crash, Exit, or Restart; DoS: Resource Consumption (Memory); DoS: Instability; Modify Memory; Execute Unauthorized Code or Commands; Bypass Protection Mechanism; Alter Execution Logic; DoS: Resource Consumption (CPU)
Applicable Platforms
Languages: C, Not Language-Specific
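The general mitigation for CWE-190 in a size computation like this is to check each multiplication for wraparound before trusting the result. The sketch below uses the GCC/Clang `__builtin_mul_overflow` builtin to reject the crafted dimensions from the example above; it illustrates the pattern only and is not the actual b7824 patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical overflow-checked size computation (GCC/Clang builtins).
 * Illustrates the CWE-190 mitigation pattern; not the actual b7824 fix. */
static bool checked_nbytes(uint64_t ne0, uint64_t ne1, uint64_t type_size,
                           uint64_t *out) {
    uint64_t tmp;
    if (__builtin_mul_overflow(ne0, ne1, &tmp))      return false;
    if (__builtin_mul_overflow(tmp, type_size, out)) return false;
    return true;
}

int main(void) {
    uint64_t nbytes;
    /* The crafted dimensions from the earlier sketch are now rejected. */
    if (!checked_nbytes((1ULL << 62) + (1ULL << 20), 1, 4, &nbytes)) {
        fprintf(stderr, "tensor size overflows 64 bits: rejecting file\n");
        return 1;
    }
    printf("nbytes = %llu\n", (unsigned long long)nbytes);
    return 0;
}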
References
https://github.com/ggml-org/llama.cpp/releases/tag/b7824
https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-96jg-mvhq-q7q7