CVE-2026-34070
HIGH
7.5
Source: [email protected]
Attack Vector: Network
Attack Complexity: Low
Privileges Required: None
User Interaction: None
Scope: Unchanged
Confidentiality: High
Integrity: None
Availability: None
Description
LangChain is a framework for building agents and LLM-powered applications. Prior to version 1.2.22, multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples). This issue has been patched in version 1.2.22.
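A minimal sketch of the vulnerable pattern, assuming the pre-1.2.22 behavior described above. The config keys shown (_type, input_variables, template_path) follow the prompt-serialization format; the target path and the scenario of a config parsed from a request body are illustrative assumptions, not taken from the advisory:

```python
# Sketch: load_prompt_from_config() (pre-1.2.22) reads template_path
# from the config dict without confining it to a base directory, so any
# .txt file readable by the process can be pulled into the prompt.
from langchain_core.prompts.loading import load_prompt_from_config

# Attacker-influenced config, e.g. deserialized from a request body.
# The absolute path below is hypothetical; only the .txt extension
# check constrains which files can be read.
malicious_config = {
    "_type": "prompt",
    "input_variables": [],
    "template_path": "/home/app/.secrets/api_key.txt",  # hypothetical target
}

prompt = load_prompt_from_config(malicious_config)
# The file's contents now sit in prompt.template and can leak through
# the application's normal prompt/response flow.
print(prompt.template)
```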
CWE-22
Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
Status: Stable
Common Consequences
Security Scopes Affected:
Integrity
Confidentiality
Availability
Potential Impacts:
Execute Unauthorized Code or Commands
Modify Files or Directories
Read Files or Directories
DoS: Crash, Exit, or Restart
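A hedged mitigation sketch for this weakness class: before handing a user-influenced path to any loader, resolve it and verify it stays inside an approved base directory. The ALLOWED_DIR value and the safe_load_prompt helper are illustrative, not LangChain API; upgrading langchain-core to 1.2.22 remains the primary fix, and this containment check is defense in depth:

```python
# Generic CWE-22 mitigation: canonicalize the path, then require that
# the result stays contained in an approved base directory.
from pathlib import Path

from langchain_core.prompts.loading import load_prompt

ALLOWED_DIR = Path("/srv/app/prompts").resolve()  # illustrative base dir

def safe_load_prompt(user_path: str):
    """Load a prompt only if the resolved path stays under ALLOWED_DIR."""
    candidate = (ALLOWED_DIR / user_path).resolve()
    # resolve() collapses ".." segments and symlinks, so a traversal
    # attempt like "../../etc/secret.txt" escapes ALLOWED_DIR and fails
    # the containment check below (Path.is_relative_to, Python 3.9+).
    if not candidate.is_relative_to(ALLOWED_DIR):
        raise ValueError(f"Path escapes allowed directory: {user_path}")
    return load_prompt(candidate)

# Usage: "greeting.txt" resolves inside ALLOWED_DIR and loads normally;
# "../../../../etc/passwd" raises ValueError before any file is read.
```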
Applicable Platforms
Technologies:
AI/ML
https://github.com/langchain-ai/langchain/commit/27add913474e01e33bededf4096151…
https://github.com/langchain-ai/langchain/releases/tag/langchain-core==1.2.22
https://github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff…