LangChain core vulnerability allows prompt injection and data exposure

A critical flaw in LangChain Core could allow attackers to steal sensitive secrets and manipulate LLM responses via prompt injection. LangChain Core (langchain-core) is a key Python package in the LangChain ecosystem that provides core interfaces and model-agnostic tools for building LLM-based applications. A critical vulnerability, tracked as CVE-2025-68664 (CVSS score of 9.3), affects the […]

This article has been indexed from Security Affairs

Read the original article: