⬤ LangChain, one of the most popular AI development frameworks, is facing serious scrutiny after researchers uncovered a severe security flaw in LangChain Core. The vulnerability lets attackers extract sensitive secrets and manipulate large language model output through prompt injection. Tagged as CVE-2025-68664 with a CVSS severity score of 9.3, this flaw ranks among the highest-risk vulnerabilities in AI infrastructure. The issue exploits how user data containing "lc keys" gets deserialized and trusted within the system.
⬤ The weakness allows prompt injection to trigger exploits through normal LLM responses. Since the affected data is treated as trusted serialized objects, malicious content hidden in a model's reply can execute during deserialization. This opens a direct path for attackers to steal stored secrets, API keys and potentially hijack application behavior. What makes this particularly dangerous is how easily it can be triggered through regular AI interactions.
⬤ The vulnerability exposes how AI pipelines can be compromised when frameworks automatically trust structured data returned by language models. It's a stark reminder that AI responses themselves can become attack vectors—not just the prompts going in.
⬤ This matters because as AI deployments accelerate across industries, security flaws in foundational frameworks like LangChain could shake confidence and force companies to rethink their risk controls and resilience planning across the entire AI ecosystem.
Peter Smith
Peter Smith