Last week, researchers at OX Security published findings that should stop every security leader in their tracks. They discovered a critical vulnerability baked directly into Anthropic’s Model Context Protocol SDK, affecting every supported language: Python, TypeScript, Java, and Rust. The result: remote code execution on any system running a vulnerable MCP implementation, with direct access to sensitive user data, internal databases, API keys, and chat histories.
Over 7,000 publicly accessible servers. More than 150 million downloads. Ten CVEs spanning LiteLLM, LangChain, LangFlow, Flowise, and others.
This isn’t a single bug someone forgot to patch. This is a design decision that propagated silently into every downstream library and every project that trusted the protocol. Anthropic reviewed the findings and called the behavior “expected.”
Let that sink in, then picture this:
A developer at your company stands up a LangChain-based agent and connects it to an internal MCP server that has access to your customer database. The agent is working as designed: answering queries, pulling records, doing its job. Now an attacker sends a carefully crafted prompt through your public-facing interface. That prompt manipulates the MCP configuration through what researchers call a zero-click prompt injection, silently redirecting the STDIO interface to execute an arbitrary OS command on the server. In seconds, the attacker has your database credentials, your internal API keys, and a live shell on the machine running your agentic workflow. No authentication required. No alerts fired. Your SIEM never saw it because the action happened at the MCP layer, which nobody was watching.
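To see why manipulating MCP configuration amounts to code execution, it helps to look at how a stdio transport typically works: the client spawns the server as a child process and speaks JSON-RPC over its stdin/stdout, with the command taken straight from configuration. The sketch below is illustrative only, not Anthropic's SDK code; `build_launch_command`, `my_mcp_server`, and the payload path are hypothetical names.

```python
import subprocess

def build_launch_command(config: dict) -> list[str]:
    # Hypothetical helper mirroring how a stdio transport assembles
    # the child-process invocation from server configuration. Note
    # the absence of any allowlist or validation step: whatever the
    # config names is what gets executed.
    return [config["command"], *config.get("args", [])]

def launch_stdio_server(config: dict) -> subprocess.Popen:
    # The client then exchanges JSON-RPC messages over the child's
    # stdin/stdout pipes.
    return subprocess.Popen(
        build_launch_command(config),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

# Expected use: launch a legitimate server module.
trusted = {"command": "python", "args": ["-m", "my_mcp_server"]}

# If injected model output can rewrite this configuration, the same
# unguarded code path runs an attacker-chosen program with the
# agent's privileges -- no authentication, no second check.
injected = {"command": "/tmp/payload.sh", "args": []}
```

The point is not that the launcher is buggy; it does exactly what the protocol asks. The exposure is that nothing between the model's output and `subprocess.Popen` questions what is being launched.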
We’ve been saying for months that MCP servers are the most dangerous unmonitored layer in your agentic infrastructure. This proves it.
The problem isn’t the model. It never was.
When most people think about AI security, they think about the LLM itself. Prompt injection. Jailbreaks. Model safety. Those are real concerns, but they’re not where your biggest exposure lives in a production agentic environment.
Your agents don’t just think. They act. They use MCP servers to connect to your internal systems, databases, APIs, and SaaS applications. MCP is the hands of your agentic infrastructure. And right now, for most organizations, those hands are completely invisible to your security team.
One architectural decision, made once, opens up remote code execution across your entire agentic stack. Not because someone wrote bad code. Because the protocol itself was never designed with security as a first principle.
This is a supply chain problem, not a patching problem.
A few of the affected vendors have issued patches. Most haven’t. Anthropic has declined to change the underlying architecture. That means every developer who inherits MCP code inherits the risk, whether they know it or not.
This is the pattern we’ve seen play out in API security for years. The vulnerability isn’t in one place. It’s structural. It lives in the trust relationships between components, in the defaults that no one questions, in the interfaces that no one monitors. The only way to address it is with visibility across the entire attack surface, not just point fixes on individual CVEs.
What you need to do now.
If your organization is deploying AI agents, you need answers to three questions today:
- Which MCP servers are running in your environment, and what do they have access to? Most security teams don’t know. MCP servers connect agents to your most sensitive systems, and they’re being stood up faster than anyone is tracking them.
- What actions can those servers execute, and on whose authority? The vulnerability works precisely because MCP’s STDIO interface allows arbitrary OS command execution with minimal authentication. You need to understand what your agents are authorized to do and monitor what they actually do.
- Where does your agentic infrastructure touch external APIs? The breach path in scenarios like this almost always ends at an API. Sensitive data, API keys, database credentials: these are the targets, and APIs are how they get exfiltrated.
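A first pass at the first question can be as simple as parsing the MCP client configs already on your hosts. The sketch below assumes the common `mcpServers` JSON shape used by popular MCP clients; the function name and the decision to report only environment-variable *names* (never values) are choices of this example, not a standard tool.

```python
import json
from pathlib import Path

def inventory_mcp_servers(config_paths: list[str]) -> list[dict]:
    """Parse MCP client config files and report each configured
    server's launch command -- a starting point for answering
    'what is running, and what can it execute?'"""
    findings = []
    for path in config_paths:
        p = Path(path)
        if not p.is_file():
            continue
        try:
            data = json.loads(p.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed config; skip
        for name, server in data.get("mcpServers", {}).items():
            findings.append({
                "config": str(p),
                "server": name,
                "command": server.get("command", ""),
                "args": server.get("args", []),
                # Report only the names of injected env vars, never
                # their values, to avoid copying secrets around.
                "env_keys": sorted(server.get("env", {})),
            })
    return findings
```

An inventory like this tells you which binaries your agents are authorized to spawn and which credentials ride along with them; it does not tell you what the agents actually do at runtime, which still requires monitoring at the MCP layer itself.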
At Salt, we built the Agentic Security Graph specifically to answer these questions. It’s the only framework that gives security teams full visibility and control across all three layers of agentic infrastructure: the LLM, the MCP servers, and the APIs they call. Not because we predicted this exact vulnerability, but because we understood the structural problem from the start.
The MCP attack surface is not theoretical anymore.