An AI Agent Didn’t Hack McKinsey. Its Exposed APIs Did.

This week’s McKinsey incident should be a wake-up call for every enterprise moving fast to deploy AI.

Not because AI itself is inherently insecure.

But because too many organizations are still thinking about AI security at the model layer, while the real enterprise risk sits in the action layer: the APIs, MCP servers, internal services, and shadow integrations that AI agents can reach, invoke, and manipulate.

That is the part most companies still do not see.

The technical details matter here. Public reporting described an internal AI platform with a broad API footprint, including more than 200 documented endpoints and a set of unauthenticated APIs that could allegedly be reached externally. The same reporting described potential exposure paths to tens of millions of chat messages, hundreds of thousands of files, user accounts, and system prompts. Whether or not every possible impact was realized, the takeaway for security leaders is clear: when internal AI systems are wired into weakly governed APIs, the blast radius can become enormous very quickly.

And this is not an isolated case.

The McDonald’s AI hiring incident points to the same structural problem. Different companies. Different workflows. Same core mistake. Reporting on that case described exposed administrative access, weak authentication practices, and the potential exposure of a massive pool of applicant records. Again, the story was not just about the chatbot. It was about the application and API infrastructure around it.

That is the lesson the market needs to understand.

The real risk is not the LLM. It is what the agent can do.

A lot of the AI security market today is focused on prompts, model behavior, jailbreaks, and output controls.

Those matter.

But they are only one layer.

In the enterprise, AI agents do not create value by talking. They create value by taking action. They retrieve data, call APIs, invoke tools, access systems, trigger workflows, and increasingly operate through MCP servers and connected services.

That means the real blast radius of AI is determined by the action layer.

  • If an internal API is left exposed without authentication, an agent can find it.
  • If a shadow service is internet-accessible, an agent can reach it.
  • If an MCP server is misconfigured, an agent can use it.
  • If sensitive business logic is sitting behind undocumented or forgotten endpoints, an agent can chain those calls together at machine speed.
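Every failure mode above reduces to the same primitive: a request that should have been rejected is served. As a minimal sketch of the missing check, here is a bearer-token gate using only the Python standard library; the header layout and token handling are illustrative, not drawn from any incident report:

```python
import hmac

def is_authorized(headers: dict, expected_token: str) -> bool:
    """Reject any request that lacks a valid bearer token.

    `headers` is a case-normalized dict of HTTP headers; in practice
    the expected token would come from a secrets manager, not code.
    """
    auth = headers.get("authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking a timing side channel.
    return hmac.compare_digest(supplied, expected_token)

# An unauthenticated call -- the kind described in the reporting --
# is refused instead of served.
print(is_authorized({}, "s3cret"))                                  # no header
print(is_authorized({"authorization": "Bearer s3cret"}, "s3cret"))  # valid token
```

The point is not that this check is hard to write. It is that on a weakly governed internal API, nobody wrote it at all.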

This is why the industry framing of “AI security” is still too narrow. The attack surface is no longer just the model. It is the full connected system around it.

The McKinsey and McDonald’s breaches are the same story

At first glance, these incidents look different. McKinsey was an internal AI platform. McDonald’s was an AI-powered hiring workflow.

But structurally they are the same. Both point to a growing enterprise reality: organizations are connecting AI systems to internal and external application infrastructure faster than they are securing that infrastructure.

And in many cases, the weakest point is not a sophisticated model exploit. It is a plain old exposed API, weak authentication, a forgotten endpoint, misconfigured access control, or a third-party integration that quietly became internet-reachable.

That is exactly why I believe one of the most dangerous categories emerging right now is shadow APIs connected to agents.

These are internal or lightly governed APIs that were never meant to become part of an external attack surface, but once they are connected to copilots, workflows, MCP servers, browser agents, coding agents, or AI applications, they effectively become part of one.

The company still thinks of them as “internal.” The attacker does not.
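One pragmatic control for this gap is to make the agent's reach explicit: the agent may only invoke endpoints an operator has deliberately approved, so a forgotten "internal" API cannot silently become a tool. A sketch of such a deny-by-default gate, with endpoint names invented purely for illustration:

```python
class ToolCallDenied(Exception):
    """Raised when an agent attempts a call outside its approved surface."""
    pass

# Explicit allowlist: endpoints an operator has reviewed and approved
# for agent use. Anything absent -- including shadow APIs the company
# still thinks of as "internal" -- is denied by default.
AGENT_ALLOWLIST = {
    ("GET", "/api/v1/tickets"),
    ("POST", "/api/v1/tickets/comment"),
}

def gate_tool_call(method: str, path: str) -> None:
    """Check the allowlist before the agent's HTTP client sends anything."""
    if (method.upper(), path) not in AGENT_ALLOWLIST:
        raise ToolCallDenied(f"{method} {path} is not an approved agent tool")

gate_tool_call("GET", "/api/v1/tickets")           # approved, returns quietly
try:
    gate_tool_call("GET", "/internal/debug/dump")  # shadow endpoint, refused
except ToolCallDenied as e:
    print(e)
```

The design choice that matters is the default: connectivity the operator did not explicitly grant does not exist, which is the inverse of how most agent integrations are wired today.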

The blind spot: shadow APIs plus agent connectivity

This is the gap I worry about most for enterprises today. Every company has APIs it knows about. Many also have APIs they have forgotten, never fully documented, or do not realize are externally reachable.

Now add AI. The moment an agent is connected to those systems, or an MCP server is exposed with access to them, the attack surface expands dramatically.

What used to be obscure, low traffic, and semi-internal becomes:

  • Discoverable
  • Callable
  • Chainable
  • Exploitable at machine speed
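The first step out of that blind spot is inventory: diff what is actually reachable against what is documented, and treat the remainder as shadow surface. A toy sketch of that diff follows; both endpoint lists are invented, and in practice the "observed" set would come from gateway logs or traffic capture rather than a hard-coded literal:

```python
# Endpoints the API gateway or traffic capture actually saw in use.
observed = {
    "/api/v1/chat/messages",
    "/api/v1/files",
    "/internal/admin/export",   # never documented -- shadow surface
}

# Endpoints the team has documented, reviewed, and governs.
documented = {
    "/api/v1/chat/messages",
    "/api/v1/files",
}

# Anything reachable but undocumented is exactly the surface an agent
# can discover, call, and chain faster than a human reviewer will.
shadow = observed - documented
for path in sorted(shadow):
    print(f"SHADOW API: {path} is reachable but undocumented")
```

The set arithmetic is trivial; the hard part, and the part most organizations skip, is producing an honest "observed" list in the first place.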

That

[…]

This article has been indexed from Security Boulevard
