How to fix cybersecurity’s agentic AI identity crisis

<p>The rapid adoption of agentic AI is radically shifting how enterprises operate, automate workflows and interact with digital systems. Autonomous <a href="https://www.techtarget.com/searchenterpriseai/definition/AI-agents">AI agents</a> — intelligent systems capable of executing commands, accessing sensitive data and making decisions on behalf of users — represent both tremendous business opportunities and profound security risks.</p>
<p>AI agents exist in a liminal space between tools and actors. Unlike traditional software applications that operate within clearly defined boundaries, they possess agency, make autonomous decisions and interact with systems using credentials and permissions. This creates a fundamental identity problem and one of the most pressing challenges in enterprise cybersecurity today: Who or what is truly responsible when an agent takes an action? Is it the human who deployed the agent, the organization that owns the infrastructure or the agent itself?</p>
<p>When agents are compromised or manipulated, ambiguity around agent identity and authentication becomes a critical vulnerability. Traditional security models built around human identity and authentication struggle to accommodate digital entities that operate autonomously, learn from interactions and execute actions without real-time human oversight. To protect themselves against catastrophic security failures, enterprises must establish clear frameworks governing agent identity, authentication, authorization and accountability.</p>
<div class="extra-info">
<div class="extra-info-inner">
<h3 class="splash-heading">Exhibit A: OpenClaw's vulnerabilities</h3>
<p>OpenClaw — formerly known as Clawdbot and Moltbot — is an open-source AI agent that runs locally on users’ machines. These agents have deep system access, controlling such functions as terminal commands, file system operations, email, calendar and browsers. Despite launching only in November 2025, OpenClaw rapidly gained viral popularity and, in turn, the attention of security researchers — who uncovered a cascade of critical vulnerabilities.</p>
<p>The OpenClaw architecture created an especially dangerous attack surface because agents run with elevated privileges on users’ host machines, lack sandboxing by default and periodically fetch updates from external sources.</p>
<p>This design enabled prompt injection attacks, supply chain attacks and coordinated compromises across connected instances. Researchers scanning internet-facing OpenClaw deployments found exposed admin interfaces, leaked API keys, OAuth tokens and conversation histories stored in plaintext.</p>
</div>
</div>
<section class="section main-article-chapter" data-menu-title="Building a framework for enterprise AI agent security">
<h2 class="section-title"><i class="icon" data-icon="1"></i>Building a framework for enterprise AI agent security</h2>
<p>To secure their agentic AI deployments, enterprises need to implement some fundamental security principles. Agentic identity and authentication must move beyond simple API keys toward robust, verified identity frameworks that establish clear chains of custody and accountability. Consider the following:</p>
<h3>Agent authorization and privilege management</h3>
<p>Permissions should follow <a href="https://www.techtarget.com/searchsecurity/feature/How-to-implement-zero-trust-security-from-people-who-did-it">zero-trust principles</a>, granting agents only the minimum necessary access — including time-bounded authorizations that expire automatically — to perform specific, sanctioned tasks. Implement <a href="https://www.techtarget.com/searchsecurity/definition/role-based-access-control-RBAC">role-based access control</a> for agents, segregate duties to prevent any single agent from executing high-risk operations independently and maintain AI audit trails that capture every agent action with full context.</p>
<p>Critical operations should require human approval, with MFA mandated for sensitive actions and clear escalation paths defined for anomalous requests.</p>
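<p>The least-privilege and time-bounding principles above can be sketched in a few lines of Python. This is an illustrative sketch only, not a production authorization system; the <code>AgentGrant</code> structure, scope names and TTL values are hypothetical.</p>

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentGrant:
    """A short-lived, narrowly scoped credential issued to one agent."""
    agent_id: str
    scopes: frozenset
    expires_at: float
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))

def issue_grant(agent_id, scopes, ttl_seconds=300):
    # Time-bounded authorization: the grant expires automatically.
    return AgentGrant(agent_id, frozenset(scopes), time.time() + ttl_seconds)

def authorize(grant, action, now=None):
    """Deny by default: the action must be in scope and the grant unexpired."""
    now = time.time() if now is None else now
    if now >= grant.expires_at:
        return False  # an expired grant is useless even if later leaked
    return action in grant.scopes

grant = issue_grant("report-agent-7", {"crm:read"}, ttl_seconds=300)
print(authorize(grant, "crm:read"))                        # True: in scope
print(authorize(grant, "crm:delete"))                      # False: out of scope
print(authorize(grant, "crm:read", now=time.time() + 600)) # False: expired
```

<p>In a real deployment, the grant issuance and every authorization decision would also be written to the audit trail, so each agent action can be traced back to a specific, time-bounded credential.</p>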
<h3>Agent isolation and sandboxing</h3>
<p>Running agents with unrestricted host access carries potentially catastrophic risks. Instead, deploy agents only in isolated containers or VMs with minimal privileges, restricted by network segmentation to limit lateral movement and bound by runtime application self-protection to detect and block malicious behavior. Only execute code in sandboxed environments with strict resource limits, monitored file system access and network connections that prohibit access to unauthorized destinations.</p>
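<p>One way to approximate the strict resource limits described above is to run agent-generated code in a child process with hard OS-level caps. The Python sketch below is POSIX-only (it relies on <code>preexec_fn</code> and the <code>resource</code> module) and purely illustrative: it caps CPU time, memory and open files, but it is not a substitute for containers, VMs or network segmentation.</p>

```python
import resource
import subprocess
import sys

def run_sandboxed(code, timeout=5, cpu_seconds=2, mem_bytes=256 * 1024 * 1024):
    """Execute untrusted agent-generated code in a resource-capped child process."""
    def limit_resources():
        # Runs in the child just before exec, so the caps apply to the tool run.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        resource.setrlimit(resource.RLIMIT_NOFILE, (32, 32))
    return subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, no user site dirs
        capture_output=True, text=True,
        timeout=timeout, preexec_fn=limit_resources,
    )

result = run_sandboxed("print(2 + 2)")
print(result.stdout.strip())  # 4
```

<p>The wall-clock <code>timeout</code> kills runaway processes that sleep rather than compute, while the CPU and memory rlimits stop busy loops and allocation bombs; network restrictions still have to come from the surrounding container or firewall.</p>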
<h3>Prompt injection defenses</h3>
<p>Agents that process external […]</p>
</section>

This article has been indexed from Search Security Resources and Information from TechTarget
