<p>Agentic AI isn’t just amplifying insider risk; it’s becoming an insider risk itself. In the wake of the AI explosion, organizations must revamp their insider risk management programs — and add AI agents to their lists of identities to manage.</p>
<p>In the last year, 90% of organizations experienced an insider threat incident, according to a report from Cybersecurity Insiders. A Ponemon report attributed nearly three-quarters of insider threat events to nonmalicious activity — negligence or error (53%) and compromised or manipulated users (20%) — while 27% had malicious intent.</p>
<p>Generative AI and agentic AI will only make these issues worse — and IT and cybersecurity pros know it. A full 94% of respondents to the Cybersecurity Insiders report said they believe AI will heighten their exposure to insider risks.</p>
<p>Two separate sessions at <a href="https://www.techtarget.com/searchsecurity/conference/RSA-Conference-news-and-analysis">RSAC 2026 Conference</a> covered the intersection of AI and identity management, with insights on how to address the challenges and risks.</p>
<section class="section main-article-chapter" data-menu-title="How agentic AI amplifies human insider risk">
<h2 class="section-title"><i class="icon" data-icon="1"></i>How agentic AI amplifies human insider risk</h2>
<p>Shadow AI — the use of AI apps or services within an organization without explicit approval, oversight or monitoring — has become an <a href="https://www.techtarget.com/searchsecurity/tip/Shadow-AI-How-CISOs-can-regain-control-in-2026">increasingly prevalent challenge</a>.</p>
<p>According to a Netskope report, 47% of employees use their personal GenAI accounts at work. Employees cite a variety of reasons for doing so, including the following:</p>
<ul class="default-list">
<li>They are more comfortable using apps they are familiar with.</li>
<li>Their organizations have not adopted sanctioned enterprise-grade tools.</li>
<li>They want to use AI for productivity and efficiency reasons.</li>
<li>They find consumer-grade tools easier to use.</li>
</ul>
<p>“Ninety-eight percent of us in this room, myself included, have unsanctioned AI inside our organizations,” said Rob Juncker, chief product officer at Mimecast.</p>
<p>Shadow AI introduces data loss and security challenges, can result in regulatory violations and, without the IT and security team’s oversight, lacks governance. That, in turn, means such tools could generate <a href="https://www.techtarget.com/searchenterpriseai/tip/Why-does-AI-hallucinate-and-can-we-prevent-it">hallucinations</a> and <a href="https://www.techtarget.com/searchenterpriseai/feature/The-AI-bias-playbook-Mitigation-strategies-for-CIOs">biased outputs</a> that influence corporate projects.</p>
<p>“The reality is that we can’t tolerate this for much longer,” Juncker said.</p>
<p>Another major challenge is <a href="https://www.techtarget.com/searchenterpriseai/answer/How-bad-is-generative-AI-data-leakage-and-how-can-you-stop-it">AI data leakage</a>. AI models rely on input data to output results. Too often, employees feed sensitive data to AI tools. According to a Harmonic Security report, 4.37% of prompts and 22% of files uploaded to GenAI tools contain confidential company information, including source code, credentials and employee or customer data.</p>
<p>“If your organization has 100 users sending an average of 20 prompts a day, that amounts to 80 prompts that expose sensitive data and a massive 400 files [or so] being sent outside your organization every day,” Juncker said.</p>
<p>Employees usually share this data with AI tools unknowingly, for several reasons:</p>
<ul class="default-list">
<li>They want to improve productivity, or the tools are simply convenient.</li>
<li>They are unaware that AI tools store and use the data in their prompts.</li>
<li>They lack an enterprise-grade tool at their organization.</li>
<li>They don’t understand, or are unaware of, the security consequences.</li>
</ul>
<p>A third risk — one that nonmalicious insiders have been falling victim to for decades — is phishing campaigns. AI has enabled attackers to craft scams without the <a href=”https://www.techtarget.com/searchsecurity/feature/How-to-avoid-phishing-hooks-A-checklist-for-your-end-users”>telltale signs of phishing</a>. “AI-generated emails with flawless language can get by people — all of a sudden, your Nigerian prince has perfect English,” said Ira Winkler, field CISO at Aisle, an AI-native vulnerability management vendor.</p>
<p>Manipulated insiders are also falling victim to spear-phishing campaigns, in which attackers use AI to scrape social media sites and create targeted emails, and to deepfake scams.</p>