<p>As with many technologies, AI and cybersecurity are becoming increasingly intertwined. An organization can expect AI to support the cybersecurity mission in multiple ways, including reducing overall risk, boosting efficiency and making security more cost-effective.</p>
<p>What’s not easy to determine is the ROI of AI cybersecurity investments.</p>
<section class="section main-article-chapter" data-menu-title="Measuring AI’s ROI: Metrics matter">
<h2 class="section-title"><i class="icon" data-icon="1"></i>Measuring AI’s ROI: Metrics matter</h2>
<p>When it comes to AI investments in cybersecurity, the ROI conversation must begin with the right metrics. Not all value shows up on a balance sheet, so security leaders need to think across three distinct categories: efficiency gains, risk reduction and cost avoidance.</p>
<p>Efficiency gains are often the most immediate and measurable metric. AI can effectively multiply the capacity of a security team without adding head count. Rather than asking how many people AI replaces, ask how many more actions your existing team can take with AI’s assistance. The metric here is throughput, which is the number of incidents investigated, configurations reviewed or alerts triaged per analyst per day, before and after AI deployment.</p>
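<p>As a rough sketch of the throughput metric described above, the before/after comparison can be reduced to simple arithmetic. All figures below are hypothetical and for illustration only:</p>

```python
# Illustrative sketch (hypothetical numbers): measure throughput per analyst
# per day before and after an AI deployment, then compute the uplift.
def throughput_per_analyst(items_handled: int, analysts: int, days: int) -> float:
    """Alerts triaged (or incidents investigated) per analyst per day."""
    return items_handled / (analysts * days)

# Assumed baseline: 5 analysts triage 1,500 alerts over a 30-day period.
before = throughput_per_analyst(1500, 5, 30)   # 10.0 per analyst per day

# Assumed post-deployment: the same team handles 2,700 alerts in 30 days.
after = throughput_per_analyst(2700, 5, 30)    # 18.0 per analyst per day

uplift_pct = (after - before) / before * 100   # 80% throughput gain
```

<p>The same calculation works for any unit of security work — configurations reviewed, incidents investigated — as long as the baseline is captured before deployment.</p>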
<p>Risk reduction is harder to quantify, but it is arguably more important for <a href="https://www.techtarget.com/searchcio/feature/From-IT-to-ROI-Framing-cybersecurity-for-the-board">conversations with the board</a>. Relevant metrics include mean time to detect (<a href="https://www.techtarget.com/searchitoperations/definition/mean-time-to-detect-MTTD">MTTD</a>), mean time to respond (MTTR), the reduction in unaddressed vulnerabilities over a given period and improvements in coverage across the attack surface. Security leaders should also track whether AI is closing the gap on <a href="https://www.techtarget.com/searchsecurity/feature/How-AI-driven-patching-could-transform-cybersecurity">configuration and patch management work</a> that used to slip through the cracks. Closing that gap directly addresses the complaint that often stymies security organizations: “We didn’t catch that because we didn’t have enough people.”</p>
<p>The third category is cost avoidance. This includes avoided <a href="https://www.techtarget.com/searchsecurity/tip/How-to-calculate-the-cost-of-a-data-breach">breach costs</a>, reduced reliance on outside professional services for routine security hygiene and the cost differential between scaling AI capabilities and scaling head count to achieve the same outcomes. Reports from Gartner, <a target="_blank" href="https://www.ibm.com/reports/data-breach" rel="noopener">IBM</a> and others provide useful industry benchmarks on data breach costs that CISOs can use to anchor these estimates.</p>
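<p>A simple first-pass model can tie these three categories together. The sketch below is illustrative only; every input — the efficiency savings, the benchmark breach cost, the estimated probability reduction and the AI spend — is an assumption an organization would replace with its own figures:</p>

```python
# Illustrative sketch (all figures are assumptions, not benchmarks):
# a first-pass ROI model combining efficiency savings with the expected
# value of avoided breach losses.
def ai_security_roi(efficiency_savings: float,
                    breach_cost: float,
                    breach_prob_reduction: float,
                    ai_total_cost: float) -> float:
    """ROI = (gains - cost) / cost. Expected avoided loss is the benchmark
    breach cost times the estimated drop in annual breach probability."""
    expected_avoided_loss = breach_cost * breach_prob_reduction
    gains = efficiency_savings + expected_avoided_loss
    return (gains - ai_total_cost) / ai_total_cost

# Assumed inputs: $400K in analyst-time savings, a $4.4M benchmark breach
# cost, an estimated 5-point drop in annual breach likelihood, $500K AI spend.
roi = ai_security_roi(400_000, 4_400_000, 0.05, 500_000)
# gains = 400K + 220K = 620K; ROI = 120K / 500K = 0.24, i.e., 24%
```

<p>The model's weakest input is the probability reduction, which is an estimate rather than a measurement — a point the next section takes up directly.</p>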
</section>
<section class="section main-article-chapter" data-menu-title="The challenges of calculating ROI">
<h2 class="section-title"><i class="icon" data-icon="1"></i>The challenges of calculating ROI</h2>
<p>Even with the right metrics defined, calculating ROI for AI in cybersecurity is genuinely difficult.</p>
<p>When a breach does <i>not</i> occur, it’s nearly impossible to prove definitively that AI prevented it. Security has always struggled with this counterfactual challenge, and AI doesn’t solve it — it inherits it. The best approach is to establish clear baselines before deployment and track directional improvement over time rather than claiming precision that simply is not achievable.</p>
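<p>In practice, the baseline-and-trend approach described above amounts to capturing a metric such as MTTD before deployment and reporting the directional change afterward. A minimal sketch, with hypothetical quarterly samples:</p>

```python
# Illustrative sketch (hypothetical data): track directional MTTD improvement
# against a pre-deployment baseline rather than claiming causal precision.
from statistics import mean

baseline_mttd_hours = [30, 26, 34, 28]  # assumed pre-deployment samples
post_ai_mttd_hours = [22, 19, 17, 15]   # assumed post-deployment samples

baseline = mean(baseline_mttd_hours)    # 29.5 hours
current = mean(post_ai_mttd_hours)      # 18.25 hours

# Report the trend, not a causal claim: detection is ~38% faster on average.
trend_improvement_pct = (baseline - current) / baseline * 100
```

<p>The output is honest about what it measures: detection got faster after deployment, without asserting that AI alone prevented any specific breach.</p>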
<p>ROI calculations are also complicated by shadow AI. Measuring the return on sanctioned AI security tools without accounting for <a href="https://www.techtarget.com/searchsecurity/tip/Shadow-AI-How-CISOs-can-regain-control-in-2026">AI deployments that create risks elsewhere</a> will yield misleading results. Creating a complete inventory of AI usage — sanctioned and unsanctioned — is a prerequisite for any credible ROI analysis.</p>
<p>Another challenge is that AI outputs are not always reliable enough to act on. Organizations are confronting this in real time. For security use cases where a bad recommendation could take down a manufacturing line or open an attack vector, reliability isn’t optional. ROI calculations need to factor in the cost of human review and validation that responsible AI deployment requires.</p>
<p>AI tools perform based on the quality of the data, processes and people they operate against. Organizations that lack clean asset inventories, consistent logging or mature detection workflows will see lower returns than those that have done the foundational work. ROI projections that don’t account for an organization’s current level of maturity will overstate the likely returns.</p>
</section>