Data trust is the hidden reason most AI initiatives fail

Ready, Fire, AI.

Ninety percent of enterprises are already running Enterprise GenAI at scale. That number comes from new research conducted by MIND in partnership with CISO ExecNet, and it should give every security leader pause. Not because AI adoption is surprising, but because of what sits directly beneath it.

Although 90% of organizations are deploying Enterprise GenAI at scale, only 34% of CISOs describe themselves as reasonably confident in their AI data security controls. As a result, only one in five of those AI initiatives is meeting its intended KPIs.

The adoption curve and the confidence curve are moving in opposite directions. That gap is what this research was built to examine.

Why does AI adoption expose what poor data governance was hiding?

For years, poor data governance was survivable. Files went unclassified. Repositories stayed ungoverned. Access controls were written for human actors who exercised natural judgment about what they touched and when. None of it surfaced as a crisis because no system was scanning everything at once.

AI changed that equation entirely. The moment an Enterprise GenAI tool connects to a data source, it finds everything within reach. Unclassified files, overshared repositories and sensitive data that nobody realized was broadly accessible. At one organization, executive compensation files had been sitting in SharePoint for years with no classification or access controls. When an Enterprise AI tool was deployed, those files became broadly accessible to a wide internal audience overnight. Security by obscurity ended the moment AI came online.
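A minimal sketch makes the failure mode concrete. The assumptions here are illustrative, not MIND's product or SharePoint's API: a POSIX file share stands in for the repository, a hypothetical sidecar .label file stands in for a sensitivity label, and the world-readable permission bit is a crude proxy for oversharing. An AI connector walking a data source reaches exactly what an audit like this reaches:

```python
import os
import stat
from pathlib import Path

def is_classified(path: Path) -> bool:
    # Hypothetical convention for this sketch: a sidecar label file
    # (report.xlsx.label) marks a file as classified. Real repositories
    # such as SharePoint carry sensitivity labels as metadata instead.
    return Path(str(path) + ".label").exists()

def is_broadly_readable(path: Path) -> bool:
    # World-readable bit as a crude stand-in for "overshared".
    return bool(path.stat().st_mode & stat.S_IROTH)

def audit_share(root: str) -> list[Path]:
    """Flag files a newly connected AI tool could reach with no classification."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            f = Path(dirpath) / name
            if name.endswith(".label"):
                continue  # skip the sidecar markers themselves
            if not is_classified(f) and is_broadly_readable(f):
                findings.append(f)
    return findings

if __name__ == "__main__":
    for f in audit_share("/mnt/shared"):
        print(f"UNCLASSIFIED + BROADLY READABLE: {f}")
```

Anything a scan like this flags is something a newly connected GenAI tool will surface on day one.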

The research puts numbers to this reality.

  • 70% of security leaders struggle to enforce policies on GenAI tools
  • 66% cannot enforce policies on AI agents
  • 98% are dealing with at least one significant AI security challenge

These aren’t organizations without governance. Boards have been briefed. Policies have been written. Frameworks have been established. But as the research makes clear, governance without technical enforcement is intention without effect. For most organizations, the mechanisms capable of applying those policies against data in motion, at the speed AI demands, simply don’t exist yet.
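What technical enforcement means in practice is a control that sits in the request path, not in a policy document. The following is a minimal sketch of such a gate, assuming a Python middle layer in front of the model; the regex patterns and the names POLICY and enforce are illustrative stand-ins for a real classification service, not any vendor's API:

```python
import re

# Illustrative patterns only; a production control would call a
# classification service, not match a handful of regexes.
POLICY = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def enforce(prompt: str) -> str:
    """Run the written policy on every outbound call, at machine speed."""
    violations = [name for name, rx in POLICY.items() if rx.search(prompt)]
    if violations:
        raise PermissionError(f"prompt blocked by data policy: {violations}")
    return prompt

# Usage: wrap every call to the model behind the gate, e.g.
# response = genai_client.complete(enforce(user_prompt))
try:
    enforce("Summarize the case for SSN 123-45-6789")
except PermissionError as e:
    print(e)  # prompt blocked by data policy: ['ssn']
```

The point is the placement, not the patterns: the check runs on every call, with no human in the loop, which is what "enforcement at AI speed" requires.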

The deeper issue is structural. Every security framework in the enterprise was built with human actors in mind. Humans can be trained, audited and held accountable. Even privileged users exercise judgment about what they access and share. AI agents inherit the same permissions but operate without any of that judgment. They move at machine speed and find everything within reach, not just what’s relevant. Thirty-two percent of organizations already have unknown agents operating in their environments. The frameworks that were adequate before AI arrived are now being stress-tested at a scale they were never built to handle.
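One way to compensate for that missing judgment is to stop letting agents inherit human-scale permissions at all and grant them explicit, deny-by-default scopes instead. Here is a minimal sketch of that pattern; the names (AgentScope, can_read) are hypothetical, not a real product API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentScope:
    """Deny-by-default allowlist for a non-human actor (illustrative only)."""
    agent_id: str
    allowed_prefixes: set[str] = field(default_factory=set)

    def can_read(self, resource: str) -> bool:
        return any(resource.startswith(p) for p in self.allowed_prefixes)

def read(scope: AgentScope, resource: str) -> str:
    if not scope.can_read(resource):
        # The agent's reach is its explicit grant, not its inheritance.
        raise PermissionError(f"{scope.agent_id} denied access to {resource}")
    return f"{scope.agent_id} reads {resource}"

bot = AgentScope("support-bot", {"kb/"})
print(read(bot, "kb/faq.md"))        # allowed: inside the granted scope
# read(bot, "hr/compensation.xlsx")  # raises PermissionError: outside it
```

Under this model, an agent cannot stumble into executive compensation files simply because the account it runs as could.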

What does new research from 124 CISOs reveal about AI success and data trust?

MIND and CISO ExecNet set out to understand exactly where data trust is breaking down and what it means for AI success. The study combined a quantitative survey of 124 senior security leaders with 20 qualitative interviews with CISOs at organizations with more than 1,500 employees or over one billion dollars in annual revenue. All participants held VP-level roles or higher. The seven insights that emerged where the survey data and practitioner experience converged represent the strongest and most consistent patterns across the entire research project.

Those insights trace a connected arc.

  • The enforcement gap
  • The data debt problem
  • The structural mismatch between security frameworks designed for human actors and the non-human actors now operating against them
  • The measurable cost of AI initiative failure
  • The growing difficulty of communicating AI risk to a business that is committed to moving fast
  • The competitive advantage that flows to the organizations that solve it first

The central thesis is that data trust is not a security feature. It is the invisible but decisive ingredient that determines whether AI initiatives succeed or fail. When data trust is high, organizations can use data freely to power AI-driven outcomes. When it isn’t, AI innovation slows, scales poorly or introduces risk that most organizations can’t yet see.

MIND isn’t just reporting on this gap. We’re minding the conditions that close it, helping organizations achieve visibility into what data exists, extend governance to non-human actors and build enforcement that operates at AI speed. The organizations that build that foundation now aren’t just reducing exposure. They’re building the only i

[…]