A growing number of employees are using artificial intelligence tools that their companies have never approved, a new report by 1Password has found. The practice, known as shadow AI, is quickly becoming one of the biggest unseen cybersecurity risks inside organizations.
According to 1Password’s 2025 Annual Report, based on responses from more than 5,000 knowledge workers in six countries, one in four employees admitted to using unapproved AI platforms. The study shows that while most workplaces encourage staff to explore artificial intelligence, many employees do so without understanding the data privacy or compliance implications.
How Shadow AI Works
Shadow AI refers to employees relying on external or free AI services without oversight from IT teams. Workers may, for instance, use chatbots or generative tools to summarize meetings, write reports, or analyze data, even though those tools were never vetted for corporate use. Such platforms can store or learn from whatever information users enter into them, so sensitive company or customer data can end up being processed outside secure environments without anyone realizing it.
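To make the mechanism concrete, here is a minimal sketch of the data flow behind a typical shadow AI workflow. The endpoint, API key, and file name are hypothetical placeholders, not details from the 1Password report; the point is simply that everything the script reads gets shipped to an external provider.

```python
# Hypothetical illustration of a shadow AI workflow: an employee script
# that pastes internal notes into a public generative-AI API. The endpoint
# and key below are placeholders, not a real service.
import requests

API_URL = "https://api.example-ai.com/v1/chat"  # unapproved external AI service
API_KEY = "personal-free-tier-key"              # personal key, unknown to IT

# Whatever this file contains (names, figures, customer data) leaves
# the corporate boundary in the request below.
with open("q3_board_meeting_notes.txt") as f:
    notes = f.read()

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "Summarize these meeting notes:\n" + notes},
    timeout=30,
)
print(resp.json())  # convenient for the employee, invisible to the IT team
```

Nothing in this flow touches approved company infrastructure, which is why such usage typically surfaces only in network logs, if it surfaces at all.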
The 1Password study found that 73 percent of workers said their employers support AI experimentation, yet 37 percent admitted they do not fully follow their company’s AI policies.
[…]