Workplace AI Tools Now Top Cause of Data Leaks, Cyera Report Warns


A recent Cyera report reveals that generative AI tools like ChatGPT, Microsoft Copilot, and Claude have become the leading source of workplace data leaks, surpassing traditional channels like email and cloud storage for the first time. The alarming trend shows that nearly 50% of enterprise employees are using AI tools at work, often unknowingly exposing sensitive company information through personal, unmanaged accounts.

The research found that 77% of AI interactions in workplace settings involve actual company data, including financial records, personally identifiable information, and strategic documents. Employees frequently copy and paste confidential materials directly into AI chatbots, believing they are simply improving productivity or efficiency. However, many of these interactions occur through personal AI accounts rather than enterprise-managed ones, making them invisible to corporate security systems.

The critical issue lies in how traditional cybersecurity measures fail to detect these leaks. Most security platforms are designed to monitor file attachments, suspicious downloads, and outbound emails, but AI conversations appear as normal web traffic. Because data is shared through copy-paste actions within chat windows rather than direct file uploads, it bypasses the controls that conventional data loss prevention tools rely on.

[…]

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
