Protecting Sensitive Data When Employees Use AI Chatbots

 

In today’s digitised world, where artificial intelligence tools are rapidly reshaping how people work, communicate, and collaborate, a quiet but pressing risk has emerged: what individuals choose to share with chatbots may not remain private.
A patient may ask ChatGPT for advice about an embarrassing medical condition, or an employee may upload sensitive corporate documents to Google’s Gemini to generate a summary, yet the information they disclose may ultimately feed the algorithms that power these systems.
Many experts have noted that AI models are trained on large datasets scraped from across the internet, including blogs, news articles, and social media posts, often without user consent, raising not only copyright disputes but also significant privacy concerns.
Because machine learning processes are opaque, experts warn that once data has been ingested into a model’s training pool, it is almost impossible to remove. Individuals and businesses alike are therefore forced to ask how much trust they can place in tools that, while extremely powerful, may also expose them to unseen risks.

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
