Hugging Face ML Models Compromised with Silent Backdoors Aimed at Data Scientists
Research published Thursday by security firm JFrog, in a report that is a likely harbinger of what's to come, revealed that code uploaded to the AI developer platform Hugging Face covertly installed backdoors and other forms of malware on end-user machines.
The JFrog researchers said they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. All of the flagged machine learning models went undetected by Hugging Face, and most appeared to be benign proofs of concept uploaded by researchers or curious users who were unaware of any potential danger.
According to the JFrog report, about ten of them were "truly malicious" in that they performed actions that actually compromised users' security when they were loaded.

This blog post aims to broaden the conversation around the security of AI Machine Learning (ML) models, a subject that has long been neglected and that it is important to begin discussing now.

The JFrog Security Research team has been investigating ways in which machine learning models can be used to compromise the environment of a Hugging Face user, for example by executing code the moment a model is loaded.
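
The sketch below is a minimal, hypothetical illustration of that general class of issue, assuming a pickle-based model format (PyTorch's default serialization, for example, wraps Python's pickle). The file name and class name are invented for illustration, and the payload is a harmless print rather than real malware.

```python
import pickle

# Hypothetical illustration: pickle-based model formats can execute arbitrary
# code at load time, because unpickling honours __reduce__ on stored objects.
class PoisonedWeights:
    def __reduce__(self):
        # A real payload would return something like (os.system, ("<command>",)).
        # Here it is a harmless print so the sketch stays benign.
        return (print, ("code executed during model deserialization",))

# The "attacker" serializes the object into what looks like an ordinary model file.
with open("model.bin", "wb") as f:
    pickle.dump(PoisonedWeights(), f)

# The "victim" loads the file, directly or via a framework loader that wraps
# pickle, and the payload runs before any weights are ever used.
with open("model.bin", "rb") as f:
    pickle.load(f)  # the load itself executes the embedded callable
```

Tensor-only formats such as safetensors avoid this class of problem by design, since they store raw weights without any executable deserialization logic.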

The purpose of this post

[…]