Open-Source AI Models Pose Growing Security Risks, Researchers Warn

Hackers and other criminals can easily hijack computers running open-source large language models and use them for illicit activity, bypassing the safeguards built into major artificial intelligence platforms, researchers said on Thursday.

The findings are based on a 293-day study conducted jointly by SentinelOne and Censys, and shared exclusively with Reuters. 

The research examined thousands of publicly accessible deployments of open-source LLMs and highlighted a broad range of potentially abusive use cases.

According to the researchers, compromised systems could be directed to generate spam, phishing content, or disinformation while evading the security controls enforced by large AI providers. 

The deployments were also linked to activity involving hacking, hate speech, harassment, violent or graphic content, personal data theft, scams, fraud, and in some cases, child sexual abuse material.

While thousands of open-source LLM variants are available, a significant share of internet-accessible deployments was based on Meta’s Llama models, Google DeepMind’s Gemma, and other widely used systems, the researchers said.

They identified hundreds of instances in which safety guardrails had been deliberately removed.

“AI industry conversations about security controls are ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
