The findings are based on a 293-day study conducted jointly by SentinelOne and Censys, and shared exclusively with Reuters.
According to the researchers, compromised systems could be directed to generate spam, phishing content, or disinformation while evading the security controls enforced by large AI providers.
While thousands of open-source LLM variants are available, a significant share of internet-accessible deployments was based on Meta’s Llama models, Google DeepMind’s Gemma, and other widely used systems, the researchers said.
“AI industry conversations about security controls are ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive
[…]