Jailbroken Mistral And Grok Tools Are Used by Attackers to Build Powerful Malware


The latest findings from Cato Networks suggest that a number of jailbroken, uncensored AI tool variants marketed on hacker forums were probably built on well-known commercial large language models such as Mistral AI's models and X's Grok.

While some commercial AI companies have attempted to build safety and security safeguards into their models to stop them from writing malware outright, providing detailed instructions for building bombs, or engaging in other malicious behaviours, a parallel underground market has developed that sells more uncensored versions of the technology.

These “WormGPTs,” named after one of the first such AI tools promoted on underground hacker forums in 2023, are typically assembled from open-source models and other toolkits. They can generate code and find and analyse vulnerabilities, and they are sold and promoted online. However, two variants promoted on BreachForums in the past year had simpler roots, according to Cato Networks researcher Vitaly Simonovich.

Named af

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
