AI Package Hallucination – Hackers Abusing ChatGPT, Gemini to Spread Malware

The research investigates the persistence and scale of AI package hallucination, a technique in which LLMs recommend non-existent packages that attackers can then register and seed with malware. Building on earlier findings, the researchers used the LangChain framework to test a broader range of questions, programming languages (Python, Node.js, Go, .NET, and Ruby), and models (GPT-3.5-Turbo, GPT-4, Bard, and Cohere). The aim is […]
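To make the verification loop concrete, here is a minimal sketch (not the researchers' actual harness) of how such a test could work: prompt a model through LangChain for package recommendations, then check each suggested name against the PyPI registry. The prompt, the `suggested_packages` parsing heuristic, and the example question are illustrative assumptions, not details from the study; the sketch assumes the `langchain-openai` and `requests` packages are installed and `OPENAI_API_KEY` is set.

```python
import re
import requests
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

def suggested_packages(question: str) -> list[str]:
    """Ask the model a how-to question and extract pip-installable names from the reply."""
    reply = llm.invoke(question).content
    # Naive extraction: grab names that follow 'pip install' in the answer.
    return re.findall(r"pip install ([A-Za-z0-9_.\-]+)", reply)

def exists_on_pypi(name: str) -> bool:
    """PyPI's JSON API returns 404 for packages that were never published."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Example question is hypothetical; the study ran many such prompts per language.
for pkg in suggested_packages("How do I parse EDIFACT messages in Python?"):
    if not exists_on_pypi(pkg):
        # A hallucinated name: an attacker could register it and ship malware.
        print(f"hallucinated package: {pkg}")
```

Repeating this loop across many prompts, languages, and models is what lets the researchers measure how persistent a given hallucinated name is, which is exactly the property an attacker needs before squatting on it.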
