A newly discovered jailbreak technique named “sockpuppeting” forces 11 leading artificial intelligence models, including ChatGPT, Claude, and Gemini, to bypass their safety guardrails. By exploiting a standard application programming interface (API) feature with a single line of code, attackers can trick these models into generating malicious outputs without needing complex mathematical optimisation. When a […]
The post ChatGPT, Claude, and Gemini Among 11 AI Models Vulnerable to One-Line Jailbreak appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.