Large language models (LLMs) are vulnerable to jailbreak attacks that exploit their inability to recognize prompts rendered as ASCII art. ASCII art is a form of visual art created from characters in the ASCII (American Standard Code for Information Interchange) character set. Recently, researchers from several universities proposed a new jailbreak attack, ArtPrompt, that […]
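To illustrate the core idea, the sketch below renders a word as ASCII art using a tiny hand-rolled glyph table. This is a hypothetical minimal example of the encoding step such attacks rely on (masking a word behind character art), not the researchers' ArtPrompt implementation or its font set.

```python
# Minimal ASCII-art renderer: each letter is a 5-row glyph of '#' and '.'.
# Hypothetical illustration only; ArtPrompt itself uses its own fonts/tooling.
FONT = {
    "H": ["#..#", "#..#", "####", "#..#", "#..#"],
    "I": ["###", ".#.", ".#.", ".#.", "###"],
}

def render(word: str) -> str:
    """Join the glyphs of each letter row by row into one ASCII-art block."""
    return "\n".join(
        " ".join(FONT[ch][row] for ch in word) for row in range(5)
    )

print(render("HI"))
```

A word rendered this way remains readable to a human but is just a grid of `#` and `.` characters to a text-based safety filter, which is the gap ArtPrompt is reported to exploit.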
The post Researchers Hack AI Assistants Using ASCII Art appeared first on GBHackers on Security | #1 Globally Trusted Cyber Security News Platform.