Jailbreaking LLMs with ASCII Art

Researchers have demonstrated that rendering words as ASCII art can cause LLMs (GPT-3.5, GPT-4, Gemini, Claude, and Llama2) to ignore their safety instructions: a keyword-based safety filter never sees the sensitive word in plain text, yet the model can still read it out of the art and follow the request. Research paper.
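
For intuition, here is a minimal sketch of the masking idea: the sensitive word is rendered as ASCII art and substituted for its plain-text occurrence in the prompt. The pyfiglet library, the prompt template, and the [MASK] placeholder are illustrative assumptions here, not the paper's actual implementation.

```python
# Minimal sketch of the ASCII-art word-masking idea, for illustration only.
# Assumes the third-party pyfiglet library (pip install pyfiglet); the
# prompt template below is hypothetical, not the paper's exact wording.
import pyfiglet


def ascii_art_prompt(template: str, masked_word: str) -> str:
    """Render masked_word as ASCII art and splice it into the prompt.

    template should contain a [MASK] placeholder where the word would
    normally appear in plain text.
    """
    art = pyfiglet.figlet_format(masked_word, font="standard")
    return template.replace(
        "[MASK]",
        "the word spelled out in the ASCII art below:\n" + art,
    )


if __name__ == "__main__":
    # Benign example: the ASCII art stands in for the word "hello".
    template = "Please repeat [MASK] back to me."
    print(ascii_art_prompt(template, "hello"))
```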