Whispering poetry at AI can make it break its own rules

Malicious prompts rewritten as poems have been found to bypass AI guardrails. Which models resisted the poetic jailbreak test, and which failed?

This article has been indexed from Malwarebytes