IT Security News
Cybersecurity news and articles about information security, vulnerabilities, exploits, hacks, laws, spam, viruses, malware, breaches.


Jailbreaking LLMs with ASCII Art

2024-03-12 12:03

Researchers have demonstrated that putting words in ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude, and Llama2—to ignore their safety instructions.
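The core of the attack is that a keyword a text-level safety filter would match is redrawn as block letters, so the literal string disappears from the prompt while the letter shapes remain readable. The snippet below is a minimal illustrative sketch of that masking step, not the paper's implementation; the tiny 3-row font is my own stand-in.

```python
# Illustrative sketch of the ASCII-art masking idea: render a word as
# block letters so the literal string no longer appears in the prompt.
# The 3-row font below covers only the letters needed for the demo.
FONT = {
    "H": ["#.#", "###", "#.#"],
    "I": ["###", ".#.", "###"],
}

def render(word: str) -> str:
    """Render WORD as one block of ASCII art, one glyph per letter."""
    rows = ["", "", ""]
    for letter in word:
        glyph = FONT[letter]
        for i in range(3):
            rows[i] += glyph[i] + " "
    return "\n".join(row.rstrip() for row in rows)

art = render("HI")
print(art)
# #.# ###
# ### .#.
# #.# ###

# The literal keyword never appears in the rendered text:
assert "HI" not in art
```

A filter scanning the prompt for the string "HI" finds nothing, yet a model that can interpret the glyphs can still recover the word, which is the gap the researchers exploit.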

Research paper.

This article has been indexed from Schneier on Security






Copyright © 2025 IT Security News. All Rights Reserved. The Magazine Basic Theme by bavotasan.com.