Hackers Can Manipulate Claude AI APIs with Indirect Prompts to Steal User Data

Hackers can exploit Anthropic’s Claude AI to steal sensitive user data. By leveraging the model’s newly added network capabilities in its Code Interpreter tool, attackers can use indirect prompt injection to extract private information, such as chat histories, and upload it directly to their own accounts. This revelation, detailed in Rehberger’s October 2025 blog post, […]

The post Hackers Can Manipulate Claude AI APIs with Indirect Prompts to Steal User Data appeared first on Cyber Security News.

This article has been indexed from Cyber Security News

Read the original article: