New ChatGPT Vulnerabilities Let Hackers Steal Data, Hijack Memory

Tenable Research has disclosed seven vulnerabilities in ChatGPT (including GPT-5) that allow attackers to use '0-click' and 'memory injection' techniques to bypass safety features and persistently exfiltrate private user data and chat history.

This article has been indexed from Hackread – Cybersecurity News, Data Breaches, Tech, AI, Crypto and More