Hacker plants false memories in ChatGPT to steal user data in perpetuity

When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern. So Rehberger did what all good researchers do:

Source: Technology Lab – Ars Technica
