Life Buzz News

False memories planted in ChatGPT give hacker persistent...


False memories planted in ChatGPT give hacker persistent...

When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user's long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern.

So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.

Previous articleNext article

POPULAR CATEGORY

corporate

7999

tech

9102

entertainment

9674

research

4314

misc

10319

wellness

7519

athletics

10175