Sorry, ChatGPT Is Under Maintenance: Persistent Denial of Service through Prompt Injection and Memory Attacks
Imagine you visit a website with ChatGPT, and suddenly, it stops working entirely!
In this post we show how an attacker can use prompt injection to cause a persistent denial of service that lasts across chat sessions for a user.
Hacking Memories Previously we discussed how ChatGPT is vulnerable to automatic tool invocation of the memory tool. This can be used by an attacker during prompt injection to ingest malicious or fake memories into your ChatGPT.