Video: Prompt Injections - An Introduction
There are many prompt engineering classes and currently pretty much all examples are vulnerable to Prompt Injections. Especially Indirect Prompt Injections are dangerous as we discussed before.
Indirect Prompt Injections allow untrusted data to take control of the LLM (large language model) and give an AI a new instructions, mission and objective.
Bypassing Input Validation Attack payloads are natural language. This means there are lots of creative ways an adversary can inject malicious data that bypass input filters and web application firewalls.