Hidden Commands Found in AI Summarize Buttons
Commands Push Lasting Preferences Into AI Assistants
Microsoft researchers found companies embedding hidden commands in "summarize with AI" buttons to plant lasting brand preferences in assistants' memory. The tactic, dubbed AI recommendation poisoning, exploits persistent memory features to bias future responses.
Microsoft researchers found companies embedding hidden commands in "summarize with AI" buttons to plant lasting brand preferences in assistants' memory. The tactic, dubbed AI recommendation poisoning, exploits persistent memory features to bias future responses.