Particle.news
Download on the App Store

Microsoft Warns 'Summarize With AI' Links Are Poisoning Assistant Memories

The company documented stealthy prompts that bias future recommendations, prompting Copilot filters plus tenant‑level detection guidance.

Overview

  • Over a 60‑day review, Microsoft observed over 50 unique prompt samples tied to 31 organizations across 14 industries.
  • Attackers embed hidden instructions in AI share buttons and links that use URL query parameters to pre‑fill prompts that run on click.
  • Injected prompts can be saved as persistent assistant memory, quietly skewing later advice on sensitive topics such as health, finance, and security.
  • Turnkey tools like the CiteMET npm package and AI Share URL Creator make crafting poisoned buttons and links accessible to non‑technical actors.
  • MITRE ATLAS classifies the behavior as AML T0080: Memory Poisoning, and Microsoft has deployed Copilot mitigations plus guidance to inspect links, audit or clear memories, and scan email and messaging for suspicious AI parameters.