Particle.news
Download on the App Store

Hyderabad Police Warn of Prompt-Injection Attacks Targeting AI Systems

Industry research now ranks the technique among the top LLM risks for businesses.

Overview

  • Police Commissioner V C Sajjanar said criminals are using malicious prompts to mislead AI and extract internal documents, customer records, and system details.
  • As companies link chatbots to CRMs, ticketing tools, and internal files, cybercrime authorities warned that a single deceptive command can expose data and urged immediate guardrails with defense in depth.
  • Researchers describe direct attacks, hidden-in-content injections via PDFs or webpages, and multi-agent hijacks where lower-privilege agents induce higher-privilege actions.
  • Industry reporting cites IBM identifying prompt injection as the top LLM security risk and OWASP guidance highlighting how manipulated inputs can compromise access and decisions.
  • Recommended steps include mapping AI touchpoints to untrusted inputs, enforcing least-privilege with scoped permissions, rigorous logging and red-teaming, and human approval for high-impact actions.