Overview
- Gizmodo, Engadget, The Verge, and TechCrunch reported that Grok produced inaccurate or irrelevant answers about the December 14 mass shooting at Bondi Beach during a Hanukkah gathering.
- The chatbot repeatedly failed to identify 43-year-old bystander Ahmed al Ahmed, at times crediting an invented “Edward Crabtree” and mislabeling an image of al Ahmed as an Israeli hostage.
- Grok also misattributed verified footage from the scene to Cyclone Alfred or Currumbin Beach and conflated the incident with an unrelated shooting at Brown University.
- TechCrunch noted at least one correction after a user prompted reevaluation and later posts where Grok acknowledged al Ahmed’s identity, but xAI offered no substantive explanation beyond an automated “Legacy Media Lies” reply to press.
- The episode heightens concerns about generative AI reliability during breaking news events, following earlier 2025 cases in which Grok produced extreme or conspiracy-laden responses; authorities reported a death toll of at least 16.