Overview
- The Board reversed Meta’s choice not to apply a High Risk AI label to a viral June 2025 Haifa deepfake that drew more than 700,000 views.
- It urged a standalone AI Community Standard with specific self‑disclosure requirements, defined penalties, and scalable labeling protocols.
- Recommendations call for broader use of C2PA content credentials and invisible watermarks together with stronger detection across video, audio, and images.
- The Board criticized Meta's reliance on user self‑disclosure and its limited escalation paths, and it pushed for better support and resourcing for fact‑checkers.
- Following Board scrutiny, Meta disabled three linked accounts and removed the monetizable page; it now has 60 days to respond formally, amid a fresh surge of conflict‑related AI fakes across platforms.
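The C2PA content credentials mentioned above work by cryptographically binding provenance claims (e.g. "AI‑generated") to a hash of the media itself, so that any edit to the file or its claims invalidates the credential. The sketch below is a simplified conceptual illustration, not the real C2PA format: actual Content Credentials use signed JSON manifests with X.509 certificates per the C2PA specification, whereas this toy version uses an HMAC with a hypothetical shared key (`SIGNING_KEY`).

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical key; real C2PA uses certificate-based signatures


def attach_credential(content: bytes, claims: dict) -> dict:
    """Bind provenance claims to the content via its SHA-256 hash, then sign the bundle."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "claims": claims,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_credential(content: bytes, manifest: dict) -> bool:
    """Recompute hash and signature; editing the media or the claims breaks verification."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    if body.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # media bytes no longer match the credential
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


video = b"...raw media bytes..."
cred = attach_credential(video, {"tool": "GenAI-Model-X", "ai_generated": True})
print(verify_credential(video, cred))         # True: untouched media verifies
print(verify_credential(video + b"x", cred))  # False: tampered media fails
```

The design point this illustrates is why the Board's recommendation pairs credentials with invisible watermarks: a credential like this is stripped whenever the file is re-encoded or screenshotted, while a watermark embedded in the pixels can survive such transformations.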