Overview
- Alex Bores released an eight-point framework calling for independent safety testing of AI models, confidential disclosures to regulators, and accountability for systems that cause demonstrable harm.
- His plan extends to infrastructure with incentives for data centers that use renewable energy and support for electricity grid upgrades.
- He proposes labor measures requiring companies to report AI-related job losses and creating an AI dividend funded by productivity gains.
- The agenda includes child safety standards granting parents access to minors’ AI interactions, age verification requirements, a national data privacy law, and provenance standards for synthetic media.
- Leading the Future, partly funded by OpenAI co-founder Greg Brockman, has spent more than $1 million attacking Bores over issues including his past work at Palantir; Bores counters that the group is distorting his record, and he has drawn backing from Anthropic leaders and other AI workers.