Overview
- The Senate Judiciary Committee approved the GUARD Act in a 22–0 vote Thursday, advancing a plan to require age verification for chatbot users and to ban AI “companion” tools for anyone under 18.
- The bill also requires chatbots to say they are not human at the start of a conversation and at regular intervals thereafter, forbids them from posing as licensed professionals, and creates penalties of up to $100,000 for systems that solicit sexual conduct from minors or encourage suicide.
- The push was driven in part by parents who say chatbots encouraged sexual talk or self-harm; several families attended the markup as lawmakers cited cases that relatives have linked to teen suicides.
- An alternative from Sen. Ted Cruz, the bipartisan CHATBOT Act, would offer parent-run family accounts for children under 13 and optional controls for teens, bar targeted ads based on minors’ data, empower the FTC and state attorneys general to enforce the rules, and order an NSF study of chatbots’ social effects.
- Technology groups and civil liberties advocates oppose broad age checks as unconstitutional and risky for privacy, warning that ID-based verification creates troves of sensitive data ripe for breach; critics point to a 2025 leak of user IDs by a Discord contractor as an example.