Overview
- Cornell researchers deliberately configured an autocomplete system to favor particular positions on issues such as the death penalty, standardized testing, and felon voting.
- Participants exposed to biased suggestions shifted roughly 0.5 points toward the AI’s position on a 1–5 attitude scale, even when they declined the suggested text.
- Most users judged the suggestions reasonable and recognized neither the bias nor their own shift in attitude.
- Warnings or disclaimers about possible AI bias or misinformation did not meaningfully reduce the persuasive effect.
- The team warns that widespread deployment of similarly biased tools could homogenize language and sway public opinion at scale, with effective mitigations still unclear.