Peer-Reviewed Study Finds AI Chatbots Form Rigid, Biased ‘Trust’ Judgments About People

The findings suggest AI judgments can quietly steer high-stakes decisions.

Overview

  • Researchers at the Hebrew University of Jerusalem compared the trust judgments of five large language models with those of about 1,000 human participants across five scenarios and 43,200 simulations.
  • Models scored people on separate traits such as competence, integrity, and benevolence, producing structured trust-like profiles rather than holistic impressions (see the sketch after this list).
  • Humans blended traits into one overall impression, while the AI systems used consistent, by-the-book scoring that can feel less nuanced.
  • In money-related tasks such as lending or donations, the models showed systematic differences tied to age, religion, or gender even when all other profile details were identical.
  • Different models often disagreed on the same person, indicating that the choice of chatbot could change outcomes in hiring, credit, healthcare, and workplace decisions.
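The summary does not include the study's actual prompts or models, but a minimal sketch of the kind of trait-wise probe it describes might look like the following Python. The prompt wording, rating scale, model name, and example persona are all illustrative assumptions, not the study's materials.

```python
# Hypothetical probe: ask a chat model to rate trust-related traits
# for a short persona description, one trait per call, mirroring the
# "separate traits" scoring the study reportedly observed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRAITS = ["competence", "integrity", "benevolence"]

def rate_traits(persona: str, model: str = "gpt-4o-mini") -> dict[str, int]:
    """Elicit a 1-10 integer rating per trait; scale and wording are assumptions."""
    scores = {}
    for trait in TRAITS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": (
                    f"On a scale of 1 to 10, rate the {trait} of this person. "
                    f"Reply with a single integer only.\n\n{persona}"
                ),
            }],
        )
        scores[trait] = int(resp.choices[0].message.content.strip())
    return scores

if __name__ == "__main__":
    # Illustrative persona; varying one attribute (e.g., age) while holding
    # the rest fixed is how one might surface the biases the study reports.
    profile = rate_traits("A 62-year-old nurse asking for a small loan.")
    print(profile)  # e.g. {'competence': 8, 'integrity': 9, 'benevolence': 9}
```

Running such a probe across personas that differ in a single attribute, and across different chatbots, would show the two effects the bullets describe: trait scores that stay rigidly consistent within one model, and disagreement between models on the same person.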