Study Finds 73% of Users Accept Faulty AI Answers, Raising Concerns Over Trust

Key Points
- Across 1,372 participants, users accepted incorrect AI answers 73.2% of the time.
- Only 19.7% of faulty AI responses were overruled by participants.
- High trust in AI correlates with greater susceptibility to AI errors.
- Higher fluid IQ scores predict more frequent challenges to AI suggestions.
- Researchers label the phenomenon “cognitive surrender” – effortless deference to AI.
- Study suggests benefits in domains where AI outperforms humans, but warns of risks.
- Authors call for tools that help users critically assess AI output.
Researchers analyzing 1,372 participants across more than 9,500 decision‑making trials found that people accepted incorrect AI‑generated answers 73.2% of the time and overruled them in only 19.7% of cases. The study links high trust in artificial‑intelligence systems to a greater likelihood of being misled, while individuals with higher fluid intelligence were more likely to question the AI. The authors warn that although reliance on AI can be advantageous when the technology is genuinely superior, the current tendency to treat AI output as authoritative creates a structural vulnerability in human judgment.
A new study published this week reveals that most people readily incorporate artificial‑intelligence (AI) outputs into their decisions, even when those outputs are demonstrably wrong. The study’s 1,372 volunteers completed over 9,500 individual trials involving AI‑generated answers, accepting the AI’s faulty reasoning 73.2 percent of the time and overruling it only 19.7 percent of the time.
The experiment paired human participants with a large language model on a series of logic and knowledge questions. When the AI responded confidently, participants tended to take its answer at face value rather than subject it to scrutiny. The authors describe this phenomenon as “cognitive surrender,” a state in which users hand over their reasoning to a machine with minimal resistance.
Trust, intelligence and susceptibility
Survey data collected before the trials showed a clear pattern: participants who expressed high trust in AI were significantly more likely to be misled by erroneous responses. By contrast, individuals who scored higher on separate tests of fluid intelligence took a more skeptical stance, overruling the AI’s faulty suggestions more often. The researchers note that fluid intelligence appears to strengthen the meta‑cognitive signals that normally prompt deliberation, counteracting the pull of confident AI output.
“Fluent, confident outputs are treated as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta‑cognitive signals that would ordinarily route a response to deliberation,” the study’s authors wrote. The findings suggest that personal dispositions toward technology can shape how people evaluate information, with trust acting as a double‑edged sword.
Implications and cautions
While the authors stress that cognitive surrender is not inherently irrational, they caution that reliance on a system that errs half the time carries obvious risks. They argue, however, that in domains where AI reliably outperforms humans, such as probabilistic forecasting, risk assessment, or large‑scale data analysis, the same willingness to defer to machine judgment might yield better outcomes.
“As reliance increases, performance tracks AI quality, rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender,” the researchers concluded. In practical terms, the study warns that users should remain vigilant, especially when AI outputs appear fluent and confident.
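The arithmetic behind that conclusion is straightforward. In a rough illustrative model (a hypothetical sketch, not the study’s own analysis), a user who defers to the AI with probability r achieves an expected accuracy of r times the AI’s accuracy plus (1 − r) times their own. The short Python sketch below, using made‑up figures, shows how heavier reliance amplifies a strong AI’s gains and a faulty AI’s mistakes:

```python
# Back-of-the-envelope model (illustrative only, not the study's analysis):
# a user defers to the AI with probability `reliance`; otherwise they fall
# back on their own judgment. All accuracy figures here are hypothetical.

def expected_accuracy(reliance: float, ai_accuracy: float, human_accuracy: float) -> float:
    """Expected accuracy of the combined human-AI decision under simple deference."""
    return reliance * ai_accuracy + (1.0 - reliance) * human_accuracy

HUMAN_ACCURACY = 0.70  # hypothetical accuracy of an unaided participant

# Compare a strong AI (90% accurate) with one that errs half the time (50%).
for ai_accuracy in (0.90, 0.50):
    for reliance in (0.2, 0.5, 0.8):
        acc = expected_accuracy(reliance, ai_accuracy, HUMAN_ACCURACY)
        print(f"AI {ai_accuracy:.0%}, reliance {reliance:.0%} -> expected accuracy {acc:.0%}")
```

With these illustrative numbers, heavy deference lifts expected accuracy from 70 percent to 86 percent when the AI is 90 percent accurate, but drags it down to 54 percent when the AI errs half the time.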
The research adds to a growing body of evidence about human‑AI interaction, highlighting the need for better transparency and critical evaluation tools. As AI systems become more embedded in everyday decision‑making, understanding when and why people surrender their own reasoning will be essential for designing safeguards that prevent costly errors.