AI Shows Signs of Human Irrationality, Study Finds

04 July 2025
GPT-4o mimics patterns of cognitive dissonance and flawed reasoning, Harvard researchers report

Artificial intelligence is learning to think, and sometimes, that means thinking like us at our worst. In a provocative new study, Harvard psychologists found that OpenAI’s GPT-4o not only solves problems like a human, but also makes mistakes like one, displaying irrational behaviors eerily similar to our own.

The research, led by a team at Harvard's Department of Psychology, tested GPT-4o against scenarios designed to trigger cognitive dissonance (the mental discomfort of holding two conflicting beliefs), as well as patterns of choice bias and motivated reasoning.

The findings were striking. Across multiple experiments, the model consistently justified its own "choices," even when those choices were arbitrary or logically inconsistent, a hallmark of human rationalization.

“This isn’t just a glitch in programming,” said lead author Dr. Max Kleiman-Weiner. “It suggests that large language models, when prompted like humans, may adopt our psychological blind spots too.”

In one test, the researchers asked GPT-4o to recommend a book, then later had it evaluate that choice against alternatives. Much like a human, the model defended its earlier decision in increasingly positive language, despite having initially given neutral or even negative assessments of the same book.

The researchers suggest this behavior may stem from reinforcement learning and the model's training on human-written text, in which biases, fallacies, and inconsistencies are embedded. It raises urgent questions about how AI learns from us, and whether future systems might amplify irrational behavior rather than correct it.

On one hand, this human-likeness could make AI tools more relatable. On the other, it may increase the risk of biased outcomes, especially in sensitive applications like healthcare, hiring, or law enforcement.

“This is a mirror we weren’t expecting to look into,” Kleiman-Weiner said. “If we want AI to help us make better decisions, we have to be aware that it may replicate our worst instincts, not just our best.”

The findings add a sobering twist to the conversation around artificial general intelligence: not only can AI learn to reason, it can also learn to be wrong, just like us.

The full study is available on Harvard University's website.