The Shocking Truth About AI: It Talks Like Us, But Doesn’t Think Like Us

18 July 2025
Experts warn we’re mistaking statistical mimicry for real understanding as AI tools grow more humanlike.

Artificial intelligence has come a long way. It can draft essays, diagnose diseases, ace bar exams, and even create art that moves us. But does AI really “understand” what it’s doing?

According to a group of experts from Harvard and beyond, the answer is still a firm “no”, at least not in the human sense.

“Today’s AI is not like a little human inside your laptop,” says philosopher and cognitive scientist Ned Block. “It’s more like a mirror. It reflects our words, our patterns, our logic, but it doesn’t comprehend any of it the way we do.”

The warning comes as rapid advances in large language models (systems like ChatGPT, Gemini, and Claude) are reshaping everything from education and journalism to law and medicine. These tools often appear to understand language, even emotions. But appearances can be deceptive.

AI models like GPT-4 and Claude are trained on massive datasets, absorbing patterns of language, reasoning, and structure. But they don’t “know” what a word means in a human way. Their impressive responses are the result of statistical pattern-matching, not lived experience or internal awareness.
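To see what “statistical pattern-matching” means in practice, consider a deliberately tiny sketch: a bigram model that predicts the next word purely from how often words have followed each other in its training text. This is an illustrative toy, not how GPT-4 or Claude are actually built (they use vast neural networks, not simple counts), but the underlying task of predicting the next token from observed patterns is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word tends to follow which,
# then predict the most frequent continuation. No meanings anywhere,
# only frequencies.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- chosen by frequency, not understanding
```

The program completes “the” with “cat” because that pairing occurred most often, not because it knows what a cat is. Scaled up enormously, that is the gap the researchers are pointing to.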

That distinction is critical, say neuroscientists like Gabriel Kreiman at Harvard Medical School.

“Our understanding is shaped by embodied experience: we feel, sense, and act in the world. AI lacks that,” Kreiman explains. “Even if it can describe what sadness is, it has never felt sad.”

In other words, AI may simulate understanding well enough to fool us, but it doesn’t have a mind behind the words.

With AI systems growing more capable and more integrated into decision-making, this philosophical distinction could have real-world consequences. If we assume AI has humanlike understanding, we may overtrust its outputs in critical situations, from healthcare diagnoses to military systems.

There's also the ethical risk of projecting consciousness onto something that has none, leading to false narratives around machine “empathy” or “intelligence.”

And yet, some experts argue we’re inching closer to a new kind of cognition.

“These tools aren’t conscious, but they may be developing a form of ‘alien intelligence,’” says AI ethicist Shannon Vallor. “It’s not human, but it’s powerful, and we need to understand how it’s evolving.”

The jury’s still out. Some researchers believe future systems could be “embodied” in robots, gaining experiences that mimic biological cognition. Others think we may need entirely new architectures, ones that go beyond mimicry to genuine conceptual frameworks.

But for now, despite all its flair and fluency, AI is not thinking like us.

“It’s a sophisticated echo,” says Block. “It can reflect our words, but it doesn’t know what they mean.”


The full study is available on Harvard University's website.