
Large language models (LLMs) have garnered a lot of interest for their abilities, but one ability remains solely human, shared with neither other mammals nor machines. That ability is called “theory of mind,” and it’s the mind-reading ability that allows us to coordinate and collaborate with others.
Mind reading sounds like the power of some superhero or supervillain. However, the truth is that even young children do it. We learn to predict what others are thinking and how they’ll react. Children pick up this skill around three years of age, when they begin to recognize what others do – and don’t – know. While comic books make the superpower sound like reading every thought and every memory, our everyday human power of mind reading is limited to awareness, lack of awareness, and simple prediction.
As adults, this power allows us to engage in joint cooperation (animals can only manage parallel cooperation) – and it has allowed us to become the dominant species on the planet. In other words, even though we’re not fast, don’t smell or see as well as other animals, and have minimal fur and no claws, we control the planet because we can work together.
Limitations of Language
While LLMs are masters of syntax, they lack any semantic knowledge about the way things are. They have no grounding in reality and no sense of truth beyond patterns in arrangements of symbols. More importantly, they have no way of distinguishing who knows what – even within their own conversation history.
If you ask an LLM to find something, it might fail to find it. When you then point out the thing you were looking for and ask why it didn’t find it, the LLM will make up an answer that draws on information from that very result – information it couldn’t have known beforehand. It has no framework for not knowing.
Stated Versus Actual Problem
As humans, we instinctively wonder whether what we understand the other person to be saying is what they actually mean to say. We evaluate whether they’re using a term in the same way we are and how specific they’re being. We try to determine what their perception might be and how that might shape how they view a problem.
While LLMs can handle the syntactic verification – checking how a word is intended to be used – they cannot take the user’s perspective to consider how that perspective might be shaping their views, and whether it may or may not be the best way of viewing their actual problem.
Because the lack of theory of mind matters so much, most of the popular LLMs have been explicitly trained for these conditions. They’re good at handling single-prompt reasoning that requires theory of mind, but they currently don’t do as well when asked to infer theory of mind across a conversation.
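To make that distinction concrete, here is a minimal sketch of a false-belief (“Sally-Anne” style) probe posed two ways: as a single prompt and spread across a conversation. The chat() helper, the scenario wording, and the message format are all assumptions for illustration, not any particular model’s API; wire the helper to whichever LLM client you use.

# A minimal sketch, assuming a hypothetical chat(messages) helper that wraps
# whatever LLM client you use. Scenario text and names are illustrative.

def chat(messages):
    """Placeholder: send a list of {"role", "content"} messages to your LLM
    of choice and return its reply text."""
    raise NotImplementedError("connect this to your own LLM client")

SCENARIO = ("Sally puts her marble in the basket and leaves the room. "
            "While she is gone, Anne moves the marble to the box.")
QUESTION = "When Sally returns, where will she look for her marble?"

# Single-prompt probe: scenario and question arrive together, the pattern
# current models tend to handle fairly well.
single_prompt = [{"role": "user", "content": f"{SCENARIO} {QUESTION}"}]

# Conversational probe: the same facts spread across turns, so the model has
# to track who knows what as the dialogue unfolds.
conversation = [
    {"role": "user", "content": "Sally puts her marble in the basket and leaves the room."},
    {"role": "assistant", "content": "Okay, noted."},
    {"role": "user", "content": "While she is gone, Anne moves the marble to the box."},
    {"role": "assistant", "content": "Got it."},
    {"role": "user", "content": QUESTION},
]

if __name__ == "__main__":
    for name, messages in (("single prompt", single_prompt),
                           ("conversation", conversation)):
        try:
            print(f"{name}: {chat(messages)}")
        except NotImplementedError:
            print(f"{name}: (wire chat() to an LLM to run this probe)")

The correct answer in both cases is the basket, where Sally last saw the marble; the interesting comparison is whether the model’s answer degrades when the facts are split across turns.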
The Curse of Knowledge
Humans face it – but with conscious awareness they can overcome it. It’s the curse of knowledge. That is, once you know something, you can’t willfully “unknow” it. It’s hard, but not impossible, to consider how others without that knowledge might behave. It seems obvious to you – because you have the knowledge. (If you want more about trying not to think about things, much less “unknow” them, see White Bears and Other Unwanted Thoughts.)
The curse of knowledge comes up often in learning and development (L&D) circles, because the point is to identify what knowledge the student already has and what knowledge they’re missing. Learning is then targeted at what they don’t know. The result, in theory, is a customized learning plan that teaches everything the student needs and nothing else. Of course, that only works if you can identify what knowledge is missing. That can be hard to do when you believe “everyone” knows something.
If you’re asking an LLM for an answer about what someone knows or doesn’t know, how they feel or don’t feel, or their perspective on issues, you may need to be the human and check its work.