Theory of Mind in Human Robot Interactions

Challenges and opportunities that human perception provides to robotic design

This is an essay that I wrote for a course in 2015.

Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley


In this essay I will describe some aspects of human perception and cognition that bear on human-robot interaction. I will discuss the findings on the Uncanny Valley as described in the provided article, Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (Mathur and Reichling, 2015), and the parallels we might draw to other aspects of human-robot interaction. I will consider the application of theory of mind to the uncanny valley concept. Furthermore, I would like to discuss the possibility of robots exploiting human cognitive and perceptual limitations and biases to pursue their goals more effectively.


Perception is the process that makes sense of the information we receive from our senses (Gilhooly et al., 2014, p. 30).
Illusions arise when we misinterpret the signals from our senses by applying heuristics and rules of interpretation that do not always hold true. For example, in the Müller-Lyer illusion, a line with outward-pointing arrowheads at each end looks shorter than a line with inward-pointing arrowheads, even though both lines are the same length.



Social Perception

Social perception describes the way that human biology and physiology constrain and influence our social interaction (Gilhooly et al., 2014, p. 64). The physical limitations of the body's signal-producing structures, such as our facial musculature, vocal apparatus and gestural capabilities, result in a limited range of communicative input that we can produce, perceive and process. Faces provide valuable information about emotion, age, gender, attractiveness and identity, and we can identify faces even under challenging conditions such as low light or long range. Highly specific areas and networks in the brain are engaged in face recognition; we are thus highly and specifically adapted for it. The configuration of features on the face is as important for recognition as the appearance of the individual features themselves. Social perception triggers social and emotional responses, which inform us about the internal states, motivations and emotions of others.

Biases in human decision making and reasoning

Decision making is the process of choosing between options (Gilhooly et al., 2014, p. 325). Several theories model the biases that human beings show in our basic decision-making processes, including:

Prospect theory, in which losses loom larger than equivalent gains, making us risk-averse when pursuing gains but risk-seeking when trying to avoid losses.

Multi-attribute decision making, which simplifies decisions involving many factors using strategies such as elimination-by-aspects and satisficing.

Probability-based decision making, which relies on heuristics such as availability and representativeness, and on base-rate neglect.

Emotions-based decision making, which uses fast, intuitive System 1 processing instead of deliberate reasoning and logic.

Reasoning is the process of deriving new information from what we already know about a topic. The text covers two types of reasoning: deductive reasoning, which draws logically certain conclusions by applying general rules to specific information, and inductive reasoning, which infers generalizations from specific observations.
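The loss aversion at the heart of prospect theory can be sketched with the Kahneman-Tversky value function; the parameter values below are the commonly cited estimates from their work, used here purely for illustration:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: diminishing sensitivity to
    both gains and losses, with losses weighted lam times as heavily."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Loss aversion: a 100-unit loss "feels" more than twice as bad
# as a 100-unit gain feels good.
gain = prospect_value(100)   # ≈ 57.5
loss = prospect_value(-100)  # ≈ -129.5
```

This asymmetry is one reason people reject gambles with positive expected value: the prospective loss outweighs the equally sized prospective gain.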

Here are some biases we have in our reasoning faculties:

  • Illusory inferences, in which we make the error of assuming that something that is probably true is definitely true
  • Belief bias, in which what we already believe to be true overrides our logic and reasoning about a subject
  • Matching bias, in which, to simplify, we make selections by matching surface features of the problem statement with those appearing in the candidate solutions, instead of solving the problem with logic
  • Confirmation bias, in which we tend to look for evidence that confirms our hypotheses, instead of evidence that would disconfirm them.

Article summary

The Uncanny Valley

The essence of the Uncanny Valley theory, first described by Masahiro Mori in his 1970 paper on the subject, is that human beings dislike imperfection in human-likeness. Mori's paper describes the effects of robot appearance and robot movement on our affinity for the robot (Mori, 1970). In this course we were provided an article, 'Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley' (Mathur and Reichling, 2015), which examines whether an Uncanny Valley effect exists in the relationship between a robot's likeability and trustworthiness and how mechanical or human its face appears. The study used pictures of real robots, collected from the internet, spanning a wide range of mechano-humanness. Study participants then rated the images on different characteristics. The authors found that mechanical and human appearance could be treated as one continuous variable.

[Image: Pal REEM-C robot]

Study Results

Perceived emotion did not alter the overall relationship between likeability and mechano-humanness; however, faces showing more positive emotion were rated as more likeable.

Very mechanical-looking robots were seen as unlikeable, and likeability increased as robot faces became less mechanical in appearance. Likeability then declined near the ambiguous boundary between mechanical- and human-looking robots, before increasing dramatically as the robots began to look distinctly human.


Another experiment examined trust, and a similar pattern emerged for trust versus mechano-humanness. Very mechanical robots were rated as relatively untrustworthy; trust increased as the robot's appearance became less mechanical, decreased as it approached the boundary between mechanical- and human-looking, and rose again for very human-looking robots. Robots showing more positive emotion were also seen as more trustworthy.
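The valley-shaped curves described above can be illustrated with a toy cubic over a hypothetical 0-100 mechano-humanness score. This is a sketch of the shape only, not the paper's actual fitted model:

```python
def likeability(mh):
    """Toy likeability curve over a 0-100 mechano-humanness score
    (0 = fully mechanical, 100 = fully human). The cubic rises,
    dips near the mechanical/human category boundary, then recovers
    to its highest value for very human-looking faces."""
    x = mh / 100.0
    return 4 * x - 9 * x ** 2 + 6 * x ** 3

mid_peak = likeability(33)   # local peak: moderately humanlike
valley = likeability(66)     # the Uncanny Valley dip
human = likeability(100)     # highest likeability: very human
```

Any curve with a local maximum, a dip, and a higher final value reproduces the qualitative pattern reported for both likeability and trust.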

Theory of mind

The article notes that we trust others whom we believe to have interests that encapsulate our own. Subjects in the study might have tried to assess the robot's interests, possibly applying theory of mind to the robot by attributing to it an intentional stance implying thoughts, beliefs and desires. Alternatively, the robots may have been treated as inanimate agents of their makers. The article continues as follows about the nature of human-robot interaction:

“humans appear to infer trustworthiness from affective cues (subtle facial expressions) known to govern human–human social judgments. These observations help locate the study of human-android robot interaction squarely in the sphere of human social psychology rather than solely in the traditional disciplines of human factors or human–machine interaction”.

Category confusion

Kätsyri et al. (2015) suggest that the Uncanny Valley effect arises at the boundary between perceptual categories, here the non-human and human categories. This category confusion can be measured experimentally as the additional time it takes to categorize a stimulus. The experimental results in the study confirmed this: very human- or very mechanical-looking faces were classified more quickly than ambiguous ones, and the maximum categorization time occurred at the point of minimum likeability. The results also indicated that very human-looking robots were more likeable, but suggest that small faults in their humanness would push them back down the likeability scale into the Uncanny Valley. The article speculates that Uncanny Valley effects on trust may be influenced by robots' individual characteristics such as sex, facial form and perceived emotion.

Own reflections

Application of Uncanny Valley to Theory of Mind

The Uncanny Valley theory in the article selected is applied to the relationship between robot appearance and likeability or trust. It could also be valid to apply the Uncanny Valley model to the theory of mind that human social partners would develop when interacting with robots.

Lack of human reasoning and decision-making biases

Examples of robot-specific cognition characteristics include the simple absence of the human reasoning and decision-making biases discussed in the text. For instance, a robot with sufficient resources, processing capacity and suitably designed programming might, instead of exhibiting human decision-making biases such as prospect-theory loss aversion or emotions-based System 1 thinking, apply logic and explicit reasoning when making decisions.

Additional robotic cognitive biases and limitations

Specific cognitive biases could also arise from a robot's hardware and software limitations. Examples of cognitive and perceptual limitations for robots include being unable to listen and speak at the same time, slow processing times causing delayed responses, giving exactly the same answer to the same question every time, without originality or creativity, or failing to understand humour or metaphor. Such limitations can create a disconnect and remind human social partners that they are talking to a machine.

Preference for different decision making methods

If a robot does not use a typically human decision-making method, this too could create a disconnect for the observing human being. For example, the textbook discusses multi-attribute utility theory as a way of comparing complex options, and it might be considered a typically human response to the task of choosing where to buy a house. A less typically human approach might be to choose based on a statistical analysis, e.g. 'similar robots live in houses like this, therefore I should choose this house.'
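A multi-attribute utility calculation for the house-buying example can be sketched as follows; the attribute names, weights and scores are purely illustrative, not taken from the textbook:

```python
def utility(option, weights):
    """Weighted-sum utility: each attribute's score is scaled by its
    importance weight and the results are summed."""
    return sum(weights[attr] * score for attr, score in option.items())

# Importance weights (sum to 1) and attribute scores on a 0-10 scale.
weights = {"price": 0.5, "location": 0.3, "size": 0.2}
houses = {
    "A": {"price": 7, "location": 9, "size": 5},   # utility 7.2
    "B": {"price": 9, "location": 5, "size": 8},   # utility 7.6
}
best = max(houses, key=lambda name: utility(houses[name], weights))
```

A human chooser might instead satisfice or eliminate by aspects; a robot that always produces the exhaustive weighted-sum answer is exactly the kind of atypical decision style discussed above.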

Navigating up and down the walls of the Uncanny Valley

In the article we see that very human looking robots were more likeable but that small defects in their humanness could push them down the valley into unlikeability. Robots that were closer to the boundary between human and mechanical looking could become less likeable when their appearance became uncharacteristically human.

This concept can also be related to an uncanny valley of theory of mind. Imagine a robot with quite mechanical thought patterns. This robot could become very unlikeable by unexpectedly exhibiting thought patterns that are too human. This would be an indirect application of the Uncanny Valley idea: instead of a direct effect of visual observation, we would be considering the accumulated effect of small pieces of information and behavioural patterns used to classify the robot over a presumably longer period of time.

Fig. 1. Proposed relationship between robotic vs. human cognitive style and likeability

Use of cognitive limitations when robots interact with humans

How could robots make use of this knowledge we have about human cognition? Suppose that robots had an explicit model of human thinking and could predict the biases we would most probably exhibit in decision making. A robot could use this information to pursue its goals more effectively. For instance, a salesperson robot could exploit several of the decision-making biases listed above to sell more effectively. Using the matching bias, for example, the robot could tell a story about a problem a human might have and embed its desired solution in the story; when the time comes to select a solution, matching bias can lead the person to select the robot's offered solution or product, instead of using reasoning and logic to select one.

Human perceptive limitations

Robots could also make use of human perceptive limitations, for instance knowledge of the human visual system could allow a robot to perform an action faster than a human can perceive it. Micro-expressions could be used in this way to create a better (or worse) rapport with human beings. Visual or auditory illusions could also be manufactured with this knowledge.


In this essay, I was provoked by the concept of applying our cognitive processes and limitations to human-robot interaction. I posed the question of how robots can use knowledge of human cognition, including its limitations, to better reach their goals. I also argued that the uncanny valley effect could be applied to more than the appearance of robots: it could be applied to the mental model of robot psychology that human social partners build. In that case, there may be jarring effects, pushing the robot deeper into the uncanny valley, when its thinking patterns suddenly seem incongruous, either too mechanical or too human. In these ways our understanding of human perception and decision making can be applied to human-robot interaction. Furthermore, I postulate that, just as human beings have cognitive peculiarities for evolutionary, physiological and psychological reasons, so too do robots have cognitive peculiarities based on their hardware and software.
I have tried to show my understanding of the text by abstracting the essence of the course material and reflecting it onto the mechanical mind. I also reflected on the cognitive processes of human beings from the point of view of robots and robot cognitive processes from the point of view of humans.
Thus the theories of human cognition can be further applied to draw parallels with robot cognition.


Mathur, M. B. and Reichling, D. B. (2015). "Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley." Cognition 146.

Mori, M. (1970). "The Uncanny Valley." Energy 7(4).

Gilhooly, K., Lyddy, F. and Pollick, F. (2014). Cognitive Psychology. London: McGraw Hill Publishing.
