Robots spotted in the wild: ‘Joy For All’ Robotic Cat

This summer vacation, while visiting my sister in the UK and having a lovely visit with her neighbour Janet, I got the opportunity to experience a Joy For All robot kitty firsthand.

This robotic pet was created to help people with dementia and Alzheimer’s enjoy the soothing experience of having a pet without the great responsibility that it incurs.

Janet bought her cat simply to have something cuddly and lifelike around without having to take care of a pet – something much more engaging and lovable than a stuffed toy (no offence to stuffed toys!).

This cat was quite enjoyable to interact with: it purrs, meows, and is very soft, with flexible limbs and movable ears – it even has its own brush.

It does a few tricks like rolling over and raising its paws! Overall it’s a fun and interesting experience and it reminded me that human beings need companionship and comfort and sometimes we can get a little help from technology.

Janet also happened to have this custom-designed leopard painted on her shed, complete with a movable tail. The design of the mechanism was simple and effective, with a coat hanger and a rotating wooden disk. The eyes are LEDs and the entire thing can be turned off and on using a Philips Hue plug. It made me wonder – why do I consider this a robot? Although it’s not capable of changing its behaviour based on input, I categorise it somewhere along the lines of animatronics.
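As a side note, switching a Hue smart plug on and off can also be scripted against the Hue bridge’s local REST API. Here is a minimal sketch – the bridge IP, API key and plug id below are placeholders for values from your own setup (a plug shows up in the API as a light with an on/off state):

```python
import json
from urllib import request

# Placeholders - substitute values from your own Hue bridge.
BRIDGE_IP = "192.168.1.10"
API_KEY = "your-api-key"
PLUG_ID = 1


def build_state_request(on: bool):
    """Return the URL and JSON body for setting the plug's on/off state."""
    url = f"http://{BRIDGE_IP}/api/{API_KEY}/lights/{PLUG_ID}/state"
    body = json.dumps({"on": on}).encode()
    return url, body


def set_plug(on: bool):
    """Send the state change to the bridge (requires a reachable bridge)."""
    url, body = build_state_request(on)
    req = request.Request(url, data=body, method="PUT")
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `set_plug(False)` would then switch the leopard’s eyes off, exactly like toggling the plug from the Hue app.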

Speaking of animatronics, here is a video of animatronic dinosaurs from our recent visit to the Leiden Naturalis museum and some dinosaur fossils!

Well, that’s all for now. I wish for you to also spot many robots in the wild – happy hunting!

Repairing Aldebaran Nao Battery Pack

Our Nao recently stopped working due to a completely flat battery.

My boyfriend Renze and I have had it since 2016, and it’s no longer serviced by the manufacturer now that the four-year warranty is up.

Investing in social robots has proven to be a bit tricky. Companies go bankrupt or are acquired (e.g. the maker of our Anki Vector robot) and then strategies change, leaving you with some expensive hardware at worst gathering dust on the shelf and at best needing additional investment to keep it working. You might have the hardware, but an important part of the package is often cloud-based software or development APIs that won’t be maintained and updated indefinitely.

We’ve not used our Nao for quite a while due to other projects but since I recently started a course on robotics which used the Nao, I thought it would be good to dust it off again, only to find the battery completely flat.

Nao’s manufacturer, the French robotics company Aldebaran, was bought by SoftBank in 2012, and the company has since been sold again to the German United Robotics Group, under which it will operate under its original name. The focus had shifted from the Nao robots, which were primarily research robots, to the Pepper and Romeo service robots. It should be interesting to see what direction URG takes with the products.

We’ve sent our Nao to Softbank in France to be serviced once before, due to its somewhat fragile fingers breaking.

This time we were lucky: it was only the battery that was flat, and not something more serious.

We found a blog post detailing how to replace the dead cells in Nao’s battery pack:

We first removed the battery pack by unscrewing the hatch, then we tested the cells and found them depleted.

We opened the battery pack and it seemed possible to swap the cells and reconnect the circuit board to them.

Next we had to order new cells online (these are standard 18650 Lithium-ion cells) and buy a mini spot welder to weld the zinc connectors to the cells.
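When judging old cells by their resting voltage, the usual Li-ion thresholds can be sketched like this – the numbers below are typical ballpark figures of mine, not from the guide we followed, so always check your cells’ datasheet:

```python
# Rough classification of an 18650 Li-ion cell by resting voltage.
# Thresholds are typical ballpark values, not from any specific datasheet.
def classify_cell(voltage: float) -> str:
    if voltage > 4.25:
        return "overcharged - do not use"
    if voltage >= 3.0:
        return "normal operating range"
    if voltage >= 2.5:
        return "depleted - recharge soon"
    return "over-discharged - recharging may be unsafe"
```

Our cells measured well below the over-discharge threshold, which is why we replaced them rather than attempting to recharge.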

This is me spot welding the cells together

We soldered the connectors onto the circuit board and taped the battery pack together with heat resistant tape.

We did some careful testing by charging the battery with the official charger and then installing it back into Nao. Finally our Nao is back in working order! And we’ve gained some confidence and added some new technical skills (and equipment) to our toolkit.

Theory of Mind in Human Robot Interactions

Challenges and opportunities that human perception provides to robotic design

This is an essay that I wrote for a course in 2015.

Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley


In this essay I will describe some aspects of human perception and cognition which may have an impact on human-robot interaction. I would like to discuss the findings on the Uncanny Valley as described in the provided article, Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (Mathur and Reichling, 2015), and the parallels we might draw to other aspects of human-robot interaction. I will consider the application of theory of mind to the Uncanny Valley concept. Furthermore, I would like to discuss the possibility of robots making use of human cognitive and perceptive limitations and biases to pursue their goals more effectively.


Perception is the process that makes sense of the information we receive from our senses (page 30, 2014, Gilhooly et al).
Illusions are what happens when we misinterpret the signals from our senses by applying heuristics and rules of interpretation that don’t always hold true. For example, in the Müller-Lyer illusion, a line with two outward-pointing arrowheads at each end looks shorter than a line with two inward-pointing arrowheads at each end, although both lines are the same length.



Social Perception

Social perception describes the way that human biology and physiology constrain and influence our social interaction (page 64, 2014, Gilhooly et al). The physical limitations of the aspects of the human body that produce communicative signals – our facial structures and expressive capabilities, vocal structures and gestural capabilities – result in a limited range of communicative input that we can perceive and process. Faces provide valuable information on emotion, age, gender, attractiveness and identity. We are even capable of identifying faces under challenging conditions, including low light and long range. Very specific areas and networks in the brain are engaged in facial recognition – thus we are highly and specifically adapted for it. The composition of features on the face is as important for recognition as the appearance of the features themselves. Social perception leads to social and emotional responses, which inform us about the internal states, motivations and emotions of others.

Biases in human decision making and reasoning

Decision making is the process of choosing between options (page 325, 2014, Gilhooly et al). There are several theories which model biases that human beings show in our basic decision making processes, including:

Prospect theory, in which we try to avoid losses but ignore gains that might come with risk.

Multi-attribute decision making, which simplifies complex factors in the decision using elimination-by-aspects and satisficing.

Probabilities based decision making, which uses heuristics like availability, representativeness and ignoring of base rates.

Emotions-based decision making, using System 1 instead of reasoning and logic.

Reasoning is the process of deriving information from what we already know about a certain topic. The text covers two types of reasoning: deductive reasoning, which involves drawing logical conclusions from what we already know and applying these rules to other specific information to obtain new information, and inductive reasoning, which involves inferring generalizations from specific observations.

Here are some biases we have in our reasoning faculties:

  • Illusory inferences, in which we make the error of assuming that something that is likely true is definitely true
  • Belief bias, in which what we believe to be true overrides our logic and reasoning about a subject
  • Matching bias, in which, to simplify, we make selections by matching features mentioned in the problem with those appearing in the possible solutions, instead of using logic to solve the problem
  • Confirmation bias, in which we tend to look for evidence that confirms our hypotheses, instead of evidence that would disconfirm them.

Article summary

The Uncanny Valley

The essence of the Uncanny Valley theory, first described by Masahiro Mori in his 1970 paper on the subject, is that human beings dislike imperfection in human-likeness. Mori’s paper describes the effects of robot appearance and robot movement on our affinity to the robot (1970, Mori). In this course we were provided an article, ‘Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley’ (2015, Mathur and Reichling), which deals with the existence of an Uncanny Valley effect in the relationship between robot likeability and trustworthiness on the one hand, and how mechanical or human the robot’s appearance was on the other. The study used several pictures of robots, obtained on the internet, with varying levels of mechano-humanness. The images were then rated by study participants for different characteristics. It was discovered that mechanical and human appearance could be treated as one continuous variable.

Study Results

Perceived emotion did not influence the relationship between likeability and mechano-humanness; however, faces showing more positive emotion appeared to be more likeable.

Very mechanical looking robots were seen to be unlikeable, and likeability increased as the robot faces became less mechanical in appearance. Likeability declined closer to the confusing boundary between mechanical and human-looking robots. Likeability increased dramatically as the robots began to look much more human.


Another experiment was conducted relating to trust, and a similar pattern was evident for trust versus mechano-humanness. A very mechanical robot was seen as untrustworthy; trust increased as the robot’s appearance became less mechanical, decreased again as the robot approached the boundary between mechanical- and human-looking, and rose for a very human-looking robot. Robots which showed more positive emotion were seen to be more trustworthy.
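The rise-dip-rise shape described above can be caricatured with a toy function of my own (this is an illustration, not the curve actually fitted by Mathur and Reichling): a rising likeability trend in mechano-humanness, minus a Gaussian dip near an assumed category boundary.

```python
import math

# Toy illustration of an uncanny-valley-shaped curve: likeability as a
# rising trend in mechano-humanness h (0 = fully mechanical, 1 = fully
# human) minus a Gaussian dip near an assumed category boundary.
# All parameter values are arbitrary choices for illustration.
def likeability(h, boundary=0.75, dip_depth=1.2, dip_width=0.08):
    dip = dip_depth * math.exp(-((h - boundary) ** 2) / (2 * dip_width ** 2))
    return h - dip

# Locate the bottom of the toy "valley" on a grid over [0, 1].
grid = [i / 500 for i in range(501)]
valley = min(grid, key=likeability)
print(f"toy valley bottom near h = {valley:.2f}")
```

The valley bottom lands just below the chosen boundary, and the very human end of the scale scores highest – matching the qualitative pattern reported for both likeability and trust.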

Theory of mind

The article describes that we trust others whom we believe to have interests that encapsulate our own. Subjects in the study might have tried to assess the robot’s interests, possibly applying theory of mind to the robot and attributing a stance implying thoughts, beliefs and desires to it. Alternatively, the robots may have been treated as inanimate agents of the robot maker. The article continues as follows about the nature of human-robot interaction:

“humans appear to infer trustworthiness from affective cues (subtle facial expressions) known to govern human–human social judgments. These observations help locate the study of human-android robot interaction squarely in the sphere of human social psychology rather than solely in the traditional disciplines of human factors or human–machine interaction”.

Category confusion

Kätsyri et al. (2015) suggest that the Uncanny Valley effect arises at the boundary between perceptual categories – between the non-human and human categories. This confusion is measured experimentally as the additional time it takes to categorize a stimulus. The experimental results in the study confirmed the theory that very human or very mechanical looking faces were classified more quickly than more ambiguous ones. The maximum rating time occurred at the point of minimum likeability. Results also indicated that very human robots were more likeable, but suggest that small faults in their humanness would push them back down the likeability scale into the Uncanny Valley. The article speculates that Uncanny Valley effects on trust may be influenced by robots’ individual characteristics like sex, facial form and perceived emotion.

Own reflections

Application of Uncanny Valley to Theory of Mind

The Uncanny Valley theory in the article selected is applied to the relationship between robot appearance and likeability or trust. It could also be valid to apply the Uncanny Valley model to the theory of mind that human social partners would develop when interacting with robots.

Lack of human reasoning and decision-making biases

Examples of robot-specific cognition characteristics include the simple absence of the human reasoning and decision making biases we have discussed in the text. For instance, a robot with enough resources, processing capacity and smart programming might apply logic and clear reasoning when making decisions, instead of exhibiting human biases such as those described by prospect theory or emotions-based System 1 decision making.

Additional robotic cognitive biases and limitations

Specific cognitive biases could also occur due to a robot’s hardware and software limitations. Examples of limitations in cognition and perception for robots include not being able to listen and speak at the same time, slow processing times causing a delay in responses, giving exactly the same answer to the same question every time without originality or creativity, or not understanding humour or metaphors. Such limitations can create a disconnect and serve to remind the human social partners that they are talking to a machine.

Preference for different decision making methods

If a robot does not make use of a typically human decision making method, this could create a disconnect for the observing human being. For example, the textbook discusses multi-attribute utility theory as a way of comparing complex decisions, and it might be considered a typically human response to the task of choosing where to buy a house. A less typically human approach might be to choose based on a statistical analysis, e.g. ‘similar robots live in houses like this, therefore I should choose this house’.

Navigating up and down the walls of the Uncanny Valley

In the article we see that very human looking robots were more likeable but that small defects in their humanness could push them down the valley into unlikeability. Robots that were closer to the boundary between human and mechanical looking could become less likeable when their appearance became uncharacteristically human.

This concept can also be related to the uncanny valley of theory of mind. Imagine a robot with quite mechanical thought patterns. This robot could become very unlikeable by unexpectedly exhibiting thought patterns which are too human. This would be an indirect way of applying the idea of the Uncanny Valley. Instead of a direct effect of visually observing and feeling, in this case we would be considering the accumulated effect of small pieces of information and patterns used to classify the robot over a presumably longer period of time.

Fig. 1. Proposed relationship between robotic and human cognitive style on likeability

Use of cognitive limitations when robots interact with humans

How should robots make use of this knowledge we have about human cognition? Let us imagine that robots have an explicit model of human thinking and are able to predict the kinds of biases we would most probably exhibit in decision making. A robot could use this information to be more successful in serving its goal. For instance, a robot salesperson could exploit several of the decision making biases listed above to be more effective at selling. Using the matching bias, for example, the robot could present a story about a problem a human might have and embed its desired solution in the story. When the time comes to select a solution, matching bias can lead the person to select the robot’s offered solution or product, instead of using reasoning and logic.

Human perceptive limitations

Robots could also make use of human perceptive limitations, for instance knowledge of the human visual system could allow a robot to perform an action faster than a human can perceive it. Micro-expressions could be used in this way to create a better (or worse) rapport with human beings. Visual or auditory illusions could also be manufactured with this knowledge.


In this essay, I was provoked by the concept of applying our cognitive processes and limitations to human robot interaction. I posed the question of how robots can make use of knowledge of human cognition – including its limitations – to better reach their goals. I also reason that the Uncanny Valley effect could be applied to more than the appearance of robots: it could be applied to the mental model of robot psychology that human social partners might build. In this case, there may be jarring effects, pushing further into the uncanny valley, when the robot’s thinking patterns suddenly seem incongruous – either too mechanical or too human. In these ways our understanding of human perception and decision making can be applied to human robot interaction. Furthermore, I postulate that, just as human beings have cognitive peculiarities due to evolutionary, physiological and psychological reasons, so too do robots have cognitive peculiarities based on their hardware and software.
I have tried to show my understanding of the text by abstracting the essence of the course material and reflecting it onto the mechanical mind. I also reflected on the cognitive processes of human beings from the point of view of robots and robot cognitive processes from the point of view of humans.
Thus the theories of human cognition can be further applied to draw parallels with robot cognition.


Mathur, M. B. and Reichling, D. B. (2015). “Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley.” Cognition 146.

Mori, M. (1970). “The Uncanny Valley.” Energy Volume 7 No. 4.

Gilhooly, K., Lyddy, F. and Pollick, F. (2014). Cognitive Psychology. London: McGraw Hill Publishing.

Robots spotted in the wild: Bellabot by Pudu

In hindsight, I regret not blogging more about robots I’ve discovered on my travels and in working situations. This is where my joy and enthusiasm about robots really gets fuelled – when I interact with them spontaneously. Recently, with friends at our favourite Asian restaurant (Mchi in Ijburg – FYI the food was great and each plate left empty!), we encountered some Bellabots working as assistants to the restaurant staff. I was with my buddy Vikram Radhakrishnan, who is also crazy about robots, and my partner Renze de Vries, who has made quite a few robots himself (check out his YouTube channel here). We worked on the Anki Vector project together back in 2019 – time flies when you’re in lockdown. Imagine our excitement to find these amazing robots serving dinner to patrons as if it was the most natural thing in the world! That really made our day!

These robots are also super cute – they have cat ears and when the service screen is not displayed, it shows a cat face. We’re all crazy about cats so our minds were completely blown by these robots.

Vikram interacting with the robot, under supervision from really friendly staff.
Two robots performing different functions

These robots have collision avoidance – they were able to avoid the waiting staff – and they are programmed with locations such as the table numbers and the dishwasher station. One also played ‘Happy Birthday’ in Dutch for one of the tables, as you can see (but not hear properly) in the video below:

By now I’m really excited to know more about the technology and company behind these robots, so here goes:

Bellabot Youtube Channel

Here are more products from Pudu

I hope you’ve enjoyed meeting this new robot with me.

Thanks and see you next time.

Emotions in Robots – Prof Hatice Gunes for the Royal Institution

I recently subscribed to the Royal Institution lecture series, and when I am able to catch some of the lectures, the content and moderation are always incredibly good. You can watch the Royal Institution’s science videos here on YouTube.

Go to their website to watch some of these talks live.

Lately, I saw an excellent lecture on Creating Emotionally Intelligent Technology by Professor Hatice Gunes, the Leader of the Affective Intelligence & Robotics Lab at the University of Cambridge.

Here is a link to the recording if you’d like to watch the video yourself: Vimeo video link

To start off, Prof Gunes does an amazing job of introducing emotions in technology, the work up to now, and why we’d want to achieve this goal. She then covers the work that her own lab has been doing to take the field further, and this is quite interesting.

Emergent behaviour when programming technology that interacts with humans

Prof Gunes draws our attention to the awareness of the emergence of co-behaviours between humans and robots. She explains that when we interact with machines, we create models for behaviour. Since we cannot capture real life accurately in a model, the results are imperfect. When these models are released into the wild and we interact with them, there can be unintended outcomes – we adapt our normal behaviour to what we experience in the model. In this way, robots change human behaviour, and there is emergence of new behaviours which the creator of the robot did not anticipate.

Experiments done by Prof Gunes’ lab:

London eye interaction with human emotions

  • People’s emotions tended to be more complex than what the experiment designers predicted

Robot waiters in a restaurant

  • People tend to be uncomfortable with the robot waiter being behind them

Nao robot being taught complex facial expressions by children

  • How to express complex emotions that look different on each individual’s face?

A set of avatars or artificial characters designed to exhibit specific personality characteristics

  • An annoying character which tries to have a conversation with you but contradicts what you say, trying to elicit an angry response from you.

She suggests that we should adopt an attitude of lifelong learning in working with robots – to learn from our interactions with them and be open. Especially for non-safety-related applications, she asks that we be more tolerant.


For me, it’s reassuring to know that the field of emotions in technology is progressing although we haven’t seen an increase in social consumer robots in recent years. It was certainly interesting to reflect on emotions and other technologies besides robotics, like AR/VR, virtual computer agents and even a monument like the London Eye. Prof Gunes’ work covers a fascinating breadth of topics which should make for interesting reading for the coming months.
