This summer, while visiting my sister in the UK, I had a lovely visit with her neighbour Janet and got the opportunity to experience a Joy For All robot kitty firsthand.
This robotic pet was created to help people with dementia and Alzheimer's enjoy the soothing experience of having a pet without the responsibility that comes with it.
Janet bought her cat simply to have something cuddly and lifelike around without having to take care of a real pet – something much more engaging and lovable than a stuffed toy (no offence to stuffed toys!).
The cat was quite enjoyable to interact with: it purrs, meows, and is very soft, with flexible limbs and movable ears – it even has its own brush.
It does a few tricks like rolling over and raising its paws! Overall it's a fun and interesting experience, and it reminded me that human beings need companionship and comfort – and that sometimes we can get a little help from technology.
Janet also happened to have this custom-designed leopard painted on her shed, complete with a movable tail. The design of the mechanism is simple and effective: a coat hanger and a rotating wooden disk. The eyes are LEDs, and the whole thing can be turned on and off using a Philips Hue plug. It made me wonder – should I consider this a robot? Since it isn't capable of changing its behaviour based on input, I'd categorise it somewhere along the lines of animatronics.
Speaking of animatronics, here is a video of animatronic dinosaurs (and some dinosaur fossils) from our recent visit to the Naturalis museum in Leiden!
Well, that's all for now. I hope you also get to spot many robots in the wild – happy hunting!
Our Nao recently stopped working due to a completely flat battery.
My boyfriend Renze and I have had it since 2016, and the manufacturer no longer services it now that the four-year warranty is up.
Investing in social robots has proven to be a bit tricky. Companies go bankrupt or are acquired (as happened with our Anki Vector robot) and strategies change, leaving you with some expensive hardware – at worst gathering dust on the shelf, at best needing additional investment to keep it working. You might own the hardware, but an important part of the package is often cloud-based software or development APIs that won't be maintained and updated indefinitely.
We hadn't used our Nao for quite a while due to other projects, but since I recently started a robotics course that uses the Nao, I thought it would be good to dust it off again – only to find the battery completely flat.
We first removed the battery pack by unscrewing the hatch, then we tested the cells and found them depleted.
We opened the battery pack and it seemed possible to swap the cells and reconnect the circuit board to them.
Next we had to order new cells online (these are standard 18650 Lithium-ion cells) and buy a mini spot welder to weld the zinc connectors to the cells.
This is me spot welding the cells together
We soldered the connectors onto the circuit board and taped the battery pack together with heat-resistant tape.
We did some careful testing by charging the battery with the official charger and then installing it back into Nao. Finally, our Nao is back in working order! And we've gained some confidence and added some new technical skills (and equipment) to our toolkit.
Challenges and opportunities that human perception presents for robot design
This is an essay that I wrote for a course in 2015.
Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley
Introduction
In this essay I will describe some aspects of human perception and cognition that can impact human-robot interaction. I would like to discuss the findings on the Uncanny Valley as described in the provided article, Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (Mathur and Reichling, 2015), and the parallels we might draw to other aspects of human-robot interaction. I will consider the application of theory of mind to the uncanny valley concept. Furthermore, I would like to discuss the possibility of robots making use of human cognitive and perceptual limitations and biases to pursue their goals more effectively.
Perception
Perception is the process that makes sense of the information we receive from our senses (Gilhooly et al., 2014, p. 30). Illusions occur when we misinterpret the signals from our senses by applying heuristics and rules of interpretation that don't always hold true. A classic example is the Müller-Lyer illusion: a line with two outward-pointing arrowheads at its ends looks shorter than a line of the same length with two inward-pointing arrowheads.
>------<   (inward-pointing arrowheads: appears longer)
<------>   (outward-pointing arrowheads: appears shorter)
Social Perception
Social perception describes the way that human biology and physiology constrain and influence our social interactions (Gilhooly et al., 2014, p. 64). The physical limits of the parts of the body that produce communicative signals – our facial structures and expressive capabilities, vocal structures and gestural capabilities – mean that there is a limited range of communicative input that we can perceive and process. Faces provide valuable information on emotion, age, gender, attractiveness and identity. We are even capable of identifying faces under challenging conditions, including low light and long range. Very specific areas and networks in the brain are engaged in facial recognition – we are highly and specifically adapted for it. The composition of features on the face is as important for recognition as the appearance of the features themselves. Social perception leads to social and emotional responses, which inform us about the internal states, motivations and emotions of others.
Biases in human decision making and reasoning
Decision making is the process of choosing between options (Gilhooly et al., 2014, p. 325). There are several theories that model the biases human beings show in our basic decision-making processes, including:
Prospect theory, in which losses loom larger than equivalent gains, so we avoid risks that could lead to losses even when the potential gains justify them (a standard formulation is sketched after this list).
Multi-attribute decision making, in which we simplify complex decisions using strategies like elimination-by-aspects and satisficing.
Probability-based decision making, in which we rely on heuristics like availability and representativeness, and tend to ignore base rates.
Emotions-based decision making, using fast System 1 thinking instead of reasoning and logic.
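To make the first of these theories concrete, prospect theory is usually formalised with a value function that is concave for gains, convex for losses, and steeper for losses (loss aversion). A commonly cited formulation, using Tversky and Kahneman's (1992) parameter estimates, is:

$$
v(x) = \begin{cases} x^{\alpha} & \text{if } x \ge 0 \\ -\lambda\,(-x)^{\beta} & \text{if } x < 0 \end{cases}
\qquad \alpha \approx \beta \approx 0.88,\; \lambda \approx 2.25
$$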
Reasoning is the process of deriving new information from what we already know about a certain topic. The text covers two types of reasoning: deductive reasoning, which involves drawing logical conclusions from what we already know and applying those rules to specific information to derive new information, and inductive reasoning, which involves inferring generalisations from specific observations.

Here are some of the biases in our reasoning faculties:
Illusory inferences, in which we make the error of assuming that something that is probably true is definitely true
Belief bias, in which what we already believe to be true overrides our logic and reasoning about a subject
Matching bias, in which, to simplify, we make selections by matching features mentioned in the problem with those that appear in the possible solutions, instead of using logic to solve the problem
Confirmation bias, in which we tend to look for evidence that confirms our hypotheses, instead of evidence that would disconfirm them.
Article summary
The Uncanny Valley
The essence of the Uncanny Valley theory, first described by Masahiro Mori in his 1970 paper on the subject, is that human beings dislike imperfection in human-likeness. Mori's paper describes the effects of robot appearance and robot movement on our affinity for the robot (Mori, 1970). In this course we were provided with an article, 'Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley' (Mathur and Reichling, 2015), which deals with the existence of an Uncanny Valley effect in the relationship between robot likeability and trustworthiness on the one hand and how mechanical or human the robot's appearance is on the other. The study used pictures of robot faces, obtained from the internet, with varying levels of mechano-humanness. The images were then rated by study participants for different characteristics. It was found that mechanical and human appearance could be treated as one continuous variable.
Pal Reem-C Robot
Study Results
Perceived emotion did not influence the relationship between likeability and mechano-humanness; however, faces showing more positive emotion appeared more likeable.
Very mechanical-looking robots were seen as unlikeable, and likeability increased as the robot faces became less mechanical in appearance. Likeability then declined close to the confusing boundary between mechanical- and human-looking robots, before increasing dramatically as the robots began to look much more human.
Trust
Another experiment, relating to trust, showed a similar pattern for trust versus mechano-humanness. A very mechanical robot was seen as untrustworthy; trust increased as the appearance became less mechanical, decreased as the robot approached the boundary between mechanical- and human-looking, and increased again for very human-looking robots. Robots showing more positive emotion were seen as more trustworthy.
Theory of mind
The article describes how we trust others whom we believe to have interests that encapsulate our own. Subjects in the study might have tried to assess the robots' interests, possibly applying theory of mind to the robots and attributing a stance implying thoughts, beliefs and desires to them. Alternatively, the robots may have been treated as inanimate agents of the robot maker. The article continues as follows about the nature of human-robot interaction:
“humans appear to infer trustworthiness from affective cues (subtle facial expressions) known to govern human–human social judgments. These observations help locate the study of human-android robot interaction squarely in the sphere of human social psychology rather than solely in the traditional disciplines of human factors or human–machine interaction”.
Category confusion
Katsyri et al. (2015) suggest that the Uncanny Valley effect arises at the boundary between perceptual categories – here, between the non-human and human categories. This confusion is measured experimentally as the additional time it takes to categorise a stimulus. The experimental results in the study confirmed the theory: very human- or very mechanical-looking faces were classified more quickly than more ambiguous ones, and the maximum rating time occurred at the point of minimum likeability. Results also indicated that very human-looking robots were more likeable, but suggest that small faults in their humanness would push them back down the likeability scale into the Uncanny Valley. The article speculates that Uncanny Valley effects on trust may be influenced by robots' individual characteristics, like sex, facial form and perceived emotion.
Own reflections
Application of Uncanny Valley to Theory of Mind
In the selected article, the Uncanny Valley theory is applied to the relationship between robot appearance and likeability or trust. It could be equally valid to apply the Uncanny Valley model to the theory of mind that human social partners develop when interacting with robots.
Lack of human reasoning and decision-making biases
One robot-specific cognitive characteristic could simply be the absence of the human reasoning and decision-making biases discussed above. Given enough resources, processing capacity and smart programming, a robot need not exhibit human decision-making biases – such as prospect-theory loss aversion or emotion-driven System 1 choices – and could instead apply logic and clear reasoning when making decisions; the sketch below contrasts the two styles.
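As a purely illustrative sketch – the scenario is my own invention, and the parameters are Tversky and Kahneman's published estimates – here is how a prospect-theory-style 'human' chooser and a plain expected-value chooser can disagree on the same options:

```python
# Illustrative sketch (my own hypothetical scenario, not taken from the cited article):
# a prospect-theory-style "human" chooser vs. a plain expected-value chooser.
# ALPHA, BETA and LAM are Tversky & Kahneman's (1992) median parameter estimates.

ALPHA, BETA, LAM = 0.88, 0.88, 2.25

def pt_value(x):
    """Prospect-theory value: concave for gains, convex and ~2x steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** BETA

def pt_score(gamble):
    """Value of a gamble [(probability, outcome), ...] under the value function."""
    return sum(p * pt_value(x) for p, x in gamble)

def ev_score(gamble):
    """Plain expected value - the 'bias-free robot' decision rule."""
    return sum(p * x for p, x in gamble)

def preference(score, certain, risky):
    a, b = score(certain), score(risky)
    if abs(a - b) < 1e-9:
        return "indifferent"
    return "certain option" if a > b else "gamble"

choices = {
    "gains":  ([(1.0, 50)],  [(0.5, 100), (0.5, 0)]),   # sure +50 vs a 50/50 chance of +100
    "losses": ([(1.0, -50)], [(0.5, -100), (0.5, 0)]),  # sure -50 vs a 50/50 chance of -100
}

for frame, (certain, risky) in choices.items():
    print(f"{frame:>6}: expected-value chooser -> {preference(ev_score, certain, risky)}, "
          f"prospect-theory chooser -> {preference(pt_score, certain, risky)}")
```

The expected-value 'robot' is indifferent in both framings, while the prospect-theory model prefers the sure gain but gambles to avoid the sure loss – the classic human pattern.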
Additional robotic cognitive biases and limitations
Specific cognitive biases could also arise from a robot's hardware and software limitations. Examples of limitations in cognition and perception for robots could include not being able to listen and speak at the same time, slow processing times causing delayed responses, giving exactly the same answer to the same question every time without originality or creativity, or not understanding humour or metaphors. Such limitations can create a disconnect and remind human social partners that they are talking to a machine.
Preference for different decision making methods
If a robot does not use a typically human decision-making method, that too could create a disconnect for the observing human. For example, the textbook discusses multi-attribute utility theory as a way of comparing complex options, and it might be considered a typically human response to the task of choosing where to buy a house (a small sketch of this weighted-attribute approach follows below). A less typically human approach might be to choose based on a statistical analysis, e.g. 'similar robots live in houses like this, therefore I should choose this house'.
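Here is a tiny sketch of that weighted-attribute approach; the houses, attributes and weights are invented purely for illustration:

```python
# Hypothetical sketch of multi-attribute utility: score each house as a weighted
# sum of attribute ratings. The attributes, weights and ratings are all invented.

weights = {"price": 0.4, "commute": 0.3, "garden": 0.2, "neighbourhood": 0.1}

houses = {
    "canal-side apartment": {"price": 4, "commute": 9, "garden": 1, "neighbourhood": 8},
    "suburban family home": {"price": 7, "commute": 5, "garden": 9, "neighbourhood": 7},
}

def utility(ratings):
    """Weighted sum of 0-10 attribute ratings."""
    return sum(weights[attr] * score for attr, score in ratings.items())

for name, ratings in houses.items():
    print(f"{name}: utility = {utility(ratings):.1f}")

best = max(houses, key=lambda name: utility(houses[name]))
print("choose:", best)
```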
Navigating up and down the walls of the Uncanny Valley
In the article we saw that very human-looking robots were more likeable, but that small defects in their humanness could push them down the valley into unlikeability. Robots closer to the boundary between human- and mechanical-looking could become less likeable when their appearance became uncharacteristically human.
This concept can also be related to an uncanny valley of theory of mind. Imagine a robot with quite mechanical thought patterns: this robot could become very unlikeable by unexpectedly exhibiting thought patterns that are too human. This would be an indirect way of applying the idea of the Uncanny Valley – instead of the direct effect of visually observing and feeling, we would be considering the accumulated effect of small pieces of information and patterns used to classify the robot over a presumably longer period of time.
Fig. 1. Proposed relationship between robotic and human cognitive style on likeability
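As a companion to Fig. 1, here is a rough, purely hypothetical sketch of the valley-shaped relationship I have in mind – the function is invented to show the shape, not fitted to any data:

```python
# Toy, invented curve: likeability vs. how human-like a robot's cognitive style
# seems (h in [0, 1]), with a dip near an arbitrary category boundary at h = 0.75.
import math

def likeability(h):
    valley = 1.2 * math.exp(-((h - 0.75) ** 2) / (2 * 0.07 ** 2))
    return h - valley

for i in range(11):
    h = i / 10
    bar = "#" * max(0, int((likeability(h) + 1.2) * 20))
    print(f"h={h:.1f}  likeability={likeability(h):+.2f}  {bar}")
```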
Use of cognitive limitations when robots interact with humans
How should robots make use of this knowledge we have about human cognition? Let us imagine that robots have an explicit model of human thinking and are able to predict the kinds of biases we would most probably exhibit in decision making. A robot could use this information to serve its goals more effectively. For instance, a salesperson robot could exploit several of the decision-making biases listed above to sell more effectively: using the matching bias, it could present a story about a problem the customer recognises and embed its preferred solution in that story. When the time comes to select a solution, matching bias may lead the person to pick the robot's offered solution or product, instead of using reasoning and logic to select one.
Human perceptive limitations
Robots could also make use of human perceptual limitations. For instance, knowledge of the human visual system could allow a robot to perform an action faster than a human can perceive it. Micro-expressions could be used in this way to create better (or worse) rapport with human beings. Visual or auditory illusions could also be manufactured with this knowledge.
Conclusions
In this essay, I was intrigued by the concept of applying our cognitive processes and limitations to human-robot interaction. I posed the question of how robots could use knowledge of human cognition – including its limitations – to better reach their goals. I also reason that the uncanny valley effect could apply to more than the appearance of robots: it could apply to the mental model of robot psychology that human social partners build. In this case, there may be jarring effects, pushing the robot further into the uncanny valley, when its thinking patterns suddenly seem incongruous – either too mechanical or too human. In these ways our understanding of human perception and decision making can be applied to human-robot interaction. Furthermore, I postulate that, just as human beings have cognitive peculiarities due to evolutionary, physiological and psychological factors, so too do robots have cognitive peculiarities arising from their hardware and software. I have tried to show my understanding of the text by abstracting the essence of the course material and reflecting it onto the mechanical mind. I also reflected on the cognitive processes of human beings from the point of view of robots, and on robot cognitive processes from the point of view of humans. Thus theories of human cognition can be further applied to draw parallels with robot cognition.
References
Mathur, M. B. and Reichling, D. B. (2015). “Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley.” Cognition, 146.
Mori, M. (1970). “The Uncanny Valley.” Energy, 7(4).
Gilhooly, K., Lyddy, F. and Pollick, F. (2014). Cognitive Psychology. London: McGraw Hill Publishing.
I have discussed the topic of quality and testing with a few robotics startups, and the conversation tends to reach the same consensus: formal quality assurance processes have no place in a startup. While I totally appreciate this view, this blog post offers an alternative approach to quality for robotics startups.
The main priority of many startups is to produce something that will attract investment – it has to basically work well enough to get funding. Investors, customers and users can be very forgiving of quality issues, especially where emerging tech is involved. Startups should deliver the right level of quality for now and prepare for the next step.
In a startup, there is unlikely to be a dedicated tester or quality strategy. Developers are the first line of defence for quality – they must bake it into the proof-of-concept code, for example with unit tests. The developers and founders probably do some functional validation, they might encounter more extreme use cases while demoing the functionality, and they might do limited testing with real-life users.
What are the main priorities of the company at this phase, and what are the matching levels of quality? Initially, the product's main goal is to support application development and demos, and to be effective and usable for its early adopters. Based on these priorities, I've come up with some quality aspects that could be useful for robotics startups.
A good quality demo
Here are some aspects of quality which could be relevant for demoing:
Portable setup
Can be transported without damaging the robot and supporting equipment
Can be explained at airport security if needed
Works under variable conditions in the customer's meeting room:
Poor wifi connections
Power outlets not available
Outside of company network
Uneven floors
Stairs
Noise
Different lighting
Reflective surfaces
Will work for the duration of the demo
Demo will be suitable for audience
Demo’ed behaviour will be visible and audible from a distance, e.g. in a boardroom
Mode can be changed to a scripted mode for demos
Functionality actually works and can be shown – a checklist of basic functionality can take away the guesswork, without the need for heavyweight test cases (see the sketch below)
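A pre-demo checklist doesn't need to be more than a small script you run before walking into the meeting room. The sketch below is hypothetical – the hostnames, battery check and demo-script loader are stand-ins for whatever your robot's own SDK exposes:

```python
# Hypothetical pre-demo smoke checklist. Replace the stubs with real checks
# against your robot's own API (battery level, network, speech, key behaviours).
import socket

def can_reach(host, port=443, timeout=3):
    """Basic connectivity check - useful for flaky customer wifi."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stubs standing in for whatever your robot SDK actually provides.
def get_battery_level(): return 80
def load_demo_script(name): return {"name": name}

CHECKS = [
    ("internet reachable",      lambda: can_reach("example.com")),
    ("cloud backend reachable", lambda: can_reach("api.example-robot-cloud.com")),  # hypothetical host
    ("battery above 50%",       lambda: get_battery_level() > 50),
    ("demo script loads",       lambda: load_demo_script("boardroom_demo") is not None),
]

def run_checklist():
    failed = 0
    for name, check in CHECKS:
        try:
            ok = bool(check())
        except Exception:
            ok = False
        failed += not ok
        print(f"[{'PASS' if ok else 'FAIL'}] {name}")
    return failed == 0

if __name__ == "__main__":
    raise SystemExit(0 if run_checklist() else 1)
```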
Quality for the users and buyers
The robot needs to prove itself fit for operation:
Functionality works
What you offer can be suitably adapted for the customer’s actual scenario
Every business has its own processes, and the bot will probably have to adapt to match the terminologies, workflows and scenarios that fit the user's processes
Languages can be changed
Bot is capable of conversing at the level of the target audience (e.g. children, elderly)
Bot is suitable for the context where it's intended to work, like a hospital or school – it will not make sudden movements or catch on cables
Reliability
Users might tolerate failures up to a certain extent, until they become too annoying or repetitive, or cannot be recovered from
Failures might be jarring for vulnerable users like the mentally or physically ill
Is the robot physically robust enough to interact with in unplanned ways?
Security
Will port scanning or other exploitative attacks easily reveal vulnerabilities that could result in unpredictable or harmful behaviour?
Can personal data be hijacked through the robot?
Ethical and moral concerns
Users might not understand that there is no consciousness interacting with them, thinking the robot is autonomous
There might be users who think their interactions are private, while these might actually be reviewed for analysis purposes
Users might not realise their data will be sent to the cloud and used for analysis
Legal and support issues
What kind of support agreement does the service provider have with the robot manufacturer and how does it translate to the purchaser of the service?
Quality to maintain, pivot and grow
During these cycles of demoing to prospects, defects will be identified and need to be fixed. Customers will give advice or provide input on what they were hoping to see, and features will have to be tweaked or added. The same will happen during research and test rounds at customers, and during user feedback sessions.
The startup will want to add features and fix bugs quickly. For this to work, it helps to have good discipline: clean, maintainable code and at least unit tests that give quick feedback on the quality of a change. Hopefully they will also have some functional (and a few non-functional) acceptance tests.
When adoption increases, the startup might have to pivot quickly to a new application, or scale to more than one customer or use case. At this phase a lot of refactoring will probably happen to make the existing codebase scalable. In that case, good unit tests and component tests will be your best friend and ensure you are able to maintain the stability of the functionality you already have (as mentioned in this TechCrunch article on startup quality); a minimal example follows below.
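As a minimal illustration of that safety net (the intent matcher here is a made-up stand-in for whatever component you are refactoring), a couple of pytest-style unit tests are often enough to refactor with confidence:

```python
# Hypothetical example: a tiny keyword-based intent matcher and the unit tests
# that let you refactor it (e.g. swap in an NLU service later) without fear.

def match_intent(utterance: str) -> str:
    """Very naive intent matching - the implementation is a placeholder."""
    text = utterance.lower()
    if any(w in text for w in ("hello", "hi ", "good morning")):
        return "greeting"
    if "appointment" in text or "schedule" in text:
        return "book_appointment"
    return "unknown"

# pytest-style tests: run with `pytest this_file.py`
def test_greeting_is_recognised():
    assert match_intent("Good morning, robot!") == "greeting"

def test_appointment_request_is_recognised():
    assert match_intent("Can I schedule an appointment for Tuesday?") == "book_appointment"

def test_unrelated_input_falls_through():
    assert match_intent("What is the weather on Mars?") == "unknown"
```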
Social robot companies are integrators – ensure quality of integrated components
As a social robotics startup, if you are not creating your own hardware, OS, or interaction and processing components, you should become familiar with the quality of the hardware and software components you are integrating. Some basic integration tests will help you stay confident that the basics still work when, for instance, an external API is updated – a small sketch follows below. It's also worth considering your liability when something goes wrong somewhere in the chain.
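An integration smoke test can be nearly as small. The sketch below assumes a hypothetical cloud speech-to-text endpoint (the URL, fixture file and response fields are invented); the point is simply to fail fast when an external dependency changes underneath you:

```python
# Hypothetical integration smoke test for an external speech-to-text API.
# The endpoint URL and response fields are invented placeholders - substitute
# the real service your robot depends on.
import requests

STT_ENDPOINT = "https://api.example-speech-vendor.com/v1/transcribe"  # hypothetical

def test_stt_service_contract():
    with open("fixtures/hello_robot.wav", "rb") as audio:
        response = requests.post(STT_ENDPOINT, files={"audio": audio}, timeout=10)
    # Fail fast if the vendor changes the API shape or starts erroring.
    assert response.status_code == 200
    body = response.json()
    assert "transcript" in body
    assert isinstance(body["transcript"], str)
```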
Early days for robot quality
To round up, it does seem to be early days to be talking about social robot quality. But it's good for startups to be aware of what they are getting into, because this topic will no doubt become more relevant as their company grows. I hope this post helps robotics startups, now and in the future, to stay in control of their quality as they grow.
Feel free to contact me if you have any ideas or questions about this topic!
Thanks to Koen Hendriks of Interactive Robotics, Roeland van Oers at Ready for Robotics and Tiago Santos at Decos, as well as all the startups and enthusiasts I have spoken to over the past year for input into this article.
Ensure there is economic and societal benefit from robots
Share information on recent advancements in robotics
Reveal new business opportunities
Influence decision makers
Promote collaboration within the robotics community
The sessions were organised into workshops, encouraging participants from academia, industry and government to cross boundaries. In fact, many of the sessions had an urgent kind of energy, with the focus on discussions and brainstorming with the audience.
Edinburgh castle at night
Broad spectrum of robotics topics
Topics covered in the conference included: AI, Social Robotics, Space Robotics, Logistics, Standards used in robotics, Health, Innovation, Miniaturisation, Maintenance and Inspections, Ethics and Legal considerations. There was also an exhibition space downstairs where you could mingle with different kinds of robots and their vendors.
Nikita the iCub robot from Edinburgh Centre of Robotics
Pal Reem-C Robot
Paro the robot seal that comforts the elderly
Pal’s Tiago robot
The kickoff session on the first day had some impressive speakers – leaders in the fields of AI and robotics, covering business and technological aspects.
Bernd Liepert, the head of euRobotics, covered the economic aspects of robotics, stating that robot density in Europe is among the highest in the world. Europe has a 38% share of the worldwide professional robotics market, with more startups and companies in the field than the US. Service robotics already accounts for over half the turnover of industrial robotics. Since Europe does not have enough institutions to develop innovations in all areas of robotics, combining research efforts and transferring results to industry is key.
The next speaker was Keith Brown, the Scottish secretary for Jobs, the Economy and Fair Work, who highlighted the importance of digital skills to Scotland. He emphasised the need for everyone to benefit from the growth of the digital economy, and the increase in productivity that it should deliver.
Juha Heikkila from the European Commission explained that, in terms of investment, the EU robotics programme is the biggest in the world. Academia and industry should be brought together to drive innovation through innovation hubs, which will bring technological advances to companies of all sizes.
Raia Hadsell of DeepMind gave us insight into how deep learning can be applied to robotics. She conceptualised the application of AI to problem areas like speech and image recognition, where inputs (audio files, images) are mapped to outputs (text, labels). The same model can be applied to robotics, where the input is sensor data and the output is an action. For more insight, see this article about a similar talk she gave at the Re•Work Deep Learning Summit in London. She showed us that learning time for robots can be reduced by training neural networks in simulation and then adding neural network layers to transfer what was learned to other tasks.
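To make that input-to-output framing concrete, here is a minimal PyTorch-style sketch (purely illustrative – not DeepMind's architecture) of a policy network mapping a sensor observation to an action, plus a crude version of the "train in simulation, then add layers" idea:

```python
# Illustrative sketch only - not DeepMind's architecture. A small policy network
# mapping a robot's sensor reading (e.g. joint angles + range sensors) to an
# action (e.g. target joint velocities), the same input->output framing described above.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, obs_dim: int = 24, act_dim: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, observation: torch.Tensor) -> torch.Tensor:
        return self.net(observation)

# Pretend this network was trained in simulation...
sim_policy = PolicyNet()

# ...then, in the spirit of the transfer idea from the talk, freeze it and add a
# small trainable head for the new (real-world) task instead of learning from scratch.
for param in sim_policy.parameters():
    param.requires_grad = False

adapter = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 6))
obs = torch.randn(1, 24)            # a fake sensor reading
action = adapter(sim_policy(obs))   # frozen sim policy + trainable new layers
print(action.shape)                 # torch.Size([1, 6])
```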
Deep learning tends to be seen as a black box in terms of traceability, and therefore risk management, because people assume that neural networks produce novel and unpredictable output. Hadsell assured us, however, that each layer in a neural network can be inspected, tested and verified, since a given input produces output within a known range.
The last talk in the kickoff, delivered by Stan Boland of Five AI, brought together the business and technical aspects of self-driving cars. He mentioned that the appetite for risky tech investment seems to be increasing, with a fivefold growth in investment over the past five years. He emphasised the need for exciting tech companies to retain European talent, advance innovation and reverse the trend of top EU talent migrating to the US.
On the technology side, Stan gave some insight into advances in perception and planning for self-driving cars. In the picture below, you can see how stereo depth mapping is done at Five AI, using input from two cameras to map the depth of each pixel in the image. They create an aerial projection of what the car sees in front of it and use this bird's-eye view to plan the car's path from 'above'. Some challenges remain, however, with 24% of cyclists still being misclassified by computer vision systems.
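The talk didn't go into implementation details, but the underlying geometry is standard: depth is focal length times baseline divided by disparity. A generic OpenCV sketch (the camera parameters are placeholders, and this is not Five AI's actual pipeline) might look like this:

```python
# Generic stereo depth sketch using OpenCV block matching - not Five AI's pipeline.
# Assumes a rectified stereo pair; focal length and baseline values are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching produces a disparity map (in 1/16 pixel units).
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

FOCAL_LENGTH_PX = 1000.0   # placeholder: comes from camera calibration
BASELINE_M = 0.30          # placeholder: distance between the two cameras

# depth = f * B / disparity, valid only where disparity > 0
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]

print("median depth of valid pixels (m):", np.median(depth[valid]))
```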
With that, he reminded us that full autonomy in self-driving cars is probably out of reach for now. Assisted driving on highways and other easy-to-classify areas is probably the most achievable goal. Beyond that, the cost to the consumer becomes prohibitive, and truly autonomous cars will probably only be sustainable in a services model, where the costs are shared. Even in that model, training data probably could not be shared between localities, since road layouts and driving styles are very specific to different parts of the world (e.g. Delhi vs San Francisco vs London).
An industry of contrasts
This conference was about overcoming fragmentation and benefitting from cross-domain advances in robotics, to keep the EU competitive. There were contradictions and contrasts in the community which gave the event some colour.
Each application of robotics that was represented seemed to have its own approaches, challenges, and phase of development, like drones, self driving cars, service robotics and industrial robotics. In this space, industrial giants find themselves collaborating with small enterprises – it takes many different kinds of expertise to make a robot. The small companies cannot afford to spend the effort that is needed to conform to the industry standards while the larger companies would go out of business if they did not conform.
A tension existed between the hardware and software sides of robotics – those from an AI background have some misunderstandings to correct, like how traceable and predictable neural networks are. The ‘software’ people had a completely different approach to the ‘hardware’ people as development methodologies differ. Sparks flew as top-down legislation conflicted with bottom-up industry approaches, like the Robotic Governance movement.
The academics in robotics sometimes dared to bring more idealistic ideas to the table that would benefit the greater good, but which might not be sustainable. The ideas of those from industry tended to be mindful of cost, intellectual property and business value.
Two generations of roboticists were represented – those who had carried the torch in less dramatic years, and an upcoming generation surging forward impatiently. There was conflict and drama at ERF2017, but also loads of passion and commitment to bringing robotics safely and successfully into our society. Stay tuned for the next post, in which I will provide some details on the sessions, including more on ethics, legislation and standards in robotics!
NASA Valkyrie Robot at the Edinburgh Centre for Robotics