Making social robots work

 


Mady Delvaux, in her draft report on robotics, advises the EU that robots should be carefully tested in real life scenarios, beyond the lab. In this and future articles, I will examine different aspects of social robot requirements, quality and testing, and try to determine what is still needed in these areas.

Why test social robots?

In brief, I will define robot quality as: does the robot do what it is supposed to do, and avoid doing what it shouldn’t? For example, when you press the power button while the robot is off, does it turn on and the indicator light turn green? If you press the button twice in quick succession, does the robot still behave acceptably? Testing is the activity of analysing what you have produced to determine its level of quality – is it good enough for the intended purpose?
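To make this concrete, here is a minimal sketch of that power-button requirement written as automated tests. The FakeRobot class and its methods are illustrative assumptions standing in for whatever control API a real robot exposes, not any vendor’s SDK.

```python
# Minimal sketch: the power-button requirement expressed as automated tests.
# FakeRobot is an illustrative stand-in, not a real robot SDK.

class FakeRobot:
    def __init__(self):
        self.state = "offline"
        self.indicator = "off"

    def press_power(self):
        # A second press while already online is ignored rather than
        # toggling the robot off again mid-boot.
        if self.state == "offline":
            self.state = "online"
            self.indicator = "green"


def test_power_on_from_offline():
    robot = FakeRobot()
    robot.press_power()
    assert robot.state == "online" and robot.indicator == "green"


def test_quick_double_press_is_still_acceptable():
    robot = FakeRobot()
    robot.press_power()
    robot.press_power()  # second press in quick succession
    assert robot.state == "online" and robot.indicator == "green"


if __name__ == "__main__":
    test_power_on_from_offline()
    test_quick_double_press_is_still_acceptable()
    print("power-button behaviour: OK")
```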

Since social robots will interact closely with people, they will have to comply with strict standards to ensure they have no unintended negative effects. Some standards are already being developed, such as ISO 13482:2014 on safety requirements for personal care robots, but we will need many more to help companies show they have done their duty to protect consumers and society. Testing will give insight into whether these robots meet the standards, and new test methods will have to be defined.

What are the core features of the robot?

The first aspect of quality we should measure is whether the robot fulfils its basic functional requirements or purpose. For example, a chef robot like Moley’s robotic kitchen would need to be able to take orders, check ingredient availability, order or request ingredients, plan cooking activities, operate the stove or oven, put food into pots and pans, stir, time the cooking, check readiness, serve dishes and possibly clean up.

 

A robot at an airport which helps people find their gate and facilities must be able to identify when someone needs help; determine where they are trying to go (perhaps by talking to them or scanning a boarding pass); plan a route; communicate that route by talking, gesturing or printing a map; and know when the interaction has ended.
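That interaction is essentially a small state machine, and a rough sketch like the one below makes the expected flow explicit and testable. The state names and event strings are my own illustrations, not anything from a real guide robot’s software.

```python
# Sketch of the guide-robot interaction flow as a state machine.
# States and events mirror the steps described above; names are illustrative.

from enum import Enum, auto


class GuideState(Enum):
    IDLE = auto()            # waiting to detect someone who needs help
    IDENTIFY_GOAL = auto()   # talk to the person or scan a boarding pass
    PLAN_ROUTE = auto()      # compute a route to the gate or facility
    COMMUNICATE = auto()     # speak, gesture, or print a map
    DONE = auto()            # interaction has ended


def next_state(state: GuideState, event: str) -> GuideState:
    transitions = {
        (GuideState.IDLE, "help_needed"): GuideState.IDENTIFY_GOAL,
        (GuideState.IDENTIFY_GOAL, "destination_known"): GuideState.PLAN_ROUTE,
        (GuideState.PLAN_ROUTE, "route_found"): GuideState.COMMUNICATE,
        (GuideState.COMMUNICATE, "person_left"): GuideState.DONE,
        (GuideState.COMMUNICATE, "person_lost"): GuideState.DONE,
    }
    return transitions.get((state, event), state)  # unknown events keep the state


if __name__ == "__main__":
    state = GuideState.IDLE
    for event in ["help_needed", "destination_known", "route_found", "person_left"]:
        state = next_state(state, event)
        print(event, "->", state.name)
```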

 

With KLM’s guide robot Spencer at Schiphol airport, benchmarking was used to ensure the quality of each function separately. Later, the robot was put into live situations at Schiphol and tracked to see whether it was planning its movement correctly. A metric of distance travelled autonomously versus non-autonomously was used to evaluate the robot. Autonomy will probably be an important characteristic to test, and to make users aware of, in the future.
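In its simplest form, such an autonomy metric is just the fraction of total distance covered without human intervention. A small sketch, with illustrative argument names:

```python
# Autonomy metric sketch: share of total distance travelled autonomously.
# Argument names are illustrative; a real system would log these per run.

def autonomy_ratio(distance_autonomous_m: float, distance_assisted_m: float) -> float:
    total = distance_autonomous_m + distance_assisted_m
    if total == 0:
        return 0.0
    return distance_autonomous_m / total


# Example: 840 m driven autonomously and 160 m under operator control -> 0.84
print(autonomy_ratio(840.0, 160.0))
```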

Two user evaluation studies were done with Spencer, and feedback was collected about the robot’s effectiveness at guiding people around the airport. Some people, for example, found the speed of the robot too slow, especially in quiet periods, while others found the robot too fast, especially for families to follow.

Different environments and social partners

How can we ensure robots function correctly in the wide variety of environments and interaction situations that we encounter every day? Amazon’s Alexa, for example, has a few communication limitations, such as difficulty knowing whether she is taking orders from the right user and trouble conversing with children.

At our family gatherings, our SoftBank Nao robot, Peppy, cannot quite make out instructions against talking and cooking noises. He also has a lot of trouble determining whom to focus on when interacting with a group. SoftBank tests its robots by isolating them in a room and providing recorded input to determine whether they behave correctly, but it is difficult to simulate large public spaces this way. The Pepper robots seem to cope better with these conditions. In the MuMMER project, tests are done in malls with Pepper to determine what social behaviours a robot needs to interact effectively in public spaces.

 

The Pepper robot at the London Science Museum History of Robots exhibition was hugely popular and constantly surrounded by a crowd – it seemed to do well under these conditions, while following a script, as did the Pepper at the European Robotics Forum 2017.

When society becomes the lab

Kristian Esser, founder of the Technolympics, Olympic-style games for cyborgs, suggests that in these times society itself becomes the test lab. For technologies that are made for close contact with people, but which can have a negative effect on us, the paradox is that we must be present to test them, and the very act of testing them is risky.

Consider self-driving vehicles, which must eventually be tested on the road. The human driver must remain aware of what is happening and correct the car when needed, as we have seen in the case of Tesla’s first self-driving fatality: “The … collision … raised concerns about the safety of semi-autonomous systems, and the way in which Tesla had delivered the feature to customers.” Assisted driving will probably reduce the overall number of traffic-related fatalities in the future, and that’s why it’s a goal worth pursuing.

For social robots, we will likely have to follow a similar approach: first achieve a certain level of quality in the lab, then work with informed users to guide the robot, perhaps in a semi-autonomous mode. The perceived value of the robot should be in balance with the risks of testing it. With KLM’s Spencer robot, a combination of lab tests and real-life tests is performed to build the robot up to a level of quality at which it can be exposed to people in a supervised way.

Training robots

Over lunch the other day, my boss suggested the idea of teaching social robots as we do children: by observing or reviewing behaviour and correcting it afterwards. There is research supporting this idea, like this study on robots learning from humans by imitation and goal inference. One problem with letting the public train social robots is that they might teach robots unethical or unpleasant behaviour, as happened with Microsoft’s Tay chatbot.

To ensure that robots do not learn undesirable behaviours, perhaps we can have a ‘foster parent’ system – trained and approved robot trainers who build up experience over time and can be held accountable for the training outcome. To prevent the robot accidentally picking up bad behaviours, it could have distinct learning and executing phases.

The robot might have different ways of getting validation of its tasks, behaviours or conclusions. It would then depend on the judgement of the user to approve or correct behaviour. New rules could be sent to a cloud repository for further inspection and compared with similar learned rules from other robots, to find consensus. Perhaps new rules should only be applied if they have been learned and confirmed in multiple households, or examined by a technician.
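As a sketch of that consensus idea: a learned rule is only promoted from the learning phase to the executing phase once enough independent households have confirmed it. The data model, identifiers and threshold below are assumptions for illustration only.

```python
# Sketch: promote a learned rule only after independent confirmation
# in several households. Threshold and identifiers are illustrative.

from collections import defaultdict

CONFIRMATIONS_REQUIRED = 3  # arbitrary threshold for this sketch

# rule id -> set of household ids that learned and approved the rule
confirmations = defaultdict(set)


def report_confirmation(rule_id, household_id):
    """Record that a user in this household approved the learned rule."""
    confirmations[rule_id].add(household_id)


def approved_for_execution(rule_id):
    """A rule stays in the learning phase until enough households agree."""
    return len(confirmations[rule_id]) >= CONFIRMATIONS_REQUIRED


if __name__ == "__main__":
    for household in ("house-a", "house-b", "house-c"):
        report_confirmation("greet-guests-by-name", household)
    print(approved_for_execution("greet-guests-by-name"))  # True
```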

To conclude, I think testing of social robots will be done in phases, as it is done with many other products. There is a limit to what we can achieve in a lab and there should always be some controlled testing in real life scenarios. We as consumers should be savvy as to the limitations of our robots and conscious of their learning process and our role in it.

Understanding Social Robotics

Pepper, Jibo and Milo make up the first generation of social robots, leading what promises to be a cohort with diverse capabilities and applications in the future. But what are social robots and what should they be able to do? This article gives an overview of theories that can help us understand social robotics better.

What is a social robot?


The definition I like most describes social robots as robots for which social interaction plays a key role: the robot needs social skills in order to perform some kind of function. A survey of socially interactive robots (5) defines some key characteristics which summarise this group very well. A social robot should show emotions, be able to converse on an advanced level, understand the mental models of its social partners, form social relationships, make use of natural communication cues, show personality and learn social capabilities.

Understanding Social Robots (1) offers another interesting perspective of what a social robot is:

Social robot = robot + social interface

In this definition, the robot has its own purpose outside of the social aspect. Examples are care robots, cleaning robots in our homes, service desk robots at an airport or mall information desk, or chef robots in a cafeteria. The social interface is simply a kind of familiar protocol which makes it easy for us to communicate effectively with the robot. Social cues can give us insight into the intention of a robot: shifting its gaze towards a mop, for example, gives a clue that the robot is about to change activity, even though it might not have eyes in the classical sense.
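One way to read ‘robot + social interface’ is as plain composition: the task side and the social layer are separate concerns. The classes and methods below are purely illustrative, not taken from the cited paper.

```python
# Composition sketch of "social robot = robot + social interface".
# CleaningRobot and SocialInterface are illustrative names only.

class CleaningRobot:
    """The robot's own purpose, independent of any social behaviour."""

    def next_activity(self) -> str:
        return "mopping the kitchen floor"


class SocialInterface:
    """A familiar communication layer wrapped around the task robot."""

    def __init__(self, robot: CleaningRobot):
        self.robot = robot

    def signal_intention(self) -> str:
        # A social cue (a gaze shift, a spoken sentence) that tells people
        # what the robot is about to do next.
        return f"I am about to start {self.robot.next_activity()}."


if __name__ == "__main__":
    print(SocialInterface(CleaningRobot()).signal_intention())
```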

These indicators of social capability can be as useful as actual social ability and drivers in the robot. As studies show, children are able to project social capabilities onto simple inanimate objects like a calculator, and a puppet becomes an animated social partner during play. In the same way, robots only have to have the appearance of sociability to be effective communicators. An Ethical Evaluation of Human–Robot Relationships (4) confirms this idea: we have a need to belong, which causes us to form emotional connections to artificial beings and to search for meaning in these relationships.

How should social robots look?

Masahiro Mori defined the Uncanny Valley theory in 1970, in a paper on the subject (3). He describes the effects of robot appearance and robot movement on our affinity for the robot. In general, we seem to prefer robots that look more like humans and less like machines. But there is a point at which a robot looks both human-like and robot-like, and it becomes confusing for us to categorise it. This is the Uncanny Valley – where the robot looks very human but also a bit ‘wrong’, which makes us uncomfortable. If a robot’s appearance gets past that point and looks convincingly human, likeability goes up dramatically.

In Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (2) we learn that there is a similar effect of robot appearance on the trustworthiness of a robot. Robots that showed more positive emotions were also more likeable. So it seems that more human-looking robots would lead to more trust and likeability.

Up to this point we have assumed that social robots should look either humanoid or robotic. But what other forms can robots take? The robot should at least have a face (1) to give it an identity and make it into an individual. With a face, it can also indicate attention and imitate the social partner to improve communication. Most non-verbal cues are relayed through the face, and the face creates expectations of how to engage with the robot.

The appearance of a robot can help set people’s expectations of what it should be capable of, and limit those expectations to some focused functions which can be more easily achieved. For example, a bartender robot can be expected to hold a good conversation, serve drinks and take payment, but it is probably fine if it speaks only one language, as it only has to fit the context it is in (1).

In Why Every Robot at CES Looks Alike, we learn that Jibo’s oversized, round head is designed to mimic the proportions of a young animal or human to make it more endearing. It has one eye to prevent it from triggering the Uncanny Valley effect by looking too robotic and too human at the same time. Appearing too human-like also creates the impression that the robot will respond like a human, which robots are not yet capable of.

Another interesting example is Robin, a Nao robot being used to teach children with diabetes how to manage their illness (6). The children are told that Robin is a toddler, and they use this role to explain away any imperfections in Robin’s speech capabilities.

Different levels of social interaction for robots

A survey of socially interactive robots (5) contains some useful concepts in defining levels of social behaviour in robots:

  • Socially evocative: Do not show any social capabilities but rely on the human tendency to project social capabilities onto them.
  • Social interface: Mimic social norms, without actually being driven by them.
  • Socially receptive: Understand social input enough to learn by imitation, but do not seek social contact.
  • Sociable: Have social drivers and seek social contact.
  • Socially situated: Can function in a social environment and can distinguish between social and non-social entities.
  • Socially embedded: Are aware of social norms and patterns.
  • Socially intelligent: Show human levels of social understanding and awareness based on models of human cognition.

Clearly, social behaviour is nuanced and complex. But to come back to the previous points, social robots can still make themselves effective without reaching the highest levels of social accomplishment.

Effect of social robots on us

To close, de Graaf poses a thought-provoking question (4):

how will we share our world with these new social technologies and how will a future robot society change who we are, how we act and interact—not only with robots but also with each other?

It seems that we will first and foremost shape robots by our own human social patterns and needs. But we cannot help but be changed as individuals and a society when we finally add a more sophisticated layer of robotic social partners in the future.

References

  1. Understanding Social Robots (Hegel, Muhl, Wrede, Hielscher-Fastabend, Sagerer, 2009)
  2. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (Mathur and Reichling, 2015)
  3. The Uncanny Valley (Mori, 1970)
  4. An Ethical Evaluation of Human–Robot Relationships (de Graaf, 2016)
  5. A survey of socially interactive robots (Fong, Nourbakhsh, Dautenhahn, 2003)
  6. Making New “New AI” Friends: Designing a Social Robot for Diabetic Children from an Embodied AI Perspective (Cañamero, Lewis, 2016)
