I recently subscribed to the Royal Institution lecture series, and whenever I manage to catch one of the lectures, the content and moderation are incredibly good. You can watch the Royal Institution's science videos here on YouTube.
Go to their website to watch some of these talks live.
Recently, I watched an excellent lecture on Creating Emotionally Intelligent Technology by Professor Hatice Gunes, who leads the Affective Intelligence & Robotics Lab at the University of Cambridge.
Here is a link to the recording if you’d like to watch the video yourself: Vimeo video link
To start off, Prof Gunes does an amazing job of introducing emotions in technology: the work done so far and why we would want to pursue this goal in the first place. She then covers the work her own lab has been doing to take the field further, which I found the most interesting part.
Emergent behaviour when programming technology that interacts with humans
Prof Gunes draws our attention to the emergence of co-behaviours between humans and robots. She explains that when we build machines that interact with us, we create models of behaviour. Since we cannot capture real life accurately in a model, the results are imperfect. When these models are released into the wild and we interact with them, there can be unintended outcomes: we adapt our normal behaviour to what we experience in the model. In this way, robots change human behaviour, and new behaviours emerge that the robot's creator never anticipated.
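To make that feedback loop concrete, here is a toy sketch of my own (not from the lecture): a robot whose model only recognises a narrow band of expressions, and a human who gradually shifts their style toward what the robot responds to. All the names and numbers are illustrative assumptions.

```python
# Toy co-adaptation loop: an imperfect robot model nudges a human to adapt,
# and the human's behaviour drifts away from where it started.
# Everything here is an illustrative assumption, not the lab's actual model.

natural_style = 0.8      # the human's "natural" expressiveness (0..1 scale)
model_sweet_spot = 0.3   # the narrow band the robot's model was trained on

style = natural_style
for step in range(10):
    # The robot "understands" the human only near its training distribution.
    understood = abs(style - model_sweet_spot) < 0.2

    if not understood:
        # The human adapts: they shift their expression toward what the
        # robot responds to, rather than what feels natural to them.
        style += 0.1 * (model_sweet_spot - style)

    print(f"step {step}: style={style:.2f}, understood={understood}")

# After a few interactions the human no longer behaves "naturally":
# a behaviour has emerged that the robot's designer never specified.
print(f"drift from natural style: {natural_style - style:.2f}")
```

The point of the sketch is that the designer only wrote the recognition rule; the drift in the human's behaviour was never programmed anywhere, which is exactly the kind of emergence the lecture describes.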

Experiments done by Prof Gunes’ lab:
London Eye interaction with human emotions
- People’s emotions tended to be more complex than the experiment designers predicted
- People tended to be uncomfortable with the robot waiter being behind them
Nao robot being taught complex facial expressions by children
- How to express complex emotions that look different on each individual’s face?
A set of avatars or artificial characters designed to exhibit specific personality characteristics
- An annoying character that tries to hold a conversation with you but contradicts whatever you say, trying to elicit an angry response.
She suggests that we adopt an attitude of lifelong learning in working with robots: stay open and learn from our interactions with them. Especially for non-safety-related applications, she asks that we be more tolerant.
Conclusions:
For me, it’s reassuring to know that the field of emotions in technology is progressing, even though we haven’t seen a rise in social consumer robots in recent years. It was also interesting to reflect on emotions in technologies beyond robotics, like AR/VR, virtual computer agents, and even a landmark like the London Eye. Prof Gunes’ work covers a fascinating breadth of topics which should make for interesting reading over the coming months.