This past Friday I spoke to Angelo Cangelosi, an esteemed professor of human–robot interaction.

Angelo’s work is centred on human–robot interaction, developmental robotics, robot understanding of language and real-world concepts, and the use of neuromorphic systems for robot learning. He is the coordinator of the Horizon 2020 APRIL program (Applications of Personal Robotics through Interaction and Learning), amongst many other responsibilities in the robotics space. My Roborabbit activities allow me to meet truly amazing people!

Tony Belpaeme’s work on educational robots

Angelo gave me some great leads to research. The first was the work of Tony Belpaeme, professor at Ghent and Plymouth Universities. The paper Social robots for education: A review (Belpaeme, Kennedy, Ramachandran, Scassellati and Tanaka, 2018) analyses and collates studies on the use of robots in children’s education. It examines the research from the perspective of the robot as teacher, peer and novice (in which the robot is taught by the child). Robots are helpful in an educational setting, with an effectiveness approaching that of human tutors. Embodiment – the fact that the robot, unlike a virtual assistant, has a physical body – triggers social responses in humans that support learning. Robots could also allow for more enriched and individualised learning, both of which are challenges in the modern classroom.

“…physical robots have enhanced learning and affected later behavioral choice more substantially than virtual agents. Compared with instructions from virtual characters, videos of robots, or audio only lessons, robots have produced more rapid learning in cognitive puzzles…”

There are many challenges to using a robot for education including:

  • Robustness of speech recognition for children
  • Other means of input like tablet input creating a disconnect
  • Building an understanding of learning progress in the robot
  • Choosing a next activity to challenge and engage the child
  • Knowing when to switch activities to engage the child
  • Generating a combination of verbal and nonverbal output and coordinating those
  • The question of how far we want the teaching of our children to be delegated to robots
  • The question of whether we are restricting child development to the capabilities of the technology, rather than giving children what they really need

These are questions that Roborabbit will need to answer when we enter the social robot educational applications space. The paper also raised the idea that robots could be used in adult education. This could be an interesting avenue for investigation for the future.

Social robots, software and education

After doing some research on existing social robots that have appeared at CES, including Rosie from Aeolus, Buddy from Blue Frog Robotics and Jibo, I settled on the idea that what social robots really need to succeed is a killer app.

I believe that robots are one of the most impactful platforms of our near future, along with AR/VR and our own bodies in terms of transhumanism. Good applications will be what makes such platforms succeed, just as apps, internet access and processing power made the mobile phone succeed. What is missing now is convincing, engaging communication, and value-adding apps and functions at a reasonable price, in a range of roughly €0–2,000.

With RoboRabbit-Labs I want to create applications that feel natural in terms of interaction and prove the value of service robots to ordinary people. This means robots that communicate effectively with human beings, with a natural interaction flow.

There is opportunity in the area of children’s games and entertainment, in terms of adding more value during play and helping children learn.

How can we best make use of the particular attributes of a robot during learning and play? A robot can be a smart companion, engaging the child with interesting and fun interaction. It should use its body and other output capabilities to enrich play and recall, interacting through voice, gestures and visual aids.
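The idea of combining voice, gestures and visual aids can be sketched in code. This is a minimal illustrative sketch, not an implementation of any particular robot platform: the `Utterance` structure, the gesture labels and the escalating-hint policy are all my own hypothetical names, assuming a robot that can speak, gesture and show images on a screen.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Utterance:
    """One multimodal output step: speech paired with a gesture and an optional visual aid."""
    speech: str
    gesture: str = "neutral"       # hypothetical gesture label, e.g. "point", "wave"
    visual: Optional[str] = None   # hypothetical on-screen asset to show alongside speech

def plan_turn(concept: str, attempt: int) -> List[Utterance]:
    """Purely illustrative policy: as the child retries, add more supportive
    output modalities — speech only, then a pointing gesture, then a visual aid."""
    steps = [Utterance(speech=f"Can you find the {concept}?")]
    if attempt >= 1:
        steps.append(Utterance(speech="It looks like this.", gesture="point"))
    if attempt >= 2:
        steps.append(Utterance(speech="Here is a hint on the screen.",
                               gesture="point", visual=f"{concept}.png"))
    return steps
```

The point of the sketch is the design choice: verbal and nonverbal output are planned together as one coordinated turn, rather than bolting gestures on after the speech is decided.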

Cleaning robot prototyping

Join me in my ongoing journey to produce a functional cleaning robot, starting from scratch, using the cheapest of components.

My first attempt at building a useful and functional robot is a cleaning robot. It might seem like a boring start, but I conducted a survey and many people selected bathroom cleaning as a task they would prefer to automate. Once I had a mental picture of a robot cleaning my bathroom, I simply couldn’t let go of the idea. Consider that the most boring and endlessly futile tasks are the ones we should be striving to automate first.

I like to start things in a modular and practical way, so I designed the simplest, most obvious robot I could imagine that could perform the functions of bathroom cleaning, and defined a set of features to break the work down into streams. The high-level feature groupings are:

  • Main robot body structure
  • Locomotion of main robot body
  • Navigation
  • Vision and feature identification
  • Planning of cleaning action
  • Evaluation of improvement or goal reached, cleaned status
  • Cleaning capabilities
  • Object avoidance

Under each of these I have a set of milestones that build up incrementally. For example, when the robot takes some kind of action, we can build it up in steps from zero autonomy and total reliance on a human being to complete autonomy.
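The incremental-autonomy idea can be sketched as a ladder of milestones for a single action stream. This is a hypothetical illustration, assuming my own level names; the actual milestones for each feature stream will differ.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Illustrative autonomy milestones for one action stream, lowest to highest."""
    TELEOPERATED = 0  # a human drives every step
    ASSISTED = 1      # the robot proposes an action, a human confirms it
    SUPERVISED = 2    # the robot acts on its own, a human can interrupt
    AUTONOMOUS = 3    # the robot acts with no human in the loop

def next_milestone(current: Autonomy) -> Autonomy:
    """Advance one level at a time; stay at the top once complete autonomy is reached."""
    return Autonomy(min(current + 1, Autonomy.AUTONOMOUS))
```

Climbing one rung at a time per stream keeps each experiment small, which is exactly the point of breaking the robot down into incremental milestones.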

Within these milestones, I run basic experiments which build the robot but also ramp up my knowledge of robotics and AI. Along the way, I am gaining skills in Arduino, BeagleBone, computer vision, prototyping, sensors and actuators, and ROS. I will write about the learning experience in my personal blog. Having a real goal is really accelerating my learning, but having such a large and varied goal means that I only learn a little in each area before having to move to the next. More to come about the experiments in future posts!
