ERF2019 Robotics and AI

ERF2019 took place at the Marriott hotel in Bucharest. As usual, the event was divided into workshops and an exhibition area with various robot-related exhibitors, including European organisations, robot and parts manufacturers, technology hubs, universities and governmental institutions. Check out my post on ERF2017 here.

The main topics for this year included:

  • Robotics and AI
  • Robotics in industry, logistics and transport
  • Collaborative robots
  • Ethics, liability, safety, standardization
  • Marine, aerial, space, wearable robotics

Robotics and AI

ERF2019 and ERF2017 were miles apart in terms of the awareness of AI. The EU has identified AI as a key area for remaining competitive with the US and China, and has allocated a large amount of funding to it. Lighthouse domains for investment include agrifood, inspection and maintenance, and healthcare.

They seek to build partnerships across Europe, identify the key players and increase synergies between member states. They have set up a collaboration with the Big Data Value Association to create a strategy which will make AI development for robotics thrive.

AI has been identified as a way to get robots out of the cage and interacting directly with people. They need to be easier to configure, learn, program and use. It should not take weeks of training to introduce robots into health facilities, for instance. Robotics is AI embodied, and brings its own challenges, such as applying learning algorithms in chaotic, unpredictable real-life environments. What are the value exchange points between AI and robotics? Can robots create a feedback loop that helps tune AI algorithms? We need data to take the technology forward, and we need to be able to apply data and learnings across domains and applications. We also need to ensure that users and companies understand what their data is being used for and are comfortable with the contribution they are making.

AI4EU

AI4EU is a Horizon 2020 project that will create a platform to encourage AI sharing and to gather resources, algorithms, datasets and AI knowledge in the EU. There are three open calls for startups, SMEs and AI talent, plus a technology transfer programme. It aims to be the focal point in Europe for all AI resources. There will also be a search engine with a knowledge graph optimised for finding AI-specific resources.

The platform will be as useful as we, the contributors, make it. In robotics especially, we need to actively contribute and drive the content, because there is such a long way to go to independent robots. Submissions must be of high quality: safe, effective, rated by the user community and fully specified. This could also result in financial remuneration for contributors. The platform will have two sandbox environments to make it easier to develop AI, and a methodology will be provided for designing AI components that respect EU values: inclusive and gender neutral.

The project started on 1 January 2019 and will deliver version 0 at the end of June 2019, V1 in January 2020, V2 in January 2021 and V3 in December 2021. Industry will be represented by companies such as SAP and Siemens.

Startups

I noticed that there were few startups represented at the ERF. There is a startup competition, but I haven’t seen startups represented in the workshops where we share ideas. My guess is that it’s too expensive for startups to attend without gaining immediate commercial value. But I also wonder if it’s because the pace of the EU projects is too slow for startups: all the projects take months to years, and the outcomes are usually at too high a level to be used immediately. I think the EU can only harness the power of innovation in AI and robotics by including the risk-taking startup innovation layer and making it easier for us to understand and consume the vast amounts of research done. Then we need to be able to give feedback and improve the entire cycle together. As a startup, projects like this should help us accelerate our development. But on the other hand, we should be getting involved and demanding representation so that our needs can be addressed.

Robot app prototyping on Anki Vector

This past weekend, Vikram Radhakrishnan, Lukas Jelinek and I, Thosha Moodley, got together to try some robot app prototyping on the Anki Vector. The Vector is an adorable robot created by Anki, and a follow-up to their first model, Cozmo. We are using it to produce a robot app prototype as a proof of concept.

Vikram, Lukas and Thosha with Anki Vector

Learning on the Vector has been great as it’s easy to get started and pretty cute. The main hassle is that the API requires Python 3.6, and not everything we want to work with is 3.6-ready. Also, the API is in an alpha state, so there are still some incomplete features and bugs.

Our use case is based on the idea of a house sitter app which runs on the Vector or other robot hardware and helps keep an eye on important locations in the home. To get this working we had to get familiar with Vector’s camera and mapping features.

Anki Vector creating a map by exploring

Vector builds a map by exploring the surrounding space. In the picture above you can see different colour codes on the map that represent what Vector knows about that space:

  • Dark grey – unknown
  • Green – no obstacles
  • Dark green – clear of cliff
  • Red – obstacle cube
  • Orange – obstacle proximity
  • Yellow-green – obstacle proximity explored
  • Dark red – obstacle unrecognised
  • Black – cliff
  • Yellow – interesting edge
  • Dark yellow – non-interesting edge
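For our house-sitter logic it is handy to keep this legend as a small lookup table. The key names below are my own illustrative labels rather than the SDK's identifiers (the Vector SDK exposes similar node content types under its own names):

```python
# Colour legend for Vector's exploration map, per the list above.
# Key names are illustrative; the SDK uses its own node-content-type enum.
NAV_MAP_LEGEND = {
    "unknown": "dark grey",
    "clear_of_obstacle": "green",
    "clear_of_cliff": "dark green",
    "obstacle_cube": "red",
    "obstacle_proximity": "orange",
    "obstacle_proximity_explored": "yellow-green",
    "obstacle_unrecognised": "dark red",
    "cliff": "black",
    "interesting_edge": "yellow",
    "non_interesting_edge": "dark yellow",
}

def colour_for(node_type: str) -> str:
    """Return the map colour for a node type, falling back to unknown's colour."""
    return NAV_MAP_LEGEND.get(node_type, NAV_MAP_LEGEND["unknown"])
```

A table like this makes it easy to render the robot's map in our own app with the same colour scheme the samples use.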

Vector also has a remote_control app in the code samples – watch Vikram controlling it here:

Well, that’s all for now, be sure to stay tuned for updates on our robot application mission!

What I did today – steps on the cleaning robot

The first use case that I am focusing on is cleaning the bathroom counter and sink. To prove that this would work, I used a little motor with a disc on the end, covered in cleaning wipes. The motor turned the cleaning pad to clean a surface.

Next I designed a basic schematic of a cleaning arm:


I broke the building of the arm into experiments, the first of which uses a linear actuator, an Arduino Uno, a distance sensor and a pressure sensor (FSR). The vertical actuator moves downward on a guide until it gets close to the counter, then keeps extending slowly until the pressure sensor detects the desired level of force for cleaning the counter.
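The descend-then-press behaviour can be sketched as a simple two-speed control loop. Everything here is a hypothetical stand-in (the thresholds, step sizes and the sensor/actuator callables), written in Python for illustration rather than as the actual Arduino sketch:

```python
# Sketch of the vertical-actuator loop: descend quickly until the distance
# sensor says we are near the counter, then extend slowly until the FSR
# reads the target cleaning force. All values are illustrative.
NEAR_DISTANCE_MM = 20      # below this distance, switch to the slow approach
TARGET_FORCE = 300         # desired FSR reading for cleaning contact
FAST_STEP_MM = 5.0
SLOW_STEP_MM = 0.5

def lower_to_contact(read_distance_mm, read_force, extend_mm, max_travel_mm=150.0):
    """Extend until the target force is reached; returns the travel used in mm."""
    travelled = 0.0
    while travelled < max_travel_mm:
        if read_force() >= TARGET_FORCE:
            return travelled                      # contact at cleaning pressure
        step = FAST_STEP_MM if read_distance_mm() > NEAR_DISTANCE_MM else SLOW_STEP_MM
        extend_mm(step)
        travelled += step
    raise RuntimeError("No contact within travel limit")
```

The same structure translates almost line for line into an Arduino loop(), with the callables replaced by analogRead() on the FSR and the actuator driver.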


Today I dismantled this setup to mount it on an aluminium frame, part of an arm I have been working on for some time.

The arm is composed of a horizontal linear actuator that I made with V-slot aluminium extrusions and a NEMA 17 stepper motor. The linear actuator moves a gantry on which the vertical linear actuator is mounted.
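Positioning a gantry like this comes down to converting millimetres into stepper step counts. The drivetrain values below are common defaults (GT2 belt on a 20-tooth pulley, 1.8° stepper, 16x microstepping), assumptions for illustration rather than measurements from my build:

```python
# Convert a desired gantry move in mm into stepper motor steps.
# Assumed drivetrain: 200 full steps/rev (1.8 deg) with 16x microstepping,
# GT2 belt (2 mm pitch) on a 20-tooth pulley -> 40 mm of travel per rev.
STEPS_PER_REV = 200 * 16          # full steps x microsteps = 3200
MM_PER_REV = 2 * 20               # belt pitch x pulley teeth = 40 mm

def mm_to_steps(distance_mm: float) -> int:
    """Round a linear move to the nearest whole microstep count."""
    return round(distance_mm * STEPS_PER_REV / MM_PER_REV)
```

With these numbers, 1 mm of gantry travel is 80 microsteps, which sets the positioning resolution of the arm.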

I have two stepper motors which I’m trying to attach to the arm to rotate the cleaning head and the arm itself, as seen in the schematic.


My new tool organiser – highlight of the day! That’s all for today – until next time.

Tony Belpaeme’s work on educational robotics

This past Friday I spoke to esteemed Human Robot Interaction Professor Angelo Cangelosi.

Angelo’s work is centred around human robot interaction, developmental robotics, robot understanding of language and real world concepts, and the use of neuromorphic systems for robot learning. He is the coordinator of the Horizon 2020 APRIL program (Applications of Personal Robotics through Interaction and Learning) amongst many other responsibilities in the robotics space. My Roborabbit activities allow me to meet truly amazing people!

Tony Belpaeme’s work on educational robots

Angelo gave me some great leads to research. The first was the work of Tony Belpaeme, professor at Ghent and Plymouth Universities. The paper Social robots for education: A review (Belpaeme, Kennedy, Ramachandran, Scassellati and Tanaka, 2018) provides an analysis and collation of studies on the use of robots in children’s education. It examines the research from the perspective of the robot as teacher, peer or novice (in which the robot is taught by the child). Robots are helpful in an educational setting, with an effectiveness almost equal to that of human tutors. Embodiment, meaning that the robot, unlike a virtual agent, has a physical body, aids learning by eliciting social responses in humans. Robots could allow for more enriched and individualised learning, both of which are challenges in the modern classroom.

“…physical robots have enhanced learning and affected later behavioral choice more substantially than virtual agents. Compared with instructions from virtual characters, videos of robots, or audio only lessons, robots have produced more rapid learning in cognitive puzzles…”

There are many challenges to using a robot for education including:

  • Robustness of speech recognition for children
  • Other input modalities, such as tablets, creating a disconnect
  • Building an understanding of learning progress in the robot
  • Choosing a next activity to challenge and engage the child
  • Knowing when to switch activities to engage the child
  • Generating a combination of verbal and nonverbal output and coordinating those
  • The question of how far we want the teaching of our children to be delegated to robots
  • The question of whether we are restricting child development to the capabilities of the technologies rather than giving children what they really need

These are questions that Roborabbit will need to answer when we enter the social robot educational applications space. The paper also raised the idea that robots could be used in adult education. This could be an interesting avenue for investigation for the future.


Social robots, software and education

After doing some research on existing social robots that have for instance appeared at CES, including Rosie from Aeolus, Buddy from Blue Frog Robotics and Jibo, I settled on the idea that what social robots need to really be successful is a killer app.

I believe that robots are one of the most impactful platforms of our near future, along with AR/VR and, in terms of transhumanism, our own bodies. Good applications will be the way for such platforms to succeed, just as they were for the mobile phone, alongside internet access and processing capabilities. What is missing now is convincing and engaging communication, and value-adding apps and functions at a reasonable price, in a range of roughly €0–2000.

With RoboRabbit-Labs I want to create applications that feel natural in terms of interaction and prove the value of service robots to normal people. This means robots that communicate effectively for human beings, with a natural interaction flow.

There is opportunity in the area of children’s games and entertainment, in terms of adding more value during play and helping children learn.

How can we best make use of the particular attributes of a robot during learning and play? A robot can be a smart companion, engaging the child with interesting and fun interaction. It should use its body and other output capabilities and enrich play and recall by interacting through voice, gestures and visual aids.

Cleaning robot prototyping

Join me in my ongoing journey to produce a functional cleaning robot, starting from scratch, using the cheapest of components.

My first attempt at building a useful and functional robot is a cleaning robot. It might seem like a boring start but I conducted a survey and many people selected bathroom cleaning as a task they would prefer to automate. Once I had a mental picture of a robot cleaning my bathroom, I simply couldn’t let it go anymore. Consider that the most boring and endlessly futile tasks are the ones that we should be striving to automate first.

I like to start things in a modular and practical way, so I designed the most obvious and simple robot I could imagine that could perform the functions of bathroom cleaning, and defined a set of features to break the work down into streams.

High-level feature groupings include:

  • Main robot body structure
  • Locomotion of main robot body
  • Navigation
  • Vision and feature identification
  • Planning of cleaning action
  • Evaluation of improvement or goal reached (cleaned status)
  • Cleaning capabilities
  • Object avoidance

Under each of these I have a set of milestones that build in an incremental way. For example, when the robot takes some kind of action, we can build it up in steps from zero autonomy and total reliance on a human being to complete autonomy.
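As a sketch, those incremental autonomy milestones could look like the following; the level names are my own and purely illustrative:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Illustrative autonomy milestones for a single robot action."""
    TELEOPERATED = 0   # human performs or drives the action entirely
    ASSISTED = 1       # robot executes, human triggers and supervises
    SUPERVISED = 2     # robot plans and executes, human can intervene
    AUTONOMOUS = 3     # robot handles the action end to end

def next_milestone(level: Autonomy) -> Autonomy:
    """Advance one incremental step, capping at full autonomy."""
    return Autonomy(min(level + 1, Autonomy.AUTONOMOUS))
```

Framing each feature stream against a ladder like this makes it easy to see where the robot currently sits and what the next experiment should deliver.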

Within these milestones, I run basic experiments which build the robot but also ramp up my knowledge of robotics and AI. Along the way, I am gaining skills in Arduino, Beaglebone, computer vision, prototyping, sensors and actuators, and ROS. I will blog about the learning experience in my personal blog. Having a real goal is really accelerating my learning. But having such a large and varied goal means that I only learn a little in each area before having to move to the next. More to come about the experiments in future posts!
