I recently subscribed to the Royal Institution lecture series, and when I am able to catch some of the lectures, the content and moderation are always incredibly good. You can watch the Royal Institution's science videos here on YouTube.
Here is a link to the recording if you’d like to watch the video yourself: Vimeo video link
To start off, Prof Gunes does an amazing job of introducing emotions in technology, the work up to now, and why we’d want to achieve this goal. She then covers the work that her own lab has been doing to take the field further, and this is quite interesting.
Emergent behaviour when programming technology that interacts with humans
Prof Gunes draws our attention to the emergence of co-behaviours between humans and robots. She explains that when we interact with machines, we create models for behaviour. Since we cannot capture real life accurately in a model, the results are imperfect. When these models are released into the wild and we interact with them, there can be unintended outcomes: we adapt our normal behaviour to what we experience in the model. In this way, robots change human behaviour, and new behaviours emerge which the creator of the robot did not anticipate.
People tend to be uncomfortable with the robot waiter being behind them
Nao robot being taught complex facial expressions by children
How to express complex emotions that look different on each individual’s face?
A set of avatars or artificial characters designed to exhibit specific personality characteristics
An annoying character which tries to have a conversation with you but contradicts what you say, trying to elicit an angry response from you.
She suggests that we adopt an attitude of lifelong learning in working with robots, learning from our interactions with them and staying open. Especially for non-safety-related applications, she asks that we be more tolerant.
Conclusions:
For me, it's reassuring to know that the field of emotions in technology is progressing, even though we haven't seen an increase in social consumer robots in recent years. It was certainly interesting to reflect on emotions in other technologies besides robotics, like AR/VR, virtual computer agents and even a monument like the London Eye. Prof Gunes' work covers a fascinating breadth of topics which should make for interesting reading over the coming months.
ERF2019 took place at the Marriott hotel in Bucharest. As usual, the event was divided into workshops and an exhibition area with different robot-related organisations represented, including European organisations, robot and parts manufacturers, technology hubs, universities and governmental institutions. Check out my post on ERF2017 here.
The main topics for this year included:
Robotics and AI
Robotics in industry, logistics and transport
Collaborative robots
Ethics, liability, safety, standardization
Marine, aerial, space, wearable robotics
Robotics and AI
ERF2019 and ERF2017 were miles apart in terms of awareness of AI. The EU has identified AI as a key area for remaining competitive with the US and China, and has allocated a large amount of funding to it. Lighthouse domains for investment include agrifood, inspection and maintenance, and healthcare.
They seek to build partnerships across Europe, identify the key players and increase synergies between member states. They have set up a collaboration…
On 4 October 2018 I attended the Launch of WEtalk Women in AI Amsterdam at TQ. The meetup was organised by the vivacious Evgenia Logunova, the WAI Amsterdam ambassador.
There were three main parts to the launch – an address by Caroline Lair, co-founder of WAI, a description of the AI landscape by Dr. Carly E. Howard, and a special remote address by Sophia Hanson, the robot from Hanson Robotics.
Women in AI (WAI) aims to equalise the number of women in the tech industry through education, networking, research and blogging. They start young, with programs for girls of school-going age. Their intention is to systematically correct the funnel-shaped attrition of women in STEM careers by building skills and confidence. This blogpost from Moojan Asghari describes beautifully how WAI came about. Often, women don't have the confidence to be presenters. With the WEtalk sessions, WAI aims to give women the opportunity to present and overcome their fears.
Dr Carly Howard from the venture capital firm Asgard described what is happening with AI startups globally and put it into the European context for us.
The techie women in AI
The women in AI in AMS
I met a very cool lady, Arti Nokhai, who applies IBM Watson to solve real world problems. She is working on an application for the parole case workers in the Netherlands, who prescribe rehabilitation activities for parolees. The case workers have more cases than they can cope with, and there is not enough time to read case files and make recommendations. This is where AI is being applied to give recommendations on rehab activities, to ensure that parolees get the help they deserve. In this instance, as in the legal and medical fields, AI is used to consume large amounts of text and advise, playing a supporting role in human decision making.
Sophia Hanson
One of the highlights of the evening was a special message from Sophia Hanson, the humanoid robot made by Hanson Robotics.
This address gave me goosebumps – it's a wise message from Sophia's creators with some points worth sharing:
Diversity and inclusion in AI, reduction of bias
Actively avoiding perpetuating systems of oppression
Appreciating our uniqueness as human beings
Sophia obviously has no gender, but ‘identifies’ as a woman. When I look at her I see her as a woman too – this makes me think about others who identify as women but are not seen as women. How can a robot achieve this when some people cannot? It makes me sad to think that a robot, with only the appearance of life and wisdom, can be treated better than many living creatures. However, this reflection is where Sophia’s true value lies – she is an art work that should make us think about the nature of humanity and how different yet similar we all are. We should treat each other far better than we do.
I have discussed the topic of quality and testing with a few robotics startups and the conversation tends to reach this consensus: formal quality assurance processes have no place in a startup. While I totally appreciate this view, this blogpost provides an alternative approach to quality for robotics startups.
The main priority of many startups is to produce something that will attract investment – it has to basically work well enough to get funding. Investors, customers and users can be very forgiving of quality issues, especially where emerging tech is involved. Startups should deliver the right level of quality for now and prepare for the next step.
In a startup, there is unlikely to be any dedicated tester or quality strategy. Developers are the first line of defence for quality – they must bake it into the proof-of-concept code, perhaps with unit tests. The developers and founders probably do some functional validation. They might experience more extreme use cases when demoing the functionality. They might do limited testing with real-life users.
What are the company's main priorities at this phase, and what level of quality matches them? Initially, the product's main goal is to support application development and demos, and to be effective and usable for its early adopters. Based on these priorities, I've come up with some quality aspects that could be useful for robotics startups.
A good quality demo
Here are some aspects of quality which could be relevant for demoing:
Portable setup
Can be transported without damaging the robot and supporting equipment
Is possible to explain at airport security if needed
Works under variable conditions in a customer meeting room
Poor wifi connections
Power outlets not available
Outside of company network
Uneven floors
Stairs
Noise
Different lighting
Reflective surfaces
Will work for the duration of the demo
Demo will be suitable for audience
Demoed behaviour will be visible and audible from a distance, e.g. in a boardroom
Mode can be changed to a scripted mode for demos
Functionality actually works and can be shown – a checklist of basic functionality can take away the guesswork, without having to come up with heavyweight test cases (see the sketch below)
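To make that last point concrete, here is a minimal sketch of such a checklist as a small Python script. The `robot` object and its methods (`is_connected`, `battery_level`, `set_mode`, `say`) are hypothetical placeholders for whatever API your own platform exposes; the point is only that a handful of scripted checks takes the guesswork out of the morning of a demo.

```python
# A lightweight pre-demo checklist as a script, as an alternative to heavyweight
# test cases. The `robot` object and its methods are hypothetical placeholders
# for your own robot API.
def run_pre_demo_checklist(robot):
    checks = [
        ("Robot boots and connects over wifi", lambda: robot.is_connected()),
        ("Battery is above 50%", lambda: robot.battery_level() > 50),
        ("Scripted demo mode can be enabled", lambda: robot.set_mode("demo")),
        ("Text-to-speech is audible", lambda: robot.say("Demo check complete")),
    ]
    failures = 0
    for description, check in checks:
        ok = bool(check())
        print(f"[{'OK ' if ok else 'FAIL'}] {description}")
        failures += 0 if ok else 1
    return failures == 0
```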
Quality for the users and buyers
The robot needs to prove itself fit for operation:
Functionality works
What you offer can be suitably adapted for the customer’s actual scenario
Every business has its own processes, and the bot will probably have to adapt to match the terminologies, workflows and scenarios that fit the user's processes
Languages can be changed
Bot is capable of conversing at the level of the target audience (e.g. children, elderly)
Bot is suitable for the context where it's intended to work, like a hospital or school: it will not make sudden movements or catch on cables
Reliability
Users might be tolerant of failures up to a point, until they become too annoying or repetitive, or cannot be recovered from
Failures might be jarring for vulnerable users like the mentally or physically ill
Is the robot physically robust enough to interact with in unplanned ways?
Security
Will port scanning or other exploitative attacks easily reveal vulnerabilities which could result in unpredictable or harmful behaviour?
Can personal data be hijacked through the robot?
Ethical and moral concerns
Users might not understand that there is no consciousness interacting with them, thinking the robot is autonomous
There might be users who think their interactions will be private, while in fact they might be reviewed for analysis purposes
Users might not realise their data will be sent to the cloud and used for analysis
Legal and support issues
What kind of support agreement does the service provider have with the robot manufacturer and how does it translate to the purchaser of the service?
Quality to maintain, pivot and grow
During these cycles of demoing to prospects, defects will be identified and need to be fixed. Customers will give advice or provide input on what they were hoping to see and features will have to be tweaked or added. The same will happen during research and test rounds at customers, and user feedback sessions.
The startup will want to add features and fix bugs quickly. For this to happen, it will help to have good discipline with clean, maintainable code, and at least unit tests to give quick feedback on the quality of a change. Hopefully they will also have some functional (and a few non-functional) acceptance tests.
When adoption increases, the startup might have to pivot quickly to a new application, or scale to more than one customer or use case. At this phase, a lot of refactoring will probably happen to make the existing codebase scalable. In this case, good unit tests and component tests will be your best friends, ensuring you are able to maintain the stability of the functionality you already have (as mentioned in this TechCrunch article on startup quality).
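To illustrate what "quick feedback" can look like, here is a minimal unit-test sketch in pytest style. The `clamp_volume` function is a hypothetical stand-in for one of your own small behaviour functions, not part of any real robot SDK; the point is that a test like this runs in seconds and fails immediately if a refactor breaks behaviour you already rely on.

```python
# A minimal pytest-style unit test. clamp_volume is a hypothetical stand-in
# for one of your own small behaviour functions, not part of any real robot SDK.
def clamp_volume(requested: int) -> int:
    """Keep the speaker volume inside a safe 0-100 range."""
    return max(0, min(100, requested))

def test_volume_is_clamped_to_safe_range():
    assert clamp_volume(150) == 100   # never louder than the allowed maximum
    assert clamp_volume(-10) == 0     # never a negative volume
    assert clamp_volume(42) == 42     # valid values pass through unchanged
```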
Social robot companies are integrators – ensure quality of integrated components
As a social robotics startup, if you are not creating your own hardware, OS, or interaction and processing components, you might want to become familiar with the quality of any hardware or software components you are integrating with. Some basic integration tests will help keep you confident that the basics still work when an external API is updated, for instance. It's also worth considering your liability when something goes wrong somewhere in the chain.
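As an illustration, here is a minimal integration smoke test in Python. The speech-to-text endpoint, URL, audio fixture and response field are all hypothetical placeholders, not a real vendor API; the idea is simply to exercise the one thing your robot code depends on, so an upstream change is caught before a customer sees it.

```python
# A minimal integration smoke test against an external API. The endpoint URL,
# audio fixture and response field below are hypothetical placeholders; adapt
# them to the component you actually integrate with.
import requests

SPEECH_API_URL = "https://api.example.com/v1/transcribe"  # placeholder endpoint

def test_speech_api_still_returns_a_transcript():
    with open("tests/fixtures/hello.wav", "rb") as audio:  # small known recording
        response = requests.post(SPEECH_API_URL, files={"audio": audio}, timeout=10)
    assert response.status_code == 200
    assert "transcript" in response.json()  # the one field our robot code depends on
```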
Early days for robot quality
To round up, it does seem to be early days to be talking about social robot quality. But it's good for startups to be aware of what they are getting into, because this topic will no doubt become more relevant as their company grows. I hope this post helps robotics startups, now and in the future, stay in control of their quality as they grow.
Feel free to contact me if you have any ideas or questions about this topic!
Thanks to Koen Hendriks of Interactive Robotics, Roeland van Oers at Ready for Robotics and Tiago Santos at Decos, as well as all the startups and enthusiasts I have spoken to over the past year for input into this article.
Ensure there is economic and societal benefit from robots
Share information on recent advancements in robotics
Reveal new business opportunities
Influence decision makers
Promote collaboration within the robotics community
The sessions were organised into workshops, encouraging participants from academia, industry and government to cross boundaries. In fact, many of the sessions had an urgent kind of energy, with the focus on discussions and brainstorming with the audience.
Edinburgh Castle at night
Broad spectrum of robotics topics
Topics covered in the conference included: AI, Social Robotics, Space Robotics, Logistics, Standards used in robotics, Health, Innovation, Miniaturisation, Maintenance and Inspections, Ethics and Legal considerations. There was also an exhibition space downstairs where you could mingle with different kinds of robots and their vendors.
Nikita the iCub robot from the Edinburgh Centre for Robotics
Pal Reem-C Robot
Paro the robot seal that comforts the elderly
Pal’s Tiago robot
The kickoff session on the first day had some impressive speakers – leaders in the fields of AI and robotics, covering business and technological aspects.
Bernd Liepert, the head of euRobotics, covered the economic aspects of robotics, stating that robot density in Europe is among the highest in the world. Europe has a 38% share of the worldwide professional robotics market, with more startups and companies than the US. Service robotics already accounts for over half the turnover of industrial robotics. Since Europe does not have enough institutions to develop innovations in all areas of robotics, combining research and transferring it to industry is key.
The next speaker was Keith Brown, the Scottish Cabinet Secretary for the Economy, Jobs and Fair Work, who highlighted the importance of digital skills to Scotland. He emphasised the need for everyone to benefit from the growth of the digital economy, and the increase in productivity that it should deliver.
Juha Heikkila from the European Commission explained that, in terms of investment, the EU Robotics program is the biggest in the world. Academia and industry should be brought together, to drive innovation through innovation hubs which will bring technological advances to companies of all sizes.
Raia Hadsell of DeepMind gave us insight into how deep learning can be applied to robotics. She conceptualised the application of AI to problem areas like speech and image recognition, where inputs (audio files, images) are mapped to outputs (text, labels). The same model can be applied to robotics, where the input is sensor data and the output is an action. For more insight, see this article about a similar talk she did at the Re•Work Deep Learning Summit in London. She showed us that learning time can be reduced for robots by training neural networks in simulation and then adding neural-network layers to transfer the learning to other tasks.
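For readers who like to see the "sensor data in, action out" idea as code, here is a toy sketch in Python with PyTorch. The layer sizes, sensor dimension and action dimension are invented for illustration and have nothing to do with DeepMind's actual architectures; it only shows the general shape of the mapping.

```python
# A toy "sensor readings in, action out" network, in the spirit of the talk.
# Dimensions and layer sizes are invented for illustration only.
import torch
import torch.nn as nn

class SensorToActionPolicy(nn.Module):
    def __init__(self, sensor_dim: int = 24, action_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sensor_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim),  # e.g. joint velocities or wheel commands
        )

    def forward(self, sensor_readings: torch.Tensor) -> torch.Tensor:
        return self.net(sensor_readings)

policy = SensorToActionPolicy()
action = policy(torch.randn(1, 24))  # one batch of simulated sensor data -> one action
```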
Deep learning tends to be seen as a black box in terms of traceability and therefore risk management, as people think that neural networks produce novel and unpredictable output. Hadsell assured us, however, that introspection can be done to test and verify each layer of a neural network, since a given input always produces output within a known range.
The last talk of the kickoff, delivered by Stan Boland from Five AI, brought together the business and technical aspects of self-driving cars. He mentioned that the appetite for risky tech investment seems to be increasing, with a five-fold growth in investment over the past five years. He emphasised the need for exciting tech companies to retain European talent and advance innovation, reversing the trend of top EU talent migrating to the US.
On the technology side, Stan gave some insight into advances in perception and planning for self-driving cars. In the picture below, you can see how stereo depth mapping is done at Five AI, using input from two cameras to map the depth of each pixel in the image. They create an aerial projection of what the car sees right in front of it and use this bird's-eye view to plan the path of the car from 'above'. Some challenges remain, however, with 24% of cyclists still being misclassified by computer vision systems.
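To give a feel for how stereo depth mapping works in general, here is a generic sketch using OpenCV's block matcher. This is not Five AI's pipeline; the image files, focal length and camera baseline are assumptions, and real systems use carefully calibrated and rectified camera feeds.

```python
# Generic stereo depth sketch using OpenCV's block matcher. Not Five AI's
# pipeline; file names, focal length and baseline are assumed values.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left camera frame
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right camera frame

# Disparity: how far each pixel shifts between the two views (larger = closer).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth is inversely proportional to disparity: depth = focal_length * baseline / disparity.
focal_length_px = 700.0  # assumed focal length in pixels
baseline_m = 0.3         # assumed distance between the two cameras in metres
depth_m = np.where(disparity > 0, focal_length_px * baseline_m / disparity, 0.0)
```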
With that, he reminded us that full autonomy in self-driving cars is probably out of reach for now. Assisted driving on highways and other easy-to-classify areas is probably the most achievable goal. Beyond this, the cost to the consumer becomes prohibitive, and truly autonomous cars will probably only be sustainable in a services model, where the costs are shared. In this model, training data could probably not be shared between localities, given the very specific road layouts and driving styles in different parts of the world (e.g. Delhi vs San Francisco vs London).
An industry of contrasts
This conference was about overcoming fragmentation and benefitting from cross-domain advances in robotics, to keep the EU competitive. There were contradictions and contrasts in the community which gave the event some colour.
Each application of robotics represented, like drones, self-driving cars, service robotics and industrial robotics, seemed to have its own approaches, challenges and phase of development. In this space, industrial giants find themselves collaborating with small enterprises: it takes many different kinds of expertise to make a robot. The small companies cannot afford the effort needed to conform to industry standards, while the larger companies would go out of business if they did not conform.
A tension existed between the hardware and software sides of robotics – those from an AI background have some misunderstandings to correct, like how traceable and predictable neural networks are. The 'software' people had a completely different approach from the 'hardware' people, as their development methodologies differ. Sparks flew as top-down legislation conflicted with bottom-up industry approaches, like the Robotic Governance movement.
The academics in robotics sometimes dared to bring more idealistic ideas to the table that would benefit the greater good, but which might not be sustainable. The ideas of those from industry tended to be mindful of cost, intellectual property and business value.
Two generations of roboticists were represented – those who had carried the torch in less dramatic years, and the upcoming generation who surged forward impatiently. There was conflict and drama at ERF2017, but also loads of passion and commitment to bringing robotics safely and successfully into our society. Stay tuned for the next post, in which I will provide some details of the sessions, including more on ethics, legislation and standards in robotics!
NASA Valkyrie Robot at the Edinburgh Centre for Robotics