European Robotics Forum 2017

The European Robotics Forum (ERF2017) took place between 22 and 24 March 2017 at the Edinburgh International Conference Centre.

The goals were to:

  • Ensure there is economic and societal benefit from robots
  • Share information on recent advancements in robotics
  • Reveal new business opportunities
  • Influence decision makers
  • Promote collaboration within the robotics community

The sessions were organised as workshops, encouraging participants from academia, industry and government to cross boundaries. Many of the sessions had an urgent kind of energy, with the focus on discussion and brainstorming with the audience.

[Image: Edinburgh castle at night]

Broad spectrum of robotics topics

Topics covered in the conference included: AI, Social Robotics, Space Robotics, Logistics, Standards used in robotics, Health, Innovation, Miniaturisation, Maintenance and Inspection, and Ethical and Legal considerations. There was also an exhibition space downstairs where you could mingle with different kinds of robots and their vendors.

The kickoff session on the first day had some impressive speakers – leaders in the fields of AI and robotics, covering business and technological aspects.

Bernd Liepert, the head of euRobotics, covered the economic aspects of robotics, stating that robot density in Europe is among the highest in the world. Europe has a 38% share of the worldwide professional robotics market, with more startups and companies in the field than the US. Service robotics already generates more than half the turnover of industrial robotics. Since Europe does not have enough institutions to develop innovations in all areas of robotics, combining research and transferring the results to industry is key.

The next speaker was Keith Brown, the Scottish Cabinet Secretary for Economy, Jobs and Fair Work, who highlighted the importance of digital skills to Scotland. He emphasised the need for everyone to benefit from the growth of the digital economy, and from the increase in productivity that it should deliver.

Juha Heikkila from the European Commission explained that, in terms of investment, the EU robotics programme is the biggest in the world. Academia and industry should be brought together to drive innovation through innovation hubs, which will bring technological advances to companies of all sizes.


Raia Hadsell of DeepMind gave us insight into how deep learning can be applied to robotics. She framed it in terms of problem areas like speech and image recognition, where inputs (audio files, images) are mapped to outputs (text, labels). The same model can be applied to robotics, where the input is sensor data and the output is an action. For more insight, see this article about a similar talk she gave at the Re•Work Deep Learning Summit in London. She showed us that learning time for robots can be reduced by training neural networks in simulation and then adding new layers to transfer what was learned to other tasks.
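
As a rough illustration of that transfer idea (a simplified sketch of my own, not DeepMind's actual progressive networks), the pattern of freezing a simulation-trained network and adding new trainable layers might look like this in PyTorch:

```python
import torch
import torch.nn as nn

# A small policy network assumed to have been trained in simulation:
# it maps sensor readings (input) to action scores (output).
sim_policy = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)

# Freeze the simulation-trained layers so their weights stay fixed.
for param in sim_policy.parameters():
    param.requires_grad = False

# Add new layers on top for the new task; only these are trained,
# which is what shortens learning time on the real robot.
new_head = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 4),  # e.g. 4 possible actions
)

transfer_model = nn.Sequential(sim_policy, new_head)
optimizer = torch.optim.Adam(new_head.parameters(), lr=1e-3)

sensors = torch.randn(1, 16)            # placeholder sensor data
action_scores = transfer_model(sensors)
```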

Deep learning tends to be seen as a black box in terms of traceability and therefore risk management, because people assume that neural networks produce novel and unpredictable output. Hadsell assured us, however, that introspection can be used to test and verify each layer in a neural network, since a given input always produces output within a known range.
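
One simple form of such layer-by-layer introspection (my own hedged sketch, not a method Hadsell described) is to hook into each layer and record its activations for a fixed test input, so the intermediate outputs can be checked against expected ranges:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))

activations = {}

def make_hook(name):
    # Record the output of a layer every time it runs.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, layer in model.named_children():
    layer.register_forward_hook(make_hook(name))

model(torch.randn(1, 16))  # a fixed test input

# Inspect each layer's output range for this input.
for name, out in activations.items():
    print(name, out.min().item(), out.max().item())
```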

The last talk in the kickoff, delivered by Stan Boland from Five AI, brought together the business and technical aspects of self-driving cars. He mentioned that the appetite for risky tech investment seems to be increasing, with a fivefold growth in investment over the past five years. He emphasised the need for exciting tech companies to retain European talent, advance innovation, and reverse the trend of top EU talent migrating to the US.

On the technology side, Stan gave some insight into advances in perception and planning for self-driving cars. He showed how stereo depth mapping is done at Five AI, using input from two cameras to estimate the depth of each pixel in the image. They then create an aerial projection of what the car sees in front of it and use this bird's-eye view to plan the car's path from 'above'. Some challenges remain, however, with 24% of cyclists still being misclassified by computer vision systems.
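
Five AI's actual pipeline wasn't shared, but the basic principle of stereo depth mapping can be sketched with OpenCV's block matcher (a minimal example of my own, assuming hypothetical image files and camera parameters, not their method):

```python
import cv2
import numpy as np

# Load the left and right camera frames as greyscale images
# (hypothetical file names for illustration).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching estimates, for each pixel, how far it has shifted
# between the two views (the disparity).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth is inversely proportional to disparity:
# depth = focal_length * baseline / disparity.
focal_length_px = 700.0   # assumed focal length in pixels
baseline_m = 0.5          # assumed distance between the two cameras
depth = focal_length_px * baseline_m / np.maximum(disparity, 1e-6)
```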

With that, he reminded us that full autonomy in self-driving cars is probably out of reach for now. Assisted driving on highways and other easy-to-classify areas is probably the most achievable goal. Beyond that, the cost to the consumer becomes prohibitive, and truly autonomous cars will probably only be sustainable in a services model, where the costs are shared. In this model, training data could probably not be shared between localities, since road layouts and driving styles are very specific to different parts of the world (e.g. Delhi vs San Francisco vs London).


An industry of contrasts

This conference was about overcoming fragmentation and benefitting from cross-domain advances in robotics, to keep the EU competitive. There were contradictions and contrasts in the community which gave the event some colour.

Each application of robotics represented (drones, self-driving cars, service robotics, industrial robotics) seemed to have its own approaches, challenges and phase of development. In this space, industrial giants find themselves collaborating with small enterprises – it takes many different kinds of expertise to make a robot. The small companies cannot afford the effort needed to conform to industry standards, while the larger companies would go out of business if they did not conform.

A tension existed between the hardware and software sides of robotics – those from an AI background have some misunderstandings to correct, like how traceable and predictable neural networks are. The ‘software’ people had a completely different approach to the ‘hardware’ people as development methodologies differ. Sparks flew as top-down legislation conflicted with bottom-up industry approaches, like the Robotic Governance movement.

The academics in robotics sometimes dared to bring more idealistic ideas to the table that would benefit the greater good, but which might not be sustainable. The ideas of those from industry tended to be mindful of cost, intellectual property and business value.

Two generations of roboticists were represented – those who had carried the torch in less dramatic years, and an upcoming generation surging forward impatiently. There was conflict and drama at ERF2017, but also loads of passion and commitment to bringing robotics safely and successfully into our society. Stay tuned for the next post, in which I will provide some details on the sessions, including more on ethics, legislation and standards in robotics!

Update on the latest Artificial Intelligence APIs and services

The past few years have seen a blur of software giants releasing AI and machine learning themed APIs and services. I was, frankly, surprised at how many options there currently are for developers and companies. I think it's a positive sign for the industry that there are multiple options from reputable brands for tasks like visual and language recognition – these have almost become commodities. You also see strong consolidation into fairly typical categories: machine learning for building general predictive models, visual recognition, language and speech recognition, conversational bots and news analysis.

  • Google
    • Google Cloud Platform
      • Google Prediction API
        • Hosted Models (Demo)
          • Language Identifier
            • Spanish, English or French
          • Tag Categoriser
            • Android, App Engine, Chrome or YouTube
          • Sentiment Predictor
            • Positive or negative label for comments
        • Trained Models
          • Train your own model
      • Google Cloud Vision API
        • Label Detection
        • Explicit Content Detection
        • Logo Detection
        • Landmark Detection
        • Optical Character Recognition
        • Face Detection
        • Image Attributes
      • Cloud Speech API
        • Audio to text
        • >80 languages
        • Streaming Recognition
        • Inappropriate Content Filtering
        • Real-time or Buffered Audio Support
        • Noisy Audio Handling
      • Google Translate API
        • Text Translation
        • Language Detection
    • TensorFlow (see the sketch after this list)
      • Open-source graph-based numerical computation and model building
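
To give a flavour of the TensorFlow item above, here is a minimal graph-based computation in the TensorFlow 1.x style that was current at the time (a toy example of my own, not taken from Google's documentation):

```python
import tensorflow as tf

# Build a small computation graph: y = W*x + b
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
W = tf.Variable(tf.random_normal([3, 1]), name="W")
b = tf.Variable(tf.zeros([1]), name="b")
y = tf.matmul(x, W) + b

# Nothing runs until the graph is executed in a session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```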


  • Facebook
    • Bots for Messenger
      • Ability to build a chat bot for your company that chats via Facebook Messenger


  • IBM
    • Bluemix
      • Alchemy
        • Alchemy Language
          • Keyword Extraction
          • Entity Extraction
          • Sentiment Analysis
          • Emotion Analysis
          • Concept Tagging
          • Relation Extraction
          • Taxonomy Classification
          • Author Extraction
        • Alchemy Data News
          • News and blog analysis
          • Sentiment, keyword, taxonomy matching
      • Concept Insights
        • Linking concepts between content
      • Dialog
        • Chat interaction
      • Language Translation
      • Natural Language Classifier
        • Phrase classification
      • Personality Insights
        • Social media content analysis to predict personal traits
      • Relationship Extraction
        • Finds relationships between objects and subjects in sentences
      • Retrieve and Rank
        • Detects signals in data
      • Speech To Text, Text to Speech
      • Tone Analyzer
        • Emotion analysis
      • Tradeoff Analytics
        • Decision making support
      • Visual Recognition
      • Cognitive Commerce
        • Support for driving commerce, recommendations etc
      • Cognitive Graph
        • Creates a knowledge graph of data that's typically difficult for a machine to understand
      • Cognitive Insights
        • Personalised commercial insights for commerce


  • Microsoft
    • Microsoft Cognitive Services
      • Vision
        • Categorise images
        • Emotion recognition
        • Facial detection
        • Analyze video
      • Speech
        • Speech to text, text to speech
        • Speaker recognition
      • Language
        • Spell checking
        • Natural language processing
        • Complex linguistic analysis
        • Text analysis for sentiment, phrases, topics
        • Models trained on web data
      • Knowledge
        • Relationships between academic papers
        • Contextual linking
        • Interactive search
        • Recommendations
      • Search
        • Search
        • Search autosuggest
        • Image and metadata search
        • News search
        • Video search
      • Bot Framework
      • Content Moderator
      • Translator
      • PhotoDNA Cloud Service

Did I miss something from this list? Comment and let me know!

Amazon Machine Learning at a Glance

Here is a brief summary of Amazon’s machine learning service in AWS.

[Image: AWS ML helping to analyse your data. Slides here]

Main function:

  • Data modelling for prediction using supervised learning
  • Tries to predict answers to questions like:
    • Is this email spam?
    • Will this customer buy my product?

Key benefits:

  • No infrastructure management required
  • Democratises data science
    • Wizard-based model generation, evaluation and deployment
    • Does not require data science expertise
    • Built for developers
  • Bridges the gap between developing a predictive model and building an application

Type of prediction model:

  • Binary classification using logistic regression
  • Multiclass classification using multinomial logistic regression
  • Regression using linear regression

Process:

  • Determine what question you want to answer
  • Collect labelled data
  • Convert the data to CSV format and upload it to Amazon S3
  • Cleanup and aggregate data with AWS ML assistance
  • Split the data into training and evaluation sets using the wizard
  • Wait for AWS ML to generate a model
  • Evaluate and modify the model with the wizard
  • Use the model to create predictions, either in batches or via single API calls (a rough local analogue of this flow is sketched below)
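
AWS ML hides these steps behind its wizard, but the same basic flow can be mimicked locally. Here is a rough scikit-learn analogue of the split/train/evaluate/predict loop (my own sketch with hypothetical file and column names, not the AWS API itself):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Labelled data in CSV form: numeric feature columns describing an
# email plus a 0/1 "is_spam" label (hypothetical file and columns).
data = pd.read_csv("emails.csv")
X = data.drop(columns=["is_spam"])
y = data["is_spam"]

# Split into training and evaluation sets (the wizard step above).
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.3)

# Binary classification with logistic regression, as AWS ML uses.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate, then predict for a single new record or a whole batch.
print("AUC:", roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1]))
print(model.predict(X_eval.iloc[[0]]))
```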

Pricing: Pay as you go

Useful links:

How Booking.com uses Machine Learning to Inspire Travellers

Applied Machine Learning Meetup

This blog post is about the Applied Machine Learning Meetup which I attended at Booking.com’s Amsterdam office. They described their use of Latent Dirichlet Allocation to identify patterns in travel and booking data and suggest travel destinations to their users.

Athanasios Noulas, the presenter, has published this paper on the topic, along with Mats Einarsen, where you can find all the details if needed.

Interestingly, the Latent Dirichlet Allocation method was first described in this paper, one of whose authors is Andrew Ng, whose Stanford University machine learning MOOC led to the founding of Coursera; he once worked for Google and now works on AI research at Baidu.

I have found this blog post on the topic by Edwin Chen to be a good introduction to the approach.

The method itself is used to analyse text data and identify subject groupings, or topics. It traverses documents, assigning probabilities that the words in each document belong to certain topics, and then iteratively refines these probabilities (the learning step) until it reaches a reasonable grouping of words into topics.
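
As a toy illustration of that iterative topic assignment (a small sketch of my own using scikit-learn, not Booking.com's implementation), you can run LDA over a handful of travel-flavoured 'documents':

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Each toy "document" pairs a city with activities enjoyed there.
docs = [
    "london shopping museums dining theatre",
    "bangkok street food nightlife shopping",
    "bali surfing beaches relaxation",
    "amsterdam museums cycling nightlife",
    "lisbon surfing beaches dining",
]

# Turn the documents into word counts, then fit LDA with, say, 2 topics.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# doc_topics[i] gives the probability of each topic for document i;
# lda.components_ holds the per-topic word weights that are refined
# iteratively during fitting.
print(doc_topics.round(2))
```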

At Booking.com they capture endorsements provided by travellers for destinations they have visited. These take the form of cities like London or Bangkok paired with the activities people enjoyed there, such as shopping, dining or surfing.

[Figure from: User Engagement through Topic Modelling in Travel, http://www.ueo-workshop.com/wp-content/uploads/2014/04/noulasKDD2014.pdf]

The document for analysis in this case is the combination of a single booking with the related endorsements the user has made afterwards. The end result of the modelling process is groups of characteristics and cities that match these characteristics. These groupings are ultimately used to make recommendations to people based on a grouping they fit into.

[Figure from: User Engagement through Topic Modelling in Travel, http://www.ueo-workshop.com/wp-content/uploads/2014/04/noulasKDD2014.pdf]

This data is later used, with some success, on the Booking.com website and in emails to suggest destinations people might enjoy.

I found the talk pretty interesting, especially learning about LDA (yes, we're on a first-name basis now) and, more generally, about the AI methods we encounter in everyday life. Also interesting was the number of people who signed up (133) and attended the meetup (I counted about 60, possibly more), which I thought was pretty impressive. It seems like interest in AI is spiking, and it's quite intriguing to see what the future will hold for this domain.
