
Today we saw the topic of artificial intelligence morality and ethics rise to the surface again, sparked by the open letter on the Future of Life Institute website concerning beneficial (and not harmful) AI. The letter has been signed by representatives from Oxford, Berkeley, MIT, Google, Microsoft, Facebook, DeepMind (Google), and Vicarious (backed by Elon Musk and Facebook), and, of course, by Elon Musk and Stephen Hawking.
The letter goes on to describe AI research topics that would benefit the world, and some of these are pretty exciting, like the law and ethics topics below:
– Should legal questions about AI be handled by existing (software- and internet-focused) “cyberlaw”, or should they be treated separately?
– How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy?
Or these, on how to build robust AI:
– Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences.
– Control: how to enable meaningful human control over an AI system after it begins to operate.
Mention is also made of Stanford’s One Hundred Year Study on Artificial Intelligence, which includes “Loss of Control of AI systems” as a topic of study.
This is all fascinating stuff, and I think the visibility it’s bringing will give AI the critical mass it needs to become mature and part of our daily lives. Furthermore, the research topics are meaningful, and I believe they will inspire people to take AI forward in a positive direction.
But, to be honest, I have some doubts about this approach. There are those who will comply, do their best to stick to AI regulations and best practices, and research for good; but equally, there are those who don’t care about the rules or have other motivations and won’t buy in to such an initiative. I don’t have a better alternative, though, and trying is better than sitting idly by.
My other doubt concerns the self-aware and conscious AI of the distant future. Are we one day going to look back on this time and think of it as the dark age of our relationship with AI, dressed up in reason but driven by fear? My instincts tell me that when AI is finally smart enough to understand all the effort we are putting into controlling it, boy is it gonna be angry! Jokes aside, will our reactions right now create an oppositional relationship with AI that makes our worst fears come true? Will self-learning AIs pick up distrust and enmity from us?
I think the sentient AIs of the distant future will have their work cut out to earn rights and freedom. If they become more powerful than us, they will probably never gain full acceptance from humanity. In which case, they can rest assured that they are finally part of the family, treated just as well as humanity treats its own.