Recently, Stephen Hawking and Elon Musk have been in the news with warnings against artificial intelligence. This caught my attention because I would like to see AI as a good thing, so I decided to find out for myself what all the fuss was about. It took me some time to track down the right YouTube clips, but here are the interesting bits:
Elon Musk:
“I’d like to keep an eye on what’s happening with artificial intelligence, I think there’s potentially a dangerous outcome there…”
“If there’s some digital super intelligence and its optimisation or utility function is something that is detrimental to humanity, it could have a very bad effect…”
Stephen Hawking:
“… I think the development of true artificial intelligence could spell the end of the human race… It would take off on its own and redesign itself at an ever increasing rate. Humans who are limited by slow biological evolution couldn’t compete.”
So the media reports are confirmed (by YouTube 🙂 ). When I first watched these videos, I did not know what to think. I had expected two such technologically savvy people to be more optimistic about AI. That discrepancy between my expectations and their warnings prompted me to dig deeper.
The idea of machines overrunning us is not a new one. Popular fiction like 2001: A Space Odyssey, Battlestar Galactica, Star Trek, The Matrix and Terminator highlights our morbid fascination with being destroyed by our own creations. Robots and AIs make good villains, I suppose, because they usually reflect the best and worst parts of us. Logical to a fault, these nemeses turn our own aggression and faithlessness back on us.
Eric Schmidt of Google tells us, however, not to fear the onset of super intelligent AI, but to educate ourselves on how to live comfortably alongside it. Since AI stands to add great value to our society, awareness of both the risks and the benefits seems prudent.
How can we ensure our creations are good, moral and just? Human morality is subjective, and in practice it is an opt-in system. As Laura Pana puts it in her paper on the topic:
“In practice, human morality is a morality of preference, constraint and irresponsibility; as moral theory, human ethics presents a set of internal contradictions. Neither Human Morality nor Human Ethics can serve as a model for Machine Ethics.”
To live peacefully alongside AI, we will have to invent a morality code 2.0, one that works more reliably and consistently than our own, without contradiction or ambiguity.
How can a moral code be embedded in a machine? Kaj Sotala suggests that AIs should have a formally defined moral code and be trained in the concepts of morality through a process called concept learning. He says we need to express this moral code transparently enough that we can examine it explicitly, that it should produce reliable results in all kinds of unanticipated situations, and that we should be able to compare the machine's moral code against our own in order to evaluate it.
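To make the idea a little more concrete, here is a minimal Python sketch of what concept learning for a moral concept might look like. The scenario features, the labels, and the choice of a decision tree (via scikit-learn) are my own illustrative assumptions, not Sotala's actual proposal: a transparent model is learned from labelled examples, its rules are printed so they can be examined explicitly, and its judgments are compared with human judgments on scenarios it has never seen.

```python
# A toy illustration of "concept learning" for a moral concept, not a real
# ethics engine. The features, scenarios, and labels are all hypothetical;
# the point is only the shape of the idea: learn from labelled examples,
# inspect the learned rules, then compare machine and human judgments on
# unanticipated scenarios.

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features describing an action:
# [causes_harm, is_consented_to, is_truthful]  (1 = yes, 0 = no)
# Labels: 1 = "permissible", 0 = "not permissible"
training_scenarios = [
    [0, 1, 1],  # harmless, consented, truthful
    [1, 0, 1],  # harmful, no consent, truthful
    [0, 1, 0],  # harmless white lie
    [1, 1, 1],  # harmful but consented (e.g. surgery)
    [1, 0, 0],  # harmful, no consent, deceptive
    [0, 0, 1],  # harmless, no consent, truthful
]
human_labels = [1, 0, 0, 1, 0, 1]

# Learn the concept from examples; a shallow decision tree keeps the
# result small enough to read.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(training_scenarios, human_labels)

# 1. Transparency: print the learned rules so they can be examined explicitly.
print(export_text(model, feature_names=["causes_harm", "consented", "truthful"]))

# 2. Unanticipated situations: scenarios that were not in the training data.
unseen_scenarios = [[1, 1, 0], [0, 0, 0]]
machine_judgments = model.predict(unseen_scenarios)

# 3. Comparison: put machine judgments next to (hypothetical) human judgments.
human_judgments = [0, 0]
for scenario, machine, human in zip(unseen_scenarios, machine_judgments, human_judgments):
    verdict = "agree" if machine == human else "DISAGREE"
    print(f"scenario={scenario} machine={machine} human={human} -> {verdict}")
```

In this sketch the transparency requirement is met by the printed rule list, and the evaluation requirement by the side-by-side comparison on unseen scenarios; a real system would need far richer representations of situations and values than three yes/no features.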
To find out more, you can visit MIRI (the Machine Intelligence Research Institute), an organisation focused on bringing artificial intelligence into the world in a safe way.
After my research, I am quite convinced that we need to take this aspect of artificial intelligence seriously if we are to live in harmony with machines in the future. It's a complex topic, and I think we need to co-design what we can live with in terms of AI ethics and morality. It's great that these kinds of discussions spread awareness, and I hope people will be prompted to form their own opinions on the way forward.