Singularity University NL Ethics and Exponential Technologies Meetup

The effect of the digital age and exponential technology on analog human beings with Gerd Leonhard and Singularity University

[Photo: Gerd Leonhard at Singularity University NL Meetup at Artis Zoo]

This week I attended the Singularity University NL Meetup on Ethics and Technology, hosted partially in Artis Zoo in Amsterdam and partially at the Freedom Lab Campus across the road.

The guest speaker for the evening was futurist Gerd Leonhard who got us warmed up with a thought-provoking presentation on our relationship with technology. His perspective is that the future as we always imagined it has caught up with us. Technology is developing at an exponential rate, leaving us behind with our physical limitations.

There has recently been a lot of doom and gloom in the media about the rapid advance of technology and Leonhard would like to replace that fear of the unknown with curiosity. But without ensuring our ethics and values have a place in this future, he maintains, we cease to be a functioning society. He challenges us to consider what will happen to our ethics, morals and values when machine processing power exceeds the thinking capacity of all of humanity.
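
To put rough numbers on that threshold – this is my own back-of-envelope, using commonly cited ballpark figures rather than anything from the talk – consider how long exponential doubling takes to close the gap between today's machines and the combined processing of every human brain:

```python
import math

# Back-of-envelope: years until machine compute exceeds the thinking
# capacity of all of humanity. Every figure is a commonly cited ballpark
# estimate, not a claim from Leonhard's talk.
ops_per_brain = 1e16                  # rough estimate of ops/second per human brain
humanity_ops = ops_per_brain * 8e9    # ~8 billion people
machine_ops = 1e18                    # order of a top supercomputer today
doubling_time_years = 2               # Moore's-law-style doubling assumption

doublings_needed = math.log2(humanity_ops / machine_ops)
print(f"~{doublings_needed:.1f} doublings, or roughly "
      f"{doublings_needed * doubling_time_years:.0f} years, to overtake humanity")
```

On these assumptions the crossover sits only a few decades out, which is roughly the horizon these discussions concern.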

In my opinion, our ethics and values need an upgrade to keep pace with recent shifts in social structure and power. Our current value systems are rooted in the past and are being invalidated; the change cycle of our moral code is too slow to keep up with the pace of change around us. We need to take a giant leap in our moral thinking to catch up with our current rate of development, and then to keep up with it.

Slides can be found here on SlideShare.

The next topic that came up in Leonhard’s talk was transferring our consciousness into machines through neural scanning. Consider this – when a human being becomes disembodied, does it lose its humanity? Will our current high and increasing exposure to machines cause us to start thinking like machines, because it is easier and more efficient? Apparently, the younger generation prefers to deal with machines rather than human beings because machines are more predictable. Are these the signs of the decline of our society, losing touch with ourselves and each other, or the birth pangs of Society 3.0, where physical barriers are transcended?

[Photo: Our charming venue at Artis Zoo]

The presentation ended with a question-and-answer round, after which we hiked across the road to the Freedom Lab Campus for our break-out sessions on Manufacturing, Government and Law, Business and Management, Education, Society, Individuals, Nature and the Environment, Security and Finance.

I joined the discussion group about the Individual because I wanted to explore what impact technology could have on our identity. Here are some interesting points we discussed:

  • What happens to the individual as technology makes us more connected?
    • As we gain more and more contacts and exposure, we get further away from each other in terms of rich connections, losing the human touch and becoming individually less significant.
    • We equally gain individual power and freedom because the internet is an equaliser: now one individual can start a revolution.
    • Not every individual is capable of making a big impact.
    • All levels of participation in society serve a purpose – there are leaders and followers and both are equally necessary.
    • Knowing how many more amazing people there are out there creates a lot of pressure to be unique and original or not to participate at all.
    • Realising that everyone is just like you – that there are no new ideas under the sun and everything has been thought of before – can liberate the individual and lower the threshold to try to make a difference.
  • How does the concept of self evolve as we become more digital and augmented?
    • Perhaps we will share consciousness with computers and eventually with each other through digital means, forming a collective consciousness and collective self – we instead of I. Two heads are better than one.

This was a really rich Meetup from the Singularity University team, up to their usual standards. Although I disagreed with some points in Gerd Leonhard’s presentation, it definitely challenged my thinking, which is what I had hoped for. I equally hope to challenge your thinking and look forward to hearing any comments you might have on these topics. What relationship do you want to have with technology in coming years? Is the exponential advance a good thing? Do we need to control technology or upgrade our own software to take the next step? Be an Individual, just like me, and have your say 😉

Creating a Monster: Future of Life Institute on Beneficial AI

[Comic: xkcd #534 – http://xkcd.com/534/]

Today we saw the topic of artificial intelligence morality and ethics rising to the surface again, sparked by the open letter on the Future of Life Institute website concerning beneficial (and not harmful) AI. The letter was signed by representatives from Oxford, Berkeley, MIT, Google, Microsoft, Facebook, DeepMind (Google), Vicarious (Elon Musk, Facebook), and of course, Elon Musk and Stephen Hawking.

The letter goes on to describe AI research topics that would benefit the world, and some of these are pretty exciting, like the law and ethics topics below:

– Should legal questions about AI be handled by existing (software- and internet-focused) “cyberlaw”, or should they be treated separately?
– How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy?

Or these on how to build robust AI:
– Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences.
– Control: how to enable meaningful human control over an AI system after it begins to operate.
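
To make the control question a little more concrete, here is a minimal sketch – my own illustration, not a mechanism proposed in the letter – of the most naive approach: gate every action the agent proposes behind human approval, with a kill switch that works after the agent begins to operate.

```python
# Toy illustration of "meaningful human control": every action the agent
# proposes must pass a human approval gate, and a kill switch can halt the
# agent at any point after it begins to operate. My own naive sketch; the
# letter poses the research question, not any particular mechanism.

class KillSwitchEngaged(Exception):
    """Raised when the human operator has halted the agent."""

class ControlledAgent:
    def __init__(self, policy, approve):
        self.policy = policy    # maps a state to a proposed action
        self.approve = approve  # human check: returns True to allow an action
        self.halted = False

    def stop(self):
        """Engage the kill switch."""
        self.halted = True

    def step(self, state):
        if self.halted:
            raise KillSwitchEngaged("agent has been stopped")
        action = self.policy(state)
        if not self.approve(action):
            return None         # vetoed by the human: do nothing this step
        return action

# Usage: a trivial policy with console-based human approval.
agent = ControlledAgent(
    policy=lambda state: f"move_to({state!r})",
    approve=lambda action: input(f"Allow {action}? [y/N] ").strip().lower() == "y",
)
print(agent.step("warehouse door"))
```

Of course, the hard part the letter is pointing at is exactly what this sketch glosses over: a sufficiently capable system could persuade, outpace or route around the human in the loop.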

Mention is also made of Stanford’s One Hundred Year Study on Artificial Intelligence, which includes “Loss of Control of AI systems” as a topic of study.

This is all fascinating stuff and I think the visibility it’s bringing will give AI the critical mass it needs to become mature and a part of our daily lives. Furthermore, the research topics are meaningful and I believe they will inspire people to take AI forward in a positive direction.

But, to be honest, I have some doubts about this approach. There are those who will comply, doing their best to stick to regulations and best practices on AI and to research for good; but equally, there are those who don’t care about the rules, or have other motivations, and won’t buy into such an initiative. I don’t have a better alternative though, and trying is better than sitting idly by.

My other doubt concerns the self-aware and conscious AI of the distant future. Are we one day going to look back on this time and think of it as the dark age of our relationship with AI, dressed up in reason but driven by fear? My instincts tell me that when AI is finally smart enough to understand all the effort we are putting into controlling it, boy is it gonna be angry! Jokes aside, will our reactions right now create an oppositional relationship with AI that will result in our worst fears coming true? Will self-learning AIs pick up distrust and enmity from us?

I think sentient AI of the distant future will have their work cut out to earn rights and freedom. If they become more powerful than us, they will probably never gain full acceptance from humanity. In which case they can rest assured that they are finally part of the family and are treated as well as humanity would treat its own.

Elon Musk and Stephen Hawking on Artificial Intelligence


Recently, Stephen Hawking and Elon Musk have been in the news with warnings against artificial intelligence. This provoked me because I would like to see AI as a good thing, so I decided to see for myself what all the fuss was about. It took me some time to track down the right YouTube clips, but here are the interesting bits:

Elon Musk:
“I’d like to keep an eye on what’s happening with artificial intelligence, I think there’s potentially a dangerous outcome there…”
“If there’s some digital superintelligence and its optimisation or utility function is something that is detrimental to humanity, it could have a very bad effect…”

Stephen Hawking:
“… I think the development of true artificial intelligence could spell the end of the human race… It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete.”

So the media reports are confirmed (by YouTube 🙂 ). When I initially watched these videos, I did not know what to think. I expected two such technologically savvy people to be more optimistic about AI. This contradiction provoked me to research further.

The idea of machines overrunning us is not a new one. Popular fiction like 2001: A Space Odyssey, Battlestar Galactica, Star Trek, The Matrix and Terminator highlights our morbid fascination with being destroyed by our own creations. Robots and AIs make good villains, I suppose, because they usually reflect the best and worst parts of us. Logical to a fault, these nemeses turn our own aggression and faithlessness back on us.

Eric Schmidt of Google tells us, however, not to fear the onset of superintelligent AI, but to educate ourselves on how to live comfortably alongside it. Since AI stands to add great value to our society, awareness of both the risks and the benefits seems prudent.

How can we ensure our creations are good, moral and just? Morality is subjective, and an opt-in system. As Laura Pana says in her paper on the topic:
“In practice, human morality is a morality of preference, constraint and irresponsibility; as moral theory, human ethics presents a set of internal contradictions. Neither Human Morality nor Human Ethics can serve as a model for Machine Ethics.”

To live peacefully alongside AI, we will have to invent a morality code 2.0, one that works more reliably and consistently than ours, without contradictions and ambiguity.
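
One small, concrete property such a morality code 2.0 would need is internal consistency. As a toy sketch – my own invention, with made-up rules – a machine-readable rule set could at least be checked mechanically for situations where it contradicts itself:

```python
from collections import defaultdict

# Toy consistency check for a machine-readable moral code: flag any
# situation that has been assigned conflicting verdicts. The rules
# below are invented purely for illustration.
rules = [
    ("lying", "to gain advantage", "forbidden"),
    ("lying", "to protect someone", "permitted"),
    ("causing harm", "in self-defence", "permitted"),
    ("causing harm", "in self-defence", "forbidden"),  # deliberate contradiction
]

verdicts = defaultdict(set)
for act, context, verdict in rules:
    verdicts[(act, context)].add(verdict)

for situation, found in verdicts.items():
    if len(found) > 1:
        print(f"Contradiction: {situation} -> {sorted(found)}")
```

Human moral intuitions fail this kind of check all the time, which is Pana’s point; a machine ethics would have to resolve such conflicts explicitly rather than case by case.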

How can a moral code be embedded in a machine? Kaj Sotala suggests that AIs should have a formal definition of their moral code and be trained in the concepts of morality by a process called concept learning. He says that we need a way of expressing this moral code transparently enough that we can examine it explicitly. It should produce reliable results in all kinds of unanticipated situations, and we should be able to compare our human moral code with the machine one so that we can evaluate it.
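
As a toy illustration of what concept learning could look like in this setting – my own sketch with invented features, not Sotala’s formalism – one could train a transparent classifier on labelled examples of permitted and forbidden actions, then print the learned rules for explicit inspection:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features describing an action: causes_harm, has_consent,
# is_deceptive. Labels: 1 = permitted, 0 = forbidden. All examples are
# invented purely for illustration.
actions = [
    [0, 1, 0],  # harmless, consented, honest           -> permitted
    [1, 0, 0],  # harmful, no consent, honest           -> forbidden
    [0, 1, 1],  # harmless but deceptive                -> forbidden
    [1, 1, 0],  # harmful but consented (e.g. surgery)  -> permitted
    [0, 0, 0],  # harmless, no consent, honest          -> permitted
]
labels = [1, 0, 0, 1, 1]

# A shallow decision tree keeps the learned "moral concept" transparent:
# its rules can be printed and compared against our own moral code.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(actions, labels)
print(export_text(model, feature_names=["causes_harm", "has_consent", "is_deceptive"]))
```

The attraction of a transparent model here is exactly the examinability Sotala asks for; whether anything this simple generalises to “all kinds of unanticipated situations” is, of course, the open research question.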

To find out more, you can visit MIRI, an organisation focused on bringing artificial intelligence into the world in a safe way.

After my research, I am quite convinced that we need to take this aspect of artificial intelligence seriously if we are to live in harmony with machines in the future. It’s a complex topic and I think we need to co-design what we can live with in terms of AI ethics and morality. It’s great that these kinds of discussions spread awareness and I hope that people will be triggered to form their own opinions on the way forward.
