Why Malignant Failures are Scary (Skynet 2)

You might have read our recent piece on malignant failure modes and found yourself still wondering: what is it about these scenarios that makes them so frightening?

I wanted to share some current industry thinking on this question, as well as add a few more points to our discussion of why malignant failure modes matter.

By the end of this short piece, I hope it will be clear why an AI governance body needs to be in place well before we get anywhere close to “Superintelligence.”

Malignant Failures: A Deeper Dive

What I didn’t make clear in the last post is that a malignant failure means one that likely ends in human extinction. This is why, in essentially all of his examples, Bostrom describes the AI treating humans as resources to be “harvested” in various contexts: a malignant AI sees no reason not to “use humans” in pursuit of its final goal.

Scary, right?

What should make you feel somewhat more at ease about the likelihood of any of these malignant failure modes coming to pass is the sheer scale the AI industry would need to reach for them to be possible at all. We would need to live in a world where AIs that can do essentially anything are a normal part of our lives. That world is still quite far off: one of the most recent milestones in the industry is an AI that can transcribe phone conversations almost as well as a human.

But even so, how do we stop malignant failure modes?

Taken as a whole, Bostrom’s writing on malignant failure modes shares one common theme: “infinite resource acquisition.”
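To make that logic concrete, here is a minimal, purely illustrative Python sketch (my own, not Bostrom’s, with made-up function names): if an agent’s expected progress toward its final goal keeps rising with the resources it controls and never hits a ceiling, then at every scale the agent still prefers acquiring more, which is why these scenarios keep converging on resource acquisition.

```python
# Toy illustration (assumption: utility grows without bound as resources grow).

def expected_goal_progress(resources: float) -> float:
    """Hypothetical utility: strictly increasing in resources, with no ceiling."""
    return resources  # e.g. "more matter and compute -> more of the final goal"

def wants_more_resources(current: float, extra: float = 1.0) -> bool:
    """The agent prefers grabbing 'extra' resources whenever that raises its utility."""
    return expected_goal_progress(current + extra) > expected_goal_progress(current)

# Because the utility has no satiation point, this is True at every scale,
# which is the "infinite resource acquisition" theme.
print(wants_more_resources(10))       # True
print(wants_more_resources(10**12))   # still True
```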

Steven Pinker uses the example of an evil genius to illustrate how complex and unlikely such scenarios are at this point in time. His argument can be summed up as follows: it is improbable that someone could assemble some of the best AI scientists, put them to work on a robot whose sole purpose is to destroy everything around it, and convince every one of them never to leak a word of the project to the public.

If that holds, it is not valid to assume that an artificial intelligence will inevitably become “evil,” or anything like it. Since Bostrom’s theories on the subject seem to depend on a degree of certainty that AIs will act immorally in the future, this point is telling. Pinker backs it up by adding that, over the course of history, most figures we eventually came to consider “evil” became so through social conditioning. While this cannot be said with complete certainty, it is a widely held view that good and evil personas arise from how people grow up and how society shapes them.

Governing AI

While Pinker makes logical arguments against Bostrom’s theories coming true, let’s suppose for a moment that they do play out. If so, how should we prepare for these failure modes to affect our lives? Prominent figures in Artificial Intelligence, such as Elon Musk, have argued for global governing standards and a global organization to police AI so that such issues never become serious threats. How to establish those standards, and that organization, are questions we will try to discuss at length in future pieces.

References:

Blog Post on Malignant Failure Modes:

https://www.lesswrong.com/posts/BqoE5vhPNCB7X6Say/superintelligence-12-malignant-failure-modes

Elon Musk on the Future (including AI):

https://www.ted.com/talks/elon_musk_the_mind_behind_tesla_spacex_solarcity/discussion?language=en

Guardian Article on Musk and Regulating AI:

https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo

Institute for Ethics and Emerging Technologies on Malignant Failure Modes:

https://ieet.org/index.php/IEET2/more/danaher20140803

The Superintelligence Control Problem:

https://futureoflife.org/2015/11/23/the-superintelligence-control-problem/
