Compassionate Machine Intelligence: An Introduction

Certain researchers have theorized that for AIs to approach a human level of intelligence, we need to begin by teaching them compassion. Even suggesting such a path, however, comes with significant obstacles.

How do we even begin to conceptualize the road toward making machines “human”? Cindy L. Mason, the author of our primary source below, claims that a good start would be to teach AI systems to think irrationally. Saying so, however, lands us back at the same kind of question: how do we even begin to do that?

Mason’s answer appears to be that we could teach AI systems practices related to what she calls mind training. This seems to primarily include various forms of meditation, both secular mindfulness techniques and those more native to traditional Buddhist practice. Mason then posits that because these processes can be diagrammed out in logical terms, we could plausibly train AI systems to experience certain feelings, like compassion and calmness, at fairly high levels. Put another way, we could use medical research on the effects of meditation on the human brain to help AIs grasp the importance of emotions.
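To make “diagramming a practice as a logical process” slightly more concrete, here is a toy Python sketch. To be clear, none of it comes from Mason’s paper: the AffectiveState fields, the update rates, and the session lengths are invented placeholders. It only shows how a repeated practice could, in principle, be modeled as a state-update loop that gradually pushes affect variables toward a ceiling.

```python
# Illustrative toy sketch (not from Mason's paper): modeling a
# "mind training" practice as a repeatable state-update procedure.
# The affect scale, update rates, and practice effects are invented
# for demonstration only.

from dataclasses import dataclass

@dataclass
class AffectiveState:
    calm: float = 0.5        # 0.0 (agitated) .. 1.0 (calm), assumed scale
    compassion: float = 0.5  # 0.0 (indifferent) .. 1.0 (compassionate)

def practice_session(state: AffectiveState, minutes: int) -> AffectiveState:
    """One meditation session nudges each affect value toward its ceiling."""
    for _ in range(minutes):
        state.calm += 0.01 * (1.0 - state.calm)
        state.compassion += 0.005 * (1.0 - state.compassion)
    return state

state = AffectiveState()
for day in range(30):                 # a month of daily 20-minute practice
    state = practice_session(state, 20)
print(f"calm={state.calm:.2f}, compassion={state.compassion:.2f}")
```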

Still, even suggesting this brings up a slew of questions, one of which is: how can such a process be mapped out in Machine Learning terms?

To understand Mason’s suggestions adequately, we need to move on to the section in which she explains her thoughts on the study’s central theoretical framework. In other words: how does Mason herself begin to conceptualize this in Machine Learning terms?

First and foremost, she offers a buzzword-style explanation: an AI system with human-level intelligence would essentially be the E=mc² version of today’s AI, if today’s AI is thought of as merely E=mc.

While the analogy itself does not really tell us anything, we can find some substance in the section where Mason attempts to clarify the idea of an E=mc² AI system in plainer terms. In her mind, the new features of such an AI include: “an irrational new form of inference called affective inference, separate and explicit representations of feelings and beliefs about a proposition and the ability to represent multiple mental states or agents.” Unfortunately, Mason never seems to fully flesh out these features.
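To get a rough picture of what “separate and explicit representations of feelings and beliefs about a proposition” might look like in code, consider the following Python sketch. This is my interpretation, not Mason’s implementation: the Attitude class, the numeric belief scale, the feeling tags, and the toy inference rule are all assumptions made for illustration.

```python
# Illustrative interpretation (not Mason's implementation): a proposition
# carries a degree-of-belief annotation and a separate, explicit feeling
# annotation, and each agent (or mental state) holds its own attitudes.

from dataclasses import dataclass

@dataclass(frozen=True)
class Attitude:
    proposition: str   # e.g. "the patient is in pain"
    belief: float      # degree of belief, 0.0 .. 1.0 (assumed scale)
    feeling: str       # explicit affect tag, kept separate from belief

# Multiple mental states / agents, each with its own set of attitudes.
agents: dict[str, list[Attitude]] = {
    "self": [
        Attitude("the patient is in pain", belief=0.9, feeling="compassion"),
    ],
    "model-of-patient": [
        Attitude("help is available", belief=0.4, feeling="anxiety"),
    ],
}

def affective_inference(attitudes: list[Attitude]) -> list[str]:
    """Toy 'affective inference': the conclusion depends on the feeling
    annotation as well as the belief value."""
    actions = []
    for a in attitudes:
        if a.feeling == "compassion" and a.belief > 0.5:
            actions.append(f"offer help regarding: {a.proposition}")
    return actions

print(affective_inference(agents["self"]))
```

The point of the toy rule is simply that what gets inferred depends on the feeling attached to a proposition, not just the belief in it, which is presumably part of why Mason describes affective inference as “irrational” by classical-logic standards.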

Because of this, it would be logical for other researchers to build upon this work and ask: what, then, is the realistic chance of achieving these suggested features?

Expect future posts to include deeper dives into teams that are attempting just that.

Resources:

Primary Resource: http://www.aaai.org/Papers/Workshops/2008/WS-08-07/WS08-07-023.pdf

Further Reading:
http://www-formal.stanford.edu/cmason/circulation-ws07cmason.pdf
https://medium.com/@jackkrupansky/how-close-is-ai-to-human-level-intelligence-here-in-april-2018-9a6ceaff2f9d
