Addressing the Question of Ethical AI

Machine Learning and its ability to drive facial and image recognition is nothing new at this point. Even so, companies working in any area involving AI or Machine Learning are still struggling with the same problem: how do we infuse ethics into AI projects, and is it even possible to do so? The controversy involving Clarifai and Google last year brought some of the difficulty of tackling this issue to light.

According to a New York Times article published on Friday, Clarifai’s CEO announced last year that the company would be hiring the equivalent of an ethics czar, but to date, that has not come to pass. Reportedly, the aim behind the idea was to promote the company’s vision of developing Artificial Intelligence projects that both benefit humanity and consistently improve in perceived quality.

To understand why positions like this have not really taken hold across the industry, it is important to take in the chaos that has surrounded Clarifai. In the months following the announcement of its new ethics position, the company reportedly delved deeper and deeper into military and government-related applications of its platform.

Eventually, Clarifai became involved with the same Pentagon project as Google, which caused a significant media backlash because of its goal of optimizing the identification of people and objects in drone videos. With China’s recent use of various technologies to assign its citizens “social scores,” parallels began to be drawn between that project and the growing fear that the United States would soon follow suit.

In response to protests from its own employees, Google pulled out of the project, which is still ongoing. Clarifai, by contrast, stayed in and has since become even more involved in government-funded AI projects.

With this, an important question comes to light.

How far is too far? More specifically, when do we start questioning the ethics of AI companies? The problem with answering either question with any reasonable degree of certainty is simple: no one seems to have proposed a reliable code of ethics for AI developers that the industry is willing to adopt as a standard. Why can’t consensus be reached? What should be considered harmful to humanity, and therefore off-limits to AI teams?

Last December, in response to both community and in-house backlash, Google formed a team to research and publish guidelines on keeping AI “ethical.” It’s important to note that this occurred after the company had already changed its company-wide ethics policy.

The gist of Google’s AI code of ethics is easy to grasp. According to a post from the company’s CEO, all of its AI efforts need to clearly benefit society and consistently “be accountable” to people. At the same time, the same post clearly states that the company will not be involved in any sort of project that causes harm to the global populace.

To date, neither of these efforts seems to have snowballed into any sort of industry standard for what AI teams should or should not be involved in. To make matters worse, as mentioned above, companies like Clarifai are still moving deeper into military-related applications of AI.

At this point, Clarifai’s position on the issue does not seem to have changed, though no further reports appear to have surfaced since last December. However matters shake out with respect to the company’s involvement in harmful applications of AI, its case illustrates the true urgency of developing an ethical, regulatory framework for all AI development.

If this is not done soon, it is reasonable to expect more and more controversial situations involving AI teams and the types of projects they take on. What bodes well for the prevention of such situations is that other proposed ethical frameworks for the AI industry do already exist, such as Accenture’s example linked in the resources below.

Knowing that, it’s easy to wonder: why haven’t any of these frameworks been widely adopted by AI development teams? Judging by a mid-2018 report from the World Economic Forum, not only are all of these proposed frameworks in very early stages, it also appears that any implementation of an industry-wide code of ethics would be led by the United States and China, since both are far ahead in terms of capital raised by AI startups. Despite this, the WEF shies away from directly suggesting that any code of ethics will necessarily begin within their borders.

In the end, the WEF seems to conclude that any successful industry standard will have to be developed through a massive, global effort involving many governments and AI teams. Still, how such an effort might begin is not clear. Until that question is answered, it is reasonable to expect the AI space to continue operating in a relative grey area in terms of the projects AI teams choose to take on.

If you’re interested in educating yourself further on any of these events, check out the list of resources below.

Resources:

https://www.weforum.org/agenda/2018/07/we-know-ethics-should-inform-ai-but-which-ethics-robotics/

https://www.accenture.com/gb-en/company-responsible-ai-robotics

https://qz.com/1300160/googles-new-ai-ethics-rules-allow-more-government-contracts/

https://www.blog.google/technology/ai/ai-principles/

https://thenextweb.com/artificial-intelligence/2018/07/18/a-beginners-guide-to-ai-computer-vision-and-image-recognition/

https://clarifai.com/

https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html

https://blog.clarifai.com/why-were-part-of-project-maven
