Google Does Away with Its AI Ethics Board

In my past post on Project Maven, I mentioned that Google developed an in-house AI code of ethics in response to the backlash it received from both employees and customers over its involvement in that controversial endeavor.

This Thursday, it was announced that Google has decided to forgo the establishment of an ethics board related to this code, a decision that bears some explaining.

First, this news does not mean that Google has eliminated its code of ethics, only the group of people who were supposed to make sure it was being followed. According to Vox, which broke the story on Thursday, the board fell apart after being operational for only about a week.

The main reason was the tech giant’s apparent failure to do its due diligence on the backgrounds of several of its board members, one of whom is the current president of the Heritage Foundation, an outspoken conservative think tank. Though Vox’s report speculates on the pasts of other board members, only one conclusion is clear at this point.

Google employees quickly realized what was going on and, once again, effected a change in their organization. Just as with Project Maven, they did their research and quickly called for the situation to be fixed. Judging by the reports on the subject, the only fix that is likely to satisfy these employees is Google going back to the drawing board.

What this means is that, for now, no ethics board will exist until Google can form one whose members are widely accepted by its workforce and possibly its clients as well.

Behind this failure lies another important point, raised by MIT Technology Review and others: developing and monitoring AI systems based on a set of rules that determine what is and isn’t ethical will be a tall order.

Not only is the idea of being ethical too illogical and irrational for an AI system to even begin to compute, but the question of who should form such a code is almost impossible to answer. How do we truly build and monitor AIs ethically if a small group of people decides what that means?

Looking to the blockchain space, perhaps the answer lies in involving one or more decentralized communities that specialize in innovative forms of crowdsourcing. Taking this route, however, poses its own issues. Who chooses which communities get to participate and which don’t? How do we make sure the code gets formed in a decentralized fashion? What governance frameworks should be involved, if any?

Thinking about Google’s recent bump in the road leaves us with more questions than answers for now, except one certainty: creating an industry standard for ethical AI is perhaps the hardest project any AI professional has ever undertaken.

If Google can’t make it happen, then who will? Likely, the answer lies in an unprecedented level of collaboration.

Resources:

https://www.technologyreview.com/s/612318/establishing-an-ai-code-of-ethics-will-be-harder-than-people-think/

https://www.irishtimes.com/business/technology/google-announces-new-ai-code-of-ethics-1.3528115

https://www.vox.com/future-perfect/2019/4/3/18292526/google-ai-ethics-board-letter-acquisti-kay-coles-james

https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board

https://www.theguardian.com/technology/2019/apr/04/google-ai-ethics-council-backlash

About Ian LeViness:
Professional writer and teacher, dedicated to making emergent industries accessible to the general populace