Ethical Issues in Artificial Intelligence

Will Artificial Intelligence Pose an Existential Threat to Humans?

Some of the main ethical issues raised by the creation of artificial intelligence have been aptly explored in Hollywood's great science-fiction films. There is a palpable fear that the pursuit of artificial general intelligence may lead to “super-intelligence”: machines that are autonomous and more intelligent than human beings. What will govern the relationship between these machines and real humans?

But the ethical issues raised by artificial intelligence go beyond an apocalyptic vision of a world run by “living”, thinking machines. There is also concern about what artificial intelligence, in its narrow variant, will mean for the future of work: will AI supplant human talent and render many skills obsolete, leading to mass unemployment? What roles will humans play in a future where their expertise has been automated away by AI systems?

Should the government step in, as it did with stem cell research, to define the boundaries of artificial intelligence? Regulation is one of the main ethical questions in this field, because the creation of intelligent, autonomous machines may drive innovation into unknown territory. There are already concerns about “black boxes” in machine learning: the lack of accountability and transparency in the decision-making of some machine learning systems. These are some of the main ethical questions that will hover over the theory and application of artificial intelligence in industry and in our daily lives.

Would AI Lead to the Creation of “Superintelligence”?

In popular culture and in everyday conversation, this is the most discussed ethical issue in AI. The creation of super-intelligence, or even human-level intelligence, in machines would create a whole new moral dilemma as organic human life forms begin to share physical and moral space with machine life forms. If we are able to create an intellect far superior to the human brain, what impact will that have on society? If we lose our status as the most intelligent life forms, what impact will that have on our existence?

Singularity

Most of us assume we can simply “pull the plug” when machines begin to pose an existential threat, but a truly intelligent machine would be autonomous and would have mechanisms for evading human threats. That means that, for the first time in human history, we would be sharing our space, physical, moral and intellectual, with other intelligent “life forms”. In artificial intelligence, this is what is referred to as the technological singularity: a hypothetical point in the future when humans lose their dominance, to machines, as the most intelligent life forms.

What will be the role of human beings in creation and innovation when we have super-intelligent machines capable of independent initiative? Some futurists have argued that the emergence of superintelligence may mark the end of human innovation, because the evolution of artificial general intelligence would give rise to ever more advanced super-intelligences, and those machines may be far better at performing and driving scientific research than human beings. The fear is that if we keep testing the limits of artificial intelligence with ever more complex algorithms and machine learning, we might lose control even of today's narrow AI for data analysis and scientific modeling, and create a “black box” that is impossible to unravel and control.

Machine Rights

Should humans treat a machine with intelligence or “consciousness” as a being of moral significance, with rights and deserving of humane treatment? If future machines are indistinguishable from humans, with feelings and consciousness, should they be treated the same as human beings? This raises further philosophical questions about whether life or consciousness as we know it should be limited to organic life forms. Perhaps, in the future, we might even be able to replicate our lives, consciousness and memories, and continue living as machines long after our “organic” lives have faded away.

Artificial Intelligence and the Future of Work

Many companies are already operationalizing artificial intelligence (machine learning) in their work processes, but only to the extent that AI amplifies human effort and abilities. However, as we delegate more autonomy to machines to do work that would otherwise be done by humans, we risk creating another moral and social crisis: unemployment and unemployable human beings. Most people are employed in the so-called “mundane” jobs that scientists and industry want to automate with artificial intelligence.

Yet artificial intelligence could also be beneficial for the future of work. A collaboration between the human mind and artificial intelligence could create a “centaur”, leading to greater efficiency and hyper-productivity.

Artificial Intelligence and Regulation

Some have argued that the state should step in and regulate work on artificial intelligence in order to prevent any “unintended” consequences. However, many players in the tech industry advocate a less radical approach: self-regulation and self-policing. One feature of the tech industry is that its players are often able to come together on issues of mutual interest and lay the groundwork for guiding principles. Many feel this may be a better approach than the heavy boot of state regulation, which could stifle AI innovation. Politicians who are not scientists, and who may be susceptible to some of the “dark” myths surrounding AI development, might in the future push legislation that would slow down progress on one of the greatest frontiers of scientific innovation.

However, developments in AI and machine learning have not yet reached a level that raises alarm in the minds of politicians and policy makers. For now, all involved are in a “wait-and-see” phase of AI innovation.

Coding Fairness into Machine Learning Systems

Human beings, as intelligent moral agents, are able to make judgment calls and tell right from wrong. Machines are just machines. One of the ethical dilemmas in the application of artificial intelligence is how to build fairness into these systems and make them blind to our social prejudices.

One of the worst manifestations of this dilemma came in 2015, when Google's photo service labeled some black people as gorillas. Image-processing AI systems have also been shown to learn and even amplify gender bias, despite being built to be “blind” to it.

The fairness dilemma will become even more critical as governments move to implement artificial intelligence in the criminal justice system and as more industries operationalize the technology.
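As an illustration of what “building fairness in” can mean in practice, one common audit is to check demographic parity: whether a system's positive-prediction rate differs across groups. The sketch below is a minimal example using made-up loan-approval predictions and hypothetical group labels; a real audit would use a deployed model's actual outputs.

```python
# A minimal sketch of one common fairness audit: demographic parity.
# The predictions and group labels here are hypothetical, purely for illustration.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate between
    any two groups (0.0 means the rates are perfectly balanced)."""
    counts = {}  # group -> (total seen, positive predictions)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions (1 = approved) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 vs 0.25 -> gap of 0.5
```

A large gap does not by itself prove unfairness, but it is the kind of measurable signal that lets engineers detect, and then investigate, the prejudices a system may have learned from its training data.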

As more machine learning systems become less interpretable and morph into “black boxes”, further ethical questions will arise. Some people feel that technology simply shouldn't cross certain lines.
