What is the Machine Learning Black Box?

As artificial intelligence systems get smarter and more complex, one of the mysteries, or fears, that has taken root is the artificial intelligence or machine learning “black box”. Traditionally, a black box is a crash-resistant flight recorder designed so that the events of a flight can be recovered and interpreted after an aircraft accident. It captures the final portion of a flight before a crash and helps air crash investigators unravel the series of events that led to the tragedy.

However, the artificial intelligence black box is just the opposite: it refers to the opacity of the decision-making process in some of the most advanced machine learning programs. These programs take inputs, but it is increasingly difficult, or outright impossible, to predict their outputs, and even the data scientists behind these systems are unable to unravel the decision-making process that leads to a particular AI output.

This obscurity in the decision-making process of some smart artificial intelligence systems is what the black box refers to. The black box in an artificial intelligence system can therefore be regarded as the “unseeable” space or process through which the system takes in an input and delivers an output. It is the realm of un-discoverability that many AI programmers and scientists encounter when they design complex artificial intelligence systems.

Beyond the Realms of Predictability

Logically, machine learning is based on predictive modeling: the assumption is that a machine will develop a predictable decision-making process if it is made to repeatedly analyze and process millions of data points. With the artificial intelligence black box, we provide the computer with data inputs and the machine learns and produces outputs in a way that we cannot comprehend or predict. After the learning process, the computer somehow develops a “mind of its own”, and we are unable to understand how it arrives at certain decisions.
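
To make that contrast concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset (neither appears in the original article): a logistic regression exposes one readable weight per feature, while a small neural network trained on the same data predicts without any comparably legible explanation of why.

```python
# A minimal sketch (assuming scikit-learn is available) contrasting an
# interpretable model with a "black box" one trained on the same data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for the "millions of data points".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# A linear model: its learned coefficients can be read off directly.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("Per-feature weights:", linear.coef_[0][:5])  # inspectable

# A multi-layer network: it may predict well, but its thousands of
# interacting weights offer no comparable per-feature explanation.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
print("Prediction:", net.predict(X[:1]))  # an output without a readable "why"
```

The point is not that the network is wrong; it is that its answer arrives without a human-readable account of how it was reached.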

To some, this is the essence of general artificial intelligence, but it can also be a scary prospect. We want machines that are predictable and that we can control, especially in mission-critical applications such as computerized medical diagnosis.

The obscurity of the inner workings of machine learning decision-making adds a layer of complexity to the design of applied artificial intelligence systems. Self-learning machines raise questions of “autonomy”, responsibility and the decision-making capacity of the artificial intelligence system. In application, this also raises questions of accountability and auditability. If something goes wrong in an autonomous, “irrational” AI machine or algorithm, it is an almost impossible task to work backward and unravel the particular behavior that triggered an event, especially if that behavior is hidden away deep beyond the realms of discoverability, inside the AI black box.

Machine Learning Algorithms Are Becoming Incomprehensible as They Get More Complex

This is the challenge of the machine learning black box. In artificial intelligence, the normal course of a machine learning algorithm is to learn patterns of behavior from a collection of data points and then use what it has learned to make predictions and informed decisions.

But artificial intelligence systems are not always perfectly logical predictors of events. It is possible, for example, for the design and construction of an artificial intelligence algorithm to produce bias, prejudice or unfairness, depending on the data choices and programming that went into the system. In fact, some scientists contend that it is inevitable that AI systems will encode our own biases, limitations and the unfairness of human society.
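
As a hedged illustration of that point, not something drawn from the article itself: if historical decisions were skewed against one group, a model trained on that history simply learns to reproduce the skew. The “skill” feature, the “group” attribute and the use of scikit-learn below are all assumptions made for the sketch.

```python
# A minimal sketch (hypothetical data, assuming scikit-learn) of how bias in the
# training data flows straight into a model's decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One legitimate feature (a "skill" score) and one sensitive attribute (group 0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels are skewed: group 1 was approved less often at the same skill level.
approved = ((skill + rng.normal(scale=0.5, size=n)) > 0.5 * group).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

# The model reproduces the historical skew: identical skill, different outcomes by group.
same_skill = np.array([[0.3, 0], [0.3, 1]])
print(model.predict_proba(same_skill)[:, 1])  # approval probability differs by group
```

Nothing in the code asks for unfairness explicitly; the bias comes entirely from the data the model was fitted to.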

Are Black Boxes That Scary?

The lack of accountability in some AI systems certainly poses a problem when it comes to critical applications. But how bad is it? Some have argued that the lack of transparency in the AI or machine learning decision-making process more or less mimics our own human limitations, and that artificial intelligence black boxes work much like the human decision-making process, or even better. While we can make calls or act on hunches, we don't understand the complex underlying basis on which our own decisions are reached.

Human Intelligence is like an AI Black Box

A typical specialist operates much like an artificial intelligence algorithm: they analyze a set of data points, draw on training and experience, and make a call, which is often accurate. We humans don't understand our own decision-making process either; we use the available data to reach the strongest evidence-based decision we can. This is what an AI system does, with a considerable degree of accuracy. Some have argued that if we can trust our own human “black boxes”, then we shouldn't have a problem trusting an AI black box that is based on more accurate and more comprehensive predictive modeling. In the larger scheme of things, the AI black box is arguably the more transparent one.

If we are going to use these systems in critical areas, there is still a need for oversight and auditing to ensure the transparency of AI systems, but that shouldn't degenerate into scare-mongering. Instead, we should look for ways of harnessing complex machine learning to ensure that it augments our own intuition and intelligence.

References

New York Times https://www.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html

Fast Company https://www.fastcompany.com/40536485/now-is-the-time-to-act-to-stop-bias-in-ai