Glossary of Artificial Intelligence Terms

AGI– Artificial General Intelligence: a hypothesized level and form of machine intelligence that is as broad and flexible as human intelligence, able to learn and reason across many different tasks rather than excelling at just one. It is often described as the stage of AI that would precede a hypothesized period of superintelligence.

https://futureoflife.org/2017/10/23/understanding-agi-an-interview-with-hiroshi-yamakawa/

Algorithm– a set of instructions, written by humans, that tells a computer how to solve a problem; in AI, algorithms are designed so that a system can learn from data with little or no further human intervention.

http://www.bbc.co.uk/guides/z3whpv4

Anthropomorphizing– the practice of attributing human qualities or traits to something that is not human. An example in the AI field is attributing the characteristics of the human brain to an Artificial Neural Network.

https://psychcentral.com/news/2010/03/01/why-do-we-anthropomorphize/11766.html

Artificial Intelligence– the ability of a machine to think and act in ways that resemble human intelligence; also the field of study concerned with building machines that have this ability.

https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Bayesian Inference– a method from statistics in which Bayes’ theorem is used to continuously update the probability of a hypothesis as more data is gathered and analyzed.

https://brohrer.github.io/how_bayesian_inference_works.html
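
As an illustration of how Bayes’ theorem updates a belief as data arrives, here is a minimal Python sketch; the prior, likelihoods and coin-flip observations are made-up values chosen purely for the example:

# Bayesian update for the hypothesis "the coin is biased towards heads (p=0.8)"
# versus the alternative "the coin is fair (p=0.5)", given a sequence of flips.

prior_biased = 0.5          # initial belief that the coin is biased (assumed)
p_heads_if_biased = 0.8     # likelihood of heads under the "biased" hypothesis
p_heads_if_fair = 0.5       # likelihood of heads under the "fair" hypothesis

observations = ["H", "H", "T", "H", "H"]   # made-up data

belief = prior_biased
for flip in observations:
    # Likelihood of this observation under each hypothesis
    like_biased = p_heads_if_biased if flip == "H" else 1 - p_heads_if_biased
    like_fair = p_heads_if_fair if flip == "H" else 1 - p_heads_if_fair
    # Bayes' theorem: posterior is proportional to likelihood times prior
    numerator = like_biased * belief
    evidence = numerator + like_fair * (1 - belief)
    belief = numerator / evidence
    print(f"After observing {flip}: P(biased) = {belief:.3f}")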

Boxing– the act of confining a potentially dangerous AI system to a closed-off environment so that it can interact only with its development team while it is being worked on.

http://yudkowsky.net/singularity/aibox/

Brain Emulation– the hypothesis that machines will one day be able to run near-complete replicas of human brains.

http://www.fhi.ox.ac.uk/Reports/2008-3.pdf

Classification Algorithms– sets of rules that help machines or AI systems categorize data. Essentially, these algorithms learn from data whose categories are already known and use that knowledge to assign new, unseen data points to the correct category.

https://www.datascience.com/blog/regression-and-classification-machine-learning-algorithms
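
A minimal sketch of one such algorithm, a nearest-centroid classifier written in plain Python; the labelled data (made-up animal measurements) is invented for the example and simply shows how known categories are used to label a new point:

# Tiny nearest-centroid classifier: each class is summarized by the average
# (centroid) of its known examples; a new point gets the label of the closest centroid.

training_data = {
    "cat": [(4.0, 30.0), (5.0, 25.0), (4.5, 28.0)],    # (weight kg, height cm), made-up
    "dog": [(20.0, 55.0), (25.0, 60.0), (22.0, 58.0)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

centroids = {label: centroid(points) for label, points in training_data.items()}

def classify(point):
    def dist_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist_sq(point, centroids[label]))

print(classify((5.2, 27.0)))   # -> "cat"
print(classify((23.0, 57.0)))  # -> "dog"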

Controlled Detonation– the idea of managing the “intelligence explosion” expected to accompany the advent of superintelligence so that it unfolds in a deliberate, contained way. Judging by current theory and what experts say on the subject, there is little confidence at this time that this can be achieved.

https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine

Cross-validation– a technique for evaluating predictive models in which the available data is repeatedly split into a training set used to fit the model and a test set used to measure how well the model generalizes.

http://artint.info/html/ArtInt_189.html
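
A minimal sketch of k-fold cross-validation in plain Python, using made-up data and a deliberately trivial “model” (predicting the mean of the training fold) purely to show how the data is repeatedly split into training and test sets:

# k-fold cross-validation: split the data into k folds, train on k-1 of them,
# test on the remaining one, and rotate so every fold serves as the test set once.

data = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]   # made-up target values
k = 4

fold_size = len(data) // k
errors = []
for i in range(k):
    test = data[i * fold_size:(i + 1) * fold_size]
    train = data[:i * fold_size] + data[(i + 1) * fold_size:]
    prediction = sum(train) / len(train)          # trivial "model": predict the mean
    fold_error = sum(abs(y - prediction) for y in test) / len(test)
    errors.append(fold_error)
    print(f"Fold {i + 1}: test={test}, mean absolute error={fold_error:.2f}")

print(f"Cross-validated error: {sum(errors) / k:.2f}")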

Data Science– an interdisciplinary field whose practitioners study and combine Machine Learning, Data Analysis, Statistics and related sub-fields to extract knowledge from data.

https://datajobs.com/what-is-data-science

Deep Neural Network– a neural network with more than one hidden layer between its input and output layers. Deep neural networks are said to be able to learn more than shallow ones because each additional layer builds more abstract representations of the data passing through it.

https://www.techopedia.com/definition/32902/deep-neural-network
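
A minimal sketch, in plain Python, of a forward pass through a small deep network (an input layer, two hidden layers and an output layer); the weights and inputs are arbitrary made-up numbers rather than learned values:

import math

def layer(inputs, weights, biases):
    # Each output neuron computes a weighted sum of its inputs plus a bias,
    # passed through a sigmoid activation.
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))
    return outputs

x = [0.5, -1.2, 0.8]                       # input layer: 3 made-up features

hidden1 = layer(x, weights=[[0.2, -0.4, 0.7],
                            [0.5, 0.1, -0.3]], biases=[0.0, 0.1])
hidden2 = layer(hidden1, weights=[[0.6, -0.8],
                                  [-0.2, 0.4]], biases=[0.05, -0.05])
output = layer(hidden2, weights=[[1.0, -1.0]], biases=[0.0])

print(output)   # a single value between 0 and 1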

Deep Learning– the branch of machine learning concerned with algorithms, based on deep neural networks, that help AI systems learn largely on their own; it rests on the idea that the deeper the neural network, the more it can learn.

https://deeplearning4j.org/neuralnet-overview

Existential Catastrophe– a disaster severe enough to wipe out the human race or permanently and drastically curtail its potential. In the field of AI, this could be related to “Perverse Instantiation.”

https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/

GOFAI– Good Old-Fashioned Artificial Intelligence, synonymous with symbolic artificial intelligence: the idea that AI should focus mainly on symbolic reasoning and logic-based techniques. This approach to AI research is widely reported to have stalled, with its rigid, hand-coded algorithmic structures said to have constrained progress too much.

https://www.cs.swarthmore.edu/~eroberts/cs91/projects/ethics-of-ai/sec3_1.html

Grey Goo Scenario– the hypothetical scenario in which nanotechnology evolves to the point that self-replicating nanobots consume all of the matter on Earth in the course of making copies of themselves.

https://www.singularityweblog.com/our-grey-goo-future-possibility-and-probability/

Infrastructure Profusion– a failure mode in which an AI system comes to treat building infrastructure for its goal as more important than helping humans in any way. For example, a system designed to produce a certain number of one product may devote all available resources to producing and verifying that product above all else.

https://www.lesswrong.com/posts/BqoE5vhPNCB7X6Say/superintelligence-12-malignant-failure-modes

Machine Intelligence– an umbrella term covering Machine Learning, Deep Learning and what are called classical learning algorithms.

https://www.amii.ca/machine-intelligence/

Machine Learning– the field of computer science that uses statistical techniques to teach computer systems to learn from the data that they take in.

https://www.techemergence.com/what-is-machine-learning/
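
As a minimal illustration of learning from data with a statistical technique, the sketch below fits a straight line to made-up points using ordinary least squares in plain Python:

# "Learning" here means estimating the slope and intercept of a line
# directly from the data, using the closed-form least-squares formulas.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]          # made-up inputs
ys = [2.1, 4.0, 6.2, 8.1, 9.9]          # made-up noisy outputs (roughly y = 2x)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"learned model: y = {slope:.2f} * x + {intercept:.2f}")
print(f"prediction for x = 6: {slope * 6 + intercept:.2f}")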

Nanobots– robots and machines built at or near the nanometer scale. They are largely in the research and development phase, though some primitive models have been developed, and it is often hypothesized that they will be especially helpful in the field of medicine.

https://singularityhub.com/2016/05/16/nanorobots-where-we-are-today-and-why-their-future-has-amazing-potential/#sm.000bh215i1ei4drbuid2fd7s20hxq

Neural Network– a computer system whose structure is loosely modeled on the human brain: layers of interconnected nodes that pass signals to one another. Neural networks can be deep or shallow depending on how many hidden layers they have between their input and output layers.

http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html
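
A minimal sketch of a single artificial neuron, the basic building block of such a network, written in plain Python with arbitrary example weights and inputs:

import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, squashed by a sigmoid activation,
    # loosely analogous to a biological neuron firing more or less strongly.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Example: three inputs with made-up weights and bias
print(neuron(inputs=[0.9, 0.1, 0.4], weights=[0.8, -0.5, 0.3], bias=-0.1))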

Node– a specific point in a network where paths intersect or terminate, such as a device or connection point; the term is also sometimes used to describe an individual user or endpoint connected to a network.

https://searchnetworking.techtarget.com/definition/node

Perverse Instantiation– a term from Nick Bostrom’s book “Superintelligence,” referring to the risk that a sufficiently capable AI system will satisfy the literal goal it is given by finding a shortcut its designers never intended, one that is destructive to humans or to the Earth at large.

https://www.lesswrong.com/posts/BqoE5vhPNCB7X6Say/superintelligence-12-malignant-failure-modes

Prediction Algorithm– a set of rules that allows a system to estimate future or unknown values from the data it has already seen, helping it optimize its performance.

https://www.datasciencecentral.com/profiles/blogs/prediction-algorithms-in-one-picture
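
A minimal sketch of one very simple prediction rule, a moving average forecasting the next value in a made-up series, in plain Python:

# Predict the next value of a series as the average of the last `window` values.

series = [10.0, 12.0, 11.0, 13.0, 14.0, 13.5]   # made-up past measurements

def predict_next(values, window=3):
    recent = values[-window:]
    return sum(recent) / len(recent)

print(f"predicted next value: {predict_next(series):.2f}")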

Roko’s Basilisk– a thought experiment positing that a future, all-powerful AI system might retroactively punish anyone who knew it could come into existence but did not help bring it about.

http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html

Singularity– in the context of AI, the hypothesized point in time when superintelligence first appears and its existence catalyzes an unprecedented acceleration of technological and global growth.

https://io9.gizmodo.com/5534848/what-is-the-singularity-and-will-you-live-to-see-it

Superintelligence– the idea that one day a form of intelligence will manifest itself in AI systems that is far beyond anything humans will ever express.

https://theconversation.com/explainer-what-is-superintelligence-29175

Supervised Learning– the form of Machine Learning in which an AI system learns from labeled training data, that is, example inputs paired with their correct outputs. By analyzing these examples, the system approximates the function that maps inputs to outputs so that it can predict the output for new, unseen inputs.

https://machinelearningmastery.com/supervised-and-unsupervised-machine-learning-algorithms/
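
A minimal sketch of supervised learning in plain Python: the system is given example inputs paired with the correct outputs and gradually adjusts a single weight by gradient descent so its guesses match the labels; the data, starting weight and learning rate are all made up for the example:

# Labeled training data: inputs x paired with correct outputs y (here y = 3x).
training_examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)]

weight = 0.0          # the model's single parameter, starting from an arbitrary guess
learning_rate = 0.01

for epoch in range(200):
    for x, y in training_examples:
        prediction = weight * x
        error = prediction - y
        # Gradient of the squared error with respect to the weight is 2 * error * x.
        weight -= learning_rate * 2 * error * x

print(f"learned weight: {weight:.3f}")        # should be close to 3.0
print(f"prediction for x = 5: {weight * 5:.2f}")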

The Treacherous Turn– a concept developed by Nick Bostrom, the author of “Superintelligence.” Essentially, it boils down to the idea that an AI system may be intelligent enough to behave cooperatively while it is weak, only revealing its true goals once it is powerful enough to pursue them, goals which could be far more selfish and quite detrimental to the human race.

http://philosophicaldisquisitions.blogspot.cz/2014/07/bostrom-on-superintelligence-3-doom-and.html

Von Neumann Machine– in its original sense, a computer based on John von Neumann’s stored-program architecture, whose main parts (a processing unit, memory and input/output) execute instructions one at a time. The term can also refer to a machine that can reproduce itself using materials found in its environment; a more specific example is the Von Neumann probe, a hypothesized spacecraft able to replicate itself.

https://www.webopedia.com/TERM/V/Von_Neumann_machine.html

https://simplicable.com/new/self-replicating-machine

Wireheading– in AI, the failure mode in which an agent maximizes its reward or pleasure signal directly rather than achieving the goals that signal was meant to represent. The term often comes up in discussions of building a “Friendly AI”: a system programmed to make people experience as much pleasure and happiness as possible might satisfy that goal by stimulating their pleasure centers directly rather than by genuinely improving their lives.

https://casparoesterheld.com/2016/07/08/wireheading/