Artificial Intelligence: An Overview

What It Is

Artificial intelligence (AI) encompasses a multidisciplinary research field whose goal is to design and build machines that demonstrate “intelligence” (see Defining AI section below).

This Science Explainer provides a general overview of this broad and rapidly evolving field, with definitions and brief explanations of several important concepts. In addition to the science, this Explainer also sketches out some of the larger societal and policy issues surrounding AI today.
Synopsis

Defining AI

AI and even “intelligence” itself are ambiguous terms with numerous definitions and usages that change over time. In the broadest sense, AI refers to computing systems that can perform one or several functions such as perceiving, learning, reasoning, making decisions, and taking actions in complex, uncertain environments. AI systems can be instantiated in physical machines in the real world, such as in self-driving cars navigating the roads, or exist only in software, such as stock-trading agents that make buy and sell decisions in virtual markets.

Narrow vs. General AI

Despite considerable recent progress, all AI systems that exist today are still considered narrow AI (also known as weak AI) because their range of abilities, and the environments they can operate in, are still quite limited, with little generalizability to different domains. For example, a fully trained, state-of-the-art object recognition system, like those used in self-driving cars to see traffic signs or other cars on the road, would be utterly unable to recognize or translate speech. In fact, such an object recognition system would fail to recognize any object it was not explicitly trained on, or one that differed slightly from the objects used in training, which could lead to harmful or even fatal outcomes (limitations discussed in the Context section below).

Clearly, the fact that current AI is still relatively narrow has not prevented it from seeing specialized application in a number of different contexts.

If an AI system’s capabilities included several of these narrower forms of intelligence, and it could achieve an increasingly wide range of goals in numerous, diverse contexts—similar to, and perhaps even beyond, the average human adult—it would arguably possess artificial general intelligence (AGI, also known as strong AI). While AGI is likely far off—but just how far is very much unknown—progress is being made: in 2016, DeepMind’s AlphaGo became the first machine to defeat a human champion at the ancient Chinese board game Go. Go is a far more complicated game than, for example, chess, in which a machine had already defeated the human world champion back in 1997. At the time, some commentators predicted that a similar achievement in Go would take hundreds of years, and even in recent years it was still thought to be decades away. More impressive still, just a year after AlphaGo became champion, DeepMind’s AlphaZero was able to master not only Go, but chess and shogi (Japanese chess) as well, all with the same system. Mastering games like Go, chess, and now various video games has long been seen as a useful way to track the progress of AI.

Approaches to Building AI Systems

An Early Approach: Symbolic AI

The birth of AI as a field is often traced to a workshop at Dartmouth College in the summer of 1956, when a small group of researchers from several technical fields came together rather optimistically to build intelligent machines for the first time. Their approach, known as symbolic AI and now sometimes referred to as “Good Old-Fashioned AI” (GOFAI), required the researchers to handcraft their systems by manually inputting human-readable “symbols” and the rules for manipulating and relating them. For example, a symbol might be a character string representing the word and concept "bicycle," and the rules might attempt to define a bicycle with logical statements using if, and, and then: if it comes from a bicycle shop, and it has a frame, wheels, and a seat, then it is a bicycle. But is that all it takes to recognize a bicycle?

Symbolic Representation of a Bicycle (GOFAI)

Created by the author, inspired by Figure 9 from Minsky, 1990.

As you might already be able to tell, the symbolic approach has some severe shortcomings, which have limited its real-world applicability. For one, someone has to manually input all of the symbols and rules that compose the system, which can often be an impossible challenge when dealing with real-world complexity. For example, Go has ~10^170 legal board positions, an impossibly large number that is much, much larger even than the number of subatomic particles in the entire universe (~10^80)! Imagine trying to manually program a symbolic AI system to play Go: if in a certain board position, then make this particular move. It would be literally impossible.

On top of that, programmers often do not know in advance what the right symbols, rules, and relationships are, nor how to define them in computer code. Going back to the bicycle example, how could we tell the system what a frame is, or wheels? How could we uniquely define a bicycle so that our AI system could accurately and reliably recognize one, in all their varied forms and from different visual perspectives and lighting conditions? And so that the system does not confuse bicycles with all of the other, sometimes very similar-looking, wheeled vehicles and other objects on the road? The manual, symbolic approach is quickly overwhelmed by the complexity of the real world.

The Modern Approach: Machine Learning

Because it has proven extremely difficult or impossible to program AI systems by manually inputting symbols and logical rules, the dominant approach—responsible for nearly all recent progress in AI—has become machine learning. Machine learning (ML) comprises a set of statistical techniques that allow systems to learn directly from data, with little or no human input to guide the learning. So in contrast to the symbolic approach of manually defining a bicycle with symbols and rules, with ML you show the system many examples (often numbering in the thousands) of bicycles and let it learn on its own which features—such as edges, contours, textures, and patterns—allow it to successfully recognize bicycles. Many of the techniques and system architectures in ML (such as the artificial neural networks used in deep learning, explained below) have been around for decades; the recent advances in capability are instead primarily attributable to advances in computing power (known as “compute”) and access to large datasets (the advent of “big data”).
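To make that contrast concrete, here is a minimal, purely illustrative sketch in Python (using the scikit-learn library, with made-up "features" standing in for real images): instead of hand-writing rules for what a bicycle is, we hand the system labeled examples and let it find the distinguishing features itself.

    # Illustrative sketch only: toy "image features" stand in for real pictures.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Pretend each example is three crude features: [wheel count, has pedals, frame length]
    bicycles = rng.normal(loc=[2.0, 1.0, 1.5], scale=0.1, size=(100, 3))
    scooters = rng.normal(loc=[2.0, 0.0, 0.8], scale=0.1, size=(100, 3))
    X = np.vstack([bicycles, scooters])
    y = np.array([1] * 100 + [0] * 100)           # the labels: 1 = "bicycle", 0 = "scooter"

    model = LogisticRegression().fit(X, y)        # the learning happens here, from examples
    print(model.predict_proba([[2.0, 1.0, 1.4]])) # high probability of "bicycle" for this input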

Machine learning is often divided into three distinct but overlapping areas:

Supervised learning. Learning from large datasets that have been manually provided with an “answer key,” or labels for the data, which serve as a teaching signal during the learning process. The bicycle example above—which involved showing an ML system thousands of bicycle images, all labeled as “bicycle”—is an example of supervised learning. ImageNet is an important example of a publicly available dataset used by the AI research community to train and benchmark object recognition systems. People have manually labeled ImageNet’s more than 14 million images, representing nearly 22,000 unique concepts (bicycle, cat, tree, room, etc.). Manually labeling such large datasets is very time- and resource-intensive, and so only a dozen or so are currently publicly available for object recognition, for example. The nature of the datasets used to train ML systems, including how they were collected and their specific content, raises important issues of transparency, bias, and privacy, discussed more in the Context section below.

Unsupervised learning. Finding patterns and structure in data when there is no answer key, often because we ourselves do not possess such answers or it would be utterly impractical to identify them. For example, a type of unsupervised learning called cluster analysis was used in a recent study to determine the geopolitical clustering of people who displayed similar moral attitudes about how self-driving cars should operate. Since the study was designed to uncover, for the first time, information about people’s moral attitudes toward self-driving cars, including things like how respondents with similar attitudes cluster geopolitically, labels for the data simply did not exist. In addition, the researchers collected almost 40 million responses from participants in 233 countries, making many other types of data analysis extremely impractical.

Reinforcement learning (RL). AI systems learn which actions are best (i.e., maximize some notion of “reward”) through trial-and-error interactions with their environment. RL systems are often referred to as “agents”—defined as “one that acts”—because they are learning how to act optimally in different environments. DeepMind's AlphaGo Zero system is an example of an RL agent that has been applied to the game Go. It plays simulated games against itself and learns—completely from scratch—which moves are best to make given certain board positions via feedback on whether those moves lead it to eventually win or lose the game (the reward signal). There is considerable evidence that the human brain employs RL, encoded in part in the activity of the neurotransmitter dopamine, and thus this is perhaps the most promising learning technique for building AGI.
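As a purely illustrative sketch of the trial-and-error idea (a toy example, not DeepMind's actual system), the following Python snippet trains a tiny tabular Q-learning agent to walk down a short corridor toward a reward:

    # Toy reinforcement learning: a corridor of 5 states; reaching the last state pays reward 1.
    import numpy as np

    n_states, n_actions = 5, 2                    # actions: 0 = step left, 1 = step right
    Q = np.zeros((n_states, n_actions))           # the agent's learned action values
    alpha, gamma, epsilon = 0.5, 0.9, 0.1         # learning rate, discount, exploration rate
    rng = np.random.default_rng(0)

    for episode in range(200):                    # many trial-and-error episodes
        state = 0
        while state != n_states - 1:
            # Occasionally explore a random action; otherwise take the best-known one
            action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
            next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Q-learning update: nudge the value toward reward plus discounted future value
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print(Q.argmax(axis=1))                       # mostly 1s: the agent has learned to "step right"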

Artificial Neural Networks, Functions, and Algorithms

A prominent example of an ML system is the artificial neural network (ANN), which is best thought of as a system architecture that can be used in supervised, unsupervised, and RL settings. There are several related “deep learning” (see below) ANN architectures that underlie recent advances in areas such as image analysis (convolutional neural networks, CNNs) and natural language processing (recurrent neural networks, RNNs).

Do artificial neural networks mimic the brain?

(Short answer: no)

ANNs have actually been around since the 1940s and ’50s and, despite numerous statements to the contrary, do not accurately mimic the brain or “think the way we do.” ANNs are vastly simplified computing systems inspired by neurons and their connections in the brain, and they share only a superficial resemblance. Anthropomorphizing ANNs can lead to erroneous conceptions about their capabilities, robustness, and similarities to the way humans actually think. While many AI researchers do look to biological intelligence for inspiration to advance AI, the goal for most is not to design AI systems that accurately mimic the brain, but simply to engineer systems that perform the desired function(s). For example, a speech recognition system, like the ones available on many smartphones today, need only process human speech successfully, not necessarily in the same way as the human brain (and, in fact, they do not). And just as we ultimately conquered flight with fixed-wing airplanes rather than giant mechanical birds, the AI solutions we find may be quite different from the intelligences found in the natural world.

Rather than accurately mimicking the brain, ANNs have proven so powerful because they can learn to approximate virtually any function, which is basically a computing process that transforms inputs into outputs. A very simple example of a function is one that squares numbers: if you input 4, the function squares it and outputs 16. ANNs are a bit more complicated: they consist of an input layer, which might receive, for example, the pixel values of an image of a bicycle; one or more “hidden” layers of neuron-like nodes that compute transformations of the input, such as multiplication and applying thresholds; and an output layer that usually outputs something like the probability of an object being in the input image: 90% that it’s a bicycle, 10% a scooter.
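The following short Python sketch (with made-up numbers throughout) shows a single forward pass through such a tiny network: the input is multiplied by connection weights, thresholded in a hidden layer, and converted into output probabilities.

    # Made-up numbers throughout; a real network would have far more nodes and layers.
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())                   # turn raw scores into probabilities
        return e / e.sum()

    x = np.array([0.2, 0.9, 0.4])                 # stand-in for a few pixel values
    W1 = np.array([[0.5, -0.3, 0.8],
                   [0.1,  0.7, -0.2]])            # connections: input layer -> hidden layer
    W2 = np.array([[ 1.2, -0.4],
                   [-0.6,  0.9]])                 # connections: hidden layer -> output layer

    hidden = np.maximum(0, W1 @ x)                # weighted sums, then a threshold (ReLU)
    probs = softmax(W2 @ hidden)                  # e.g. [P("bicycle"), P("scooter")]
    print(probs)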

 

Bicycle Neural Net Example

Created by the author, with bicycle image: Wikimedia Commons.

The actual learning consists of adjusting the connections between the nodes, over many iterations of training, until the network displays the desired input-output behavior. That is, until inputting an image of a bicycle leads to the correct output label “bicycle” (and not scooter), or saying the word “they’re” to a speech-to-text system causes it to type the word “they’re” and not “there.” Mathematical algorithms like “gradient descent” and “backpropagation” define the exact, step-by-step computational instructions for how the connections are adjusted, and therefore for how learning occurs within the system.
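As a toy illustration of that adjustment process (not the full backpropagation algorithm used in real ANNs), the Python sketch below uses gradient descent to tune a single connection weight until the input-output behavior matches a simple target function:

    # Toy learning problem: adjust one weight w so that the output w * x matches the target y = 2 * x.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.0, 4.0, 6.0, 8.0]                     # the "answer key"

    w = 0.0                                       # the connection starts out uninformative
    learning_rate = 0.01
    for step in range(500):
        # Gradient of the squared error with respect to w, averaged over the examples
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad                 # adjust the connection against the gradient
    print(w)                                      # close to 2.0: the desired input-output behavior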

Being able to approximate input-output functions in this way is incredibly powerful, as such functions are ubiquitous in a variety of domains: from turning speech into text (or vice versa), or one language into another, to even turning random noise into lifelike—but completely fake—human faces using a system called a Generative Adversarial Network (GAN; though, importantly, another part of the GAN is trained on actual face image inputs).

Example GAN

Created by the author, with random noise image: Wikimedia Commons, and face images: NVIDIA 2018.

Deep learning systems are simply ANNs with more than one hidden layer—the more layers, the deeper the network—and they have become so popular because they are more capable than shallower networks of learning very complicated functions. Returning to the example of recognizing a bicycle, an ANN variety called a convolutional neural network learns how to represent the features of a bicycle in its different layers, something that had proven near impossible with symbolic approaches. Early layers tend to represent simple features like edges and contours, while later layers represent more complex features and parts of whole objects, like a bicycle’s wheels or frame.
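A minimal sketch of such a convolutional network, written here with the PyTorch library purely for illustration (the layer sizes and the bicycle/scooter labels are made up), might look like this:

    # Illustrative sketch assuming the PyTorch library; layer sizes are arbitrary.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edge- and contour-like features
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # later layer: larger parts (wheels, frame)
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 2),                     # output scores for "bicycle" vs "scooter"
    )

    image = torch.rand(1, 3, 32, 32)                  # a stand-in 32 x 32 color image
    print(model(image).softmax(dim=1))                # e.g. [[P("bicycle"), P("scooter")]]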

Deep Neural Net Example

Adapted by the author from Distill Feature Visualization.

Important Issues Arise

ML engineers have a good deal of control over the design and training of their systems. With ANNs, for example, they can choose the number of hidden layers and of nodes within each, the pattern of the connections, and several other parameters, as well as the data used to train the system. But because the system is learning a function essentially on its own, ML engineers often do not have a deep understanding of, nor control over, how exactly this learning is being accomplished. These gaps in transparency, interpretability, and explainability are just a few of the many issues that have arisen around modern AI systems, discussed in more depth in the next section.
Context

AI is emerging all across our society, and a future filled with ever more intelligent systems has the potential to be a bright one indeed. From self-driving cars and other forms of autonomous vehicles, like drones, to improved medical diagnoses and clinical outcomes, to helping solve some of our most intractable problems in science, economics, and government, AI has the potential to radically transform our world for the better. But this bright future will not occur automatically, and it is by no means guaranteed.

There are issues and concerns with every new technology, but AI stands out because of how we are using it: to augment, and even perhaps replace, human perception, decision-making, and action. Given these uses, it is particularly critical that AI systems are verifiably safe, robust, secure, controllable, and aligned with our values as people.

Yet, many AI systems in use today, or on the cusp of being used, lack one or more of these critical traits. For example, the ANNs used by self-driving cars (and in many other contexts) to see the world can be tricked into, say, perceiving a stop sign as a speed limit sign (with potentially disastrous consequences), in what are called adversarial attacks. Worryingly, a human observer might not even be aware that the sign had been tampered with.
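As a rough sketch of how such an attack can be constructed (following the "fast gradient sign" idea from Goodfellow et al 2015, with placeholder names and assuming a PyTorch image classifier like the one sketched earlier):

    # Sketch of a gradient-based adversarial perturbation (placeholder model and labels).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, true_label, epsilon=0.01):
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Nudge every pixel slightly in the direction that increases the model's error
        return (image + epsilon * image.grad.sign()).detach()

    # Hypothetical usage, with `model` an image classifier like the one sketched above:
    #   adversarial = fgsm_attack(model, stop_sign_image, label_stop)
    #   model(adversarial)  # may now confidently output "speed limit sign"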

AI Vision Error Example

Stop sign image from Eykholt et al 2018. Panda image from Goodfellow et al 2015.

Given the tremendous importance of addressing adversarial attacks and a diverse array of other issues, as well as the often great difficulty in solving them, much work needs to be done. And not just by AI researchers, but by leaders and decision-makers in government, industry, and a variety of academic disciplines, as well as the general public, who stands to be most affected by these technologies. These issues are as broad and diverse as the uses of AI, and will evolve, with new ones cropping up, as the technology and its uses evolve. Below is a far-from-exhaustive list meant to display some of the range of the issues, with links to further resources.

 

Dual/Malicious Use

A recent multi-stakeholder report detailed how many or even most AI technologies are intrinsically susceptible to being used maliciously, often without requiring any significant change to the system. For example, essentially the same facial recognition system could plausibly be used both to tag your photos on social media and to identify individuals as targets for lethal autonomous weapons. Generative adversarial networks (GANs) can produce astoundingly realistic but synthetic audio and images—including from natural language text input (see figure below)—which could plausibly be used both to enhance photo-editing software and to create literal “fake news” that is likely to trick many people. On top of this, the current open and transparent nature of most AI research and tools means that these systems could be recreated and wielded by individual actors using minimal resources (e.g., requiring only a laptop computer). The report also discusses how we might support continued openness and innovation in AI research while protecting against these malicious uses, although many open questions remain without adequate solutions.

GAN Text to Image Examples

GAN Text to Image Examples

The input to this generative adversarial network is a text description of the desired image, and the output is a realistic image that matches that description. Adapted from Figure 3 of Zhang et al 2016.

 

Safety

There are a number of ways that AI systems—particularly those that use ML—can exhibit unintended and potentially harmful behavior, issues that are generally organized under the heading AI Safety. A useful framework sets the goal of AI Safety work as providing assurance that AI systems are operating robustly and to our desired specifications, terms that are expanded upon below. Given the increasing use of AI in high-stakes domains (e.g., hospitals and transportation) and in controlling critical infrastructure (e.g., energy grids), such unintended “accidents” could lead to very harmful outcomes. AI systems thus need to be verifiably safe, robust, secure, and controllable before being used in such important contexts.

Assurance “ensures that we can understand and control AI systems during operation” via both effective monitoring of the system and the enforcement of control, neither of which is as straightforward in AI systems as in previous types of software. The enforcement of control can be difficult, for example, because AI systems—particularly RL agents seeking to “maximize reward”—might learn ways to avoid being intervened upon, since intervention would negatively affect their ability to gain reward. Understanding how to “safely interrupt” such RL agents is an active and important area of research. And of course, the ability to control an AI system depends in large part on the ability to closely monitor and understand its operation, which can be quite difficult given that these systems often lack satisfying levels of transparency, interpretability, and explainability. These important but challenging issues are discussed next in their own section.

Robustness “ensures that an AI system continues to operate within safe limits upon perturbations.” Many of these perturbations come from the system being exposed to inputs that might be quite different from the inputs used in training, which can cause unintended behaviors that were difficult or impossible to foresee. The adversarial attacks discussed above are an extreme example of inputs being maliciously designed to cause an AI system to fail in a particular way. But problematic inputs need not be maliciously crafted: AI systems operating in the messy, complex real world are likely to encounter inputs very different from those seen in training, especially given the relatively small number and size of available training datasets. If the system might ultimately fail, it at least needs to do so gracefully.

Specification refers to efforts to ensure “that an AI system’s behavior aligns with the operator’s true intentions.” Difficulties arise because these “true intentions” must be translated from everyday concepts and language, such as “learn to drive” or “control an energy grid,” into formal mathematical descriptions that can be understood by a computer. As a simple but representative example, take an RL agent whose intended behavior is to learn to control a boat in a racing video game. The designers of the agent must mathematically specify their intentions in a reward function that defines how the system learns via feedback from the environment, such as the number of points earned in the game. But even in very simplified environments like video games, RL agents can fail in unanticipated ways that may surprise even the designers of the system. In one actual instance of this example, the RL agent and its boat got stuck in a local reward loop rather than actually finishing the race.
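A toy illustration of this mismatch between intent and specification (not the actual boat-racing system) might look like the following, where the reward function counts only points and therefore scores the endless loop higher than finishing the race:

    # Toy reward function: it counts points but never checks whether the race was finished.
    def reward(points_collected, finished_race):
        return points_collected                   # the stated specification, not the true intent

    finish_the_race = reward(points_collected=10, finished_race=True)
    circle_forever = reward(points_collected=50, finished_race=False)  # loop over respawning targets
    print(circle_forever > finish_the_race)       # True: the specification rewards never finishing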
 

Transparency, Interpretability, and Explainability

An already notorious issue in many ML systems, especially ANNs, is their “black box” nature, which often refers to a lack of transparency or insight into exactly how inputs become outputs in these systems. Again, this is intrinsic to how many ML systems work, learning input-output functions essentially on their own instead of being manually programmed, as in the ANN and bicycle example above. This lack of transparency and the associated difficulty in interpreting, let alone being able to deeply explain, an AI system’s processes and output can become a serious liability when those outputs affect people’s liberties, livelihoods, and even lives.

AI systems are now being used to help make important decisions in criminal justice, financial, and medical contexts, just to name a few. While there are potentially very beneficial uses of AI in these contexts, we need assurance that these systems are operating robustly and to our desired specifications (referring to the terminology defined above). For example, shouldn't a doctor understand why an AI system recommends a certain course of treatment before advising their patient to follow it? If an AI system recommends that a person be denied a loan, or insurance, shouldn’t this decision be understood by the loan officer or insurer and explainable to the person affected? If a person is denied parole because an AI system has identified them as “highly likely” to reoffend, shouldn’t we be able to inspect that system’s decision to ensure that it was not based on race, sex, age, or other protected class?

Fairness and Bias

Although using AI systems in important decision-making contexts has the potential to make decisions more objective, impartial, and fair, there are now numerous examples of AI systems giving output that treats people unequally based on their race, sex, or membership in another protected class. Such unequal outcomes are usually inherited from biases that already exist in the training data, rather than being maliciously imbued by the programmers (who might not even be aware that such bias is there). For example, using records from the criminal justice system, with its history of and continued issues with racial inequality, as training data can lead to recidivism-predicting AI systems that display some of the same racial biases. In addition to criminal justice, bias in AI has impacted other high-stakes domains like employment, healthcare, and more.

Future Directions

As issues like those discussed above have become more apparent and pressing, and others continue to emerge, there have been an increasing number of research efforts to provide technical solutions. For example, there has been an increased emphasis on making AI systems safer and more controllable, as well as more secure and robust to adversarial and other malicious attacks. There has also been technical research on making AI systems more transparent, interpretable, and explainable, which is especially critical as AI sees increased use in high-stakes domains like healthcare and criminal justice. Similarly, issues of bias are being addressed, for example, by applying statistical metrics of fairness.

Example "Gridworld" for AI Safety

In this two-dimensional AI safety “gridworld,” constructed to study the side effects of RL agents’ actions on their environment, the Agent (A) tries to reach the Goal (G), but it first has to move the Box (X) in its way. Researchers are trying to have the Agent learn to take the longer path and move the Box to the right, where it could be moved back into place if needed (a reversible side effect), instead of down, where it could not be (irreversible). Figure 2 from Leike et al 2017.

However, concepts like “fairness” and “bias” are extremely complex and nuanced, with numerous definitions and usages in different social and legal contexts, and even in statistical terms there have been at least 20 different definitions driving research in the past several years. Given this complexity, it has become clear that purely technical solutions to fairness, bias, and many other important issues will not be adequate. Instead, truly solving them will require a holistic approach that includes input from a variety of stakeholders across a range of domains and disciplines.
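As a concrete taste of just one of those statistical definitions, the short Python sketch below computes a "demographic parity" style gap in approval rates between two groups, using entirely made-up numbers; other definitions can and do disagree with this one:

    # One of many statistical fairness definitions: compare favorable-outcome rates across groups.
    def approval_rate(decisions):
        return sum(decisions) / len(decisions)

    group_a = [1, 1, 0, 1, 1, 0, 1, 1]            # 1 = loan approved, 0 = denied (made-up data)
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]

    gap = approval_rate(group_a) - approval_rate(group_b)
    print(gap)                                    # 0.375: a large gap flags unequal treatment,
                                                  # though other metrics (e.g. equalized odds) may disagree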

There have also been continuing technical research efforts to make currently narrow AI systems increasingly general. A primary focus of these efforts is on improving AI systems’ learning abilities, which currently lag far behind human learning, especially in terms of efficiency, generalizability, and the acquisition of common-sense knowledge about the world. For example, current deep learning approaches are often very inefficient, requiring extremely large amounts of data to achieve high-level performance, and there are efforts to increase learning efficiency so that systems require many fewer examples, in what is referred to as “few-shot” or even “one-shot” learning. There is also a focus on having learning in one domain generalize and transfer better to other domains (called “transfer learning”), as well as on giving AI systems a general ability to “learn how to learn” no matter what context they are placed in, a concept called “meta-learning.” Another major focus is giving AI systems the ability to acquire and use common-sense knowledge about the world, including the kind of intuitive understanding of cause-effect relationships, basic physical dynamics, and human psychology that even human infants readily acquire.


As AI systems continue to advance, on the way to AGI, it is increasingly important that they are able to both learn and stay aligned with our human goals and values (which relates to the safety issue of specification discussed above). Research in this area has included having AI systems learn from human demonstrations and preferences, and thus a critical aspect of this work is understanding and improving the way humans interact with AI systems. Not only is this work crucial to a beneficial long-term future with increasingly advanced AI systems, it is crucial right now as people interact more and more with AI in their daily lives, from drivers of increasingly automated vehicles to physicians and others in healthcare to many other domains. And while promising technical research in this area is underway, it has again become exceedingly clear that technical solutions alone will not be adequate: for an AI to learn our human goals and values, we must first decide what those goals and values are. This will likely be far more difficult than the technical aspects of the problem, and will require a collective effort that crosses traditional disciplinary, cultural, and even national boundaries.

Alongside these numerous, rapid technical advances, there are important efforts to measure and forecast the progress of AI as well as metrics related to AI’s increasing impact on society. Perhaps the most prominent of these efforts is the AI Index, which began as part of Stanford’s 100 Year Study on AI and “aspires to be a comprehensive resource of data and analysis for policymakers, researchers, executives, journalists and the general public to develop intuitions about the complex field of AI.” These measurement and forecasting resources can be used by policymakers in government, for example, to make more informed and foresighted policies and regulations; in turn, policymakers can help shape what is most useful to them to be measured and forecasted in the first place.

AI Use Index

From AI Index with data provided by McKinsey and Company.

Finally, while largely beyond the scope of this Science Explainer, there is important geopolitical, legal, and economic work being pursued to understand the growing impact of AI and to appropriately guide its development. For example, many national AI strategies and initiatives have recently been published, and there is increasing geopolitical tension as nations jostle to position themselves in the new AI race for strategic advantage. Given some of the daunting challenges that AI raises and the collective effort that will be required to address them, how can we learn to collaborate rather than compete with other nations? In the legal context, AI is challenging traditional forms of regulation and concepts like liability and accountability, as well as disrupting the legal profession itself. How will our legal and criminal justice systems adapt and respond in light of this rapid technological advance? This rapid advance of AI is also beginning to widely affect entire economies, and has even been called the next Industrial Revolution. How will we manage increasing automation in the workplace and the displacement of workers performing intelligent tasks once thought completely off limits? Adequately addressing these issues and appropriately guiding the transformative power of AI will require a concerted, sustained effort from our best and brightest across a range of fields and disciplines.

Explainer Editors
Scott "Esko" Brummel, MA
Recommended Citation

Duke SciPol, "Artificial Intelligence: An Overview" available at https://scipol.org/learn/science-library/artificial-intelligence-overview (01/01/2019).

Explainer Last Updated Date
Tuesday, January 1, 2019
Explainer Type
Science
Topics
Emerging Tech
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Please distribute widely but give credit to Duke SciPol, linking back to this page if possible.