The term “artificial intelligence” (AI) is popping up everywhere from e-discovery to beer brewing. What do you think of when you hear it? Is it sexy and cutting-edge? This article explores what AI really is.
Virtually every computer program can be viewed as taking some input, applying some procedure, and generating some output. A program might take a number as input and output the square root of that number. It might take the history of moves so far in a chess game and output the next move that gives the best chance of winning the game. It might take a question voiced by a person as input and give an audible answer as output. What is special about a program that makes it count as AI?
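To make that input/procedure/output framing concrete, here is a minimal sketch of the square root example in Python; the function name and the use of the standard library are illustrative choices, not anything tied to a particular product.

```python
# A toy illustration of the input -> procedure -> output view of a program:
# the input is a number, the procedure is the square root routine in the
# standard library, and the output is the result.
import math

def square_root(number: float) -> float:
    return math.sqrt(number)

print(square_root(2))  # about 1.41421
```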
One of the most iconic books on AI surveyed various attempts to define AI and categorized them in the figure below.
Let’s focus on the bottom left quadrant of the figure, where a program is considered to be AI if it acts like a human. The famous Turing Test would be an example of that. Would a computer program that computes square roots be considered AI under Kurzweil’s definition from the figure above? Do you know any unintelligent people who could quickly compute the square root of an arbitrary number to several digits? It seems like the square root program would qualify, though I doubt many people would intuitively peg it as AI. On the other hand, the definition by Rich and Knight from the same quadrant would not consider the square root calculator to be AI, because computing square roots is not a task where humans currently beat computers. There isn’t even agreement about what AI is within a single quadrant, let alone between different quadrants.
The definition by Rich and Knight touches on an important idea known as the AI effect: once computers are good at a particular task, it stops being considered AI and is instead seen as “just a computation.” That means the set of things considered to be AI is always changing. Facial recognition, which analyzes an image to determine which person is in it, is generally recognized as AI today. The superficially similar process of analyzing an image to determine what text is in it, known as optical character recognition (OCR) and used heavily in e-discovery when scanned paper documents are encountered, was declared by Roger Schank in a 1991 report to no longer be AI because it already worked too well to be interesting.
With all of this ambiguity about what AI is, why are people so eager to slap the term on everything these days? Well, there have been some impressive and highly publicized accomplishments with cutting-edge technologies recently, and it can feel good to have a little of the hype rub off on you. Many of those accomplishments have been achieved with something called deep learning. The artificial neural network, a classifier that performs computations using a network structure that vaguely resembles the human brain, has been around for about 50 years, but it didn’t work very well until 2006, when algorithms for effectively using a large number of layers, known as deep learning, were invented. Deep learning can accurately classify things even when the important relationships between the inputs are very complex, as long as there is enough training data to nail down the parameters that describe all of that complexity.
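As a rough illustration of what a multi-layer (“deep”) network looks like in code, here is a sketch using scikit-learn’s MLPClassifier on synthetic data; the dataset, layer sizes, and other parameters are arbitrary choices for demonstration, not a recipe from any of the accomplishments mentioned above.

```python
# Illustrative sketch only: a small multi-layer neural network classifier.
# The synthetic dataset and the layer sizes are arbitrary choices made
# purely for demonstration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A synthetic classification problem with non-trivial relationships
# between the input features.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three hidden layers; every additional layer adds parameters that the
# training data has to pin down, which is why deep networks tend to
# need a lot of examples.
clf = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500,
                    random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```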
In e-discovery we want to accurately classify documents as either responsive or non-responsive with as little human document review effort as possible. In this context we often use a supervised machine learning process referred to as technology-assisted review (TAR). Little human effort means little training data, which is not optimal for deep learning. Additionally, it’s not clear that classifying text documents involves the kind of complexity where deep learning has a real advantage, so it is doubtful that deep learning would be more beneficial than other classification algorithms for analyzing text documents in TAR (identifying objects in photos may be a more appropriate use). The technologies used for classification in TAR are typically simpler, older, and less computationally demanding than deep learning; for example, support vector machines (SVM, invented in 1963), logistic regression (roughly 1951), and k-nearest neighbors (kNN, 1951) are often used. Of course, people in the e-discovery world have put a lot of effort in recent years into tuning the classifiers and their inputs to give good results for e-discovery, but the core classification algorithms are pretty old.
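For contrast, here is a sketch of the kind of simpler, older approach described above: TF-IDF features feeding a linear SVM, built with scikit-learn. The tiny corpus and responsive/non-responsive labels are invented for illustration and are not from any real matter or TAR product.

```python
# Illustrative sketch only: classifying documents as responsive or
# non-responsive with a linear SVM over TF-IDF features. The toy
# documents and labels below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_docs = [
    "Q3 revenue projections attached for review",
    "lunch order for the team on Friday",
    "draft contract terms for the acquisition",
    "reminder: building fire drill at noon",
]
train_labels = ["responsive", "non-responsive",
                "responsive", "non-responsive"]

# TF-IDF turns each document into a weighted bag-of-words vector;
# the linear SVM then learns a boundary separating the two classes.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_docs, train_labels)

print(model.predict(["updated revenue forecast for the deal"]))
```

In a real TAR workflow, much of the value comes from the iterative training and sampling built around a classifier like this, rather than from the classification algorithm itself.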
Labeling everything as AI gives the impression that it is similar to the highly successful (and complicated) deep learning stuff that is in the headlines, whereas in reality the algorithms used in many industries and contexts are much older and simpler.