
AI vs. machine learning

Posted by Sohrob Kazerounian on Apr 26, 2018 2:54:47 PM


“The original question ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” – Alan Turing

Definitions of artificial intelligence (AI) are notoriously difficult to pin down. Goal- or task-based definitions are often subject to shifting goalposts: every time a computer or algorithm achieves some goal that was thought to require intelligence, we tend to stop thinking of that goal as ever having required intelligence.

For example, chess-playing systems constituted a good portion of early AI research, and yet after the IBM chess-playing computer Deep Blue beat grandmaster Garry Kasparov in 1997, opinion shifted from the belief that chess required true intelligence to the notion that it had simply been solved through brute-force search techniques.

On the other hand, definitions of AI that tend to focus on procedural or structural grounds often get bogged down in fundamentally unresolvable philosophical questions about mind, emergence and consciousness. These definitions do not further our understanding of how to construct intelligent systems or help us describe systems we have already made.

The Turing test, despite often being portrayed as a test of machine intelligence, was Alan Turing’s attempt to sidestep the question altogether. The notion of intelligence is semantically vague and underdetermined, so it matters little whether we choose to call a machine intelligent.

In the end, it is a matter of convention, not terribly different from debating whether we should refer to submarines as swimming or to planes as flying. For Turing, what really mattered was the limits of what machines are capable of, not how we refer to those capabilities.

To that end, if you want to know if machines can think like humans, your best hope is to measure how well the machine can fool other people into thinking that it thinks like humans.

Following Turing and the definition provided by the organizers of the first workshop on AI in 1956, we similarly hold that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

For an arbitrary task that a human is asked to do, an AI should be able to simulate human performance or behavior to an arbitrary degree of precision. The Turing test aimed to determine this by seeing how well a computer could fool an observer, through unstructured discourse, into thinking it was human. Turing’s original formulation, in fact, required the machine to fool the observer into thinking it was a woman.

However, in recent years, powerful machine learning techniques and the availability of massive data sets for training have made it possible for algorithms to become conversational with little to no understanding required.

Worse yet, minor tricks that no human observer would consider markers of true intelligence, like the use of random spelling mistakes and grammatical errors, help make algorithms increasingly convincing as ersatz humans.

Newer proposals for testing human-level understanding, like the Winograd schemas, instead query a machine about its knowledge of the world: the uses and affordances of objects that would be readily known by any human.

If asked, “The trophy didn’t fit on the shelf because it was too big. What was too big?”, any person would immediately answer that the trophy was too big. But with a simple substitution we have: “The trophy didn’t fit on the shelf because it was too small. What was too small?”

The answer in this case is clearly the shelf. With increased precision, this kind of test probes the knowledge a machine should have about the world, knowledge that no apparent amount of simple data mining can provide.
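To make the shape of such a test concrete, here is a minimal sketch, in Python, of how a Winograd-style schema pair could be represented and scored. Everything in it is illustrative: the resolve_pronoun stub stands in for whatever system is being evaluated, and the trivial baseline shown (always guess the first candidate) can only ever get one of each minimally different pair right, which is exactly why the test resists shortcuts.

```python
# A minimal sketch of a Winograd-schema-style test harness. The schema pair
# is the example from the text; resolve_pronoun() is a hypothetical
# placeholder for the system under evaluation.

SCHEMAS = [
    {
        "sentence": "The trophy didn't fit on the shelf because it was too big.",
        "question": "What was too big?",
        "candidates": ("the trophy", "the shelf"),
        "answer": "the trophy",
    },
    {
        "sentence": "The trophy didn't fit on the shelf because it was too small.",
        "question": "What was too small?",
        "candidates": ("the trophy", "the shelf"),
        "answer": "the shelf",
    },
]


def resolve_pronoun(sentence, question, candidates):
    """Placeholder resolver: always guesses the first candidate.

    Because each schema comes in a minimally different pair, any strategy
    that ignores the sentence's meaning gets at most one of the two right.
    """
    return candidates[0]


def evaluate(schemas, resolver):
    correct = sum(
        resolver(s["sentence"], s["question"], s["candidates"]) == s["answer"]
        for s in schemas
    )
    return correct / len(schemas)


if __name__ == "__main__":
    print(f"accuracy: {evaluate(SCHEMAS, resolve_pronoun):.0%}")  # 50%
```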

Because such a definition requires that an AI be able to simulate any aspect of human behavior, a meaningful distinction can be drawn between this kind of general intelligence and AI systems that are designed to behave intelligently only on specific tasks.

General AI – often referred to as artificial general intelligence (AGI) – is what people most often refer to when they talk about AI. These are the systems that people fear will one day rule the world as our “robot overlords,” the machines that fill our collective imaginations in film and literature.

However, specific, or applied, AI systems constitute the bulk of research in the field, from the speech recognition and computer vision systems developed by Google and Facebook to the cybersecurity AI developed by our team at Vectra.

Applied AI systems typically make use of a variety of algorithms. Most are designed to adapt over time to improve their future performance on specific tasks as new data becomes available to the system.
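As a rough illustration of what adapting to newly arriving data looks like in practice, the sketch below trains a perceptron-style classifier one example at a time, updating its weights only when it makes a mistake. The synthetic data stream and learning rule are a textbook toy, not anything specific to an applied system like ours.

```python
import numpy as np

# A toy online learner: a perceptron whose weights are updated each time a
# new labeled example arrives. The data here are synthetic.

rng = np.random.default_rng(0)


def make_example():
    """Synthetic stream: label is +1 if the two features sum to > 1, else -1."""
    x = rng.uniform(0, 1, size=2)
    y = 1 if x.sum() > 1.0 else -1
    return x, y


w = np.zeros(2)
b = 0.0
mistakes = 0

for t in range(1, 5001):
    x, y = make_example()
    prediction = 1 if (w @ x + b) > 0 else -1
    if prediction != y:          # adapt only when the current model is wrong
        w += y * x               # classic perceptron update
        b += y
        mistakes += 1
    if t % 1000 == 0:
        print(f"after {t} examples: {mistakes} total mistakes, w={w.round(2)}")
```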

The ability to adapt or learn in response to newly arriving inputs is a characteristic that defines the field of machine learning. But it’s worth noting that it isn’t a necessary condition for an AI system. Certain AI systems can function on algorithms that don’t require any learning whatsoever, such as Deep Blue playing chess.

However, this typically occurs only in well-defined environments and problem spaces. In fact, the field of expert systems, a mainstay of good old-fashioned AI (GOFAI), relies heavily on rule-based knowledge that is preprogrammed rather than learned. It is assumed that AGI, as well as most commonly applied AI tasks, requires at least some form of machine learning.
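To make the contrast between preprogrammed rules and learned behavior concrete, a toy GOFAI-style rule base might look like the sketch below: the knowledge is written down by hand as if-then rules and never changes in response to data. The rules themselves are invented purely for illustration.

```python
# A toy rule-based "expert system": knowledge is hand-coded as if-then rules
# rather than learned from data. The rules are illustrative only.

RULES = [
    (lambda f: f["fever"] and f["rash"], "suspect measles"),
    (lambda f: f["fever"] and not f["rash"], "suspect flu"),
    (lambda f: not f["fever"], "no diagnosis from these rules"),
]


def infer(facts):
    """Fire the first rule whose condition matches the given facts."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "no rule applies"


print(infer({"fever": True, "rash": False}))   # suspect flu
```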


Figure taken from Deep Learning (Goodfellow, Bengio and Courville)

The figure above shows the relationship between AI, machine learning, and deep learning. Deep learning is a specific form of machine learning, and while machine learning is assumed to be necessary for most advanced AI tasks, it is not on its own a necessary or defining feature of AI.

Counterintuitively, machine learning is required to simulate the simplest aspects of human intelligence, not the most complex. For example, the Logic Theorist AI program written by Allen Newell and Herbert Simon in 1955 proved 38 of the first 52 theorems in Principia Mathematica, and yet it required no learning whatsoever.

Far more difficult is the task of creating programs that recognize speech or find objects in images, even though humans solve these tasks with relative ease. The difficulty stems from the fact that, although these tasks are intuitively simple for us, we cannot describe a simple set of rules that would pick out phonemes, letters and words from acoustic data. It is for the same reason that we can’t easily define the set of pixel features that distinguishes one face from another.

The figure below, taken from Oliver Selfridge’s 1955 article Pattern Recognition and Modern Computers, shows how the same inputs can lead to different outputs depending on the context. The H in THE and the A in CAT are identical sets of pixels, yet their interpretation as an H or an A depends on the surrounding letters rather than on the shapes themselves.

[Figure: the words THE and CAT, in which the middle letters are drawn identically]

For this reason, there has been far more success when machines are allowed to learn how to solve problems than when we attempt to predefine what a solution looks like.
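A small sketch of that idea on synthetic data: rather than hand-writing rules about which pixels make a pattern an “A” or an “H”, we let a plain logistic-regression classifier learn the distinction from noisy labeled examples. The 3x3 templates, noise level and training settings below are all assumptions made up for illustration.

```python
import numpy as np

# Instead of hand-coding which pixels distinguish one letter from another,
# a simple classifier learns the distinction from labeled examples.
# The two 3x3 "letter" templates are synthetic and purely illustrative.

rng = np.random.default_rng(1)

TEMPLATE_A = np.array([0, 1, 0, 1, 1, 1, 1, 0, 1], dtype=float)  # rough "A"
TEMPLATE_H = np.array([1, 0, 1, 1, 1, 1, 1, 0, 1], dtype=float)  # rough "H"


def sample(template, n):
    """Noisy copies of a template: flip each pixel with 10% probability."""
    flips = rng.random((n, template.size)) < 0.1
    return np.abs(template - flips)


X = np.vstack([sample(TEMPLATE_A, 200), sample(TEMPLATE_H, 200)])
y = np.array([0] * 200 + [1] * 200)          # 0 = "A", 1 = "H"

# Logistic regression trained by plain gradient descent -- no hand-written
# rules about which pixel makes an A an A.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # predicted probability of "H"
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"training accuracy: {accuracy:.0%}")
```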

About the Author

Sohrob Kazerounian is a senior data scientist at Vectra where he specializes in artificial intelligence, deep learning, recurrent neural networks and machine learning. Before Vectra, he was a post-doctoral researcher with Jürgen Schmidhuber at the Swiss AI Lab, IDSIA. Sohrob holds a Ph.D. in cognitive and neural systems from Boston University and bachelor of sciences degrees in cognitive science and computer science from the University of Connecticut.

Topics: AI, machine learning
