Alan Turing and the birth of machine intelligence

Sohrob Kazerounian
March 15, 2018

“We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions…” – Alan Turing

It is difficult to tell the history of AI without first describing the formalization of computation and what it means for something to compute. The primary impetus towards formalization came down to a question posed by the mathematician David Hilbert in 1928.

The challenge, known as the Entscheidungsproblem, German for decision problem, asked if it was possible to construct an algorithm that, when given any formal statement in first-order logic, could answer “yes” or “no” in response to whether the statement is valid.

Just shy of a decade later, a 24-year-old mathematician named Alan Turing ended Hilbert’s hope of finding any such algorithm. Turing’s 1937 paper, “On Computable Numbers, With an Application to the Entscheidungsproblem” – published independently of a proof by Alonzo Church the year prior – formalized the notion of computation and constructed a theoretical machine that would later serve as a model for modern digital computers.

To define his automatic machine, known today as a Turing Machine (TM), Turing drew inspiration from the process undertaken by a computer, which in his sense of the word referred to a human who computes. Deconstructing the human procedure to its constituent elements, Turing wrote:

“We suppose that the computation is carried out on a tape; but we avoid introducing the ‘state of mind’ by considering a more physical and definitive counterpart of it. It is always possible for the computer to break off from his work, to go away and forget all about it, and later to come back and go on with it.”

“If he does this he must leave a note of instructions (written in some standard form) explaining how the work is to be continued. This note is the counterpart of the ‘state of mind.’ We will suppose that the computer works in such a desultory manner that he never does more than one step at a sitting.”

Turing’s a-machine was defined by its use of an infinite tape on which symbols could be written, a head that could read symbols from the tape, a register that would keep track of the state of the machine, and a state table that would tell the machine what to do next (e.g., write or change a symbol on the tape, move to a new position on the tape, etc.) given the state that the machine was in.

Using only these components, Turing theorized that anything that could be effectively calculated could also be computed by a TM. More impressively, Turing went on to show that it is possible to construct a Universal Turing Machine (UTM) that could simulate any other TM by essentially feeding the UTM a full specification of the TM as input.
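The components Turing described, a tape, a read/write head, a state register, and a state table, can be sketched in a few lines of code. The simulator below is a minimal illustration, not Turing's original notation: the state names, tape alphabet, and the example machine (which increments a binary number) are all hypothetical choices made here for clarity.

```python
def run_tm(table, tape, state, blank="_", max_steps=1000):
    """Simulate a Turing machine.

    `table` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "L" or "R". The tape is stored sparsely as a dict
    from integer positions to symbols, so it is unbounded in both directions.
    """
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)          # read the cell under the head
        write, move, state = table[(state, symbol)]
        tape[head] = write                      # write, then move the head
        head += {"L": -1, "R": 1}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Example state table: increment a binary number. The machine scans right
# to the end of the input, then carries leftward, flipping 1s to 0s until
# it can write a 1.
increment = {
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_tm(increment, "1011", "right"))  # 1011 + 1 = 1100
```

Note that `run_tm` itself hints at Turing's universality result: it is a single fixed program that, handed any machine's state table as data, behaves like that machine, which is the essential idea behind the UTM.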

The UTM, in effect a stored-program computer, was in part the inspiration for John von Neumann’s design of the first modern digital computers, whose basic organization is now referred to as the von Neumann architecture.

Armed with a formalism that defined computing machines, Turing honed in on what it means for a machine to think. In his 1950 article “Computing Machinery and Intelligence,” he proposed his famous test of machine intelligence, now known as the Turing test. He begins by pondering whether it would suffice to just use the common meanings of the words machine and intelligence:

“I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms ‘machine’ and ‘think.’ The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous.”

“If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd.”

Turing proceeded by proposing a test modeled after the imitation game, in which an interrogator attempts to distinguish which of two players is female, and where each player, hidden from sight, attempts to fool the interrogator into thinking they are the female through written answers to the interrogator’s questions.

In Turing’s version of the game, the male is replaced by a machine that attempts to respond to questions in such a way as to fool the interrogator into thinking it, not the female player, is female.

Although the game was originally framed as requiring the interrogator to determine which of two hidden players is female, the Turing test in its basic form asks whether an interrogator can distinguish the performance of a machine from that of a person by guessing which is the real human, whether in a game of chess or in unstructured conversation.

Turing also restricted the type of machines under consideration to the following:

“The question which we put in [Section 1] will not be quite definite until we have specified what we mean by the word ‘machine.’ We are the more ready to do so in view of the fact that the present interest in ‘thinking machines’ has been aroused by a particular kind of machine, usually called an ‘electronic computer’ or ‘digital computer.’ Following this suggestion, we only permit digital computers to take part in our game.”

Having restricted the types of machine to digital computers and defined the measure by which they are to be judged, Turing offers his own view of the original question before responding to other objections to it:

“It will simplify matters for the reader if I explain first my own beliefs in the matter. The original question ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

“I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any improved conjecture, is quite mistaken.”

Incredibly, despite the overwhelming amount of commentary and disagreement generated by the Turing test, Turing himself dismissed the question as meaningless.

Although he admitted that using the term thinking might, by the end of the 20th century, become a natural predicate for people to apply to machines, he was relatively indifferent about that prospect.

For Turing, whether we refer to a machine as thinking or intelligent was irrelevant. All that could be determined was how well the machine could imitate the behavior of a human, and that could best be measured by how well the machine could fool an observer into believing that it too was human.

About the Author

Sohrob Kazerounian is a senior data scientist at Vectra where he specializes in artificial intelligence, deep learning, recurrent neural networks and machine learning. Before Vectra, he was a post-doctoral researcher with Jürgen Schmidhuber at the Swiss AI Lab, IDSIA. Sohrob holds a Ph.D. in cognitive and neural systems from Boston University and bachelor of sciences degrees in cognitive science and computer science from the University of Connecticut.

