As anyone who can operate a personal computer knows, the way to make the machine perform some desired task is to open the appropriate program stored in the computer’s memory. Life was not always so simple. The earliest large-scale electronic digital computers, the British Colossus (1944) and the American ENIAC (1945), did not store programs in memory. To set up these computers for a fresh task, it was necessary to modify some of the machine’s wiring, re-routing cables by hand and setting switches. The basic principle of the modern computer—the idea of controlling the machine’s operations by means of a program of coded instructions stored in the computer’s memory—was conceived by Alan Turing.
Turing’s abstract ‘universal computing machine’ of 1936, soon known simply as the universal Turing machine, consists of a limitless memory, in which both data and instructions are stored, and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. By inserting different programs into the memory, the machine is made to carry out different computations. It was a fabulous idea—a single machine of fixed structure which, by making use of coded instructions stored in memory, could change itself, chameleon-like, from a machine dedicated to one task into a machine dedicated to a quite different one.
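The idea can be made concrete with a short sketch. What follows is a minimal simulator, not Turing’s own 1936 formalism: the transition table plays the role of the stored program, and handing the same fixed simulator a different table makes it carry out a different computation—the chameleon-like behaviour described above. The names (`run`, `increment`, the blank symbol `"_"`) are illustrative choices, not anything from the original text.

```python
# A minimal sketch of a Turing machine simulator (illustrative, not
# Turing's original 1936 formalism). The `program` dictionary is the
# "stored program": a table mapping (state, scanned symbol) to
# (symbol to write, head move, next state).

def run(program, tape, state="start", head=0, max_steps=10_000):
    """Run `program` on `tape` (a dict: position -> symbol) until
    the machine enters the 'halt' state. Returns the final tape."""
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")               # "_" is the blank symbol
        write, move, state = program[(state, symbol)]
        tape[head] = write                         # write over the scanned square
        head += 1 if move == "R" else -1           # move the scanner one square
    return tape

# Example program: unary increment -- scan right past a run of 1s,
# then write one more 1 and halt.
increment = {
    ("start", "1"): ("1", "R", "start"),   # skip over existing 1s
    ("start", "_"): ("1", "R", "halt"),    # write one more 1, then halt
}

tape = {0: "1", 1: "1", 2: "1"}            # the number 3 in unary
result = run(increment, tape)
print("".join(result.get(i, "_") for i in range(5)))  # -> "1111_"
```

Swapping `increment` for a different transition table—say, one that erases the tape, or doubles the run of 1s—changes what the machine computes without changing the simulator at all, which is the essence of the stored-program principle.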
Turing showed that his universal machine is able to accomplish any task that can be carried out by means of a rote method (hence the characterization ‘universal’). Nowadays, when so many people possess a physical realization of the universal Turing machine, Turing’s idea of a one-stop-shop computing machine might seem as obvious as the wheel. But in 1936, when engineers thought in terms of building different machines for different purposes, Turing’s concept was revolutionary.