The Nature of Computers
by M.C. Escher, D. Sc., A. Mus. D. (Imp.)
Reprinted from “Science and Invention” Magazine – 1924
For generations the possibilities of artificial construction have been as great a source of interest as those of natural creation; now it is high time that we should scrutinize these possibilities with judicious care and in all seriousness, for the day cannot be far off when they will confront us in full reality and with unforeseen consequences upon our social and individual life. Only then will we realize that they demand serious attention and not merely artistic or philosophical amusement; we shall require quite other weapons than those at present at our disposal to deal with them, and we shall have the greatest difficulty in devising them.
It was necessary first of all to develop a new branch of science which would look into this matter and thus establish a thorough and profound knowledge of the properties and limitations of these wonderful creations. Such a science has now come into existence: it is called Cybernetics (from Κυβερνήτης, the steersman of a ship), and is attracting more and more attention from thinkers on account of its unique importance. The word was coined by Norbert Wiener (1894–1964). It covers various aspects – not only those concerned with construction – but also those that deal with problems arising from our knowledge of ourselves, of our own mental processes and actions; for in the realm of cybernetics it is necessary to distinguish between the material or physical portion (the tangible apparatus) that works automatically and responds to stimuli, and the intangible portion (our mind) that directs this apparatus upon its errands.
We can say, then, that Cybernetics includes all problems connected with what we call information, especially when it is transmitted without wires. The term “information”, however, has acquired a special technical meaning in modern telephone engineering which differs slightly from its ordinary one: here it refers to what is obtained by subtracting uncertainty from knowledge, thus involving not only quantity but also quality. To illustrate this, suppose a friend poses you a mathematical problem on paper and asks you to answer it. If, after pondering over it for a few minutes, you arrive at the correct solution, then your information is complete in the sense that you now know precisely what to do; but if instead your friend solves it and gives you the answer verbally or by post, your knowledge is incomplete because its quality has suffered: having inferred what he said from his tone of voice or written words, there may still remain some doubt as to whether this was actually the proper answer.
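The modern quantitative treatment of this idea, due to Claude Shannon, measures information precisely as a reduction of uncertainty: learning the answer to a question with N equally likely outcomes yields log₂(N) bits. A minimal sketch in Python (the scenarios in the comments are illustrative, not from the original article):

```python
import math

def information_bits(num_outcomes: int) -> float:
    """Information gained, in bits, when one of `num_outcomes`
    equally likely possibilities is revealed."""
    return math.log2(num_outcomes)

# A yes/no verdict on the friend's answer resolves 2 possibilities: 1 bit.
print(information_bits(2))   # 1.0
# Picking the correct answer out of 8 candidate solutions: 3 bits.
print(information_bits(8))   # 3.0
```

On this definition the "quantity" of information is exactly the uncertainty removed; questions of quality (was the answer actually correct?) lie outside the measure.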
What makes all this particularly relevant to Cybernetics is that the machine itself can be used to form an exact analogy with our mind: both are automatic devices carrying out definite actions upon receiving definite impressions (the stimuli), both are liable to make mistakes which are detected by suitable devices.
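The analogy above — an automatic device that maps definite stimuli to definite actions, paired with a "suitable device" that detects mistakes — can be sketched as a lookup table plus a checker. All of the stimuli and actions below are illustrative inventions, not drawn from the article:

```python
# Stimulus -> action table for a hypothetical automatic device.
RESPONSES = {
    "bell": "salivate",
    "red_light": "stop",
    "green_light": "go",
}

def respond(stimulus: str) -> str:
    """Carry out the definite action associated with a stimulus."""
    return RESPONSES.get(stimulus, "no_response")

def detect_mistake(stimulus: str, action: str) -> bool:
    """The 'suitable device': flag any action that does not match
    the one prescribed for the stimulus."""
    return RESPONSES.get(stimulus) != action

print(respond("red_light"))               # stop
print(detect_mistake("red_light", "go"))  # True: a mistake was made
```

The point of the analogy is that both halves are mechanical: the response and the mistake-detection each follow fixed rules, with no appeal to anything outside the table.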
It is now fairly obvious that all modern machinery, even the most complicated, can be made to behave in a cybernetic manner; but it will perhaps surprise you that even natural organisms need not necessarily bear much resemblance to their parent plant or animal stock, if only they were suitably constructed by art and endowed with an automatic mechanism of similar simplicity. I can at once give a very good example from the vegetable world: the new ornamental cactus-like Euphorbia known as “Zig-Zag” has been developed within a few years from a common round-leaved type. It needs no expert botanist to see how this curious transformation was brought about, nor to understand that it could happen in two equally simple ways. If the seed has a round “eye” (as is generally the case), the new shoot may grow out at any angle or even split into two so as to resemble an inverted Y; in this case we have what is called a binary response. The second possibility arises when both sides of the shoot start from one point and gradually enlarge until they meet in the middle; here we have a single response, but with repeated doubling. Suppose now that instead of starting from a seed, these plants were cut into convenient segments halfway round, then joined together again and planted firmly in soil. Most likely no shoots would appear at all, because our little piece of plant tissue cannot live outside its natural environment; but if shoots did appear, the resulting Euphorbia would have to grow with three or perhaps even four stems. If so, there are now four possible responses emanating from one point, which in this case has become a controlling switch: hence we speak of multiple control. It is obvious that there are many other ways in which what looks like an extremely simple problem in binary arithmetic can be solved in nature.
1 Arthur Samuel, “Natural and artificial ‘thinking’,” New Scientist, vol. 4, no. 49 (February 28, 1950), pp. 33–34.
2 Turing discusses the concept of “multi-machines” and other variations on his original ideas in a paper published two years later: A. M. Turing, “The word problem in semi-groups with cancellation,” Proceedings of the London Mathematical Society, Series 2, vol. 46, no. 1 (1953), pp. 169–184.
Nature of Computers by A. M. Turing
Portrait of Alan Turing in 1946, when he was readers’ advisor for the Mathematics department at Cambridge University
Artificial intelligence research is one of several topics related to computing that cut across many academic disciplines—including computer science, psychology, linguistics, philosophy, neuroscience, cognitive science and others.
A field of study enabling an artificial device to perform functions that would require intelligence if performed by a human (such as visual perception, speech recognition and language translation).1
As early as the 1940s, scientists around the world were already studying how to make computers think like humans, and debating whether such a thing could ever be possible. The idea arose because, before this time, no one had really examined what exactly thinking was. It is now widely accepted that thinking is the result of computations by the brain; if a computer could be made to process information in a similar way, then perhaps it could think as well.
In his seminal paper on AI, “Computing machinery and intelligence”, published in October 1950, Alan Turing discusses making computers do the things that humans can do, using a chess-playing programme as a test of the machine’s facilities:
If I claim to be able to imagine certain chess situations, my friend may counter with the remark ‘I bet you can’t tell me offhand what is the correct move in such-and-such a position’. This sort of challenge does not appeal to me unless I have some further purpose in view. One such purpose might be to test the powers of my machine. This, if it were ever to prove practicable, would have very far reaching consequences, because it would enable us to make machines do all the things which men are capable of doing. The chess playing programme is not intended as a practical proposition; it is primarily a test of the facilities of our machines. I think that this project can be justified on these grounds alone.2
The Turing Test
An influential paper published in 1950 by Alan Turing, titled “Computing machinery and intelligence”, proposed an imitation game which could show whether or not a computer was intelligent enough to deserve being called artificially intelligent. Although there has been much debate since over how best to define artificial intelligence, Turing’s original proposal is the one which has had the most lasting influence.
Turing’s paper proposes that if a computer were capable of thinking, it should be possible to place it in an interrogation room with someone who would try to determine whether they were talking to a human or a machine, and for that interrogator to be unable to tell the difference; Turing named this proposed test for intelligence “the imitation game”. If, during these conversations, the humans interrogating the machine could not tell whether their partner was a machine or a fellow human, then we would have to accept that perhaps machines do indeed think after all.
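The protocol described above can be sketched as a toy simulation. Everything here — the question, the two players, and the naive interrogator — is an illustrative invention, not from Turing’s paper; the idea is only that a score near 0.5 (chance) means the machine is indistinguishable from the human:

```python
import random

def imitation_game(interrogate, human, machine, rounds=10, seed=0):
    """Run `rounds` of the imitation game. Each round the interrogator
    questions one hidden respondent and guesses 'human' or 'machine'.
    Returns the fraction of correct identifications."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        identity, respondent = rng.choice([("human", human), ("machine", machine)])
        answer = respondent("What is 12 x 13?")
        if interrogate(answer) == identity:
            correct += 1
    return correct / rounds

# Toy players: the machine imitates human hedging, so an interrogator
# who expects machines to answer instantly and exactly learns nothing.
human = lambda q: "give me a moment... 156, I think"
machine = lambda q: "give me a moment... 156, I think"
interrogate = lambda a: "machine" if a == "156" else "human"

score = imitation_game(interrogate, human, machine)
```

Because both players give the same hedged answer, this interrogator can do no better than guessing which respondent it happened to be given — exactly the outcome Turing took as evidence that the machine “passes”.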