The Information by James Gleick


  “A calculus is developed for manipulating these equations by simple mathematical processes”—with this clarion call, Shannon began his thesis in 1937. So far the equations just represented combinations of circuits. Then, “the calculus is shown to be exactly analogous to the calculus of propositions used in the symbolic study of logic.” Like Boole, Shannon showed that he needed only two numbers for his equations: zero and one. Zero represented a closed circuit; one represented an open circuit. On or off. Yes or no. True or false. Shannon pursued the consequences. He began with simple cases: two-switch circuits, in series or in parallel. Circuits in series, he noted, corresponded to the logical connective and; whereas circuits in parallel had the effect of or. An operation of logic that could be matched electrically was negation, converting a value into its opposite. As in logic, he saw that circuitry could make “if … then” choices. Before he was done, he had analyzed “star” and “mesh” networks of increasing complexity, by setting down postulates and theorems to handle systems of simultaneous equations. He followed this tower of abstraction with practical examples—inventions, on paper, some practical and some just quirky. He diagrammed the design of an electric combination lock, to be made from five push-button switches. He laid out a circuit that would “automatically add two numbers, using only relays and switches”;♦ for convenience, he suggested arithmetic using base two. “It is possible to perform complex mathematical operations by means of relay circuits,” he wrote. “In fact, any operation that can be completely described in a finite number of steps using the words if, or, and, etc. can be done automatically with relays.” As a topic for a student in electrical engineering this was unheard of: a typical thesis concerned refinements to electric motors or transmission lines. There was no practical call for a machine that could solve puzzles of logic, but it pointed to the future. Logic circuits. Binary arithmetic. Here in a master’s thesis by a research assistant was the essence of the computer revolution yet to come.
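
  To make the correspondence concrete, here is a minimal sketch, my own illustration in Python rather than Shannon’s relay notation; note that it uses the opposite convention from the one described above, with 1 meaning the circuit conducts. Series wiring behaves as and, parallel wiring as or, a negating relay as not, and from those alone a base-two adder of the kind Shannon proposed can be wired up.

```python
# A sketch in the modern convention (1 = current flows), not Shannon's
# "hindrance" notation: series wiring acts as AND, parallel wiring as OR,
# and a negating relay as NOT. From these alone, a base-two adder.

def series(a, b):      # current passes only if both switches pass it: AND
    return a & b

def parallel(a, b):    # current passes if either branch passes it: OR
    return a | b

def invert(a):         # a relay wired to close when its input is open: NOT
    return 1 - a

def full_adder(a, b, carry_in):
    """Add two bits and a carry bit; return (sum_bit, carry_out)."""
    half = parallel(series(a, invert(b)), series(invert(a), b))    # a XOR b
    sum_bit = parallel(series(half, invert(carry_in)),
                       series(invert(half), carry_in))             # XOR the carry
    carry_out = parallel(series(a, b), series(half, carry_in))
    return sum_bit, carry_out

def add(x, y, width=8):
    """Add two small integers bit by bit, using only the circuits above."""
    carry, total = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

print(add(19, 23))   # 42 -- computed entirely from series, parallel, invert
```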

  Shannon spent a summer working at the Bell Telephone Laboratories in New York City and then, at Vannevar Bush’s suggestion, switched from electrical engineering to mathematics at MIT. Bush also suggested that he look into the possibility of applying an algebra of symbols—his “queer algebra”♦—to the nascent science of genetics, whose basic elements, genes and chromosomes, were just dimly understood. So Shannon began work on an ambitious doctoral dissertation to be called “An Algebra for Theoretical Genetics.”♦ Genes, as he noted, were a theoretical construct. They were thought to be carried in the rodlike bodies known as chromosomes, which could be seen under a microscope, but no one knew exactly how genes were structured or even if they were real. “Still,” as Shannon noted, “it is possible for our purposes to act as though they were.… We shall speak therefore as though the genes actually exist and as though our simple representation of hereditary phenomena were really true, since so far as we are concerned, this might just as well be so.” He devised an arrangement of letters and numbers to represent “genetic formulas” for an individual; for example, two chromosome pairs and four gene positions could be represented thus:

  A₁B₂C₃D₅ E₄F₁G₆H₁

  A₃B₁C₄D₃ E₄F₂G₆H₂

  Then, the processes of genetic combination and cross-breeding could be predicted by a calculus of additions and multiplications. It was a sort of road map, far abstracted from the messy biological reality. He explained: “To non-mathematicians we point out that it is a commonplace of modern algebra for symbols to represent concepts other than numbers.” The result was complex, original, and quite detached from anything people in the field were doing.♦♦ He never bothered to publish it.

  Meanwhile, late in the winter of 1939, he wrote Bush a long letter about an idea closer to his heart:

  Off and on I have been working on an analysis of some of the fundamental properties of general systems for the transmission of intelligence, including telephony, radio, television, telegraphy, etc. Practically all systems of communication may be thrown into the following general form:♦

  T and R were a transmitter and a receiver. They mediated three “functions of time,” f(t): the “intelligence to be transmitted,” the signal, and the final output, which, of course, was meant to be as nearly identical to the input as possible. (“In an ideal system it would be an exact replica.”) The problem, as Shannon saw it, was that real systems always suffer distortion—a term for which he proposed to give a rigorous definition in mathematical form. There was also noise (“e.g., static”). Shannon told Bush he was trying to prove some theorems. Also, and not incidentally, he was working on a machine for performing symbolic mathematical operations, to do the work of the Differential Analyzer and more, entirely by means of electric circuits. He had far to go. “Although I have made some progress in various outskirts of the problem I am still pretty much in the woods, as far as actual results are concerned,” he said.

  I have a set of circuits drawn up which actually will perform symbolic differentiation and integration on most functions, but the method is not quite general or natural enough to be perfectly satisfactory. Some of the general philosophy underlying the machine seems to evade me completely.

  He was painfully thin, almost gaunt. His ears stuck out a little from his close-trimmed wavy hair. In the fall of 1939, at a party in the Garden Street apartment he shared with two roommates, he was standing shyly in his own doorway, a jazz record playing on the phonograph, when a young woman started throwing popcorn at him. She was Norma Levor, an adventurous nineteen-year-old Radcliffe student from New York. She had left school to live in Paris that summer but returned when Nazi Germany invaded Poland; even at home, the looming war had begun to unsettle people’s lives. Claude struck her as dark in temperament and sparkling in intellect. They began to see each other every day; he wrote sonnets for her, uncapitalized in the style of E. E. Cummings. She loved the way he loved words, the way he said Boooooooolean algebra. By January they were married (Boston judge, no ceremony), and she followed him to Princeton, where he had received a postdoctoral fellowship.

  The invention of writing had catalyzed logic, by making it possible to reason about reasoning—to hold a train of thought up before the eyes for examination—and now, all these centuries later, logic was reanimated with the invention of machinery that could work upon symbols. In logic and mathematics, the highest forms of reasoning, everything seemed to be coming together.

  By melding logic and mathematics in a system of axioms, signs, formulas, and proofs, philosophers seemed within reach of a kind of perfection—a rigorous, formal certainty. This was the goal of Bertrand Russell and Alfred North Whitehead, the giants of English rationalism, who published their great work in three volumes from 1910 to 1913. Their title, Principia Mathematica, grandly echoed Isaac Newton; their ambition was nothing less than the perfection of all mathematics. This was finally possible, they claimed, through the instrument of symbolic logic, with its obsidian signs and implacable rules. Their mission was to prove every mathematical fact. The process of proof, when carried out properly, should be mechanical. In contrast to words, symbolism (they declared) enables “perfectly precise expression.” This elusive quarry had been pursued by Boole, and before him, Babbage, and long before either of them, Leibniz, all believing that the perfection of reasoning could come with the perfect encoding of thought. Leibniz could only imagine it: “a certain script of language,” he wrote in 1678, “that perfectly represents the relationships between our thoughts.”♦ With such encoding, logical falsehoods would be instantly exposed.

  The characters would be quite different from what has been imagined up to now.… The characters of this script should serve invention and judgment as in algebra and arithmetic.… It will be impossible to write, using these characters, chimerical notions [chimères].

  Russell and Whitehead explained that symbolism suits the “highly abstract processes and ideas”♦ used in logic, with its trains of reasoning. Ordinary language works better for the muck and mire of the ordinary world. A statement like a whale is big uses simple words to express “a complicated fact,” they observed, whereas one is a number “leads, in language, to an intolerable prolixity.” Understanding whales, and bigness, requires knowledge and experience of real things, but to manage 1, and number, and all their associated arithmetical operations, when properly expressed in desiccated symbols, should be automatic.

  They had noticed some bumps along the way, though—some of the chimères that should have been impossible. “A very large part of the labour,” they said in their preface, “has been expended on the contradictions and paradoxes which have infected logic.” “Infected” was a strong word but barely adequate to express the agony of the paradoxes. They were a cancer.

  Some had been known since ancient times:

  Epimenides the Cretan said that all Cretans were liars, and all other statements made by Cretans were certainly lies. Was this a lie?♦

  A cleaner formulation of Epimenides’ paradox—cleaner because one need not worry about Cretans and their attributes—is the liar’s paradox: This statement is false. The statement cannot be true, because then it is false. It cannot be false, because then it becomes true. It is neither true nor false, or it is both at once. But the discovery of this twisting, backfiring, mind-bending circularity does not bring life or language crashing to a halt—one grasps the idea and moves on—because life and language lack the perfection, the absolutes, that give them force. In real life, all Cretans cannot be liars. Even liars often tell the truth. The pain begins only with the attempt to build an airtight vessel. Russell and Whitehead aimed for perfection—for proof—otherwise the enterprise had little point. The more rigorously they built, the more paradoxes they found. “It was in the air,” Douglas Hofstadter has written, “that truly peculiar things could happen when modern cousins of various ancient paradoxes cropped up inside the rigorously logical world of numbers,… a pristine paradise in which no one had dreamt paradox might arise.”♦

  One was Berry’s paradox, first suggested to Russell by G. G. Berry, a librarian at the Bodleian. It has to do with counting the syllables needed to specify each integer. Generally, of course, the larger the number the more syllables are required. In English, the smallest integer requiring two syllables is seven. The smallest requiring three syllables is eleven. The number 121 seems to require six syllables (“one hundred twenty-one”), but actually four will do the job, with some cleverness: “eleven squared.” Still, even with cleverness, there are only a finite number of possible syllables and therefore a finite number of names, and, as Russell put it, “Hence the names of some integers must consist of at least nineteen syllables, and among these there must be a least. Hence the least integer not nameable in fewer than nineteen syllables must denote a definite integer.”♦♦ Now comes the paradox. This phrase, the least integer not nameable in fewer than nineteen syllables, contains only eighteen syllables. So the least integer not nameable in fewer than nineteen syllables has just been named in fewer than nineteen syllables.
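
  The counting side of the argument can be checked mechanically. The sketch below is my own illustration in Python, not Russell’s or Berry’s: it tallies the syllables in the ordinary English names of the integers up to 999, deliberately ignoring clever renamings such as “eleven squared,” and confirms that seven and eleven are the smallest integers needing two and three syllables.

```python
# Syllable counts for the words used in standard English number names.
ONES = {1: ("one", 1), 2: ("two", 1), 3: ("three", 1), 4: ("four", 1),
        5: ("five", 1), 6: ("six", 1), 7: ("seven", 2), 8: ("eight", 1),
        9: ("nine", 1), 10: ("ten", 1), 11: ("eleven", 3), 12: ("twelve", 1),
        13: ("thirteen", 2), 14: ("fourteen", 2), 15: ("fifteen", 2),
        16: ("sixteen", 2), 17: ("seventeen", 3), 18: ("eighteen", 2),
        19: ("nineteen", 2)}
TENS = {20: ("twenty", 2), 30: ("thirty", 2), 40: ("forty", 2),
        50: ("fifty", 2), 60: ("sixty", 2), 70: ("seventy", 3),
        80: ("eighty", 2), 90: ("ninety", 2)}

def syllables(n):
    """Syllable count of the standard English name of n (1 <= n <= 999)."""
    count = 0
    if n >= 100:
        count += ONES[n // 100][1] + 2      # e.g. "one hundred"
        n %= 100
    if n >= 20:
        count += TENS[n - n % 10][1]
        n %= 10
    if n:
        count += ONES[n][1]
    return count

def least_needing(k):
    """Smallest integer whose standard name has at least k syllables."""
    return next(n for n in range(1, 1000) if syllables(n) >= k)

print(least_needing(2))   # 7  -> "seven"
print(least_needing(3))   # 11 -> "eleven"
print(syllables(121))     # 6  -> "one hundred twenty-one"
```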

  Another paradox of Russell’s is the Barber paradox. The barber is the man (let us say) who shaves all the men, and only those, who do not shave themselves. Does the barber shave himself? If he does he does not, and if he does not he does. Few people are troubled by such puzzles, because in real life the barber does as he likes and the world goes on. We tend to feel, as Russell put it, that “the whole form of words is just a noise without meaning.”♦ But the paradox cannot be dismissed so easily when a mathematician examines the subject known as set theory, or the theory of classes. Sets are groups of things—for example, integers. The set 0, 2, 4 has integers as its members. A set can also be a member of other sets. For example, the set 0, 2, 4 belongs to the set of sets of integers and the set of sets with three members but not the set of sets of prime numbers. So Russell defined a certain set this way:

  S is the set of all sets that are not members of themselves.

  This version is known as Russell’s paradox. It cannot be dismissed as noise.
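
  A toy model makes the trouble tangible. The sketch below is my own illustration in Python, not Russell’s formalism: it treats a “set” as nothing more than its membership test, a rule that answers yes or no for any candidate, so a set of sets is a test applied to other tests, and Russell’s set can be asked about itself.

```python
# A toy model: a "set" here is just its membership test, a function that
# answers "is x a member?" A set of sets is a test applied to other tests.

def sets_of_integers(s):
    return isinstance(s, frozenset) and all(isinstance(x, int) for x in s)

def sets_with_three_members(s):
    return isinstance(s, frozenset) and len(s) == 3

def sets_of_primes(s):
    return isinstance(s, frozenset) and all(
        p > 1 and all(p % d for d in range(2, p)) for p in s)

s = frozenset({0, 2, 4})
print(sets_of_integers(s))          # True:  member of the set of sets of integers
print(sets_with_three_members(s))   # True:  member of the set of three-member sets
print(sets_of_primes(s))            # False: not in the set of sets of primes

def russell(s):
    """Membership test for 'the set of all sets that are not members of themselves'."""
    return not s(s)

# Is Russell's set a member of itself? The question never settles: each
# answer demands the opposite, and Python gives up with a RecursionError --
# "if he does he does not, and if he does not he does."
try:
    russell(russell)
except RecursionError:
    print("no consistent answer")
```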

  To eliminate Russell’s paradox Russell took drastic measures. The enabling factor seemed to be the peculiar recursion within the offending statement: the idea of sets belonging to sets. Recursion was the oxygen feeding the flame. In the same way, the liar paradox relies on statements about statements. “This statement is false” is meta-language: language about language. Russell’s paradoxical set relies on a meta-set: a set of sets. So the problem was a crossing of levels, or, as Russell termed it, a mixing of types. His solution: declare it illegal, taboo, out of bounds. No mixing different levels of abstraction. No self-reference; no self-containment. The rules of symbolism in Principia Mathematica would not allow the reaching-back-around, snake-eating-its-tail feedback loop that seemed to turn on the possibility of self-contradiction. This was his firewall.

  Enter Kurt Gödel.

  He was born in 1906 in Brno, at the center of the Czech province of Moravia. He studied physics at the University of Vienna, seventy-five miles south, and as a twenty-year-old became part of the Vienna Circle, a group of philosophers and mathematicians who met regularly in smoky coffeehouses like the Café Josephinum and the Café Reichsrat to propound logic and realism as a bulwark against metaphysics—by which they meant spiritualism, phenomenology, irrationality. Gödel talked to them about the New Logic (this term was in the air) and before long about metamathematics—der Metamathematik. Metamathematics was not to mathematics what metaphysics was to physics. It was mathematics once removed—mathematics about mathematics—a formal system “looked at from the outside” (“äußerlich betrachtet”).♦ He was about to make the most important statement, prove the most important theorem about knowledge in the twentieth century. He was going to kill Russell’s dream of a perfect logical system. He was going to show that the paradoxes were not excrescences; they were fundamental.

  Gödel praised the Russell and Whitehead project before he buried it: mathematical logic was, he wrote, “a science prior to all others, which contains the ideas and principles underlying all sciences.”♦ Principia Mathematica, the great opus, embodied a formal system that had become, in its brief lifetime, so comprehensive and so dominant that Gödel referred to it in shorthand: PM. By PM he meant the system, as opposed to the book. In PM, mathematics had been contained—a ship in a bottle, no longer buffeted and turned by the vast unruly seas. By 1930, when mathematicians proved something, they did it according to PM. In PM, as Gödel said, “one can prove any theorem using nothing but a few mechanical rules.”♦

  Any theorem: for the system was, or claimed to be, complete. Mechanical rules: for the logic operated inexorably, with no room for varying human interpretation. Its symbols were drained of meaning. Anyone could verify a proof step by step, by following the rules, without understanding it. Calling this quality mechanical invoked the dreams of Charles Babbage and Ada Lovelace, machines grinding through numbers, and numbers standing for anything at all.

  Amid the doomed culture of 1930 Vienna, listening to his new friends debate the New Logic, his manner reticent, his eyes magnified by black-framed round spectacles, the twenty-four-year-old Gödel believed in the perfection of the bottle that was PM but doubted whether mathematics could truly be contained. This slight young man turned his doubt into a great and horrifying discovery. He found that lurking within PM—and within any consistent system of logic—there must be monsters of a kind hitherto unconceived: statements that can never be proved, and yet can never be disproved. There must be truths, that is, that cannot be proved—and Gödel could prove it.

  He accomplished this with iron rigor disguised as sleight of hand. He employed the formal rules of PM and, as he employed them, also approached them metamathematically—viewed them, that is, from the outside. As he explained, all the symbols of PM—numbers, operations of arithmetic, logical connectors, and punctuation—constituted a limited alphabet. Every statement or formula of PM was written in this alphabet. Likewise every proof comprised a finite sequence of formulas—just a longer passage written in the same alphabet. This is where metamathematics came in. Metamathematically, Gödel pointed out, one sign is as good as another; the choice of a particular alphabet is arbitrary. One could use the traditional assortment of numerals and glyphs (from arithmetic: +, −, =, ×; from logic: ¬, ∨, ⊃, ∃), or one could use letters, or one could use dots and dashes. It was a matter of encoding, slipping from one symbol set to another.

  Gödel proposed to use numbers for all his signs. Numbers were his alphabet. And because numbers can be combined using arithmetic, any sequence of numbers amounts to one (possibly very large) number. So every statement, every formula of PM can be expressed as a single number, and so can every proof. Gödel outlined a rigorous scheme for doing the encoding—an algorithm, mechanical, just rules to follow, no intelligence necessary. It works forward and backward: given any formula, following the rules generates one number, and given any number, following the rules produces the corresponding formula.
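
  One standard way to realize such a scheme is a prime-power encoding in the spirit of Gödel’s. The sketch below is my own illustration in Python; the symbol table is invented, not Gödel’s original assignment, and the point is only how mechanical the round trip is.

```python
# A prime-power encoding in the spirit of Godel numbering. The symbol codes
# below are illustrative; translating back and forth is purely mechanical.

SYMBOLS = {"0": 1, "=": 2, "+": 3, "x": 4, "(": 5, ")": 6}
DECODE = {code: ch for ch, code in SYMBOLS.items()}

def primes():
    """Yield the primes 2, 3, 5, 7, ... one after another."""
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(formula):
    """One number for a whole formula: 2^c1 * 3^c2 * 5^c3 * ..."""
    number = 1
    for p, ch in zip(primes(), formula):
        number *= p ** SYMBOLS[ch]
    return number

def decode(number):
    """Recover the formula by factoring the number back into prime powers."""
    chars = []
    for p in primes():
        if number == 1:
            break
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        chars.append(DECODE[exponent])
    return "".join(chars)

n = encode("0+x=x+0")
print(n)           # one (large) number standing in for the whole formula
print(decode(n))   # "0+x=x+0" -- the rules run backward as well as forward
```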

  Not every number translates into a correct formula, however. Some numbers decode back into gibberish, or formulas that are false within the rules of the system. The string of symbols “0 0 0 = = =” does not make a formula at all, though it translates to some number. The statement “0 = 1” is a recognizable formula, but it is false. The formula “0 + x = x + 0” is true, and it is provable.

 