The Information by James Gleick


  But information is physical. Maxwell’s demon makes the link. The demon performs a conversion between information and energy, one particle at a time. Szilárd—who did not yet use the word information—found that, if he accounted exactly for each measurement and memory, then the conversion could be computed exactly. So he computed it. He calculated that each unit of information brings a corresponding increase in entropy—specifically, by k log 2 units. Every time the demon makes a choice between one particle and another, it costs one bit of information. The payback comes at the end of the cycle, when it has to clear its memory (Szilárd did not specify this last detail in words, but in mathematics). Accounting for this properly is the only way to eliminate the paradox of perpetual motion, to bring the universe back into harmony, to “restore concordance with the Second Law.”
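Szilárd's k log 2 can be put into numbers. The short computation below is a sketch, not anything Szilárd wrote: it uses the natural logarithm (so "k log 2" becomes k ln 2) and assumes a room temperature of 300 kelvin to convert the entropy increase into a minimum heat cost per bit.

```python
import math

# Boltzmann's constant, in joules per kelvin
k = 1.380649e-23

# Szilard's result: clearing one bit of the demon's memory raises
# entropy by k * ln(2) -- the "k log 2 units" of the text.
delta_S = k * math.log(2)

# At an assumed room temperature of 300 K, that entropy increase
# corresponds to a minimum heat dissipation of T * delta_S per bit.
T = 300.0
heat_per_bit = T * delta_S

print(f"entropy per bit:          {delta_S:.3e} J/K")
print(f"heat per bit at {T:.0f} K: {heat_per_bit:.3e} J")
```

The answer comes out to a few zeptojoules per bit — a vanishingly small price, but not zero, which is all the second law requires.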

  Szilárd had thus closed a loop leading to Shannon’s conception of entropy as information. For his part, Shannon did not read German and did not follow Zeitschrift für Physik. “I think actually Szilárd was thinking of this,” he said much later, “and he talked to von Neumann about it, and von Neumann may have talked to Wiener about it. But none of these people actually talked to me about it.”♦ Shannon reinvented the mathematics of entropy nonetheless.

  To the physicist, entropy is a measure of uncertainty about the state of a physical system: one state among all the possible states it can be in. These microstates may not be equally likely, so the physicist writes S = −Σ pᵢ log pᵢ.

  To the information theorist, entropy is a measure of uncertainty about a message: one message among all the possible messages that a communications source can produce. The possible messages may not be equally likely, so Shannon wrote H = −Σ pᵢ log pᵢ.
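The sum is easy to evaluate. A minimal sketch in Python — the four-message ensemble here is an invented example, not from the text — computes H = −Σ pᵢ log₂ pᵢ in bits:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)) over the possible messages, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four possible messages, not equally likely (a hypothetical source).
probs = [0.5, 0.25, 0.125, 0.125]
H = shannon_entropy(probs)
print(f"H = {H} bits")  # 1.75 bits, less than the log2(4) = 2 of the equiprobable case
```

Unequal probabilities always pull H below the equiprobable maximum: the more predictable the source, the less information each message carries.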

  It is not just a coincidence of formalism: nature providing similar answers to similar problems. It is all one problem. To reduce entropy in a box of gas, to perform useful work, one pays a price in information. Likewise, a particular message reduces the entropy in the ensemble of possible messages—in terms of dynamical systems, a phase space.

  That was how Shannon saw it. Wiener’s version was slightly different. It was fitting—for a word that began by meaning the opposite of itself—that these colleagues and rivals placed opposite signs on their formulations of entropy. Where Shannon identified information with entropy, Wiener said it was negative entropy. Wiener was saying that information meant order, but an orderly thing does not necessarily embody much information. Shannon himself pointed out their difference and minimized it, calling it a sort of “mathematical pun.” They get the same numerical answers, he noted:

  I consider how much information is produced when a choice is made from a set—the larger the set the more information. You consider the larger uncertainty in the case of a larger set to mean less knowledge of the situation and hence less information.♦

  Put another way, H is a measure of surprise. Put yet another way, H is the average number of yes-no questions needed to guess the unknown message. Shannon had it right—at least, his approach proved fertile for mathematicians and physicists a generation later—but the confusion lingered for some years. Order and disorder still needed some sorting.
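The yes-no-questions reading can be sketched directly. The halving strategy below is an illustration, not anything from the text: for n equally likely messages, H = log₂ n, and a questioner who splits the remaining candidates in half each time needs exactly that many answers whenever n is a power of two.

```python
import math

def questions_needed(n):
    """Yes-no questions to pin down one of n equally likely messages,
    halving the set of remaining candidates with each answer."""
    count = 0
    while n > 1:
        n = math.ceil(n / 2)
        count += 1
    return count

# For equally likely messages, the question count matches H = log2(n).
for n in (2, 8, 64):
    print(n, questions_needed(n), math.log2(n))
```

Eight equally likely messages take three questions; sixty-four take six. Each answer is one bit of surprise resolved.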

  We all behave like Maxwell’s demon. Organisms organize. In everyday experience lies the reason sober physicists across two centuries kept this cartoon fantasy alive. We sort the mail, build sand castles, solve jigsaw puzzles, separate wheat from chaff, rearrange chess pieces, collect stamps, alphabetize books, create symmetry, compose sonnets and sonatas, and put our rooms in order, and to do all this requires no great energy, as long as we can apply intelligence. We propagate structure (not just we humans but we who are alive). We disturb the tendency toward equilibrium. It would be absurd to attempt a thermodynamic accounting for such processes, but it is not absurd to say we are reducing entropy, piece by piece. Bit by bit. The original demon, discerning one molecule at a time, distinguishing fast from slow, and operating his little gateway, is sometimes described as “superintelligent,” but compared to a real organism it is an idiot savant. Not only do living things lessen the disorder in their environments; they are in themselves, their skeletons and their flesh, vesicles and membranes, shells and carapaces, leaves and blossoms, circulatory systems and metabolic pathways—miracles of pattern and structure. It sometimes seems as if curbing entropy is our quixotic purpose in this universe.

  In 1943 Erwin Schrödinger, the chain-smoking, bow-tied pioneer of quantum physics, asked to deliver the Statutory Public Lectures at Trinity College, Dublin, decided the time had come to answer one of the greatest of unanswerable questions: What is life? The equation bearing his name was the essential formulation of quantum mechanics. In looking beyond his field, as middle-aged Nobel laureates so often do, Schrödinger traded rigor for speculation and began by apologizing “that some of us should venture to embark on a synthesis of facts and theories, albeit with second-hand and incomplete knowledge of some of them—and at the risk of making fools of ourselves.”♦ Nonetheless, the little book he made from these lectures became influential. Without discovering or even stating anything new, it laid a foundation for a nascent science, as yet unnamed, combining genetics and biochemistry. “Schrödinger’s book became a kind of Uncle Tom’s Cabin of the revolution in biology that, when the dust had cleared, left molecular biology as its legacy,”♦ one of the discipline’s founders wrote later. Biologists had not read anything like it before, and physicists took it as a signal that the next great problems might lie in biology.

  Schrödinger began with what he called the enigma of biological stability. In notable contrast to a box of gas, with its vagaries of probability and fluctuation, and in seeming disregard of Schrödinger’s own wave mechanics, where uncertainty is the rule, the structures of a living creature exhibit remarkable permanence. They persist, both in the life of the organism and across generations, through heredity. This struck Schrödinger as requiring explanation.

  “When is a piece of matter said to be alive?”♦ he asked. He skipped past the usual suggestions—growth, feeding, reproduction—and answered as simply as possible: “When it goes on ‘doing something,’ moving, exchanging material with its environment, and so forth, for a much longer period than we would expect an inanimate piece of matter to ‘keep going’ under similar circumstances.” Ordinarily, a piece of matter comes to a standstill; a box of gas reaches a uniform temperature; a chemical system “fades away into a dead, inert lump of matter”—one way or another, the second law is obeyed and maximum entropy is reached. Living things manage to remain unstable. Norbert Wiener pursued this thought in Cybernetics: enzymes, he wrote, may be “metastable” Maxwell’s demons—meaning not quite stable, or precariously stable. “The stable state of an enzyme is to be deconditioned,” he noted, “and the stable state of a living organism is to be dead.”♦

  Schrödinger felt that evading the second law for a while, or seeming to, is exactly why a living creature “appears so enigmatic.” The organism’s ability to feign perpetual motion leads so many people to believe in a special, supernatural life force. He mocked this idea—vis viva or entelechy—and he also mocked the popular notion that organisms “feed upon energy.” Energy and matter were just two sides of a coin, and anyway one calorie is as good as another. No, he said: the organism feeds upon negative entropy.

  “To put it less paradoxically,” he added paradoxically, “the essential thing in metabolism is that the organism succeeds in freeing itself from all the entropy it cannot help producing while alive.”♦

  In other words, the organism sucks orderliness from its surroundings. Herbivores and carnivores dine on a smorgasbord of structure; they feed on organic compounds, matter in a well-ordered state, and return it “in a very much degraded form—not entirely degraded, however, for plants can make use of it.” Plants meanwhile draw not just energy but negative entropy from sunlight. In terms of energy, the accounting can be more or less rigorously performed. In terms of order, calculations are not so simple. The mathematical reckoning of order and chaos remains more ticklish, the relevant definitions being subject to feedback loops of their own.

  Much more remained to be learned, Schrödinger said, about how life stores and perpetuates the orderliness it draws from nature. Biologists with their microscopes had learned a great deal about cells. They could see gametes—sperm cells and egg cells. Inside them were the rodlike fibers called chromosomes, arranged in pairs, with consistent numbers from species to species, and known to be carriers of hereditary features. As Schrödinger put it now, they hold within them, somehow, the “pattern” of the organism: “It is these chromosomes, or probably only an axial skeleton fibre of what we actually see under the microscope as the chromosome, that contain in some kind of code-script the entire pattern of the individual’s future development.” He considered it amazing—mysterious, but surely crucial in some way as yet unknown—that every single cell of an organism “should be in possession of a complete (double) copy of the code-script.”♦ He compared this to an army in which every soldier knows every detail of the general’s plans.

  These details were the many discrete “properties” of an organism, though it remained far from clear what a property entailed. (“It seems neither adequate nor possible to dissect into discrete ‘properties’ the pattern of an organism which is essentially a unity, a ‘whole,’ ”♦ Schrödinger mused.) The color of an animal’s eyes, blue or brown, might be a property, but it is more useful to focus on the difference from one individual to another, and this difference was understood to be controlled by something conveyed in the chromosomes. He used the term gene: “the hypothetical material carrier of a definite hereditary feature.” No one could yet see these hypothetical genes, but surely the time was not far off. Microscopic observations made it possible to estimate their size: perhaps 100 or 150 atomic distances; perhaps one thousand atoms or fewer. Yet somehow these tiny entities must encapsulate the entire pattern of a living creature—a fly or a rhododendron, a mouse or a human. And we must understand this pattern as a four-dimensional object: the structure of the organism through the whole of its ontogenetic development, every stage from embryo to adult.

  In seeking a clue to the gene’s molecular structure, it seemed natural to look to the most organized forms of matter, crystals. Solids in crystalline form have a relative permanence; they can begin with a tiny germ and build up larger and larger structures; and quantum mechanics was beginning to give deep insight into the forces involved in their bonding. But Schrödinger felt something was missing. Crystals are too orderly—built up in “the comparatively dull way of repeating the same structure in three directions again and again.” Elaborate though they seem, crystalline solids contain just a few types of atoms. Life must depend on a higher level of complexity, structure without predictable repetition, he argued. He invented a term: aperiodic crystals. This was his hypothesis: We believe a gene—or perhaps the whole chromosome fiber—to be an aperiodic solid.♦ He could hardly emphasize enough the glory of this difference, between periodic and aperiodic:

  The difference in structure is of the same kind as that between an ordinary wallpaper in which the same pattern is repeated again and again in regular periodicity and a masterpiece of embroidery, say a Raphael tapestry, which shows no dull repetition, but an elaborate, coherent, meaningful design.♦

  Some of his most admiring readers, such as Léon Brillouin, the French physicist recently decamped to the United States, said that Schrödinger was too clever to be completely convincing, even as they demonstrated in their own work just how convinced they were. Brillouin was particularly taken with the comparison to crystals, with their elaborate but inanimate structures. Crystals have some capacity for self-repair, he noted; under stress, their atoms may shift to new positions for the sake of equilibrium. That may be understood in terms of thermodynamics and now quantum mechanics. How much more exalted, then, is self-repair in the organism: “The living organism heals its own wounds, cures its sicknesses, and may rebuild large portions of its structure when they have been destroyed by some accident. This is the most striking and unexpected behavior.”♦ He followed Schrödinger, too, in using entropy to connect the smallest and largest scales.

  The earth is not a closed system, and life feeds upon energy and negative entropy leaking into the earth system.… The cycle reads: first, creation of unstable equilibriums (fuels, food, waterfalls, etc.); then use of these reserves by all living creatures.

  Living creatures confound the usual computation of entropy. More generally, so does information. “Take an issue of The New York Times, the book on cybernetics, and an equal weight of scrap paper,” suggested Brillouin. “Do they have the same entropy?” If you are feeding the furnace, yes. But not if you are a reader. There is entropy in the arrangement of the ink spots.

  For that matter, physicists themselves go around transforming negative entropy into information, said Brillouin. From observations and measurements, the physicist derives scientific laws; with these laws, people create machines never seen in nature, with the most improbable structures. He wrote this in 1950, as he was leaving Harvard to join the IBM Corporation in Poughkeepsie.♦

  That was not the end for Maxwell’s demon—far from it. The problem could not truly be solved, the demon effectively banished without a deeper understanding of a realm far removed from thermodynamics: mechanical computing. Later, Peter Landsberg wrote its obituary this way: “Maxwell’s demon died at the age of 62 (when a paper by Leó Szilárd appeared), but it continues to haunt the castles of physics as a restless and lovable poltergeist.”♦

  10 | LIFE’S OWN CODE

  (The Organism Is Written in the Egg)

  What lies at the heart of every living thing is not a fire, not warm breath, not a “spark of life.” It is information, words, instructions. If you want a metaphor, don’t think of fires and sparks and breath. Think, instead, of a billion discrete, digital characters carved in tablets of crystal.

  —Richard Dawkins (1986)♦

  SCIENTISTS LOVE THEIR FUNDAMENTAL PARTICLES. If traits are handed down from one generation to the next, these traits must take some primal form or have some carrier. Hence the putative particle of protoplasm. “The biologist must be allowed as much scientific use of the imagination as the physicist,” The Popular Science Monthly explained in 1875. “If the one must have his atoms and molecules, the other must have his physiological units, his plastic molecules, his ‘plasticules.’ ”♦

  Plasticule did not catch on, and almost everyone had the wrong idea about heredity anyway. So in 1910 a Danish botanist, Wilhelm Johannsen, self-consciously invented the word gene. He was at pains to correct the common mythology and thought a word might help. The myth was this: that “personal qualities” are transmitted from parent to progeny. This is “the most naïve and oldest conception of heredity,”♦ Johannsen said in a speech to the American Society of Naturalists. It was understandable. If father and daughter are fat, people might be tempted to think that his fatness caused hers, or that he passed it on to her. But that is wrong. As Johannsen declared, “The personal qualities of any individual organism do not at all cause the qualities of its offspring; but the qualities of both ancestor and descendent are in quite the same manner determined by the nature of the ‘sexual substances’—i.e., the gametes—from which they have developed.” What is inherited is more abstract, more in the nature of potentiality.

  To banish the fallacious thinking, he proposed a new terminology, beginning with gene: “nothing but a very applicable little word, easily combined with others.”♦ It hardly mattered that neither he nor anyone else knew what a gene actually was; “it may be useful as an expression for the ‘unit-factors,’ ‘elements,’ or ‘allelomorphs.’… As to the nature of the ‘genes’ it is as yet of no value to propose a hypothesis.” Gregor Mendel’s years of research with green and yellow peas showed that such a thing must exist. Colors and other traits vary depending on many factors, such as temperature and soil content, but something is preserved whole; it does not blend or diffuse; it must be quantized.♦ Mendel had discovered the gene, though he did not name it. For him it was more an algebraic convenience than a physical entity.

  When Schrödinger contemplated the gene, he faced a problem. How could such a “tiny speck of material” contain the entire complex code-script that determines the elaborate development of the organism? To resolve the difficulty Schrödinger summoned an example not from wave mechanics or theoretical physics but from telegraphy: Morse code. He noted that two signs, dot and dash, could be combined in well-ordered groups to generate all human language. Genes, too, he suggested, must employ a code: “The miniature code should precisely correspond with a highly complicated and specified plan of development and should somehow contain the means to put it into action.”♦

  Codes, instructions, signals—all this language, redolent of machinery and engineering, pressed in on biologists like Norman French invading medieval English. In the 1940s the jargon had a precious, artificial feeling, but that soon passed. The new molecular biology began to examine information storage and information transfer. Biologists could count in terms of “bits.” Some of the physicists now turning to biology saw information as exactly the concept needed to discuss and measure biological qualities for which tools had not been available: complexity and order, organization and specificity.♦ Henry Quastler, an early radiologist from Vienna, then at the University of Illinois, was applying information theory to both biology and psychology; he estimated that an amino acid has the information content of a written word and a protein molecule the information content of a paragraph. His colleague Sidney Dancoff suggested to him in 1950 that a chromosomal thread is “a linear coded tape of information”♦:

 