The Disappearing Spoon Part 7

Now, if you translate quantum mechanics into English (always risky), the equation says that the uncertainty in something's position (x) times the uncertainty in its speed and direction (its momentum, p) always exceeds or is equal to the number "h divided by four times pi." (The h stands for Planck's constant, which is such a small number, about 100 trillion trillion times smaller than one, that the uncertainty principle applies only to tiny, tiny things such as electrons or photons.) In other words, if you know a particle's position very well, you cannot know its momentum well at all, and vice versa.
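
In symbols, the relation the paragraph describes can be written like this (a standard textbook rendering, with the deltas standing for the two uncertainties):

```latex
\Delta x \, \Delta p \;\geq\; \frac{h}{4\pi},
\qquad h \approx 6.626 \times 10^{-34}\ \mathrm{J \cdot s}
```

Squeeze the position uncertainty toward zero and the momentum uncertainty must balloon to keep the product above h/4π, which is the whole argument against the laser in miniature.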

Note that these uncertainties aren't uncertainties about measuring things, as if you had a bad ruler; they're uncertainties built into nature itself. Remember how light has a dual nature, part wave, part particle? In dismissing the laser, Bohr and von Neumann got stuck on the ways light acts like particles, or photons. To their ears, lasers sounded so precise and focused that the uncertainty in the photons' positions would be nil. That meant the uncertainty in the momentum had to be large, which meant the photons could be flying off at any energy or in any direction, which seemed to contradict the idea of a tightly focused beam.

They forgot that light behaves like waves, too, and that the rules for waves are different. For one, how can you tell where a wave is? By its nature, it spreads out-a built-in source of uncertainty. And unlike particles, waves can swallow and combine with other waves. Two rocks thrown into a pond will kick up the highest crests in the area between them, which receives energy from smaller waves on both sides.

In the laser's case, there aren't two but trillions of trillions of "rocks" (i.e., electrons) kicking up waves of light, which all mix together. The key point is that the uncertainty principle doesn't apply to sets of particles, only to individual particles. Within a beam, a set of light particles, it's impossible to say where any one photon is located. And with such a high uncertainty about each photon's position inside the beam, you can channel its energy and direction very, very precisely and make a laser. This loophole is difficult to exploit but is enormously powerful once you've got your fingers inside it-which is why Time magazine honored Townes by naming him one of its "Men of the Year" (along with Pauling and Segre) in 1960, and why Townes won a Nobel Prize in 1964 for his maser work.

In fact, scientists soon realized that much more fit inside the loophole than photons. Just as light beams have a dual particle/wave nature, the farther you burrow down and parse electrons and protons and other supposed hard particles, the fuzzier they seem. Matter, at its deepest, most enigmatic quantum level, is indeterminate and wavelike. And because, deep down, the uncertainty principle is a mathematical statement about the limits of drawing boundaries around waves, those particles fall under the aegis of uncertainty, too.

Now again, this works only on minute scales, scales where h, Planck's constant, a number 100 trillion trillion times smaller than one, isn't considered small. What embarrasses physicists is when people extrapolate up and out to human beings and claim that it really "proves" you cannot observe something in the everyday world without changing it-or, for the heuristically daring, that objectivity itself is a scam and that scientists fool themselves into thinking they "know" anything. In truth, there's only about one case where uncertainty on a nanoscale affects anything on our macroscale: that outlandish state of matter-Bose-Einstein condensate (BEC)-promised earlier in this chapter.

This story starts in the early 1920s when Satyendra Nath Bose, a chubby, bespectacled Indian physicist, made an error while working through some quantum mechanics equations during a lecture. It was a sloppy, undergraduate boner, but it intrigued Bose. Unaware of his mistake at first, he'd worked everything out, only to find that the "wrong" answers produced by his mistake agreed very well with experiments on the properties of photons-much better than the "correct" theory.*

So as physicists have done throughout history, Bose decided to pretend that his error was the truth, admit that he didn't know why, and write a paper. His seeming mistake, plus his obscurity as an Indian, led every established scientific journal in Europe to reject it. Undaunted, Bose sent his paper directly to Albert Einstein. Einstein studied it closely and determined that Bose's answer was clever-it basically said that certain particles, like photons, could collapse on top of each other until they were indistinguishable. Einstein cleaned the paper up a little, translated it into German, and then expanded Bose's work into another, separate paper that covered not just photons but whole atoms. Using his celebrity pull, Einstein had both papers published jointly.

In them, Einstein included a few lines pointing out that if atoms got cold enough-billions of times colder than even superconductors-they would condense into a new state of matter. However, the ability to produce atoms that cold so outpaced the technology of the day that not even far-thinking Einstein could comprehend the possibility. He considered his condensate a frivolous curiosity. Amazingly, scientists got a glimpse of Bose-Einstein matter a decade later, in a type of superfluid helium where small pockets of atoms bound themselves together. The Cooper pairs of electrons in superconductors also behave like the BEC in a way. But this binding together in superfluids and superconductors was limited, and not at all like the state Einstein envisioned-his was a cold, sparse mist. Regardless, the helium and BCS people never pursued Einstein's conjecture, and nothing more happened with the BEC until 1995, when two clever scientists at the University of Colorado conjured some up with a gas of rubidium atoms.

Fittingly, one technical achievement that made real BEC possible was the laser-which was based on ideas first espoused by Bose about photons. That may seem backward, since lasers usually heat things up. But lasers can cool atoms, too, if wielded properly. On a fundamental, nanoscopic level, temperature just measures the average speed of particles. Hot molecules are furious little clashing fists, and cold molecules drag along. So the key to cooling something down is slowing its particles down. In laser cooling, scientists cross a few beams, Ghostbusters-like, and create a trap of "optical molasses." When the rubidium atoms in the gas hurtled through the molasses, the lasers pinged them with low-intensity photons. The rubidium atoms were bigger and more powerful, so this was like shooting a machine gun at a screaming asteroid. Size disparities notwithstanding, shooting an asteroid with enough bullets will eventually halt it, and that's exactly what happened to the rubidium atoms. After absorbing photons from all sides, they slowed, and slowed, and slowed some more, and their temperature dropped to just 1/10,000 of a degree above absolute zero.
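
To put rough numbers on the machine-gun-and-asteroid analogy, here is a minimal back-of-the-envelope sketch in Python. The constants are textbook values; the 780-nanometer line is rubidium's standard laser-cooling wavelength, and the room-temperature starting point is an illustrative assumption:

```python
import math

h = 6.626e-34         # Planck's constant, J*s
k_B = 1.381e-23       # Boltzmann's constant, J/K
m_Rb = 87 * 1.66e-27  # mass of one rubidium-87 atom, kg
wavelength = 780e-9   # rubidium's laser-cooling line, m

# Each absorbed photon nudges the atom by one "recoil velocity."
v_recoil = h / (wavelength * m_Rb)           # roughly 6 mm/s per photon

# Typical thermal speed of a rubidium atom at room temperature.
v_thermal = math.sqrt(3 * k_B * 300 / m_Rb)  # roughly 300 m/s

print(f"kick per photon:  {v_recoil * 1000:.1f} mm/s")
print(f"thermal speed:    {v_thermal:.0f} m/s")
print(f"photons to stop:  ~{v_thermal / v_recoil:,.0f}")  # tens of thousands
```

Tens of thousands of bullets per asteroid, in other words, which a laser can deliver in a fraction of a second.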

Still, even that temperature is far too sweltering for the BEC (you can grasp now why Einstein was so pessimistic). So the Colorado duo, Eric Cornell and Carl Wieman, incorporated a second phase of cooling in which a magnet repeatedly sucked off the "hottest" remaining atoms in the rubidium gas. This is basically a sophisticated way of blowing on a spoonful of soup-cooling something down by pushing away warmer atoms. With the energetic atoms gone, the overall temperature kept sinking. By doing this slowly and whisking away only the few hottest atoms each time, the scientists plunged the temperature to a billionth of a degree (0.000000001) above absolute zero. At this point, finally, the sample of two thousand rubidium atoms collapsed into the Bose-Einstein condensate, the coldest, gooeyest, and most fragile mass the universe has ever known.

But to say "two thousand rubidium atoms" obscures what's so special about the BEC. There weren't two thousand rubidium atoms as much as one giant marshmallow of a rubidium atom. It was a singularity, and the explanation for why relates back to the uncertainty principle. Again, temperature just measures the average speed of atoms. If the molecules' temperature dips below a billionth of a degree, that's not much speed at all-meaning the uncertainty about that speed is absurdly low. It's basically zero. And because of the wavelike nature of atoms on that level, the uncertainty about their position must be quite large.
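
To see how quickly the position uncertainty balloons, here is a minimal sketch using the textbook thermal de Broglie wavelength, lambda = h / sqrt(2 * pi * m * k * T), as a rough stand-in for how smeared out each atom's position is (the temperatures are illustrative):

```python
import math

h = 6.626e-34         # Planck's constant, J*s
k_B = 1.381e-23       # Boltzmann's constant, J/K
m_Rb = 87 * 1.66e-27  # rubidium-87 mass, kg

def thermal_wavelength(T):
    """Thermal de Broglie wavelength: how smeared out each atom is."""
    return h / math.sqrt(2 * math.pi * m_Rb * k_B * T)

for T in (300, 1e-4, 1e-9):  # room temp, laser-cooled, BEC regime
    print(f"{T:8.0e} K -> smeared over {thermal_wavelength(T):.1e} m")
```

At room temperature the smear is a hundredth of a nanometer; at a billionth of a degree it has grown to microns, larger than the spacing between atoms in the dilute cloud, and the individual atoms' waves overlap into one.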

So large that, as the two scientists relentlessly cooled the rubidium atoms and squeezed them together, the atoms began to swell, distend, overlap, and finally disappear into each other. This left behind one large ghostly "atom" that, in theory (if it weren't so fragile), might be capacious enough to see under a microscope. That's why we can say that in this case, unlike anywhere else, the uncertainty principle has swooped upward and affected something (almost) human-sized. It took less than $100,000 worth of equipment to create this new state of matter, and the BEC held together for only ten seconds before combusting. But it held on long enough to earn Cornell and Wieman the 2001 Nobel Prize.*

As technology keeps improving, scientists have gotten better and better at inducing matter to form the BEC. It's not like anyone's taking orders yet, but scientists might soon be able to build "matter lasers" that shoot out ultra-focused beams of atoms thousands of times more powerful than light lasers, or construct "supersolid" ice cubes that can flow through each other without losing their solidity. In our sci-fi future, such things could prove every bit as amazing as light lasers and superfluids have in our own pretty remarkable age.

Spheres of Splendor: The Science of Bubbles

Not every breakthrough in periodic-table science has to delve into exotic and intricate states of matter like the BEC. Everyday liquids, solids, and gases still yield secrets now and then, if fortune and the scientific muses collude in the right way. According to legend, as a matter of fact, one of the most important pieces of scientific equipment in history was invented not only over a glass of beer but by a glass of beer.

Donald Glaser-a lowly, thirsty, twenty-five-year-old junior faculty member who frequented bars near the University of Michigan-was staring one night at the bubbles streaming through his lager, and he naturally started thinking particle physics. At the time, 1952, scientists were using knowledge from the Manhattan Project and nuclear science to conjure up exotic and fragile species of particles such as kaons, muons, and pions, ghostly brothers of familiar protons, neutrons, and electrons. Particle physicists suspected, even hoped, that those particles would overthrow the periodic table as the fundamental map of matter, since they'd be able to peer even deeper into subatomic caves.

But to progress further, they needed a better way to "see" those infinitesimal particles and track how they behaved. Over his beer, Glaser-who had short, wavy hair, glasses, and a high forehead-decided bubbles were the answer. Bubbles in liquids form around imperfections or incongruities. Microscopic scratches in a champagne glass are one place they form; dissolved pockets of carbon dioxide in beer are another. As a physicist, Glaser knew that bubbles are especially prone to form as liquids heat up and approach their boiling point (think of a pan of water on the stove). In fact, if you hold a liquid just below its boiling point, it will burst into bubbles if anything agitates it.

This was a good start but still basic physics. What made Glaser stand out were the next mental steps he took. Those rare kaons, muons, and pions appear only when an atom's nucleus, its dense core, is splintered. In 1952, a device called a cloud chamber existed, in which a "gun" shot ultra-fast atomic torpedoes at cold gas atoms. Muons and kaons and so on sometimes appeared in the chamber after direct strikes, and the gas condensed into liquid drops along the particles' track. But substituting a liquid for the gas made more sense, Glaser thought. Liquids are thousands of times denser than gases, so aiming the atomic gun at, say, liquid hydrogen would cause far more collisions. Plus, if liquid hydrogen was held a shade below its boiling point, even a little kick of energy from a ghostly particle would lather up the hydrogen like Glaser's beer. Glaser also suspected he could photograph the bubble trails and then measure how different particles left different trails or spirals, depending on their size and charge.... By the time he swallowed the final bubble in his own glass, the story goes, Glaser had the whole thing worked out.

It's a story of serendipity that scientists have long wanted to believe. But sadly, like most legends, it's not entirely accurate. Glaser did invent the bubble chamber, but through careful experimentation in a lab, not on a pub napkin. Happily, though, the truth is even stranger than the legend. Glaser designed his bubble chamber to work as explained above, but with one modification.

Depending on their size and charge, different subatomic particles make different swirls and spirals as they blast through a bubble chamber. The tracks are actually finely spaced bubbles in a frigid bath of liquid hydrogen. (Courtesy of CERN)

For Lord knows what reason-perhaps lingering undergraduate fascination-this young man decided beer, not hydrogen, was the best liquid to shoot the atomic gun at. He really thought that beer would lead to an epochal breakthrough in subatomic science. You can almost imagine him smuggling Budweiser into the lab at night, perhaps splitting a six-pack between science and his stomach as he filled thimble-sized beakers with America's finest, heated them almost to boiling, and bombarded them to produce the most exotic particles then known to physics.

Unfortunately for science, Glaser later said, the beer experiments flopped. Nor did lab partners appreciate the stink of vaporized ale. Undaunted, Glaser refined his experiments, and his colleague Luis Alvarez-of dinosaur-killing-asteroid fame-eventually determined the most sensible liquid to use was in fact hydrogen. Liquid hydrogen boils at about -423°F, so even minute amounts of heat will make a froth. As the simplest element, hydrogen also avoided the messy complications that other elements (or beer) might cause when particles collided. Glaser's revamped "bubble chamber" provided so many insights so quickly that in 1960 he appeared among the fifteen "Men of the Year" in Time magazine with Linus Pauling, William Shockley, and Emilio Segre. He also won the Nobel Prize at the disgustingly young age of thirty-three. Having moved on to Berkeley by then, he borrowed Edwin McMillan and Segre's white vest for the ceremony.

Bubbles aren't usually counted as an essential scientific tool. Despite-or maybe because of-their ubiquity in nature and the ease of producing them, they were dismissed as toys for centuries. But when physics emerged as the dominant science in the 1900s, physicists suddenly found a lot of work for these toys in probing the most basic structures in the universe. Now that biology is ascendant, biologists use bubbles to study the development of cells, the most complex structures in the universe. Bubbles have proved to be wonderful natural laboratories for experiments in all fields, and the recent history of science can be read in parallel with the study of these "spheres of splendor."

One element that readily forms bubbles-as well as foam, a state where bubbles overlap and lose their spherical shape-is calcium. Cells are to tissues what bubbles are to foams, and the best example of a foam structure in the body (besides saliva) is spongy bone. We usually think of foams as no sturdier than shaving cream, but when certain air-infused substances dry out or cool down, they harden and stiffen, like durable versions of bath suds. NASA actually uses special foams to protect space shuttles on reentry, and calcium-enriched bones are similarly strong yet light. What's more, sculptors for millennia have carved tombstones and obelisks and false gods from pliable yet sturdy calcium rocks such as marble and limestone. These rocks form when tiny sea creatures die and their calcium-rich shells sink and pile up on the ocean floor. Like bones, shells have natural pores, but calcium's chemistry enhances their supple strength. Most natural water, such as rainwater, is slightly acidic, while calcium's minerals are slightly basic. When water leaks into calcium's pores, the two react like a mini grade-school volcano to release small amounts of carbon dioxide, which softens up the rock. On a large and geological scale, reactions between rainwater and calcium form the huge cavities we know as caves.

Beyond anatomy and art, calcium bubbles have shaped world economics and empires. The many calcium-rich coves along the southern coast of England aren't natural, but originated as limestone quarries around 55 BC, when the limestone-loving Romans arrived. Scouts sent out by Julius Caesar spotted an attractive, cream-colored limestone near modern-day Beer, England, and began chipping it out to adorn Roman facades. English limestone from Beer later was used in building Buckingham Palace, the Tower of London, and Westminster Abbey, and all that missing stone left gaping caverns in the seaside cliffs. By 1800, a few local boys who'd grown up sailing ships and playing tag in the labyrinths decided to marry their childhood pastimes by becoming smugglers, using the calcium coves to conceal the French brandy, fiddles, tobacco, and silk they ran over from Normandy in fast cutters.

The smugglers (or, as they styled themselves, free traders) thrived because of the hateful taxes the English government levied on French goods to spite Napoleon, and the scarcity of the taxed items created, inevitably, a demand bubble. Among many other things, the inability of His Majesty's expensive coast guard to crack down on smuggling convinced Parliament to liberalize trade laws in the 1840s-which brought about real free trade, and with it the economic prosperity that allowed Great Britain to expand its never-darkening empire.

Given all this history, you'd expect a long tradition of bubble science, but no. Notable minds like Benjamin Franklin (who discovered why oil calms frothy water) and Robert Boyle (who experimented on and even liked to taste the fresh, frothy urine in his chamber pot) did dabble in bubbles. And primitive physiologists sometimes did things such as bubbling gases into the blood of half-living, half-dissected dogs. But scientists mostly ignored bubbles themselves, their structure and form, and left the study of bubbles to fields that they scorned as intellectually inferior-what might be called "intuitive sciences." Intuitive sciences aren't pathological, merely fields such as horse breeding or gardening that investigate natural phenomena but that long relied more on hunches and almanacs than controlled experiments. The intuitive science that picked up bubbles research was cooking. Bakers and brewers had long used yeasts-primitive bubble-making machines-to leaven bread and carbonate beer. But eighteenth-century haute cuisine chefs in Europe learned to whip egg whites into vast, fluffy foams and began to experiment with the meringues, porous cheeses, whipped creams, and cappuccinos we love today.

Still, chefs and chemists tended to distrust one another, chemists seeing cooks as undisciplined and unscientific, cooks seeing chemists as sterile killjoys. Only around 1900 did bubble science coalesce into a respectable field, though the men responsible, Ernest Rutherford and Lord Kelvin, had only dim ideas of what their work would lead to. Rutherford, in fact, was mostly interested in plumbing what at the time were the murky depths of the periodic table.

Shortly after moving from New Zealand to Cambridge University in 1895, Rutherford devoted himself to radioactivity, the genetics or nanotechnology of the day. Natural vigorousness led Rutherford to experimental science, for he wasn't exactly a clean-fingernails guy. Having grown up hunting quail and digging potatoes on a family farm, he recalled feeling like "an ass in lion's skin" among the robed dons of Cambridge. He wore a walrus mustache, toted radioactive samples around in his pockets, and smoked foul cigars and pipes. He was given to blurting out both weird euphemisms-perhaps his devout Christian wife discouraged him from swearing-and also the bluest curses in the lab, because he couldn't stop himself from damning his equipment to hell when it didn't behave. Perhaps to make up for his cursing, he also sang, loudly and quite off-key, "Onward, Christian Soldiers" as he marched around his dim lab. Despite that ogre-like description, Rutherford's outstanding scientific trait was elegance. Nobody was better, possibly in the history of science, at coaxing nature's secrets out of physical apparatus. And there's no better example than the elegance he used to solve the mystery of how one element can transform into another.

After moving from Cambridge to Montreal, Rutherford grew interested in how radioactive substances contaminate the air around them with more radioactivity. To investigate this, Rutherford built on the work of Marie Curie, but the New Zealand hick proved cagier than his more celebrated female contemporary. According to Curie (among others), radioactive elements leaked a sort of gas of "pure radioactivity" that charged the air, just as lightbulbs flood the air with light. Rutherford suspected that "pure radioactivity" was actually an unknown gaseous element with its own radioactive properties. As a result, whereas Curie spent months boiling down thousands of pounds of black, bubbling pitchblende to get microscopic samples of radium and polonium, Rutherford sensed a shortcut and let nature work for him. He simply left active samples beneath an inverted beaker to catch escaping bubbles of gas, then came back to find all the radioactive material he needed. Rutherford and his collaborator, Frederick Soddy, quickly proved the radioactive bubbles were in fact a new element, radon. And because the sample beneath the beaker shrank in proportion as the radon sample grew in volume, they realized that one element actually mutated into another.

Not only did Rutherford and Soddy find a new element, they discovered novel rules for jumping around on the periodic table. Elements could suddenly move laterally as they decayed and skip across spaces. This was thrilling but blasphemous. Science had finally discredited and excommunicated the chemical magicians who'd claimed to turn lead into gold, and here Rutherford and Soddy were opening the gate back up. When Soddy finally let himself believe what was happening and burst out, "Rutherford, this is transmutation!" Rutherford had a fit.

"For Mike's sake, Soddy," he boomed. "Don't call it trans.m.u.tation. They'll have our heads off as alchemists!"

The radon sample soon midwifed even more startling science. Rutherford had arbitrarily named the little bits that flew off radioactive atoms alpha particles. (He also discovered beta particles.) Based on the weight differences between generations of decaying elements, Rutherford suspected that alphas were actually helium atoms breaking off and escaping like bubbles through a boiling liquid. If this was true, elements could do more than hop two spaces on the periodic table like pieces on a typical board game; if uranium emitted helium, elements were jumping from one side of the table to the other like a lucky (or disastrous) move in Snakes & Ladders.

To test this idea, Rutherford had his physics department's glassblowers blow two bulbs. One was soap-bubble thin, and he pumped radon into it. The other was thicker and wider, and it surrounded the first. The alpha particles had enough energy to tunnel through the first glass shell but not the second, so they became trapped in the vacuum cavity between them. After a few days, this wasn't much of an experiment, since the trapped alpha particles were colorless and didn't really do anything. But then Rutherford ran a battery current through the cavity. If you've ever traveled to Tokyo or New York, you know what happened. Like all noble gases, helium glows when excited by electricity, and Rutherford's mystery particles began glowing helium's characteristic green and yellow. Rutherford basically proved that alpha particles were escaped helium atoms with an early "neon" light. It was a perfect example of his elegance, and also his belief in dramatic science.

With typical flair, Rutherford announced the alpha-helium connection during his acceptance speech for the 1908 Nobel Prize. (In addition to winning the prize himself, Rutherford mentored and hand-trained eleven future prizewinners, the last in 1978, more than four decades after Rutherford died. It was perhaps the most impressive feat of progeny since Genghis Khan fathered hundreds of children seven centuries earlier.) His findings intoxicated the Nobel audience. Nevertheless, the most immediate and practical application of Rutherford's helium work probably escaped many in Stockholm. As a consummate experimentalist, however, Rutherford knew that truly great research didn't just support or disprove a given theory, but fathered more experiments. In particular, the alpha-helium experiment allowed him to pick the scab off the old theological-scientific debate about the true age of the earth.

The first semi-defensible guess for that age came in 1650, when Irish archbishop James Ussher worked backward from "data" such as the begats list in the Bible ("... and Serug lived thirty years, and begat Nahor... and Nahor lived nine and twenty years, and begat Terah," etc.) and calculated that God had finally gotten around to creating the earth on October 23, 4004 BC. Ussher did the best he could with the available evidence, but within decades that date was proved laughably late by most every scientific field. Physicists could even pin precise numbers on their guesses by using the equations of thermodynamics. Just as hot coffee cools down in a freezer, physicists knew that the earth constantly loses heat to space, which is cold. By measuring the rate of lost heat and extrapolating backward to when every rock on earth was molten, they could estimate the earth's date of origin. The premier scientist of the nineteenth century, William Thomson, known as Lord Kelvin, spent decades on this problem and in the late 1800s announced that the earth had been born twenty million years before.
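
Kelvin's style of argument can be sketched with the standard formula for a conductively cooling half-space. The parameter values below are modern illustrative stand-ins, not Kelvin's own figures; his published answers ranged from roughly twenty to one hundred million years as he revised his assumptions:

```python
import math

# Kelvin treated the earth as molten rock cooling by conduction alone.
# For a cooling half-space, the surface temperature gradient G decays as
#   G = T0 / sqrt(pi * kappa * t),  so  t = T0**2 / (pi * kappa * G**2).
T0 = 3900        # assumed initial molten-rock temperature, deg C
kappa = 1.2e-6   # thermal diffusivity of rock, m^2/s
G = 0.037        # near-surface geothermal gradient, deg C per meter

t_sec = T0**2 / (math.pi * kappa * G**2)
print(f"~{t_sec / 3.156e7 / 1e6:.0f} million years")  # on the order of 10^8
```

The logic is impeccable; the fatal flaw, as the next pages show, is the assumption that no new heat is being generated inside the earth.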

It was a triumph of human reasoning-and about as dead wrong as Ussher's guess. By 1900, Rutherford among others recognized that however far physics had outpaced other sciences in prestige and glamour (Rutherford himself was fond of saying, "In science, there is only physics; all the rest is stamp collecting"-words he later had to eat when he won a Nobel Prize in Chemistry), in this case the physics didn't feel right. Charles Darwin argued persuasively that humans could not have evolved from dumb bacteria in just twenty million years, and followers of Scottish geologist James Hutton argued that no mountains or canyons could have formed in so short a span. But no one could unravel Lord Kelvin's formidable calculations until Rutherford started poking around in uranium rocks for bubbles of helium.

Inside certain rocks, uranium atoms spit out alpha particles (which have two protons) and transmute into element ninety, thorium. Thorium then begets radium by spitting out another alpha particle. Radium begets radon with yet another, and radon begets polonium, and polonium begets stable lead. This was a well-known deterioration. But in a stroke of genius akin to Glaser's, Rutherford realized that those alpha particles, after being ejected, form small bubbles of helium inside rocks. The key insight was that helium never reacts with or is attracted to other elements. So unlike carbon dioxide in limestone, helium shouldn't normally be inside rocks. Any helium that is inside rocks was therefore fathered by radioactive decay. Lots of helium inside a rock means that it's old, while scant traces indicate it's a youngster.
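
A rough sketch of that arithmetic, assuming (as Rutherford effectively did) that no helium leaks out. The eight-alphas-per-atom figure is for the full uranium-238 decay chain, and the helium-to-uranium ratio below is invented for illustration:

```python
import math

# Each uranium-238 atom that decays all the way to stable lead releases
# eight alpha particles -- that is, eight helium atoms -- so trapped helium
# grows as He = 8 * U * (e**(lam * t) - 1), which inverts to give the age.
half_life = 4.468e9            # uranium-238 half-life, years
lam = math.log(2) / half_life  # decay constant, per year

def helium_age(he_per_u):
    """Age from the measured ratio of trapped helium to remaining uranium."""
    return math.log(1 + he_per_u / 8) / lam

print(f"He/U = 1.0 -> ~{helium_age(1.0) / 1e6:.0f} million years")
```

Because real rocks do leak helium, ages computed this way are minimums, which is why Rutherford's 500 million years and the later two billion both undershot.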

Rutherford had thought about this process for a few years by 1904, when he was thirty-three and Kelvin was eighty. By that age, despite all that Kelvin had contributed to science, his mind had fogged. Gone were the days when he could put forward exciting new theories, like the one that all the elements on the periodic table were, at their deepest levels, twisted "knots of ether" of different shapes. Most detrimentally to his science, Kelvin never could incorporate the unsettling, even frightening science of radioactivity into his worldview. (That's why Marie Curie once pulled him, too, into a closet to look at her glow-in-the-dark element-to instruct him.) In contrast, Rutherford realized that radioactivity in the earth's crust would generate extra heat, which would bollix the old man's theories about a simple heat loss into space.

Excited to present his ideas, Rutherford arranged a lecture in Cambridge. But however dotty Kelvin got, he was still a force in scientific politics, and demolishing the old man's proudest calculation could in turn jeopardize Rutherford's career. Rutherford began the speech warily, but luckily, just after he started, Kelvin nodded off in the front row. Rutherford raced to get to his conclusions, but just as he began knocking the knees out from under Kelvin's work, the old man sat up, refreshed and bright.

Trapped onstage, Rutherford suddenly remembered a throwaway line he'd read in Kelvin's work. It said, in typically couched scientific language, that Kelvin's calculations about the earth's age were correct unless someone discovered extra sources of heat inside the earth. Rutherford mentioned that qualification, pointed out that radioactivity might be that latent source, and with masterly spin ad-libbed that Kelvin had therefore predicted the discovery of radioactivity dozens of years earlier. What genius! The old man glanced around the audience, radiant. He thought that Rutherford was full of crap, but he wasn't about to disregard the compliment.

Rutherford lay low until Kelvin died, in 1907, then he soon proved the helium-uranium connection. And with no politics stopping him now-in fact, he became an eminent peer himself (and later ended up as scientific royalty, too, with a box on the periodic table, element 104, rutherfordium)-the eventual Lord Rutherford got some primordial uranium rock, eluted the helium from microscopic bubbles inside, and determined that the earth was at least 500 million years old-twenty-five times greater than Kelvin's guess and the first calculation correct to within a factor of ten. Within years, geologists with more experience finessing rocks took over for Rutherford and determined that the helium pockets proved the earth to be at least two billion years old. This number was still 50 percent too low, but thanks to the tiny, inert bubbles inside radioactive rocks, human beings at last began to face the astounding age of the cosmos.

After Rutherford, digging for small bubbles of elements inside rocks became standard work in geology. One especially fruitful approach uses zircon, a mineral that contains zirconium, the pawnshop heartbreaker and knockoff jewelry substitute.

For chemical reasons, zircons are hardy-zirconium sits below titanium on the periodic table and makes convincing fake diamonds for a reason. Unlike soft rocks such as limestone, many zircons have survived since the early years of the earth, often as hard, poppy-seed grains inside larger rocks. Due to their unique chemistry, when zircon crystals formed way back when, they vacuumed up stray uranium and packed it into atomic bubbles inside themselves. At the same time, zircons had a distaste for lead and squeezed that element out (the opposite of what meteors do). Of course, that didn't last long, since uranium decays into lead, but the zircons had trouble working the lead slivers out again. As a result, any lead inside lead-phobic zircons nowadays has to be a daughter product of uranium. The story should be familiar by now: after measuring the ratio of lead to uranium in zircons, it's just a matter of graphing backward to year zero. Anytime you hear scientists announcing a record for the "world's oldest rock"-probably in Australia or Greenland, where zircons have survived the longest-rest assured they used zircon-uranium bubbles to date it.
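
The "graphing backward" amounts to a single logarithm. A minimal sketch, considering only the uranium-238-to-lead-206 chain and assuming the zircon started lead-free; the measured ratio is invented for illustration:

```python
import math

# Uranium-238 decays toward stable lead with a 4.468-billion-year half-life.
# If every lead atom in the zircon came from uranium, then
#   Pb / U = e**(lam * t) - 1,  so  t = ln(1 + Pb/U) / lam.
half_life = 4.468e9
lam = math.log(2) / half_life

def zircon_age(pb_per_u):
    """Age in years from the measured lead-to-uranium atom ratio."""
    return math.log(1 + pb_per_u) / lam

print(f"Pb/U = 0.9 -> ~{zircon_age(0.9) / 1e9:.2f} billion years")
```

A ratio near one already puts a grain in the four-billion-year range, which is exactly the neighborhood of the record-holding zircons.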

Other fields adopted bubbles as a paradigm, too. Glaser began experimenting with his bubble chamber in the 1950s, and around that same time, theoretical physicists such as John Archibald Wheeler began speaking of the universe as foam on its fundamental level. On that scale, billions of trillions of times smaller than atoms, Wheeler dreamed that "the glassy smooth spacetime of the atomic and particle worlds gives way.... There would literally be no left and right, no before and after. Ordinary ideas of length would disappear. Ordinary ideas of time would evaporate. I can think of no better name than quantum foam for this state of affairs." Some cosmologists today calculate that our entire universe burst into existence when a single submicroscopic nanobubble slipped free from that foam and began expanding at an exponential rate. It's a handsome theory, actually, and explains a lot-except, unfortunately, why this might have happened.

Ironically, Wheeler's quantum foam traces its intellectual lineage to the ultimate physicist of the classical, everyday world, Lord Kelvin. Kelvin didn't invent froth science-that was a blind Belgian with the fitting name (considering how little influence his work had) of Joseph Plateau. But Kelvin did popularize the science by saying things like he could spend a lifetime scrutinizing a single soap bubble. That was actually disingenuous, since according to his lab notebooks, Kelvin formulated the outline of his bubble work one lazy morning in bed, and he produced just one short paper on it. Still, there are wonderful stories of this white-bearded Victorian splashing around in basins of water and glycerin, with what looked like a miniature box spring on a ladle, to make colonies of interlocking bubbles. And squarish bubbles at that, reminiscent of the Peanuts character Rerun, since the box spring's coils were shaped into rectangular prisms.

Plus, Kelvin's work gathered momentum and inspired real science in future generations. Biologist D'Arcy Wentworth Thompson applied Kelvin's theorems on bubble formation to cell development in his seminal 1917 book On Growth and Form, a book once called "the finest work of literature in all the annals of science that have been recorded in the English tongue." The modern field of cell biology began at this point. What's more, recent biochemical research hints that bubbles were the efficient cause of life itself. The first complex organic molecules may have formed not in the turbulent ocean, as is commonly thought, but in water bubbles trapped in Arctic-like sheets of ice. Water is quite heavy, and when water freezes, it crushes together dissolved "impurities," such as organic molecules, inside bubbles. The concentration and compression in those bubbles might have been high enough to fuse those molecules into self-replicating systems. Furthermore, recognizing a good trick, nature has plagiarized the bubble blueprint ever since. Regardless of where the first organic molecules formed, in ice or ocean, the first crude cells were certainly bubble-like structures that surrounded proteins or RNA or DNA and protected them from being washed away or eroded. Even today, four billion years later, cells still have a basic bubble design.

Kelvin's work also inspired military science. During World War I, another lord, Lord Rayleigh, took on the urgent wartime problem of why submarine propellers were so prone to disintegrate and decay, even when the rest of the hull remained intact. It turned out that bubbles produced by the churning propellers turned around and attacked the metal blades like sugar attacks teeth, and with similarly corrosive results. Submarine science led to another breakthrough in bubble research as well-though at the time this finding seemed unpromising, even dodgy. Thanks to the memory of German U-boats, studying sonar-sound waves moving in water-was as trendy in the 1930s as radioactivity had been before. At least two research teams discovered that if they rocked a tank with jet engine-level noise, the bubbles that appeared would sometimes collapse and wink at them with a flash of blue or green light. (Think of biting wintergreen Life Savers in a dark closet.) More interested in blowing up submarines, scientists didn't pursue so-called sonoluminescence, but for fifty years it hung on as a scientific parlor trick, passed down from generation to generation.

It might have remained just that if not for a colleague taunting Seth Putterman one day in the mid-1980s. Putterman worked at the University of California at Los Angeles in fluid dynamics, a fiendishly tricky field. In some sense, scientists know more about distant galaxies than about turbulent water gushing through sewer pipes. The colleague was teasing Putterman about this ignorance, when he mentioned that Putterman's ilk couldn't even explain how sound waves can transmute bubbles into light. Putterman thought that sounded like an urban legend. But after looking up the scant research that existed on sonoluminescence, he chucked his previous work to study blinking bubbles full-time.*

For Putterman's first, delightfully low-tech experiments, he set a beaker of water between two stereo speakers, which were cranked to dog-whistle frequencies. A heated toaster wire in the beaker kicked up bubbles, and sound waves trapped and levitated them in the water. Then came the fun part. Sound waves vary between barren, low-intensity troughs and crests of high intensity. The tiny, trapped bubbles responded to low pressure by swelling a thousand times, like a balloon filling a room. After the sound wave bottomed out, the high-pressure front tore in and crushed the bubble's volume by half a million times, at forces 100 billion times greater than gravity. Not surprisingly, it's that supernova crush that produces the eerie light. Most amazingly, despite being squished into a "singularity," a term rarely used outside the study of black holes, the bubble stays intact. After the pressure lifts, the bubble billows out again, unpopped, as if nothing had happened. It's then squished again and blinks again, with the process repeating thousands of times every second.

Putterman soon bought more sophisticated equipment than his original garage-band setup, and upon doing so, he had a run-in with the periodic table. To help determine what exactly caused the bubbles to sparkle, he began trying different gases. He found that although bubbles of plain air produced nice crackles of blue and green, pure nitrogen or oxygen, which together make up 99 percent of air, wouldn't luminesce, no matter what volume or shrillness he cranked the sound to. Perturbed, Putterman began pumping trace gases from air back into the bubbles until he found the elemental flint-argon.

That was odd, since argon is an inert gas. What's more, the only other gases Putterman (and a growing cadre of bubble scientists) could get to work were argon's heavier chemical cousins, krypton and especially xenon. In fact, when rocked with sonar, xenon and krypton flared up even brighter than argon, producing "stars in a jar" that sizzled at 35,000°F inside water-far hotter than the surface of the sun. Again, this was baffling. Xenon and krypton are often used in industry to smother fires or runaway reactions, and there was no reason to think those dull, inert gases could produce such intense bubbles.

Unless, that is, their inertness is a covert asset. Oxygen, carbon dioxide, and other atmospheric gases inside bubbles can use the incoming sonar energy to divide or react with one another. From the point of view of sonoluminescence, that's energy squandered. Some scientists, though, think that inert gases under high pressure cannot help but soak up sonar energy. And with no way to dissipate the energy, bubbles of xenon or krypton collapse and have no choice but to propagate and concentrate energy in the bubbles' cores. If that's the case, then the noble gases' nonreactivity is the key to sonoluminescence. Whatever the reason, the link to sonoluminescence will rewrite what it means to be an inert gas.

Unfortunately, tempted by harnessing that high energy, some scientists (including Putterman) have linked this fragile bubble science with desktop fusion, a cousin of that all-time favorite pathological science. (Because of the temperatures involved, it's not cold fusion.) There has long been a vague, free-association link between bubbles and fusion, partly because Boris Deryagin, an influential Soviet scientist who studied the stability of foams, believed strongly in cold fusion. (Once, in an inconceivable experiment, the antithesis of one of Rutherford's, Deryagin supposedly tried to induce cold fusion in water by firing a Kalashnikov rifle into it.) The dubious link between sonoluminescence and fusion (sonofusion) was made explicit in 2002 when the journal Science ran a radioactively controversial paper on sonoluminescence-driven nuclear power. Unusually, Science also ran an editorial admitting that many senior scientists thought the paper flawed if not fraudulent; even Putterman recommended that the journal reject this one. Science printed it anyway (perhaps so everyone would have to buy a copy to see what all the fuss was about). The paper's lead author was later hauled before the U.S. House of Representatives for faking data.

Thankfully, bubble science had a strong enough foundation* to survive that disgrace. Physicists interested in alternative energy now model superconductors with bubbles. Pathologists describe AIDS as a "foamy" virus, for the way infected cells swell before exploding. Entomologists know of insects that use bubbles like submersibles to breathe underwater, and ornithologists know that the metallic sheen of peacocks' plumage comes from light tickling bubbles in the feathers. Most important, in 2008, in food science, students at Appalachian State University finally determined what makes Diet Coke explode when you drop Mentos into it. Bubbles. The grainy surface of Mentos candy acts as a net to snag small dissolved bubbles, which are stitched into large ones. Eventually, a few gigantic bubbles break off, rocket upward, and whoosh through the nozzle, spurting up to twenty magnificent feet. This discovery was undoubtedly the greatest moment in bubble science since Donald Glaser eyed his lager more than fifty years before and dreamed of subverting the periodic table.

Tools of Ridiculous Precision

Think of the fussiest science teacher you ever had. The one who docked your grade if the sixth decimal place in your answer was rounded incorrectly; who tucked in his periodic table T-shirt, corrected every student who said "weight" when he or she meant "mass," and made everyone, including himself, wear goggles even while mixing sugar water. Now try to imagine someone whom your teacher would hate for being anal-retentive. That is the kind of person who works for a bureau of standards and measurement.

Most countries have a standards bureau, whose job it is to measure everything-from how long a second really is to how much mercury you can safely consume in bovine livers (very little, according to the U.S. National Institute of Standards and Technology, or NIST). To scientists who work at standards bureaus, measurement isn't just a practice that makes science possible; it's a science in itself. Progress in any number of fields, from post-Einsteinian cosmology to the astrobiological hunt for life on other planets, depends on our ability to make ever finer measurements based on ever smaller scraps of information.

For historical reasons (the French Enlightenment folk were fanatic measurers), the Bureau International des Poids et Mesures (BIPM) just outside Paris acts as the standards bureau's standards bureau, making sure all the "franchises" stay in line. One of the more peculiar jobs of the BIPM is coddling the International Prototype Kilogram-the world's official kilogram. It's a two-inch-wide, 90 percent platinum cylinder that, by definition, has a mass of exactly 1.000000... kilogram (to as many decimal places as you like). I'd say that's about two pounds, but I'd feel guilty about being inexact.

The two-inch-wide International Prototype Kilogram (center), made of platinum and iridium, spends all day every day beneath three nested bell jars inside a humidity- and temperature-controlled vault in Paris. Surrounding the Kilogram are six official copies, each under two bell jars. (Reproduced with permission of BIPM, which retains full international protected copyright)

Because the Kilogram is a physical object and therefore damageable, and because the definition of a kilogram ought to stay constant, the BIPM must make sure it never gets scratched, never attracts a speck of dust, never loses (the bureau hopes!) a single atom. For if any of that happened, its mass could spike to 1.000000...1 kilograms or plummet to 0.999999...9 kilograms, and the mere possibility induces ulcers in a national bureau of standards type. So, like phobic mothers, they constantly monitor the Kilogram's temperature and the pressure around it to prevent microscopic bloating and contracting, stress that could slough off atoms. It's also swaddled within three successively smaller bell jars to prevent humidity from condensing on the surface and leaving a nanoscale film. And the Kilogram is made from dense platinum (and iridium) to minimize the surface area exposed to unacceptably dirty air, the kind we breathe. Platinum also conducts electricity well, which cuts down on the buildup of "parasitic" static electricity (the BIPM's word) that might zap stray atoms.

Finally, platinum's toughness militates against the chance of a disastrous fingernail nick on the rare occasions when people actually lay a hand on the Kilogram. Other countries need their own official 1.000000... cylinder to avoid having to fly to Paris every time they want to measure something precisely, and since the Kilogram is the standard, each country's knockoff has to be compared against it. The United States has had its official kilogram, called K20 (i.e., the twentieth official copy), which resides in a government building in exurban Maryland, calibrated just once since 2000, and it's due for another calibration, says Zeina Jabbour, group leader for the NIST mass and force team. But calibration is a multimonth process, and security regulations since 2001 have made flying K20 to Paris an absolute hassle. "We have to hand-carry the kilograms through the flight," says Jabbour, "and it's hard to get through security and customs with a slug of metal, and tell people they cannot touch it." Even opening K20's customized suitcase in a "dusty airport" could compromise it, she says, "and if somebody insists on touching it, that's the end of the calibration."

Usually, the BIPM uses one of six official copies of the Kilogram (each kept under two bell jars) to calibrate the knockoffs. But the official copies have to be measured against their own standard, so every few years scientists remove the Kilogram from its vault (using tongs and wearing latex gloves, of course, so as not to leave fingerprints-but not the powdery kind of gloves, because that would leave a residue-oh, and not holding it for too long, because the person's body temperature could heat it up and ruin everything) and calibrate the calibrators.* Alarmingly, scientists noticed during calibrations in the 1990s that, even accounting for atoms that rub off when people touch it, in the past few decades the Kilogram had lost an additional mass equal to that of a fingerprint(!), half a microgram per year. No one knows why.

The failure-and it is that-to keep the Kilogram perfectly constant has renewed discussions about the ultimate dream of every scientist who obsesses over the cylinder: to make it obsolete. Science owes much of its progress since about 1600 to adopting, whenever possible, an objective, non-human-centered point of view about the universe. (This is called the Copernican principle, or less flatteringly the mediocrity principle.) The kilogram is one of seven "base units" of measurement that permeate all branches of science, and it's no longer acceptable for any of those units to be based on a human artifact, especially if it's mysteriously shrinking.

The goal with every unit, as England's bureau of national standards cheekily puts it, is for one scientist to e-mail its definition to a colleague on another continent and for the colleague to be able to reproduce something with exactly those dimensions, based only on the description in the e-mail. You can't e-mail the Kilogram, and no one has ever come up with a definition more reliable than that squat, shiny, pampered cylinder in Paris. (Or if they have, it's either too impossibly involved to be practical-such as counting trillions of trillions of atoms-or requires measurements too precise for even the best instruments today.) The inability to solve the kilogram conundrum, to either stop it from shrinking or superannuate it, has become an increasing source of international worry and embarrassment (at least for us anal types).

The pain is all the more acute because the kilogram is the last base unit bound to human strictures. A platinum rod in Paris defined 1.000000... meter through much of the twentieth century, until scientists redefined it with a krypton atom in 1960, fixing it at 1,650,763.73 wavelengths of red-orange light from a krypton-86 atom. This distance is virtually identical to the length of the old rod, but it made the rod obsolete, since that many wavelengths of krypton light would stretch the same distance in any vacuum anywhere. (That's an e-mailable definition.) Since then, measurement scientists (metrologists) have re-redefined a meter (about three feet) as the distance any light travels in a vacuum in 1/299,792,458 of a second.
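
To see just how e-mailable these definitions are, here is a minimal sketch in Python (illustrative only, using no numbers beyond those in the text) that recovers the krypton-86 wavelength implied by the 1960 standard and the meter implied by the current light-travel-time standard:

    # Both meter definitions, rebuilt from the numbers above.
    # (Illustrative only; real metrology involves far subtler corrections.)
    KRYPTON_WAVES_PER_METER = 1_650_763.73   # 1960 krypton-86 definition
    SPEED_OF_LIGHT = 299_792_458             # m/s, exact by definition

    # The 1960 standard implies a wavelength for krypton's red-orange line.
    wavelength_nm = 1 / KRYPTON_WAVES_PER_METER * 1e9
    print(f"krypton-86 line: {wavelength_nm:.2f} nm")   # ~605.78 nm

    # The current standard runs the other way: one meter is the distance
    # light covers in 1/299,792,458 of a second.
    meter = SPEED_OF_LIGHT * (1 / SPEED_OF_LIGHT)
    print(f"one meter: {meter:.0f} m")                  # 1, by construction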

Similarly, the official definition of one second used to be about 1/31,556,952 of one trip around the sun (the number of seconds in 365.2425 days). But a few pesky facts made that an inconvenient standard. The length of a year-not just a calendar year, but an astronomical year-varies with every trip because of the sloshing of ocean tides, which drag and slow earth's orbit. To correct for this, metrologists slip in a "leap second" about every third year, usually when no one's paying attention, at midnight on December 31. But leap seconds are an ugly, ad hoc solution. And rather than tie a supposedly universal unit of time to the transit of an unremarkable rock around a forgettable star, the U.S. standards bureau has developed cesium-based atomic clocks.
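
The arithmetic behind that old astronomical second is easy to check; a quick sketch (in Python, assuming the conventional 86,400-second day):

    # The old astronomical second: one average Gregorian year,
    # 365.2425 days, carved into seconds.
    SECONDS_PER_DAY = 24 * 60 * 60     # 86,400
    DAYS_PER_YEAR = 365.2425           # average Gregorian year

    seconds_per_year = DAYS_PER_YEAR * SECONDS_PER_DAY
    print(f"{seconds_per_year:,.0f}")  # 31,556,952 - so one second was
                                       # ~1/31,556,952 of a trip around the sun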

Atomic clocks run on the same leaping and crashing of excited electrons we've discussed before. But atomic clocks also exploit a subtler movement, the electrons' "fine structure." If the normal jump of an electron resembles a singer jumping an octave from G to G, fine structure resembles a jump from G to G-flat or G-sharp. Fine structure effects are most noticeable in magnetic fields, and they're caused by things you can safely ignore unless you find yourself in a dense, high-level physics course-such as the magnetic interactions between electrons and protons or corrections due to Einstein's relativity. The upshot is that after those fine adjustments,* each electron jumps either slightly lower (G-flat) or slightly higher (G-sharp) than expected.

The electron "decides" which jump to make based on its intrinsic spin, so one electron never hits the sharp and the flat on successive leaps. It hits one or the other every time. Inside atomic clocks, which look like tall, skinny pneumatic tubes, a magnet purges all the cesium atoms whose outer electrons jump to one level, call it G-flat. That leaves only atoms with G-sharp electrons, which are gathered into a chamber and excited by an intense microwave. This causes cesium electrons to pop (i.e., jump and crash) and emit photons of light. Each cycle of jumping up and down is elastic and always takes the same (extremely short) amount of time, so the atomic clock can measure time simply by counting photons. Really, whether you purge the G-flat or G-sharp doesn't matter, but you have to purge one of them because jumping to either level takes a different amount of time, and at the scales metrologists work with, such imprecision is unacceptable.

Cesium proved convenient as the mainspring for atomic clocks because it has one electron exposed in its outermost shell, with no nearby electrons to muffle it. Cesium's heavy, lumbering atoms are fat targets for the maser that strums them as well. Still, even in plodding cesium, the outer electron is a quick bugger. Instead of a few dozen or few thousand times per second, it performs 9,192,631,770 back-and-forths every one-Mississippi. Scientists picked that ungainly number instead of cutting themselves off at 9,192,631,769 or letting things drag on until 9,192,631,771 because it matched their best guess for a second back in 1955, when they built the first cesium clock. Regardless, 9,192,631,770 is now fixed. It became the first base-unit definition to achieve universal e-mailability, and it even helped liberate the meter from its platinum rod after 1960.
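
In other words, counting those back-and-forths is counting time. Here is a toy sketch of the conversion a cesium clock performs (Python; the function name and the sample cycle counts are hypothetical, since real clocks involve elaborate servo loops rather than a bare tally):

    # The SI second: 9,192,631,770 cycles of cesium-133's hyperfine
    # transition. A clock, in essence, counts cycles and divides.
    CYCLES_PER_SECOND = 9_192_631_770

    def elapsed_seconds(cycles_counted: int) -> float:
        # Hypothetical input: a raw cycle count from the detector.
        return cycles_counted / CYCLES_PER_SECOND

    print(elapsed_seconds(9_192_631_770))   # 1.0 second, by definition
    print(elapsed_seconds(27_577_895_310))  # 3.0 seconds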

Scientists adopted the cesium standard as the world's official measurement of time in the 1960s, replacing the astronomical second, and while the cesium standard has profited science by ensuring precision and accuracy worldwide, humanity has undeniably lost something. Since before even the ancient Egyptians and Babylonians, human beings used the stars and seasons to track time and record their most important moments. Cesium severed that link with the heavens, effaced it just as surely as urban streetlamps blot out constellations. However fine an element, cesium lacks the mythic feeling of the moon or sun. Besides, even the argument for switching to cesium-its universality, since cesium electrons should vibrate at the same frequency in every pocket of the universe-may no longer be a safe bet.

If anything runs deeper than a mathematician's love of variables, it's a scientist's love of constants. The charge of the electron, the strength of gravity, the speed of light-no matter the experiment, no matter the circumstances, those parameters never vary. If they did, scientists would have to chuck the precision that separates "hard" sciences from social sciences like economics, where whims and sheer human idiocy make universal laws impossible.

Even more seductive to scientists, because more abstract and universal, are fundamental constants. Obviously, the numerical value of a particle's size or speed would change if we arbitrarily decided that meters should be longer or if the kilogram suddenly shrank (ahem). Fundamental constants, however, don't depend on measurement. Like π, they're pure, fixed numbers, and also like π, they pop up in all sorts of contexts that seem tantalizingly explainable but that have so far resisted all explanation.

The best-known dimensionless constant is the fine structure constant, which is related to the fine splitting of electrons. In short, it controls how tightly negative electrons are bound to the positive nucleus. It also determines the strength of some nuclear processes. In fact, if the fine structure constant-which I'll refer to as alpha, because that's what scientists call it-if alpha had been slightly smaller right after the big bang, nuclear fusion in stars would never have gotten hot enough to fuse carbon. Conversely, if alpha had grown slightly larger, carbon atoms would all have disintegrated aeons ago, long before finding their way into us. That alpha avoided this atomic Scylla and Charybdis makes scientists thankful, naturally, but also very antsy, because they cannot explain how it succeeded. Even a good, inveterate atheist like physicist Richard Feynman once said of the fine structure constant, "All good theoretical physicists put this number up on their wall and worry about it.... It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the 'hand of God' wrote that number, and we don't know how He pushed His pencil."
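
For the curious, alpha can be computed from other measured constants, even if no one can explain why it takes the value it does. A minimal sketch (Python; the CODATA 2018 values are my assumption here, not something from the text):

    import math

    # CODATA 2018 values, SI units
    e        = 1.602176634e-19    # elementary charge, C (exact)
    epsilon0 = 8.8541878128e-12   # vacuum permittivity, F/m
    hbar     = 1.054571817e-34    # reduced Planck constant, J*s
    c        = 299_792_458        # speed of light, m/s (exact)

    # Fine structure constant: alpha = e^2 / (4*pi*epsilon0*hbar*c)
    alpha = e**2 / (4 * math.pi * epsilon0 * hbar * c)
    print(f"alpha   = {alpha:.9f}")    # ~0.007297353
    print(f"1/alpha = {1/alpha:.3f}")  # ~137.036 - Eddington's near miss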

Historically, that didn't stop people from trying to decipher this scientific mene, mene, tekel, upharsin. English astronomer Arthur Eddington, who during a solar eclipse in 1919 provided the first experimental proof of Einstein's relativity, grew fascinated with alpha. Eddington had a penchant, and it must be said a talent, for numerology,* and in the early 1900s, after alpha was measured to be around 1/136, Eddington began concocting "proofs" that alpha equaled exactly 1/136, partly because he found a mathematical link between 136 and 666. (One colleague derisively suggested rewriting the book of Revelation to take this "finding" into account.) Later measurements showed that alpha was closer to 1/137, but Eddington just tossed a 1 into his formula somewhere and continued on as if his sand castle hadn't crumbled (earning him the immortal nickname Sir Arthur Adding-One). A friend who later ran across Eddington in a cloakroom in Stockholm was chagrined to see that he insisted on hanging his hat on peg 137.
