What nuclear ‘waste’ is, and is not
And how it might be valuable in the future
Millennia of evolution taught H. sapiens an important lesson: he could get away with being a slob. Hunter-gatherers could drop garbage wherever they liked. Before having to take out the trash, they could move camp.
When humans started huddling together in settlements, the slovenly habits persisted. The ground level of ancient Troy rose an average of 4.7 feet per century. These layers of trash are a gift to archaeologists, who find them “more productive of information than any others,” to quote University of Arizona archaeologist Emil Haury. Sir Leonard Woolley established the chronological sequence of Carchemish on the Euphrates from its garbage dump. Professor Haury dated a mammoth meal found near Naco, Arizona from Clovis spear points left in the bones.
“For thousands of years,” Lewis Mumford writes in The City in History, “city dwellers put up with defective, often quite vile, sanitary arrangements, wallowing in rubbish and filth they certainly had the power to remove.” In ancient Rome, Juvenal joked about the hazards to pedestrians of pottery tossed out windows. There, owners of urban villas did not fight the rising tide of rubble, but instead periodically raised their doorsteps. Rome’s Lex Municipalis, among the first laws governing urban living, did require street-sweeping, but it was aimed at clearing traffic hazards, not sanitation. Organic waste removal was the job of domestic pigs or more feral fauna. A stray dog is reported to have brought a human hand into the imperial dining room while Vespasian was having dinner.
It took the vast piles of coal ash that began to accrue in the early industrial era for a lightbulb to go off in H. sapiens’ dim brain. London reformer Corbyn Morris finally suggested in 1751 that “all the filth be conveyed … to proper distance in the country.”
The contemporary climate emergency is a waste management problem. In fact, the same one. We can hardly fault Corbyn Morris for being unaware of the less visible products of coal combustion, which were, after all, being “conveyed,” if only into the atmosphere. It was not an obvious evil. When carbon dioxide gas was discovered by Flemish chemist Jan Baptist van Helmont in 1640, it was thought to be a natural element. Van Helmont even gave it an eco-friendly Latin name: spiritus sylvestris — “forest gas.”
By the early 1800s, an industry had grown up to convey ashes from coal fires used for cooking and heating out of London. Ashes were collected by colorful figures known as “dustmen” (to this day, the British word for a garbage collector). They were carted just outside town to tremendous dust heaps, the most famous of these today being the fictional one inherited by Nicodemus Boffin in Dickens’ Our Mutual Friend. “Dust-contractors,” such as Boffin’s benefactor, had sweet deals with the local parishes. They were, in the 1851 report of Henry Mayhew, “men of considerable wealth.”
“Dust sifting” was the work of the Victorian poor, including women and children. Sifted to “fine” grade, the dust was used, like sand, to reclaim marshland for agriculture. Coarser grades went into brick-making. Linen rags found in the dust could be sold on to make paper.
For a Christmas Faire idea, here is how Mayhew describes the Victorian recyclers in London Labour and the London Poor (1851):
In a dust-yard … the sifters formed a curious sight; they were almost up to their middle in dust, ranged in a semi-circle in front of that part of the heap which was being “worked;” each had before her a small mound of soil which had fallen through her sieve and formed a sort of embankment, behind which she stood. Their coarse dirty cotton gowns were tucked up behind them, their arms were bared above their elbows, their black bonnets crushed and battered like those of fish-women; over their gowns they wore a strong leathern apron, extending from their necks to the extremities of their petticoats, while over this, again, was another leathern apron, shorter, thickly padded, and fastened by a stout string or strap round the waist. In the process of their work they pushed the sieve from them and drew it back again with apparent violence, striking it against the outer leathern apron with such force that it produced each time a hollow sound, like a blow on the tenor drum.
Dickens’ dust-sifting urchins and their mothers were unaware, but when coal is burned to fly ash, it becomes radioactive. Trace amounts of uranium and thorium are concentrated up to 10 times. While uranium had been isolated from pitchblende in 1789 — it was named after the then newly discovered planet Uranus — its radioactivity was not appreciated until 1896, when it was described by Henri Becquerel, Marie Curie’s mentor. “Coal Ash Is More Radioactive Than Nuclear Waste” is a 2007 headline from Scientific American. In the 1960s, when uranium was thought to be scarce, Union Carbide ran a commercial plant extracting it from coal ash in North Dakota. Some Chinese coal-burning power plants today pile up their fly ash with an eye to eventual uranium extraction.
All the uranium on earth — and on the moon, and in meteorites — was created at the same instant 4.571 billion years ago, when a supernova compressed a cloud of interstellar gas and dust into us, or what would become us. Half of that “primordial” uranium is still around. The most common isotope of uranium, U-238, has a half-life of (coincidentally) 4.5 billion years. Thorium-232, created in the same Big Squeeze, has a half-life of 14.05 billion years.
Anyone seeking to avoid radioactivity altogether is living on the wrong planet. Heat from the radioactive decay of uranium and thorium keeps the earth’s mantle liquid, the continents adrift on it. Seawater contains enough dissolved uranium (3.3 micrograms per liter) that it is contemplated from time to time as a source for the element. The granite used in public buildings, such as Grand Central Station and the U.S. Capitol, contains enough uranium that the over-background radioactivity of these monuments is easily measured. One of uranium’s decay products is radon gas, dangerous when it concentrates in basements and enclosed spaces.
The terminology of radiation is complicated. The elemental unit used to measure radioactivity is the becquerel, defined as one atomic nucleus disintegrating per second. In 1975 the becquerel, which is very small, succeeded the 1910 measure of radioactivity, the curie, which is very large. One curie (named after Pierre, not Marie) was then defined as the level of radioactivity of 1 gram of radium-226. It equals 37 giga-becquerels:
1 Ci = 3.7 × 10¹⁰ Bq
Just about anything is impressively radioactive measured in becquerels. Consider the banana, a favorite of all primates and an excellent source of dietary potassium. The average banana contains one-half gram of potassium. Of that half-gram, 0.0117% is a naturally radioactive isotope, potassium-40. The math shows the average banana gives off about 15 becquerels of radioactivity: 15 potassium atoms are disintegrating per second. Those disintegrations are going on whether the banana is being held in a primate hand, or has been eaten.
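For the curious, here is that banana arithmetic as a small Python sketch. The half-gram and 0.0117% figures are the ones above, the potassium-40 half-life is the one quoted in the next paragraph, and the rest is the standard decay formula, activity = ln(2)/half-life × number of atoms:

```python
import math

AVOGADRO = 6.022e23            # atoms per mole
K_GRAMS = 0.5                  # potassium in an average banana
K40_FRACTION = 0.000117        # 0.0117% of natural potassium is potassium-40
K40_MOLAR_MASS = 40.0          # grams per mole, close enough for an estimate
K40_HALF_LIFE_S = 1.251e9 * 365.25 * 24 * 3600   # half-life in seconds

n_atoms = K_GRAMS * K40_FRACTION / K40_MOLAR_MASS * AVOGADRO
decay_constant = math.log(2) / K40_HALF_LIFE_S    # lambda = ln(2) / t_half
activity_bq = decay_constant * n_atoms            # A = lambda * N

print(f"{activity_bq:.1f} Bq")                    # ~15 Bq: about 15 disintegrations per second
```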
Potassium-40 in a banana undergoes what is called beta decay, with a very long half-life: 1.251×10⁹ years. It will, eventually, turn into calcium-40. Radiation comes in three flavors: energetic (gamma), mild (alpha), and in-between (beta). In general, alpha particles cannot penetrate a sheet of paper or the skin. Beta particles can be stopped by doubled-up aluminum foil, of the sort used in protective headgear. Gamma radiation is stopped only by lead or concrete.
Radioactive elements have their own evolutionary ecology. Those with half-lives less than 100 million years have gone extinct. A few are continually reborn from cosmic rays striking dust in the upper atmosphere or from the decay of longer-lived radionuclides. Radon, which has a short half-life, is born of uranium, which has a long one. A counter-intuitive rule of thumb about radioactive decay is: the longer the half-life of the element, the lower the energy of the radiation and, by implication, its risk to human health.
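Part of that rule of thumb can be checked directly: for a given number of atoms, the number of disintegrations per second is inversely proportional to the half-life. Here is a minimal sketch comparing a gram of radium-226 (the old yardstick for the curie) with a gram of uranium-238; the 1,600-year half-life for radium-226 is a standard figure supplied here, not one from the text above.

```python
import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def specific_activity_bq_per_gram(half_life_years, molar_mass_g):
    """Decays per second in one gram of a pure isotope: A = ln(2)/t_half * N."""
    n_atoms = AVOGADRO / molar_mass_g
    return math.log(2) / (half_life_years * SECONDS_PER_YEAR) * n_atoms

print(f"Ra-226: {specific_activity_bq_per_gram(1.6e3, 226):.2e} Bq per gram")  # ~3.7e10, i.e. one curie
print(f"U-238:  {specific_activity_bq_per_gram(4.5e9, 238):.2e} Bq per gram")  # ~1.2e4, roughly 3 million times less
```

The long-lived isotope just sits there; the short-lived one does its damage quickly and is gone.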
Radioactive decay is a blessing to geologists who use it for radiometric dating, a non-social activity. The different rates at which long-lived isotopes decay allow the age of rock to be computed. Modern weapons, not Clovis spearpoints, will provide future geologists a handy stratigraphic marker for the Anthropocene, our current geological epoch — if we decide, rather immodestly, that humans deserve one of their own. The Anthropocene Working Group of the International Union of Geological Sciences very much likes 1950 as the kick-off time for our new epoch — because of the bright line left in the sand at that time by fallout from above-ground atomic testing. Traces of that radioactivity, already much decayed, will presumably remain detectable by far-future archaeologists with their superior instruments. Humans will no longer need to envy the dinosaurs, who have a fine band of meteoric iridium on their tombstone.
Life on earth has had a long time — in fact, its whole time — to get used to background radiation. How evolution learned to cope with radiation — or if it actually has — has been a topic of much debate since 1945.
One theory, popular in the 1920s and 1930s, held that radiation might be the very engine of evolution. In the late 1920s, Hermann Muller, the fruit fly man, documented mutagenesis by bombarding his flies with X-rays. Muller’s mutant flies — those that survived — had permanently altered chromosomes.
After World War II, Muller became an outspoken critic of atomic testing. His media pronouncements provided ample food for worry for that era’s analog doomscrollers. Dr. Muller possibly also gets credit for inspiring a generation of Japanese filmmakers. To be fair to the lovable reptile, Godzilla (1954) was not a genetic mutant; the prehistoric sea monster was only “awakened and empowered” by atomic radiation.
In the late 1950s, the Drs. Wharton, a husband-and-wife team, looked at the question of whether cockroaches would survive an atomic Armageddon. The Whartons found the tough-guy reputation of cockroaches much inflated. Among the various insects the Whartons irradiated, cockroaches were strictly middle-of-the-pack. Some species actually thrived after being dosed with X-rays. Radioresistance has since become a well-documented, if poorly understood, phenomenon; it pops up in the strangest places, such as the exclusion zone around Chernobyl.
The impact of low-level radiation on humans remains a subject of contentious debate. The “linear no-threshold” (LNT) hypothesis takes as its starting point the observation that very high doses of radiation can kill. It then interpolates lower-level effects by drawing a straight line back to the origin point: zero dose, zero impact. There is no bend in the line, no threshold. If 20 aspirins have a 50% chance of killing a person, one aspirin has a 2.5% chance.
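In code, the whole hypothesis is a single straight line. A sketch of the aspirin analogy above, with purely illustrative numbers:

```python
def lnt_risk(dose, reference_dose=20, reference_risk=0.5):
    """Linear no-threshold: draw a straight line from (0, 0) through the
    one observed high-dose point and read every lower dose off that line."""
    return dose * reference_risk / reference_dose

print(lnt_risk(20))  # 0.5   -> the observed 50% chance at 20 aspirins
print(lnt_risk(1))   # 0.025 -> the extrapolated 2.5% chance for a single aspirin
```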
Regulators and the risk-management community have a large investment in LNT. It goes down well with the public’s unfocused fear of all things radioactive. Sadly, LNT is simply too simple, and not supported by much, if any, empirical evidence. Workers in occupations exposed to higher-than-average doses of radiation — flight crews, nuclear plant workers, medical technicians — have no higher cancer mortality than the general population. People living in Colorado and Wyoming get twice the dose of natural background radiation as those living in Los Angeles, but have lower cancer rates. LNT stumbles on as a zombie theory, the regulators preferring to regulate “as if” it were true.
If there are thresholds for exposure to various sorts of radiation, establishing them is a very complicated business. Complicating matters even more is a third hypothesis, hormesis, that is presently heresy in polite society. Hormesis holds that low doses of radiation are actually beneficial, operating somewhat like physical exercise to stimulate the body’s repair mechanisms.
Historically, hormesis was a truism of folk medicine. Most natural hot springs, where for centuries the well-off “took the cure,” are radioactive at some level. Hormesis has its believers, who seem to cluster in Germany. A German study showed that radium-224 — a kinder, gentler isotope than the usual radium-226 — eased back pain for sufferers of ankylosing spondylitis, a form of spinal arthritis.
Current research protocols, of course, make hormesis almost impossible to study in humans. A carefully controlled study of plutonium-239 oxide inhalation, the “plutonium dogs” study, found zero incidence of lung cancer among the plutonium-exposed dogs, against a statistically significant natural rate in the non-exposed dogs — suggesting the plutonium oxide actually had a preventative effect.
The gullibility of our great-grandparents also lends support to a threshold theory. Radium was used in beauty and skin-care products sold to women in the early 20th century. The radium craze of that era began to lose steam only in the mid-1920s, when it became public knowledge that the “dial-painters” — young women hand-painting luminous dials on timepieces — were developing bone and jaw cancers. The exposure was quickly traced to the practice of “tipping” their brushes with their lips, and when the practice stopped, so did the cancers.
The radium craze of the 1920s did have one useful scientific legacy: rare, life-long longitudinal data on thousands of people who had been exposed to radium. The results of several dozen studies were summarized in 1995 by R. E. Rowland in Radium in Humans. Rowland came away from this work convinced that there was a radiation exposure threshold, or, to put it another way, that the “dosage/adverse impact” curve was very much non-linear. Rowland was even willing to entertain the hormesis hypothesis, and made an interesting point on why the usual studies might not see it: “epidemiologists never seem to look at deviations in the negative directions, even if they’re statistically significant.”
There is a good reason why uranium is the last naturally occurring element. Its nucleus, with 92 protons and 146 neutrons (U-238), can barely hold itself together.
That uranium undergoes radioactive decay was discovered by Becquerel and the Curies in the 1890s. Marie Curie’s radium is yet another “daughter product” of uranium; the Curies were able to isolate tiny quantities from pitchblende, uranium ore.
Slow radioactive decay was one thing. It was inconceivable at that time, and for decades after, that the uranium nucleus could break in two. For fans of alternative history, that discovery could have come in Mussolini’s fascist Italy in 1934. After the neutron was discovered in 1932, the brilliant Enrico Fermi, then in Rome, started bombarding different elements with neutrons to see what happened. Fermi methodically worked his way up the periodic table. When he got to uranium, Fermi’s experimental setup — the shielding — prevented him from noticing fission.
To be fair to Fermi, no one was looking for the atomic nucleus to split. The mental model physicists had of it in the mid-1930s was of a hard ball, like iron or stone. By bombarding it with various particles, they hoped at best to chip some pieces off it, not shatter it in two. The inexplicable presence of two lighter elements — notably barium — after some experiments was taken as evidence that the experimenters had done something wrong.
It took the brilliance of Niels Bohr to suggest a different model for the nucleus, the liquid drop. If the nucleus was a liquid drop, that of uranium was a wobbly, water-filled balloon. Still, no one suspected it could break. Finally, two supremely careful chemists, Otto Hahn and Fritz Strassmann at the Kaiser Wilhelm Institute in Berlin, swore up and down that the neutron bombardment of uranium produced barium. They mailed their results to physicist Lise Meitner and her nephew, Otto Frisch. Meitner and Frisch mentally put the pieces together. The German scientists had split the atom.
The start of the war in Europe was months away. Meitner, born into a Jewish family in Vienna, had already fled Nazi Germany and was living in exile in Sweden. In history, timing is everything. Meitner’s and Frisch’s result just made it into the open literature. (British Nature, February 1939, “A New Type of Nuclear Reaction”). Berkeley Rad Lab physicist Louis Alvarez, then 27 years old, read about the German scientists “splitting the atom” in the January 31, 1939 edition of the San Francisco Chronicle, while getting his hair cut in the student union.
Not lost on any scientist, or on the Associated Press, was the vast amount of energy splitting the atom would release. The observed loss of mass had only to be plugged into Einstein’s famous equation, E = mc². The experiment itself was not even all that hard to replicate. Uranium fission took place for the first time in the United States in the basement of a Washington, D.C. office building, in a quickly lashed-up confirmation of the German result.
Fermi did make one very important discovery in his Rome lab, which confirms Pasteur’s dictum that chance favors the prepared mind. Fermi noticed that “slow” neutrons passing through an old wooden lab bench were much more activating than the “fast” neutrons passing through the fancy Italian marble tops on the other benches. Uranium, it would prove, needs “slow” neutrons to fission best; slow neutrons have more time on target when the target is the nucleus. In December 1938, Fermi and his wife Laura, who was Jewish, traveled to Stockholm to pick up a Nobel Prize for this discovery. The Fermis kept on going. They ended up in New York.
In May 1972, alarm bells went off at a secretive French processing plant northwest of Marseille. A shipment of uranium ore from Africa had come in short.
The door to the nuclear weapons club was then ajar. Only the USA, USSR, UK, France and China officially had atomic bombs, but it was obvious that any small country with a strong desire and a bit of engineering talent could make one. Nuclear fission wasn’t rocket science. India was intent on building a bomb. What India was about to have, Pakistan’s military clique wanted. Zulfikar Ali Bhutto volunteered his citizens to make any sacrifice for the country’s weapons program: “We will make an atomic bomb even if we have to eat grass,” Bhutto said in January 1972. Even the minority apartheid regime of South Africa was working on an atomic bomb.
More troublesome, as always, was the Middle East. Israel had readied two rudimentary atomic bombs to use as a last resort in the 1967 Six-Day War with Jordan, Syria and Egypt. These were, according to subsequent reports, ungainly contraptions that would have been delivered across the desert by truck. They weren’t used, but did provide Tom Clancy with a premise for The Sum of All Fears.
Back in Marseille, diversion was on many minds. What was strange was that only the “good stuff” in the uranium ore — the fissile isotope U-235 — had gone missing. Its percentage was just too low. All natural uranium ore, having been created at the same time, should anywhere and everywhere consist of 99.3% U-238 and 0.72% U-235. So how did the U-235 go missing in — or on its way from — Gabon?
The scientific investigation that followed was not Tom Clancy, but it was inspired detective work in real life. It turned out the uranium miners in Gabon had driven their shaft into an ancient nuclear reactor — in fact, into one of 16 ancient nuclear reactors on the Oklo site. No, these had not been left around by extraterrestrial visitors refueling their starship. They had fired up naturally some 2 billion years ago. In the distant past, groundwater had leaked into cracks running through the exceptionally rich veins of uranium. This water — as in a man-made reactor type, the BWR (boiling water reactor), long associated with General Electric — acted as a moderator, slowing down the neutrons. Nuclear fission started naturally.
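A side calculation suggests why this could happen 2 billion years ago but not today: U-235 decays faster than U-238, so natural uranium was far more “enriched” back then. The sketch below runs the decay clock backwards; the roughly 700-million-year half-life of U-235 is a standard figure supplied here, not one from the text.

```python
# Natural uranium enrichment at the time the Oklo reactors fired up,
# back-calculated from today's abundances (0.72% U-235, 99.3% U-238).
U235_HALF_LIFE_GYR = 0.704     # supplied value, not from the text
U238_HALF_LIFE_GYR = 4.468     # the ~4.5-billion-year figure mentioned earlier
AGE_GYR = 2.0                  # roughly when Oklo went critical

u235_now, u238_now = 0.72, 99.3

# Run the decay clock backwards: N(then) = N(now) * 2^(t / half-life)
u235_then = u235_now * 2 ** (AGE_GYR / U235_HALF_LIFE_GYR)
u238_then = u238_now * 2 ** (AGE_GYR / U238_HALF_LIFE_GYR)

print(f"U-235 fraction 2 billion years ago: {u235_then / (u235_then + u238_then):.1%}")
```

That works out to about 3.7% U-235, comparable to the fuel loaded into modern light-water reactors, which helps explain why plain groundwater could do the moderating.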
The Oklo reactors operated for several hundred thousand years in pulse mode, reminiscent of Old Faithful. They exhibited the passive safety feature hit upon by Samuel Untermyer, inventor of the BWR, in the late 1940s: the water, upon heating to a few hundred degrees Celsius, boiled away, removing itself as the moderator and stopping fission. When the water cooled and collected again, fission resumed.
The Gabon site, with its groundwater fissures, would not have been any geologist’s choice as a location for storing spent nuclear fuel. Yet the fission byproducts of the Oklo reactors didn’t budge in 2 billion years.
Exactly how a wobbly balloon like the uranium nucleus bursts apart is not entirely predictable. Two lighter elements will be produced in the basic split; these cluster at two distinct probabilistic peaks in the periodic table.
A sufficiently advanced technology, as Arthur C. Clarke put it in his Third Law, is indistinguishable from magic. The high energies released in fission result in transmutation, the changing of one element into another.
Respectable chemists spent centuries dismissing transmutation as a fantasy. No chemical reaction can turn lead into gold. Then, in 1901, Ernest Rutherford and Frederick Soddy discovered that the rays they were studying — alpha, beta, gamma — were being produced by radioactive elements decaying into other elements. Soddy notably found that radioactive thorium converts to radium; he insisted on reviving the medieval word, over Rutherford’s objections. It was, Soddy argued, what was going on. Rutherford shook his head, but went along. “They’ll have our heads off as alchemists,” he warned.
Many transmutations take place in the crucible of uranium fission. In a conventional nuclear reactor, what goes in is uranium dioxide. This uranium consists of two isotopes: U-235, which fissions readily, and “ordinary” U-238, which does not.
Civilian light-water reactors typically use uranium that has been enriched to be around 5% U-235. (Bomb-grade uranium is typically 90% pure U-235; nuclear physics is the guarantee that ordinary reactors will not explosively detonate like atomic bombs.) Uranium enriched to 20% is arguably useful in research reactors. The practice of enriching uranium above 20% is considered prima facie evidence of bomb-making intent; this is why Iran’s enrichment percentages are reported on the nightly news, like soccer scores.
What comes out in the spent reactor fuel is 95.6% uranium, most of it unused “ordinary” U-238. Radioactive nuclides, some troublesome, make up 0.5%. We’ll get to them. Non-radioactive fission products make up 2.9%. About 1% (0.9%) of the U-238 transmutes into plutonium during the chain reaction by a process called neutron capture.
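Restated as a rough mass balance, for those who like their percentages to add up (the categories and numbers are the ones just given; the missing tenth of a percent is rounding):

```python
# Approximate composition of spent light-water-reactor fuel, by mass,
# using the percentages in the paragraph above.
spent_fuel_percent = {
    "unused uranium (mostly U-238)":     95.6,
    "non-radioactive fission products":   2.9,
    "plutonium (from neutron capture)":   0.9,
    "troublesome radioactive nuclides":   0.5,
}

for component, percent in spent_fuel_percent.items():
    print(f"{component:35s} {percent:5.1f}%")
print(f"{'total':35s} {sum(spent_fuel_percent.values()):5.1f}%")  # 99.9%; the rest is rounding
```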
No alchemist’s transmutation could have been more consequential than the one that makes plutonium. In the Manhattan Project, the U.S. found that enriching uranium is painfully hard. The process is slow. In 1944, the US ran no fewer than three convoluted, and very expensive, uranium enrichment processes in parallel, hoping to produce enough U-235 for a weapon. Little Boy, dropped on Hiroshima 6 August 1945, was a uranium bomb — the only one. The scientists in New Mexico had to be confident it would work the first time. They couldn’t afford to test it, not having enough enriched uranium for a second bomb.
But a sequence of discoveries over the course of the Manhattan Project had revealed a second path to a weapon: plutonium.
It was an unlikely chain of events. First, plutonium had to be discovered. It was, in late 1940, by a team led by Glenn Seaborg at the University of California, Berkeley. Then, the physicists had to determine if any of its isotopes would fission. Several would, the most promising being plutonium-239. To manufacture plutonium on any scale, Enrico Fermi’s prototype reactor, the
“atomic pile,” had to work. It did. Giant “breeder reactors,” designed to expose U-238 to neutrons produced in the chain reaction, were built at Hanford, Washington. To extract the plutonium, the “hot” exposed U-238 had to be transported and put through equally massive chemical processing vats. Herbert H. Anderson and Larned B. Asprey at the University of Chicago later formalized the process, giving it a name, PUREX, that sounds like a brand of bleach. PUREX actually stands for “plutonium uranium reduction extraction.”
All of that gave the Manhattan Project a just-in-time path to a second weapon. Fat Man, dropped on Nagasaki 9 August 1945, was a plutonium bomb. The implosion mechanism required to detonate the plutonium was very tricky, so this design was tested at the Trinity site on 16 July 1945. “Wasting” valuable plutonium on a test was opposed by some in the military, but Hanford was making just enough of it. If the war had not come to an end, the U.S. could have produced one new plutonium bomb every two weeks.
The reactors that make plutonium are different animals from civilian power reactors. Their fuel is natural uranium, which contains very little uranium-235. The U-238 is exposed at full power for only a few weeks. Then it is put through a PUREX-like process to separate out the plutonium.
The short exposure time is necessary to make sure that what comes out is mostly plutonium-239, the desirable fissionable isotope. Plutonium-240, which builds up in a reactor over time, is a troublesome contaminant in weapons-grade plutonium (the definition is 92% plutonium-239). Plutonium-240 can fission spontaneously, a problem for those trying to assemble bombs. It was the plutonium-240 that made the complicated, imploding-sphere design of the 1945 “Fat Man” bomb necessary.
Arms races between industrial nations have a mad logic, and tend to spiral out of control. The Dreadnought-building competition between England and Germany prior to World War I displayed this tendency. Neither side dared call Uncle. So it was with plutonium production during the Cold War. The Hanford reactors kept humming.
The true children of the Cold War were four plutonium production reactors at Savannah River, South Carolina. DuPont, which had built Hanford, was asked to build the site in 1950. The first plutonium production reactor (“R”) went online in 1953; three others followed. A PUREX processing plant was built at Savannah River to separate out the plutonium.
By 1977, the U.S. and Soviet Union each had tens of thousands of nuclear warheads. And tons of plutonium.
PUREX is, technically, a type of recycling, as it might have been invented by Dr. Strangelove.
If Cold War plutonium extraction and Victorian coal-ash sifting feel like ghosts from Christmas Past, let us briefly summon the Ghost of Recycling Future.
By 2050, the International Renewable Energy Agency estimates that 78 million metric tons of solar panels will have reached the end of their life. Solar e-waste, the Agency estimates, will then be coming in at the rate of 6 million metric tons per year. Used lithium-ion batteries will haunt the world well before that day. Electric vehicle (EV) sales only crossed the 1 million mark in 2017. A Tesla Model S contains 7,700 individual 18650 batteries. The battery pack of a Mercedes-Benz EQC weighs 1,400 pounds. One projection has two million metric tons of used batteries coming from EVs by 2030.
Tesla has said it will recycle its batteries. It has funded a start-up in Nevada to work on this. What goes on behind the curtain when a lithium-ion battery is recycled, however, is not pretty. The electrolyte used in the batteries is notoriously flammable, being a chemical cousin of gasoline. Shredding the batteries must be done under what is euphemistically called “thermal protection.” The metals, which have some value, can then be recovered (cobalt, nickel, and copper, in that order). The current pyrometallurgical process melts everything down at very high temperature (around 1,500° C). This, of course, is energy intensive. As the impurities burn out, they also emit some nasty gases, such as fluorine, into the air.
There is a research race going on to improve lithium-ion battery recycling. The Department of Energy, to its credit, is running a prize competition to seed innovation in this area. DOE has also asked the Argonne National Laboratory for ideas.
Existing recycling programs for lead-acid automobile batteries are sometimes put forward as a model for future lithium-ion battery recycling. But in much of the world, recycling lead-acid car batteries is a present-day Dickensian nightmare. Lead is a valuable metal. Breaking open old batteries,
dumping the acid, and melting down the lead is so simple a child can do it — and many do, according to UNICEF, the UN children’s agency. Measured by “disability-adjusted life years lost,” a metric that favors youth, lead-acid battery recycling is the deadliest occupation on the planet.
World-wide, about half of all used lead-acid batteries end up in the “informal economy,” which tends to be synonymous with dubious labor practices. In 2011, a New York Times investigation estimated that 20% of lead-acid batteries turned in for recycling in the U.S. found their way to the informal economy of Mexico. In India, the percentage of batteries “informally” recycled is closer to 90%.
In the 1950s, recycled plutonium was considered a blessing. Uranium was rare, to be hoarded like gold. To build its single uranium bomb during World War II, the U.S. had relied on all the uranium ore that could be mined in the Belgian Congo and Canada. More uranium would be needed for the nuclear submarines Admiral Rickover wanted to build. In 1953, U.S. President Dwight D. Eisenhower promised the world peaceful nuclear reactors to generate electricity. Where was that uranium going to come from?
One answer was the one the Manhattan Project had come up with. Rather than use up that precious uranium, use plutonium. It was entirely possible — not trivial, but doable — to build a reactor that ran on plutonium. In 1961, the Los Alamos Molten Plutonium Research Reactor, LAMPRE, used a melted alloy of plutonium and iron at 1,200° F as its fuel. Using molten metal as fuel allowed the reactor to run continuously, without being shut down for refueling. Like dirty oil circulating in a car engine, the fuel could be drawn off through a pipe at the bottom of the reactor core. The bad fission products were filtered out; the liquid metal was cycled back in with a little added fresh plutonium. “Closing the fuel cycle,” rather than wasting spent fuel, became a Holy Grail in nuclear engineering. It still is.
Plutonium reactors for electrical power offered a paradoxical, too-good-to-be-true sounding promise of getting something for nothing. The reactor could be used to make its own plutonium. During the Manhattan Project, when the scientists had come up with the idea of using a uranium reactor to make plutonium, they had paused briefly to come up with a name for the process. According to one story, Leo Szilard casually suggested the word “breeding” while walking in a group across the quad of the University of Chicago. Decades on, this would prove an unfortunate bit of branding.
In 1956, M. King Hubbert published an influential theory predicting future peaks in resources such as coal, oil, natural gas, and uranium. The plutonium “breeder reactor” would not only solve the uranium supply problem — it would solve it for all time. The idea had definite techno-utopian appeal. Plutonium was “the fuel of the future,” said the head of the Atomic Energy Commission, a Nobel laureate, in 1968. Among politicians, the plutonium breeder reactor was a coddled, if expensive, national project. Lyndon Johnson supported it. So did Richard M. Nixon. When Nixon announced the first U.S. national energy policy in 1971, he singled out the plutonium breeder as “our best hope today for meeting the nation’s growing demand for economical, clean energy.”
Yet peaceful plutonium had an image problem. Plutonium had — literally — been spawned by the military-industrial complex. It was being manufactured in an arms race that, to all appearances, had run amok. The very thought of plutonium “breeding” struck some people as creepy and unnatural, the sort of thing possible only in a deal with the Devil. In the meanwhile, the market had worked its magic on the high price of uranium. Prospectors had fanned out over the West, and discovered lots more of it.
By the mid-1970s, plutonium was ready to be transmuted once again: into Carolinum.
H.G. Wells’ novel The World Set Free, written in 1913 and published in early 1914, first used the term “atomic bomb.” In the book, the atomic war is so terrible that, in its aftermath, humanity agrees to form a World State to control the bombs. “We have to get every atom of Carolinum and all the plant for making it into our control,” the new leader of the World State vows.
United Nations control of nuclear weapons was discussed after the first non-fiction atomic war, World War II. But the United States, enjoying its monopoly over the bomb, was uninterested. The U.S. Atomic Energy Act of August 1946 — one year after Hiroshima and Nagasaki — committed it to keeping the “atomic secret” to itself. (The United States would not even share with its former ally, the British, who found this particularly galling, since British basic research, freely handed to the Americans mid-1941, had helped convince the Americans that an atomic weapon was possible in the first place.)
It only took a few years for the atomic hegemony to crumble. The Soviet Union tested a plutonium bomb in 1949. The Soviets skipped the uranium bit, knowing full well (in part from their spies) that plutonium was the easier path. Britain had its scientists — some of whom had worked on the U.S. Manhattan Project — re-do their work. The UK tested a plutonium bomb off Australia in October 1952. France exploded a plutonium bomb in the Algerian desert in February 1960. China was the only major power whose first test was a uranium bomb, in 1964.
Among the Cold Warriors and foreign policy wonks of Washington, D.C., each new national test brought on much lamentation and gnashing of teeth. The policy wonks eventually acclimated themselves to the idea that five countries — the same five that held permanent seats on the United Nations Security Council — would possess nuclear weapons. (Israel, like Lord Voldemort, was not to be named.) The 1970 Non-Proliferation Treaty (NPT) tried to write the number “5” in stone.
Unfortunately, by 1970 the atomic secret could be looked up in a good encyclopedia. “Smiling Buddha” blew the policy wonks’ complacency all to
hell in 1974. That India — the country of Gandhi — had developed an atomic bomb caught official Washington flat-footed. Gerald Ford, then president, ordered an investigation. Its results only added to the dismay of the Cold Warriors. All sorts of small countries had actual atomic bomb projects, or civilian atomic energy schemes they might decide to turn into bomb projects. These countries included not only the usual suspects, but some unlikely ones, such as South Africa and Brazil.
Understanding the motivation of foreign countries was never a strong suit of imperial Washington. The policy wonks might have gained some insight by looking at an old document from the archives. It came from a small country, with a weakened economy, that nonetheless concluded it had no choice but to work on its own atomic weapon. “No nation would care to risk being caught without a weapon of such decisive possibilities,” England’s MAUD Committee wrote in 1941.
As in a war on drugs or alcohol Prohibition, virtuous abstinence among the usual suspects could be encouraged by choking off their source of supply. There would be less moonshine if there were no way of distilling it. The little countries would be helped to Just Say No by depriving them — if the U.S. had its way — of their means of making plutonium from their own spent civilian reactor fuel.
There was an obvious “Do As I Say, Not As I Do” problem with this lecture. The U.S. had been refining weapons-grade plutonium for decades, and had a huge stockpile of it. Jimmy Carter, who took over from Ford in 1977, was sensitive to the idea that other countries might complain about American hypocrisy. So, as an inspiration to the rest of the world, Carter decided — in an Executive Order, not involving Congress — that the U.S. would swear off its own recycling of spent civilian reactor fuel. The U.S. had enough plutonium anyway.
H.G. Wells’ leader of the World State went after the factory that made Carolinum. Carter went after a nearly-completed spent-fuel recycling plant in Barnwell, South Carolina. Spent nuclear fuel reprocessing was then a highly specialized, but commercially viable, business. The Barnwell plant had been privately-financed to the tune of $250 million; it was state-of-the-art and highly automated. But, it needed regulatory permits. Its owners, Allied, Gulf Oil, and Royal Dutch Shell, knew one of its customers would have to be the U.S. government. Carter’s pronouncement killed it: “The plant at Barnwell, South Carolina, will receive neither Federal encouragement nor funding…”
Weapons proliferation by nation-states with industrial infrastructure is boring. Media pundits and screenwriters fixed on the sexier image of terrorists hijacking plutonium to make bombs in their garages. The writers were careful not to overly complicate the narrative journeys of their fictional bomb-makers. Many obstacles would have to be overcome. Plutonium is incredibly hard. It routinely breaks hacksaw blades. It is extremely difficult to shape or mill. Once milled, the little shavings combust spontaneously in the presence of oxygen. The fictional bomb-makers would have to assemble their projects through glove boxes in rooms filled with inert argon and nitrogen gas. Then there is another, perhaps final, problem: the plutonium from civilian reactors has a high percentage of that pesky isotope plutonium-240, the one that fissions spontaneously. Any of the fictional bomb-makers could suffer a premature termination.
Carter’s Executive Order, with the stroke of a pen, transmuted plutonium from an asset, the fuel of the future, into a waste product.
With reprocessing off the table, the only thing to do with spent reactor fuel is bury it. The 97% unused uranium, the 1% plutonium, the fission products — all of it. Somewhere.
Lead in water pipes has no half-life. Neither does micro-plastic in the oceans. It’s worth asking what gives radioactive waste that special frisson in the human brain that other waste does not.
Many religions shun certain substances as impure, najasat, for all kinds of reasons. A substance that gives off invisible rays that may be harmful to your health is certainly a good candidate for shunning.
The long half-lives of radioactive elements feel designed to make humans feel insignificant. These silently clicking clocks subconsciously remind us that Nature just goes on, with or without us.
Decaying radioactive elements force us to think about the future. This summons up vague images of posterity and what we owe it, with attendant emotions of hope, guilt, and fear, or all of the above. (When told we owe something to posterity, we should perhaps remember the Groucho Marx comeback, “What has posterity ever done for me?”)
That other waste product of the Anthropocene, carbon dioxide, is also invisible. On that one, we prefer to bury our head in the sand, just above the bright line. It is his carbon waste that gives H. sapiens his immodest claim to
be influencing the planet on a geologic scale. The amount of carbon in the atmosphere, pre-industrial era, was roughly 600 gigatons. Humans have already added nearly that much, 500 gigatons. If we continue emissions at the current rate, we will add another 2,200 gigatons by 2100. This carbon has a complicated half-life, but it doesn’t go away quickly. Even if we stopped all carbon emissions today, half that CO₂ will be with us in the year 3000.
In the 1950s, geologists spent three years crisscrossing the country looking for spots where the federal government could bury radioactive waste. The geologists liked the Delaware sub-basin of the Permian Basin that runs under New Mexico and Texas. It has salt layers that have been tectonically stable for 250 million years. The groundwater there doesn’t move up, down, or through the salt. The salt will even enclose and form a natural seal around whatever is put into it; even canisters are — technically — optional. The Department of Energy sensibly chose to locate its Waste Isolation Pilot Plant, the WIPP, east of Carlsbad, New Mexico.
The U.S. Congress was less sensible. Starting in the 1980s, it commenced an on-again, off-again affair with Yucca Mountain, Nevada. This affair ended badly, some $9 billion later. At the moment, Yucca Mountain is twisting in the political wind. This is not, probably, a bad thing. It proved to have geology considerably worse than the Permian salts. As in the aftermath of any rocky affair, it is unclear if the politicians will be able to move on.
U.S. politicians may not be capable of thinking past the next election, let alone doing what anthropologist Vincent Ialenti calls “deep time thinking,” in an interesting book about the design of Finland’s Onkalo spent nuclear fuel repository. “Deep time” thinking requires mind-stretching mental exercises, such as picturing a locale as it was 100,000 years ago. The Finns considered how their repository would fare if Finland were again covered in ice, as it was during the last Ice Age.
In the design of the WIPP, a distinguished panel at the Sandia National Laboratories got what sounds like a fun job: designing the warning signs to post around the perimeter of the site. They had to
solve other problems, of course, like making sure the instruction manuals for the depository wouldn’t get misplaced during the next 10,000 years. The one for Stonehenge got lost, and Stonehenge is only 4,000 years old.
A number of the Sandia scenarios were intriguingly post-apocalyptic. In one, the WIPP is stumbled upon by child-like Eloi, innocent flower children who have absolutely no clue about what the thing is. Another contemplated future teenagers exploring the WIPP’s tunnels, not unlike the premise of the German TV series Dark, but without the time travel. The warning sign ultimately adopted was selected to convey the message “Scram out of here” in any language.
Even more routine environmental studies make strange assumptions about future life on this planet. In the 1990s, the US National Academy of Sciences considered what effect technetium-99, present in spent reactor fuel, might have in a worst-case leak at the Yucca Mountain site. Technetium-99 does remain radioactive for a long time, and does dissolve in water. (A short-lived form of it, technetium-99m, is also a very useful medical isotope, which the U.S. has long bought from Canada.) The Academy looked at the impact the escaped technetium would have on subsistence farming directly above the Yucca Mountain site. Its members were evidently oblivious to a subtler point: if our descendants are reduced to subsistence farming in the Nevada desert, something else failed — big-time — long before the waste canisters rusted through.
Other scenarios assumed a reliable constant: human greed. Some future Indiana Jones might sneer at the ancient curse above the door of the WIPP and decide to break into the tomb anyway. There is real-world precedent of sorts. In 1987, thieves in Goiânia, Brazil stole a radiotherapy device from an abandoned hospital. Seeing a blue glow coming from a capsule inside, they speculated the substance might be valuable or even supernatural. They opened it up with a screwdriver. Eventually four died, including the wife and young niece of a scrapyard owner who took the capsule home with him for safe-keeping.
The case of technetium-99 underlines a different danger in discussing fission products: that of focusing all attention on half-lives, while ignoring the type of radioactivity and the pathways necessary for human exposure. Plutonium-239 has a long half-life, 24,100 years. But it is only weakly radioactive, giving off alpha particles that, in general, do not penetrate the skin. People have held pieces of it in their hand, mostly to feel the disquieting warmth that comes from the metal. To be a serious health risk, plutonium needs to be somehow ingested or inhaled. Because it burns so energetically, plutonium smoke is the most likely occupational hazard for those working with it. The CDC has documented — and carefully measured — thousands of cases of plutonium exposure since 1945. But the CDC has, in general, been unwilling to draw a definitive conclusion about its radiological risk.
This is not true for other fission products. Within 6 minutes of fission, many of the miscellaneous products have decayed away. Iodine-131 and barium-140 last about 4 months. In terms of human health, strontium-90 and cesium-137 are of most concern: both are energetic beta emitters, with half-lives in the neighborhood of 30 years. In the body, strontium-90 imitates calcium and seeks out the bones, making it a leukemia risk. Cesium-137 acts more like potassium, ending up in soft tissue. There is, by the way, a cesium-137 countermeasure that could have saved lives after Chernobyl: Prussian blue, of high-school chemistry class fame, is a chelating agent that helps remove it from the body.
By the math, strontium-90 and cesium-137 will have only a tiny fraction of their original radioactivity remaining after 300 years. This is the consensus lower bound on how long they need to stay buried.
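The math in question is a one-liner; with a 30-year half-life, 300 years is ten half-lives:

```python
def fraction_remaining(years, half_life_years=30):
    """Fraction of the original activity left after `years` of decay."""
    return 0.5 ** (years / half_life_years)

for t in (30, 100, 300):
    print(f"after {t:3d} years: {fraction_remaining(t):.3%} of the original activity")
# after 300 years: ~0.098%, about a thousandth of what went into the ground
```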
One way to improve the sanity of the discussion about radioactivity is to normalize it relative either to that of uranium ore, or to the natural background level at a given location.
Sweeping assertions that nuclear waste “stays radioactive for 100,000 years” are typically made by people who want to shock or score points on Twitter. They rarely bother to be precise about which elements they are talking about. If the definition of waste includes the uranium that did not fission, it, of course, “stays radioactive” for billions of years. But then, if that uranium ore had been left in the ground, it would still be around, and equally radioactive.
Of all the various components of spent fuel, the minor actinides are the appropriate target for worry. Even here definitions can be confusing, sometimes intentionally so. An actinide is any of the series of fifteen metallic elements from actinium (atomic number 89) to lawrencium (atomic number 103) in the periodic table. All are radioactive.
Uranium and plutonium are then by definition actinides, but are sometimes discussed separately in the context of spent nuclear fuel and sometimes not. The minor actinides that can be found in spent nuclear fuel include neptunium, americium, curium, berkelium, and californium. By mass, the quantities are small. One ton of spent nuclear fuel, for example, contains about 20 grams of curium.
Estimating the health risk of any isotope requires an assumption about the mechanism of human exposure, and the duration of that exposure. Since many of the health risks require ingestion, groundwater gets a lot of attention in the worst-case scenarios involving geologic depositories.
There’s another odd assumption about future humanity that comes up in our thinking about geologic depositories: they will not be able to correct our mistakes, whatever level of technology they may have. That puts the pressure on us to get everything right the first time. We will, in this thinking, only get one shot at this. It has to be perfect right out of the box.
In a geologic depository, after enough centuries have passed, the nasty stuff will have decayed down. What will be left will be primarily uranium and plutonium. A quip that has a very long history is: gold is something we dig from the ground in order to bury in vaults. Sir Walter Marshall, when head of Britain’s nuclear power program, speculated that future humanity might thank us for leaving it “plutonium mines.”
When uranium was expensive, making use of the unburnt uranium in spent fuel was always part of the plan. The “low burn up” phenomenon happens because certain byproducts of the fission are serious neutron absorbers. As they build up, they work to shut the fission down. When the first Hanford reactor was fired up to make plutonium in 1944 it started to work, but then slowly and mysteriously lost power and quit by 6 p.m., like it was punching a time clock. The reactor could be restarted the next day, but the cycle repeated. After some sleepless nights, the physicists, including Fermi, traced the reactor’s diurnal rhythm to a fission byproduct, xenon-135. It has a half-life of nine hours.
Recovering unburnt uranium from spent reactor fuel involves the reprocessing step, either PUREX or one of its successors, which implies some plutonium will be recovered along with it. It is this plutonium that leads the anti-nuclear inclined to oppose spent fuel reprocessing altogether. The objection, again, is that plutonium recovered from civilian reactors might find its way into foreign or terrorist weapons. To those making this objection, it does not matter that this has never actually happened; that the plutonium is not weapons-grade; nor that, in the past few decades, reprocessors intentionally contaminate or “spike” the resulting plutonium to make it unusable for weapons.
Reprocessing has the potential to drastically reduce the volume of what has to be buried in some geologic depository. Writer James Mahaffey puts the proposition this way: if all the electricity used by a person in a lifetime were generated with nuclear power, and the nasty radioactive components separated out, that lifetime of waste would fit into a Coke can, and weigh 2 pounds. One interesting proposal for geologic burial is to use horizontal drilling, developed for fracking, to send that Coke can far underground.
Reprocessing allows “tailoring” the waste stream and would provide flexibility in how it is stored. The 10,000-year timescale of the “bury everything” approach is determined by the longest half-life in the miscellany of radioactive fission products, even those present in minute amounts or with low health risk. Strontium-90 and cesium-137, for example, if separated out could go into a 300-year depository, not a 10,000-year one. Technetium-99 has a very long half-life, 213,000 years, but is radiologically weak, and a medically useful isotope. By itself, the technetium-99 in spent fuel barely exceeds EPA clean-up standards.
The origins of disposing of waste by burning — incineration — are shrouded in myth. Archaeologist William Rathje, the godfather of garbology, speculates that the original fires of Hell may have been methane gas vents in the Valley of Hinnom, near Jerusalem. These were used for ritual burnings and, perhaps, human sacrifice. In rabbinic literature, Gehenna was shorthand for Hell.
In nuclear engineering, “burning” refers to fission, not the chemical reaction involving oxygen. As an alternative to burial in a Coke can, the nasty fission products can themselves be fissioned away in the right circumstances. The key to this goes back to Fermi’s “fast” and “slow” neutrons. The first experimental reactor to use “fast” neutrons was built in 1946. The second was the Experimental Breeder Reactor, EBR-I, the brainchild of Walter Zinn, first director of the Argonne National Laboratory. It went live in 1951 as a proof of concept.
Fast reactors have some distinct engineering challenges. They run “hotter” than conventional reactors, both in temperature and in terms of radiation. Water, which slows neutrons down, is out; exotic materials, such as liquid sodium, must be used to cool them. High levels of sustained radiation mean advanced materials must be used for the reactor vessel and inner works.
In the politics of the 1970s, the compound “fast breeder reactor” became an epithet. “Fast,” of course, refers only to the speed of the neutrons, while breeding plutonium was an optional extra. In the U.S., fast reactors as a species were driven into extinction along with the breeders. Bill Clinton and John Kerry terminated all advanced reactor research at the U.S. national laboratories in 1994.
One reactor shut down was the EBR-II in Idaho. This has a stellar reputation among reactors, having run for some thirty years with only minor glitches. In April 1986 the EBR-II was famously used to demonstrate “passive safety,” now a standard feature in any new reactor design. The grid electricity was intentionally switched off, anticipating what the tsunami would do decades later at Fukushima. The operators were not allowed to intervene. EBR-II, which had been running at full power, shut itself down gracefully.
Ironically, the Argonne project in Idaho owed its existence in part to Jimmy Carter. As a sop to the scientists, Carter in the 1970s had challenged the national labs to come up with solutions to the waste and proliferation problems. EBR-II’s successor, the Integral Fast Reactor (IFR), was almost finished when Clinton pulled the plug on it in 1994. Its solution was to do small-scale electro-chemical recycling of fuel at the reactor site — hence the word “integral.” Proliferation risk was avoided because once the plutonium entered the site, it would never leave. When asked to explain why the administration was cancelling the IFR project, when it would have been less expensive to finish than abandon it, a Clinton official told a protesting Charles Till of Argonne that the actual merits of the IFR didn’t matter. It was a “symbol.” The U.S. has not had an operating fast reactor since.
Fast reactors can burn up plutonium, and for incinerators, running hot and fast is an advantage. Russia has a fast reactor, the BN-800, that it has been running reliably since 2016. It uses it to get rid of weapons plutonium now surplus under the various arms reduction treaties. The UK received a serious proposal for a comparable fast reactor in 2012; at the time, the chief scientist at the UK Department of Energy estimated the plutonium stockpile contained enough energy to power the country for 500 years.
The nasty actinides found in spent nuclear fuel can also be fissioned in a fast reactor. They will burn to some extent as fuel, creating energy. It is also possible to transmute them into substances considerably less nasty. A hybrid approach irradiates, but does not attempt to fission, the troublesome actinides with fast neutrons. Like everything in nuclear, this has an acronym, P&T, partition and transmute. Middle-school students sometimes ask why spent nuclear fuel can’t be put on rockets and shot into the sun. Fusion reactors on earth, however, may offer a long-shot solution for nasty actinide destruction.
As a result of the slow sea-change in public opinion about nuclear power — which began about the time carbon emissions and global warming began to be taken seriously — the U.S. government is now doing something, as opposed to nothing. The Department of Energy is currently building a fast-neutron test reactor, the Versatile Test Reactor (VTR), in Idaho, which it hopes to have operational by 2026. In the modern SpaceX style, DOE will likely have the VTR built by private partners with an entrepreneurial bent, probably GE-Hitachi and TerraPower. (TerraPower cannot be mentioned without mentioning Bill Gates. I would like to break with tradition on this.)
A number of other companies have fast reactor designs that get some support, at least moral, from the DOE. Most designs are classified by their choice of coolant; there are lead-cooled fast reactors, lead-bismuth eutectic (LBE)-cooled fast reactors, helium-cooled fast reactors, and molten-salt fast reactors. A start-up in Sunnyvale called Oklo — after the Gabon natural reactor site — makes the potential use of its small reactors for actinide destruction a part of its pitch.
The politics of the past few years has given new importance to epistemology. We’ve learned that nonsense campaigns and conspiracy theories, like pearls, often form around some small grain of truthiness, their shiny appeal coming from endlessly accumulating layers.
The nuclear waste issue does have an irritating grain of truth at its center. It would be dishonest to deny it and lazy not to attempt to solve it. But for those opportunistically opposed to nuclear power, that single grain is an excuse to give up on a potentially historic solution to a waste problem of the modern industrial era.
If we don’t think about future generations, they will never forget us.
You can follow my writing on Twitter @Will_Bates_sci