Wednesday, October 28, 2020

60 Years of Science - Part 21

This post is the next in a series that dates back several years.  In fact, it's been going on for long enough that several posts ago I decided to upgrade the title from "50 Years of Science" to "60 Years of Science".  And, if we group them together, this is the twenty-first main entry in the series.  You can go to https://sigma5.blogspot.com/2017/04/50-years-of-science-links.html for a post that contains links to all the entries in the series.  I will update that post to include a link to this entry as soon as I have posted it.

I take Isaac Asimov's book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science when he wrote the book (1959 - 60).  In this post I will review two sections, "Fission" and "The Atom Bomb".  Both are from his chapter "The Reactor".  We have now arrived at the last chapter in the book.  So the end of the series is now in sight.

In "Fission" Asimov starts with the observation that "rapid advances in technology in the twentieth century have been bought at the expense of a stupendous increase in our consumption of our earth's energy resources".  He saw this as simply a "supply" problem.  "Where will mankind find the energy supplies needed"?

This tees up a discussion of various traditional sources like timber.  I'll get back to that in a minute.  What was not widely appreciated at the time was the cost in terms of pollution.  We are now well aware of air pollution, water pollution, plastic pollution, chemical pollution, and the like.  But, except for a few outliers like Rachel Carson with her book "Silent Spring", this was not a front line issue back then.

And the whole idea of worrying about an increase in the minute amount of carbon dioxide in the air?  At the time, there weren't even any outliers worrying about this problem.  But the '50s was when the foundations for our current concerns were being laid.

Carson's book was an indictment of the effects of DDT because of its unintended side effects.  It was a very effective insecticide.  But it also killed off all kinds of animals who were not its targets.  This was one of the first analyses of the unintended consequences of various kinds of modern behavior.

Since then, we have learned of the deleterious effects of sulfur pollution spewed out by coal fired power plants (acid rain).  We have learned that Freon refrigeration coolant damages the ozone layer.  And on and on and on.

Regular measurements of carbon dioxide levels in the atmosphere were begun in the '50s at a remote atmospheric observatory in Hawaii (Mauna Loa).  Studies demonstrating the dangers of inhaling cigarette smoke were started in the '50s.  People knew that the extraction of oil, and particularly coal, made a mess of things.  But that mess was assumed to be the primary negative effect of fossil fuel extraction.

Nobody, or at least almost nobody, worried then about what the effect of the "lead" additive, put into gasoline to boost engine performance, would be when it ended up in the atmosphere, and subsequently the lungs, and later the brains, of children.

It was soon determined that the effects were so devastating that lead additives were banned from gasoline.  We later moved on to worrying about the lead contained in the paint we put on our walls.  Lead based paints were eventually banned.  We now use latex based paint instead.

But all this was for the future.  Asimov gives us a brief history of the use of timber by various civilizations.  Forests were cut down in Greece, North Africa, the Near East, and in other places.  This was done for fuel and so the land could be converted to agricultural use.

Asimov pegs the beginning of this behavior at a thousand or so years ago.  It had actually started much earlier.  But Asimov was forced to rely on the "historical" (i. e. written) record of events.

The ability to read the archeological and geological (i. e. unwritten) record had not been developed sufficiently back then to shed much light on these kinds of questions.  That was to come later.  Those sources eventually pointed to a much earlier date for the changes Asimov highlights.

Asimov notes that this change not only eliminated many of the handy sources of wood, leading to a wood shortage.  The land cleared as a result was also allowed to deteriorate.  Now, most of it is no longer in good enough shape to be appealing to farmers.  The result of this change in the characteristics of large tracts of land is a density of occupancy too low to support modern civilizations.

Instead it is populated by people Asimov describes as "ground down and backward".  We might quibble with Asimov's characterization of the worth of these people.  But it is undoubtedly true that only low density activities were possible now that the land's ability to support high intensity agriculture had been lost.

This "cut down the forests" trend continued into the middle ages in Europe.  This resulted in little remaining forested land there.  The arrival of Europeans in the Americas saw a similar transformation.  "Almost no great stands of virgin timber remain . . . except in Canada and Siberia."

Except that we now know that natives in both North and South America had a profound effect on forest structure well before Europeans arrived.  What looked like "virgin" timber to European eyes was actually anything but.  And it turns out that the situation was not as dire as Asimov painted it at the time.

Satellite imagery now allows us to accurately map the extent of forests.  There was a lot more healthy forest around than is apparent from Asimov's statements.  They were just in smaller stands, which he ignored.

But in the half century since, we have gone a long way toward cutting those down too.  We have also denuded large areas of Amazonia, Asia, and other places that weren't even on Asimov's radar.

As Asimov notes, civilization writ large moved on to "fossil fuels", coal and oil.  They are resources that "cannot be replaced".  As a result, "man is living on his capital at an extravagant rate".  So, he predicted, we would reach "peak production" in the '80s.  It turns out that people were talking about this as a problem even back in the '50s.

Asimov's prediction was remarkably close.  He missed by only a decade or so.  His miss was caused by various unanticipated events like the Arab Oil Embargo.  But we reached peak oil pretty much when he said we would.

So, why don't we now have an oil shortage?  A technology fix came along that Asimov couldn't have anticipated.  Fracking, fracturing the rock that holds oil so that it can escape and be pumped to the surface, was unknown, at least in its modern large-scale form, at the time the book was written.

Horizontal drilling, and other technological tricks that make fracking economically feasible were also beyond the technology of his day.  But they were also not needed back then.  The oil fields of West Texas and the Middle East were producing vast quantities of oil that was easy to extract using unsophisticated techniques.

All forecasts, including Asimov's, assumed that the technology would get better and that the price, after adjusting for inflation, would rise.  A price rise makes it economically feasible to employ more complex and more expensive technology.

And for many decades the oil industry hewed closely to those assumptions about the state of the technology and the state of the market.  (The Arab oil embargo's effect was only on the timing of price increases.)  It was only when we actually reached, or appeared to reach, peak oil that the industry became willing to try "crazy" ideas.  Horizontal drilling and fracking were two of the crazy ideas that panned out.

There has always been far more coal in the ground than there is oil.  As a result, estimates of when "peak coal" would hit have pointed to a date in the relatively far future.  Asimov's estimate of the twenty-fifth century was in line with estimates of the time.  What has done coal in is not availability.  There is still plenty of it around.  Instead, it has been economics.

Coal has a much lower energy density than oil.  It is also much messier to make use of.  There was a substantial industry devoted to making various chemicals out of coal in the 1800s.  But when oil became readily available industry quickly switched to oil and never looked back.  Today coal is pretty much restricted to being used to make steel and electricity.

Coal is dirt that contains a lot of carbon.  But it's still dirt.  Separating the carbon from the dirt leaves a lot of nasty, useless stuff behind.  Coal also throws lots of nasty stuff into the air when you burn it.  For a long time the alternatives to coal were expensive enough that people put up with these disadvantages.

But inexpensive natural gas has been widely available for several decades now.  It is far cheaper to ship than coal.  All you need to ship it from here to there is a pipe from here to there that is a few inches in diameter.  Natural gas does throw a lot of carbon dioxide into the air.  But it throws far less than coal does.  And carbon dioxide is pretty much the only nasty thing it does throw into the air.
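The gap between the two fuels can be put in rough numbers.  The emission factors below are round, EIA-style figures I have chosen for illustration, not measurements of any particular plant:

```python
# CO2 emitted per kWh of electricity generated, in round numbers.
# These factors are illustrative assumptions; real plants vary
# with age, fuel quality, and efficiency.
CO2_KG_PER_KWH = {
    "coal": 1.0,         # typical coal steam plant
    "natural gas": 0.4,  # typical combined-cycle gas plant
}

reduction = 1 - CO2_KG_PER_KWH["natural gas"] / CO2_KG_PER_KWH["coal"]
print(f"Switching from coal to gas cuts CO2 by roughly {reduction:.0%}")
```

With these round figures, switching a coal plant to gas cuts carbon dioxide output by something like half to two-thirds, before you even count the other pollutants that disappear entirely.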

The list of nasty things that coal throws into the air over and above carbon dioxide is nearly endless.  I will just mention three.  First, there is the sulfur that I noted earlier.  Then there is arsenic.  Yes, the poison featured in countless murder mysteries.  Finally, there is Mercury.  It too is truly nasty stuff once it's airborne.

In Asimov's time, and for a couple of decades afterwards, the oil industry threw natural gas away by "flaring it off", literally burning it to get rid of it.  But eventually they caught wise.  Its low expense, convenience, and widespread availability have made it quite popular.

To make a coal fired power plant you need to build a large, dirty, expensive, and complex structure that is a terrible neighbor.  To build a natural gas fired power plant capable of producing the same amount of power you need a few modified jet engines connected to generators.

"Gas" power plants are cheap to build, cheap to maintain, require far less land, and don't make much noise or mess.  So they can be sited almost anywhere.  Natural gas fired power plants have done far more to kill off coal than everything else combined.

Asimov then moves on to the question of efficiency.  Theoretical efficiency, the best efficiency that thermodynamics allows, has been covered previously, both by Asimov and by me.  It will not be revisited.  Asimov notes that thermocouples, devices that convert heat directly to electricity, are only capable of an efficiency of 10%.  A steam generator of the time was capable of an efficiency in the 30-40% range.

We are still mining "efficiency" as a method of stretching supplies.  We now have appliances and light bulbs that are much more efficient than they used to be.  Jet engines used on airplanes are much more efficient than the ones used in 1960.  Insulating homes better increases efficiency.

Building lighter cars, and other vehicles, helps.  The Ford F-150 pickup truck now contains a lot of aluminum in order to improve its efficiency by reducing its weight.  We now have hybrid cars.  They can get by with a much smaller and, therefore, lighter engine.  The search goes on.  Increased efficiency is helpful but not a game changer.

Asimov then mentions "renewable energy".  See!  The idea is older than you think.  He mentions wood (you can always grow more), wind, and water.  At the time hydro-electric dams were popular, especially in my neck of the woods.  The problem is that by 1960 most of the best locations in the U. S. for building a hydroelectric dam were already in use.  Not much room for growth.

And two major problems have since come to light.  The one that gets the most ink is the fact that dams screw up the ability of fish to migrate.  The less noted problem is that dams also block the silt and debris that rivers wash down to the sea.  This results in the reservoir behind the dam "silting up".  But it also interferes with that silt and debris moving downstream, where it turns out it is needed.

Beaches are not permanent.  Instead, they are maintained by being continually renewed by sand that is transported from upstream by rivers.  As this and other problems have emerged we have moved from building dams to tearing them down.

Wind had been a source of energy for hundreds of years by this point.  The Dutch windmill is only the most obvious example.  In 1960, however, wind power was seen as appropriate for use in only a few niche categories.  In fact, in the roughly 50-100 year period preceding the publication of Asimov's book, maritime commerce had converted from wind power in the form of sails to fossil fuel power in the form of engines powered by coal or oil.

Since then, high efficiency "wind turbines" have been deployed widely.  And the rate at which they continue to be rolled out is only accelerating.  The idea of using wind to generate large quantities of electricity wasn't on anybody's mind in 1960.

Asimov then moves on to sun power.  There is the direct method, using mirrors to concentrate the sun's heat.  This was only a gleam in the eye of various futurists in 1960.  Direct sun power is now widely used as the power source for desalinization plants.  It is used in a few places to make electricity.  But the technology has not caught on in any big way.

Asimov then moves on to something that has caught on in a big way, what we now call the solar cell.  Solar cells were at the "proof of concept" stage of development at the time of the book.  A few satellites used them as a power source.  That was about it.  The problems were practical.  Solar cells were too inefficient (they were only capable of capturing a few percent of the power contained in sunlight) and they were expensive to make.

Tremendous progress has been made on both fronts.  Solar cells that have efficiency ratings in the 15-20% range are widely available.  Solar cells with efficiencies in the 20-30% range, and using a number of different formulations, look to be widely available soon.  And they can now be economically manufactured literally by the acre.

The efficiencies have improved to the point that people find it cost effective to cover the roof of their house with them.  Such a setup can easily power an entire house and leave power to spare.  And this even applies to the more northerly parts of the U. S.  Commercial "solar farms" now provide power that is as cheap as or cheaper than power from traditional power plants powered by coal or natural gas.
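A back-of-envelope calculation shows why rooftop solar now pencils out.  Every input below is an assumed round number (roof size, panel efficiency, hours of equivalent full sun), not data about any particular house:

```python
# Back-of-envelope rooftop solar sizing.  Every input is an assumed
# round number, not a measurement of any particular installation.
PEAK_SUN_W_PER_M2 = 1000    # full sunlight at the surface, roughly
PANEL_EFFICIENCY = 0.20     # mid-range modern panel
ROOF_AREA_M2 = 40           # usable panel area, assumed
FULL_SUN_HOURS = 4          # equivalent full-sun hours per day, varies by region

peak_kw = PEAK_SUN_W_PER_M2 * PANEL_EFFICIENCY * ROOF_AREA_M2 / 1000
kwh_per_day = peak_kw * FULL_SUN_HOURS
print(f"{peak_kw:.0f} kW peak, about {kwh_per_day:.0f} kWh per day")
```

A typical U. S. household draws on the order of 30 kWh per day, so under these assumptions a modest roof really can cover the whole load, with some left over.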

In the case of both wind and solar there is a problem to be solved.  They are "intermittent".  They depend on the sun shining or the wind blowing.  There are a couple of ways to handle this.  The first boils down to a much more capable and robust national electrical grid.  This would allow surplus power generated here to make up for shortages there.  So far, nearly zero money has been invested in this.  That is criminal.

The second approach is storage.  If we have enough power stored to run the entire national grid for two days then we should be able to smooth any dip that comes down the pike.  (Probably considerably less capacity would be sufficient.)  Here, the problem is technological.

There is no storage technology that scales to that quantity at a reasonable cost.  As a result, what's currently being done is to install gas fired generating plants all over the place and use them to backstop renewable sources.  It works but it is not the right solution.
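To see why storage at that scale is so hard, run the numbers.  The annual generation figure below is a rounded assumption (U. S. electricity output is in the neighborhood of 4,000 Terawatt-hours per year):

```python
# How much storage is "two days of the national grid"?
# The annual generation figure is a round assumption.
ANNUAL_GENERATION_TWH = 4000    # U.S. electricity output, roughly
HOURS_PER_YEAR = 8760

avg_power_tw = ANNUAL_GENERATION_TWH / HOURS_PER_YEAR   # ~0.46 TW average draw
two_day_storage_twh = avg_power_tw * 48                 # ~22 TWh

# The biggest grid batteries yet built store a few GWh apiece --
# thousands of times short of that figure.
print(f"~{two_day_storage_twh:.0f} TWh of storage needed")
```

Twenty-some Terawatt-hours is thousands of times larger than the biggest battery installations yet built, which is why gas plants end up doing the backstopping instead.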

And, it turns out, all the attention showered by Asimov on these various technologies is just a setup for what he actually wants to talk about.  As the title gives away, what Asimov wants to talk about is "atomic energy".

Most of what we think of when we think of energy is "chemical energy".  It comes from rearranging the bonds between electrons orbiting various atoms.  Chemists call these rearrangements "chemical reactions".  Any form of fire or explosion is chemical energy in action.  The amount of energy available may seem enormous.  But it is tiny compared to the "nuclear reactions" physicists study.

The amount of chemical energy available in a gram of material is modest.  It might amount to what you get by striking a match.  At best, it amounts to the amount of energy released by a small firecracker.  The amount of nuclear energy that can be released by the same gram of material can literally level a city.  It leveled Hiroshima and Nagasaki.
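The gap between the two can be put in round numbers.  The few-eV figure for a chemical reaction is an assumed typical value; the 200 MeV per fission is the standard textbook figure:

```python
# Energy per reaction, chemistry vs. fission, in round numbers.
# ~4 eV per molecule is an assumed typical combustion figure;
# ~200 MeV per U-235 fission is the standard textbook value.
CHEMICAL_EV = 4
FISSION_EV = 200e6

ratio = FISSION_EV / CHEMICAL_EV
print(f"A fission releases ~{ratio:.0e} times the energy of a chemical reaction")
```

That factor of tens of millions, per reaction, is the whole story of why a gram of fissionable material can do what tons of chemical explosive cannot.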

It was a slow process discovering what nuclear reactions were capable of and how they worked.  Chadwick's discovery of the neutron in 1932 provided the first good tool for studying the nucleus.  The neutron is electrically neutral.  That made it a good choice as a probe for studying the nucleus without having to worry about pesky electrical effects.  Fermi was the first to observe that "slow" neutrons worked better than "fast" ones.

Rather than going with Asimov's explanation of slow neutrons versus fast ones, try this.  Imagine that the nucleus of an atom is a water droplet and a neutron is a particle of sand.  If the sand particle hits the nucleus at high speed it just drills through, leaving the water droplet pretty much unchanged.

If, however, the sand particle is moving very slowly it gets absorbed by the water droplet.  A slow neutron being absorbed by the nucleus of an atom allows it to interact with the other nuclear particles.  That interaction was what Fermi was looking for.  (We'll get to medium speed neutrons later.)

Physicists summarize this behavior by talking about the "cross section".  In a given set of circumstances the nucleus has a given cross section.  For a fast neutron the cross section is very small, the size of a catcher's mitt, for instance.

For a slow neutron, the nucleus has a big cross section, something like the broad side of a barn.  This led physicists to cheekily invent a unit called the "barn" to describe nuclear cross sections.  Other than noting that it is very small, I am going to leave its actual value unspecified.

Atomic nuclei are very complex beasts.  We know a lot more about them now than we did in 1960.  But, for our purposes, we are going to ignore all that and just assume that atomic nuclei consist of a bunch of protons and neutrons somehow all stuck together.  We are going to dive into nuclear chemistry only far enough to note that changing the number of protons or neutrons in a nucleus changes the nature of the beast.

Since neutrons are neutral, if we change the number of neutrons we don't change what kind of element we are dealing with.  Changing the number of protons does change that.  There is literally a one-to-one correspondence between the number of protons and the type of element.  But changing the number of neutrons does make a difference.  Often it changes the degree to which the atom is radioactive.

In the simplest case, Hydrogen with no neutrons is not radioactive.  Neither is Hydrogen with one neutron (Deuterium).  But Hydrogen with two neutrons (Tritium) is radioactive.  If a nucleus is radioactive it has a good chance of decaying, transforming itself by spitting out particles or, in some cases, by blowing up into two or more assemblies, each containing an assortment of protons and neutrons.

Several scientists set out to bombard Uranium with slow neutrons.  They thought that the neutron would be absorbed and somehow turned into a proton.  This would result in the creation of a small amount of whatever element 93 was.

Fermi took a crack at it.  Hahn and Meitner took a crack at it.  (Meitner had to stop work and flee because this was '30s Europe and she was Jewish.)   Strassmann replaced her and the work continued.

Eventually they figured out what had happened and what had happened was a big surprise.  What had happened was what we now call "nuclear fission".  (Surprises happen all the time in science.)  As you might have guessed by now, instead of absorbing the neutron and staying intact, the Uranium nucleus had undergone fission.  It had broken into pieces and one of those pieces was a Radium atom.  

Except that turned out to be wrong too.  What had actually been created was an atom of Barium.  Marie Curie's daughter Irene and Irene's associate Savitch were among the most prominent to go down the Radium rabbit hole and end up with nothing to show for it.

It was Hahn and Strassmann's careful chemistry that pointed to Barium.  But it was Meitner (yes, the same Meitner), working with her nephew Otto Frisch, who explained what that result meant, in an article in "Nature" in 1939.  Their insight was soon confirmed by a number of groups.  The same article gave the process its name:  "fission".  On to "The Atom Bomb".

This chapter is really just a continuation of the previous one.  Asimov continues the story without missing a beat.  He starts out by noting that the fission of a Uranium nucleus, specifically a U-235 nucleus (he skips over this detail), produces about two neutrons.

If both of these neutrons each end up causing the fissioning of an additional Uranium nucleus then we have the makings of a "chain reaction".  And, since each fission results in the release of a tremendous amount of energy, we have the makings of a source of a whole lot of energy.
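To see how quickly such a chain runs away, assume the idealized case where each fission triggers exactly two more:

```python
import math

# If each fission triggers exactly two more (an idealized doubling
# chain), how many generations does it take to fission an entire
# kilogram of U-235?
AVOGADRO = 6.022e23
nuclei_per_kg = AVOGADRO * 1000 / 235        # ~2.6e24 nuclei in a kilogram

generations = math.ceil(math.log2(nuclei_per_kg))
print(f"about {generations} generations of doubling")
```

Eighty-some doublings is all it takes.  And since each generation in a bomb takes something on the order of ten nanoseconds, the entire chain plays out in roughly a microsecond.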

Numerous physicists saw the possibilities of nuclear chain reactions.  The fissioning of a single ounce of Uranium produces the same amount of energy as burning 90 tons of coal, or 2,000 gallons of fuel oil, or 600 tons of TNT, according to the calculations Asimov publishes.
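Asimov's figures are easy enough to sanity-check using the standard value of roughly 200 MeV released per U-235 fission:

```python
# Order-of-magnitude check of Asimov's figures, using the standard
# ~200 MeV released per U-235 fission.
AVOGADRO = 6.022e23
MEV_TO_JOULES = 1.602e-13
GRAMS_PER_OUNCE = 28.35
JOULES_PER_TON_TNT = 4.184e9   # the standard convention

nuclei = AVOGADRO * GRAMS_PER_OUNCE / 235
energy_joules = nuclei * 200 * MEV_TO_JOULES
tons_tnt = energy_joules / JOULES_PER_TON_TNT
print(f"~{tons_tnt:.0f} tons of TNT per ounce of U-235")
```

The same arithmetic, run in reverse, implies that a 15 kiloton explosion like Hiroshima's required the complete fissioning of under two pounds of Uranium.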

If Asimov's numbers are to be believed, the Hiroshima bomb would have been the result of fissioning about a pound of Uranium.  That is, in fact, roughly right.  Postwar estimates put the amount of Uranium that actually fissioned at Hiroshima at around a kilogram, a small fraction of the Uranium the bomb contained.

As Asimov notes, this discovery was made in 1939 on the eve of World War II, a War that everybody could see coming well before it arrived.  The military applications of nuclear fission were obvious and troubling.  Most scientists viewed the possibility of Nazi Germany making use of this information with alarm.

Asimov then goes on to review the story of the mostly American effort that resulted in the development of the Atomic Bomb and its use against Japan.  Szilard went to Einstein who, in turn, wrote a letter to FDR.  He, in turn, authorized the "Manhattan Project", so named because it was run out of the Manhattan Engineer District of the U. S. Army Corps of Engineers.  General Leslie Groves, who headed the project, recruited Robert Oppenheimer as lead scientist.

Asimov goes into some detail about both the technical details of how an atomic bomb works and the difficulties involved in building one.  I am going to skip over most of that.  If you are interested, check out "The Making of the Atomic Bomb" by Richard Rhodes.  It is excellent.  In lieu of what Asimov and Rhodes have to say on the subject, here are some observations.

The Manhattan Project was, by far, the biggest, most expensive, and most difficult project undertaken anywhere in the world up until that time.  It involved constructing massive facilities at Hanford Washington (primarily Plutonium production), Oak Ridge Tennessee (primarily enriched Uranium production), and to a lesser extent, at Los Alamos New Mexico (research and development, final bomb assembly).  Although it was a mostly U. S. effort, it involved a great deal of help and support from the United Kingdom.  It also involved substantial help and support from Canada and several other countries.

As we all now know, the U. S. succeeded in building three working devices, the test bomb that was exploded at the "Trinity" site near Alamogordo New Mexico, and the two production bombs, one of which was exploded over Hiroshima Japan and the other over Nagasaki Japan.  Fortunately for all of us, the German effort ended up going nowhere.  If you want to know more about this very interesting story, I recommend "Heisenberg's War" by Thomas Powers.  Back to Asimov.

As he notes, the U. S. had a monopoly on the Atomic Bomb for only four years.  Unbeknownst to most, the Russian intelligence agencies had completely penetrated the Manhattan Project.  They managed to spirit away all the information they needed to build their own device.  Kurchatov, their chief scientist, took no shortcuts, however.  So the Russians were able to develop a robust program that was soon able to move beyond just the cloning of American designs.

No doubt, the intelligence the Russians collected sped things up.  But, once everyone knew such a thing was possible, duplicating the feat was simply a matter of devoting the necessary resources.  The Russians succeeded in 1949.

The British succeeded in '52.  The French succeeded in '60.  The Chinese succeeded in '64.  Since then, several other countries have succeeded.  The newest member of the "nuclear club" is North Korea.  South Africa is unique in having developed the expertise necessary to join the club, but then shutting everything down and walking away from it.

He then moves on to what was at one time called the "Super", a bomb based not on nuclear fission but on nuclear fusion.  Again, I am going to skip over the details.  If you are interested, I would suggest reading "Dark Sun" by Richard Rhodes.  Here too I will confine myself to some observations.

In the immediate aftermath of the War many nuclear scientists were not even sure that what later came to be called a "Hydrogen Bomb" could be built.  The nuclear part of it was well understood.  Smash together two Hydrogen nuclei under appropriate circumstances, and they will fuse to become a single Helium nucleus.  And that fusion will release a tremendous amount of energy.

The problem was in creating the "appropriate circumstances".  It turns out that X-Rays were the key ingredient.  An Atomic Bomb can be tuned to release tremendous amounts of the appropriate kind of X-Rays.  Then it was determined that an appropriate type of mirror could be used to focus the X-Rays on a tank of Hydrogen, which could be located off to the side.  With these two ideas in hand, the problem of how to build an H-bomb, as it came to be known, was solved.

This turned out to be all you needed to build a "proof of concept" device.  But the design was not practical as a weapon.  "Mike", the only bomb built using this design, weighed on the order of 80 tons and was far too big to fit into an airplane.  But then another idea, chemically combining the Hydrogen with Lithium, came along and enabled a "miniaturized" design that was practical for use as a weapon.  The rest, as they say, is history.

Again, the U. S. was first, but not for long.  The U. S. set "Mike" off in 1952 and had working miniaturized devices shortly thereafter.  The Russians were not far behind.  They first succeeded in '53.  They later went on to set off the largest H-bomb ever exploded, in 1961.  It was a 100 megaton (million tons of TNT equivalent) design that had been deliberately downrated to only 50 megatons.

It still holds the record for the largest H-bomb ever set off, but not because bigger bombs can't be built.  It's because anything over 10 megatons is a complete waste.  Large H-bombs blow the top of the atmosphere off.  This causes a "stovepipe" effect.  All the energy flows up the stovepipe and out into space.  Viewed from anywhere but space, all large H-bombs behave just like a 10 megaton H-bomb.  Such is the strange logic of nuclear warfare.
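Even setting the stovepipe effect aside, big bombs pay steep diminishing returns.  The standard scaling law says that blast-damage radius grows only as the cube root of yield:

```python
# Blast-damage radius grows only as the cube root of yield
# (the standard cube-root scaling law for explosions).
def relative_blast_radius(yield_ratio: float) -> float:
    return yield_ratio ** (1 / 3)

# A bomb 100 times as powerful pushes its damage radius out
# less than 5 times as far.
print(f"{relative_blast_radius(100):.1f}x the radius for 100x the yield")
```

One hundred times the yield buys less than five times the damage radius, which is why arsenals settled on many smaller warheads rather than a few giant ones.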

Asimov finished off his discussion with the fission-fusion-fission bomb.  I am going to skip it.  Instead, I am going to leave you with an optimistic thought.  Everyone, from science fiction writers to scientists and philosophers, who seriously contemplated nuclear weapons during this period (say 1935-1965), came to the conclusion that "if it can be built then it will be used".  But no nuclear weapon has been used in battle since 1945, seventy-five years ago.  And the chances of that streak continuing indefinitely keep getting better and better.  Peace out.

Saturday, October 10, 2020

Nuclear Waste

 In my most recent post (see:  https://sigma5.blogspot.com/2020/09/a-brief-history-of-nuclear-power.html) I addressed the subject of nuclear power.  In it I indicated that I was unhappy with anti-nuclear activists.  A friend, whose opinion I highly respect, sent me a "but how about" email in response.  It did not contain anything I had not heard before.

In some ways the situation reminds me of the arguments that continue to surround evolution.  In the first edition of "On the Origin of Species", Darwin addressed all of the objections to his conclusions that he could think of.  "Origin" went through several editions.  As new and novel objections were raised, Darwin added material to these subsequent editions that knocked them down too.

It is disappointing that most of the arguments advanced today against evolution are arguments Darwin successfully refuted in one edition or another of "Origin".  Yet to this day, anti-evolution people either haven't read "Origin" or ignore what can be found there.

There is no equivalent to Darwin and "Origin", when it comes to the disagreements that surround nuclear power.  But the situation is similar in that the same old objections are raised over and over again.  They have all been addressed, but not at one time, and not in one place.

This post is my effort to at least partially fill that void.  And the place I want to start is with something that is a common feature of many anti-nuclear arguments.  Worse yet, this feature is not just confined to the discussion of nuclear waste.  It is a frequent component of discussions of issues too numerous to enumerate.

Anti-nuclear people point to the dangers of nuclear power.  But they do so in contrast to some theoretically perfect alternative, an alternative that does not exist because it can not exist.  The production of power using nuclear processes inevitably produces nuclear radiation, which is dangerous.  But pretty much everything is dangerous.

Air is dangerous.  You can die from "the bends" by breathing regular air in the wrong circumstances.  Water, even distilled water, is dangerous.  You can drown in it.  What we eat is dangerous.  Someone recently died because he ate too much black licorice.

What we drink is dangerous.  Until lactose intolerance was understood, some people could get sick from drinking milk.  Whatever we do, whether it is walking down the street (we could be struck by lightning), or cowering in our basements (more on this one later), some danger is involved.

The right question to ask is "how dangerous is radiation compared to the alternatives?"  But that question is almost never asked.  Instead, the wrong question is asked, namely "is radiation dangerous?"

In a fantasy world where some completely safe alternative exists, a world whose existence is implied by the latter question, the answer is, of course, "yes".  But such a perfectly safe alternative does not exist.  It can't exist.  But we are already off to the races discussing just how dangerous radiation is when compared to a fantasy.

But wait, it's worse.  A related question that is seldom asked is "is it possible to avoid all radiation?"  The answer to this is "no".  Reprising my above list, air is radioactive.  Water is radioactive.  Food is radioactive.  Drink is radioactive.  Walking down the street exposes us to radiation.  Cowering in the basement exposes us to radiation.  In fact, one source of the radiation we are exposed to is ourselves.  We are radioactive.

Many people would be surprised by the information contained in the previous paragraph.  They believe that "of course it is possible to avoid all radiation.  In fact, it is easy."  The reality is, as should be clear by now, it is literally impossible to avoid all radiation.

I don't even know if it is possible for scientists to construct an artificial radiation free environment.  Even if we give ourselves a pass when it comes to the radiation contained in our own bodies, I don't know if it is possible to construct a structure that is made entirely of components that are not radioactive, then stock it with air that is free of any radioactive components.

All this should make any argument that ignores the fact that everything is already radioactive ridiculous.  Yet that is implicit in many of the arguments made by anti-nuclear people.  But those arguments are routinely made and almost never challenged.

Scientists have a term that captures the reality of the situation.  They often refer to "background radiation", the amount of radiation that is normal for a particular situation.  And the amount of background radiation present differs with circumstances.

There is an overall "average" that captures various common environments.  If you are on land at sea level and not near any unusual sources that produce above average radiation, then there is a level of radiation that is pretty common.  This amount is what scientists are usually talking about.

But the "normal" amount of radiation increases with altitude.  So if you live in Denver, "the mile high city", you are exposed to more than the normal amount of radiation due solely to the fact that Denver is situated in a place that is a mile above sea level.

There are various other common contributors to an elevated level of "background" radiation.  I don't know if standard ocean water is more or less radioactive than "standard" land, whatever that is.  As I noted in my previous post, granite is radioactive, and to an extent that is more than "normal".

The part of Pennsylvania where the Three Mile Island power plant is located contains a lot of granite at or near the surface.  Large parts of the U. S. (and many other parts of the world) also have lots of granite at or near the surface.

Whenever there is a lot of granite around, the local environment will feature a slightly elevated level of "background" radiation.  Granite is only one example of something common that can elevate the level of background radiation.  And, in granite's case, we know about it because of the Three Mile Island accident.  Here are more details about how this came about.

As a result of the accident it was decided to install new, more sensitive, radiation detectors to monitor the radiation levels of employees as they entered and left the complex.  These new detectors showed that some employees did have elevated radiation levels.

A search of the complex didn't turn up any reason for this.  Then someone noted that, at least in the case of one employee, he was more radioactive when he showed up for work in the morning than when he left for home in the evening.  That led to a search for radiation sources outside of work.  And that led to his basement.  And that led to granite.

Granite contains trace amounts of Uranium but not enough to account for the increased amounts of radiation.  But Uranium radioactively decays.  And one of the products of this decay process is Radon gas.  Radon is highly radioactive and, since it is a gas, small quantities of it can escape from the granite.  It was this escaped Radon, which would pool in basements, that turned out to be the source of the radiation.

And it must be pointed out that this small amount of Radon that the employee was picking up while watching TV in his basement gave him a higher dose of radiation than what he was picking up from working at a nuclear power plant.  And it is also important to note that the particular power plant he was working at had recently suffered a complete core meltdown, a "China Syndrome" grade catastrophe. 

So what is the source of background radiation?  Chemical elements come in "isotopes".  The number of protons in the nucleus, the "atomic number", determines what kind of element a particular nucleus represents.  If the nuclei of two atoms of the same element contain differing numbers of neutrons, then they represent two isotopes of the same element.  Hydrogen is the simplest example of this.  It comes in three isotopes.

H-1 is regular Hydrogen.  Its nucleus contains one proton and no neutrons.  According to the "Handbook of Chemistry and Physics", the standard reference on the subject, 99.985% of all naturally occurring Hydrogen is H-1.  It is also not radioactive.  The rest, 0.015%, is H-2, called Deuterium for historical reasons, consisting of one proton and one neutron.  It too is not radioactive.

There is, however, a third "artificial" isotope of Hydrogen.  H-3, called Tritium for historical reasons, consists of one proton and two neutrons.  With a half life of 12.26 years, it is extremely radioactive.  It is common for some isotopes of an element to be radioactive and others not to be.

Given that the universe is over 13 billion years old and the Earth is over 4 billion years old, any Tritium left over from past times is gone. But tiny amounts of Tritium are created by natural means.  And even more is created by man made or "artificial" means.  That's why there is enough around to study.
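A quick back-of-the-envelope calculation, using only the half-life figure quoted above, shows why no primordial Tritium could possibly survive.  This is just the standard half-life decay formula; the round 4-billion-year figure for the Earth's age is my own approximation.

```python
HALF_LIFE_YEARS = 12.26   # Tritium half-life, from the text
EARTH_AGE_YEARS = 4.0e9   # rough age of the Earth

# Fraction of an original sample remaining after t years:
#   N / N0 = (1/2) ** (t / half_life)
half_lives_elapsed = EARTH_AGE_YEARS / HALF_LIFE_YEARS
fraction_left = 0.5 ** half_lives_elapsed

print(half_lives_elapsed)  # over 300 million half-lives have gone by
print(fraction_left)       # underflows to 0.0 -- effectively nothing survives
```

After hundreds of millions of half-lives, the surviving fraction is so small that it can't even be represented in ordinary floating point, which is a vivid way of saying "gone".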

I could walk down the table of elements step by step, but instead I am going to confine myself to just one more element, Carbon.  Carbon is everywhere.  It's in our bodies.  It's in our food.  It is in paper.  It's in the wood used to build many of the houses we live in.  It is the best single example of why everything is radioactive.

98.90% of the carbon we encounter in our daily life is C-12 (6 protons, 6 neutrons).  It is not radioactive.  1.10%, essentially the rest, is C-13 (6 protons, 7 neutrons), also not radioactive.  C-13 has not acquired a name.  But C-14 (6 protons, 8 neutrons) has.

It is called radiocarbon because it is radioactive.  And the fact that it has made itself so useful is why it has acquired that name.  It has a half life of 5,730 years.  That's short enough to cause all the "naturally occurring" C-14 to be gone.  But some is still around ("it's everywhere") because there is a natural process that keeps making more of it.

That "some" is a truly tiny amount.  Roughly one part in a trillion of atmospheric carbon is C-14.  That's not much.  But it is enough because we have developed exquisitely sensitive instruments for detecting it.  And that points out one of the dilemmas surrounding radioactivity.  We now have the instrumentation to easily detect infinitesimal amounts of radioactivity.  So we do.

Very sensitive chemical tests are capable of detecting a few parts per billion of various chemicals.  When it comes to radiocarbon (and other radioactive isotopes), we are talking about tests that are a thousand times more sensitive.  This would be all to the good, except people often read "detectable" but think "dangerous".

And, to give you a better idea of the exquisite accuracy with which radioactivity can be measured, consider this.  "Radiocarbon dating" is now a routine tool used in archeology and other disciplines where the age of something is important.  These tests don't depend on measuring the presence of radiocarbon.  They depend on accurately measuring exactly how much radiocarbon is present.

One part per trillion is the same as 1,000 parts per quadrillion.  If a sample is measured and the answer comes back about 1,000 parts per quadrillion, then we know that the sample is, at most, a thousand years old.  If it comes back about 500 parts per quadrillion we know it is roughly 6,000 years old.  And so on.

But a sample that is 30,000 years old will contain less than 40 parts per quadrillion of radiocarbon.  Still, these measurements can often be done with sufficient accuracy to determine, to within a hundred years, just how old the sample is.
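To make that arithmetic concrete, here is a small Python sketch of the basic decay formula.  It is deliberately simplified: it assumes the atmospheric level has always been 1,000 parts per quadrillion, whereas real radiocarbon dating must also correct for past atmospheric variation.

```python
import math

HALF_LIFE = 5730.0   # C-14 half-life in years
MODERN_PPQ = 1000.0  # ~1 part per trillion = 1,000 parts per quadrillion

def radiocarbon_age(measured_ppq):
    """Years of decay implied by a measured C-14 level, assuming the
    sample started at the modern atmospheric level of 1,000 ppq."""
    return HALF_LIFE * math.log2(MODERN_PPQ / measured_ppq)

print(round(radiocarbon_age(500)))  # one half-life: 5730 years
print(round(radiocarbon_age(40)))   # roughly 26,600 years
```

Run the other way, a 30,000 year old sample works out to about 26 parts per quadrillion, which is where the "less than 40 parts per quadrillion" figure above comes from.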

But wait, there is a complication.  A radiocarbon measurement can only be turned into an age if we know exactly how much radiocarbon was in the atmosphere at various times in the past.

We now know that the amount of radiocarbon in the atmosphere has varied both with time and with place.  This means that the "carbon age" of a sample must be adjusted in a complicated way to turn it into a "calendar age".  Again, this would make no difference if we couldn't very accurately measure these extremely tiny amounts of radiation.  But we can, so it does.

If the anti-nuclear people are allowed to have things their way, the fact that we are talking about exquisitely small amounts of radiation never enters the discussion.  Bringing it up would interfere with their implicit argument that "any measurable amount of radiation is a dangerous amount of radiation".

So we have the "any amount of radiation is a deadly dangerous amount of radiation" argument.  In their defense, while this isn't true, the statement "high levels of radiation are very dangerous" certainly is true.  But there are two problems here.

The first problem is obvious.  Just what constitutes a "high amount" of radiation?  This subject has been studied extensively, particularly in the context of the Hiroshima and Nagasaki atomic bombs.

There is a level of radiation that will not kill you outright but will cause you to die within a few days.  But there is also a level of radiation that will allow you to live a relatively normal life.  I say "relatively normal" because it seems to increase your chances of dying of cancer.  Here again, the "theoretically perfect alternative" problem arises.

If the Hiroshima and Nagasaki bombs had never been dropped then some people living in those areas would have died of cancer anyway.  Cancer is cancer.  There is no difference between a radiation induced cancer and a normally induced cancer.  So scientists are reduced to attempting to calculate the "excess cancers above the norm".

If it is normal for about one person in a thousand to die of a particular kind of cancer, and all of a sudden 10 people in a thousand are dying, then it seems reasonable to assume that 9 people in a thousand are dying due to some unusual cause like radiation from a nuclear explosion.
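As a toy illustration of that "excess cancers" arithmetic, here is the calculation spelled out.  The rates are the made-up ones from the paragraph above, and the population size is a hypothetical number of my own, not real data.

```python
baseline_rate = 1 / 1000   # normal death rate from this kind of cancer
observed_rate = 10 / 1000  # rate seen in the exposed population
population = 100_000       # hypothetical exposed population

# Deaths above the norm are attributed to the unusual cause.
excess_rate = observed_rate - baseline_rate
excess_deaths = excess_rate * population

print(excess_rate)    # about 0.009, i.e. 9 extra deaths per thousand
print(excess_deaths)  # roughly 900 deaths attributed to the unusual cause
```

Real epidemiology is vastly harder than this, of course, because the "observed" and "baseline" rates each come with large uncertainties, which is exactly the problem the rest of this section describes.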

There is a long running program that is funded jointly by the Japanese and the American governments that has tracked Japanese bomb survivors over the many decades since 1945.  A concerted effort was made to estimate the amount of radiation they received, either directly or indirectly, from the bombs.  This is matched with when and how they died.  The result is a "dose response curve".

It is pretty reliable for situations in which people receive high doses of radiation.  They die pretty quickly and their symptoms are unusual.  That makes it easy to assign their deaths to the effects of the radiation they received as a result of the bombings.  But as time passes things get more complicated.

The data clearly shows that your chances of getting sick or dying declines as the dosage declines.  The large number of deaths early on made it easy to get the "high dose" part of the dose response curve nailed down.

But as the time between when the bombs were dropped and when a person died has gotten longer and longer, it has become harder and harder to know what was a radiation induced death, and what was death arising from unrelated causes.

And, since all the people who received high doses have already died, we have more and more been dealing with people who received lower and lower doses.  Despite the increased level of difficulty, the problem remained manageable while most of the deaths were of people who had received a "medium" dose of radiation.

For the last couple of decades the commission has been dealing with people who received low doses and are now old.  That has made it almost impossible to figure out what the effect of low doses of radiation is.  These people are old enough that we expect a considerable number of them to die from cancer, for instance.  These are also the people for which the initial dose estimate is the most error prone.

And that means there is a wide range of reasonable shapes for this part of the dose response curve.  But this is also where most people fall on the curve.  So the low end of the curve is, in many ways, the most important part.

If low radiation exposure makes a thing 10% worse for a million people, that's a very bad thing.  But we really don't know if the curve predicts that something will be 10% worse, 1% worse, or no worse at all.  It all depends on which of the various shapes is the right one.

Since there is no solid reason for favoring one shape over another, it is a matter where reasonable people can disagree.  And disagree they do.  Not surprisingly, the anti-nuclear people tend to argue in favor of one of the "it's worse" shapes.  I can't honestly prove that they are wrong.  But they can't honestly prove that they are right either.

But actual experience argues that, whatever the effect, it is a small one.  There is an extensive "surveillance" system attached to the medical community that tracks cancers.  And this system makes it easy to detect "clusters", relatively small areas where cancer is more prevalent than it should be.  We would see clusters of cancers, or premature deaths, or other indications, if the nuclear power industry was highly dangerous.  But we don't.

What I have been doing so far is making "apples to apples" comparisons.  I have been talking about nuclear radiation and its risks.  But there is an "apples to oranges" comparison that is also appropriate.  And that's a comparison of the dangers associated with nuclear power and other kinds of power generation.

That too is a comparison the anti-nuclear people never make.  That's because these alternatives are real rather than theoretical, so they are not perfect.  I am going to break the various non-nuclear power generation methods into three groups:  fossil fuel based alternatives, traditional "renewable" alternatives, and modern "renewable" alternatives.  Let me start in the middle of my list.

The traditional renewable alternatives are dams and geothermal.  Dams have been around long enough that we now know that they come with unintended consequences.  They were originally built, supposedly, for flood control.  But it cost little to add a power generation capability.  So larger dams all ended up with power generation capability.  And it turns out that there is a tight connection between nuclear and dams.

The Tennessee Valley Authority in the east and the Bonneville Power Administration in the west were both built in the '30s for multiple reasons.  Besides the official reason of flood control, they were also designed to be giant Federal Jobs programs.  They were built out very quickly because that meant more jobs.  But this fast construction schedule was justified as a way to provide flood control for a greater area.

But the larger dams in both projects included an electric power generation capability.  And in the early days, the amount of power generated was well in excess of demand.  When the Manhattan Project, the top secret project to build the atomic bomb, came along it quickly became apparent that fantastic amounts of electric power would be needed.

So, the Oak Ridge facility was sited where it was so that it would have easy access to the TVA's surplus electricity.  And the Hanford facility was sited where it was so that it would have easy access to the BPA's surplus electricity.

As jobs programs, both the TVA and the BPA's time has long passed.  We still depend on the flood control they provide.  But it's the electric power each generates that we now tend to focus on.  We focus on it to such an extent that we have almost forgotten about the flood control aspect.  And, when it comes to electricity, demand has long since caught up with supply.  It has been many decades since either has had a significant surplus of power generation capacity.

In the early days of the TVA and the BPA, both projects were considered 100% beneficial and 0% detrimental.  But, over time, the unintended consequences have come to the fore.  A dam's job is to block things.  The water that ends up backed up behind the dam quickly comes to mind.  But it turns out that dams also block things other than water.

The most obvious thing, at least in my neck of the woods, is fish.  The Pacific Northwest used to be known for its gigantic runs of migrating Salmon.  Not so much any more.  Salmon have difficulty dealing with dams.  During the downstream phase they tend to get beat up going through sluice gates or through "penstocks", devices that funnel water to giant turbine blades.  The turbine blades, in turn, spin electric generators.

The trip upstream is even more fraught.  Salmon are noted for their ability to jump.  But they can't jump over a dam.  "Fish ladders" have been built to provide an alternative.  But the fish fail to figure them out.  Various other tips and techniques to help the Salmon out have been tried over the decades.  But they don't work.  Fish runs have declined to the point where many of them are now a hundredth of what they were a century ago.

We think about, and argue about, and litigate about the fish.  But there is another thing rivers transport.  And that we tend to forget about.  Rivers are part of the geologic process for turning mountains into plains. Rivers literally wear mountains down.  The resulting debris, silt, sand, gravel, and even cobbles of substantial size, are carried down from the mountain, perhaps to the lowlands below, or perhaps all the way to the sea.

This material ends up getting trapped behind dams.  The most direct effect is that the basin behind the dam, which is supposed to be used exclusively to store water, gets filled up.  That's a problem.  But so is the fact that this debris is blocked from moving further down the river.

A few years ago a couple of hundred-year-old dams were knocked down not far from where I live.  Much of the debris that had been stuck behind the dams quickly washed away.  It ended up almost instantly reforming a beach which had gradually disappeared over the last century.  River silt had been critical to its continued existence.

But that wasn't the only change, just the most dramatic.  Marshy areas quickly became established.  Just as quickly they became home to large populations of plant and animal life.  With the dams, these had been far less lush and productive areas.

Especially surprising was how quickly these changes happened.  Things changed dramatically within only a few years.  Equally dramatic was how fast Salmon rediscovered the spawning beds that they had been blocked from for a century.  It is this remarkable transformation that is driving efforts to tear down many more dams.  Enough about dams.

The other traditional renewable option can be dealt with quickly.  The center of the earth is hot.  Space is cold.  That results in a heat flow from the former to the latter.  "Geothermal energy" attempts to harness that flow.  So far only a few successful projects have been built.  They are all tied closely to volcanoes.

But there have been few successes and many failures in this area.  The fact that so many efforts have failed, and still more efforts have never managed to get off the ground, has led to this area being largely abandoned.  Little attention is now focused in this direction.

The second main category of alternatives to nuclear that I am going to discuss is fossil fuels.  This can, for the most part, be divided into three sub-categories:  coal, oil, and natural gas.  Coal is the oldest, the dirtiest, and the least efficient.

Coal is basically dirt that contains a lot of carbon.  It is often not very pure.  Many of the impurities are nasty.  Often coal contains small amounts of mercury, arsenic, and sulfur.  These get spread around in two ways.

First, there is the mining process.  Two main methods are used.  "Underground" mines are holes in the ground that leave as much of the non-coal material in place as is practical.  But only limited success is possible.  A certain amount of non-coal material ends up getting mixed in with the coal.  And occasionally underground mines catch fire.  Some old underground mines have been burning for decades.

The second mining method is "surface mining".  It is often shorthanded as "mountaintop removal" because that's what the process amounts to.  All the material that covers the coal is removed then the coal is scooped up and loaded into giant trucks.  This process suffers from the same contamination issues as underground mines do.

Surface mines don't catch fire.  They have a different problem.  Supposedly, when there is no coal left everything is put back to the way things were before the mine was built.  But this basically never happens.  So surface mines despoil large areas.

Underground mines nominally despoil less area than surface mines do.  But the amount of land that can be damaged indirectly can be substantial.  Underground mines often end up getting flooded.  Even if they technically don't flood, often large amounts of water are pumped out of them.

One way or another, a lot of water containing arsenic, mercury, and other very toxic chemicals ends up in nearby streams and rivers, which become polluted.  Even if the escaped material from either type of mine is benign, the very volume (very large) of it ends up clogging things up, thus making it hard for people, animals, or plants to make use of what's left.

But the damage is not confined to the mining end of the process.  At the other end of the process the relatively pure coal is delivered to a "plant".  There, it is cleaned, leaving large amounts of debris, then burned.

The burning process used to be completely straightforward.  This resulted in a lot of carbon dioxide being inserted into the atmosphere, where it contributed to global warming.  But lots of other things, like arsenic, mercury, and sulfur, also went up the stack.

There are now regulations mandating "scrubbers" that are supposed to remove the bad things. But regulation is loose so the scrubbers don't need to be all that effective.  And in the end something has to be done with whatever gunk the scrubbers remove.  It's nasty stuff.  And, of course, whatever the scrubbers miss still ends up going up the stack and into the air.

Compared to coal, oil is clean.  The footprint of an oil well is tiny when compared to either type of coal mine.  Oil comes out of the earth so it picks up stuff the same way coal does.  But it picks up far less of it.  The most common pollutant in oil is sulfur.  "Sweet" oil is low sulfur oil.  "Sour" oil is high sulfur oil.  Oil can also be "light", flowing easily, or "heavy", flowing only with difficulty.

Light, sweet oil is the most desirable because it requires the least extra processing in the "refining" process.  The heavier the oil, or the more sour the oil, the more extra processing will be required.  And the more nasty stuff that is removed from the oil the more nasty stuff the refinery must somehow dispose of.

The refining process separates out oil's components and "cracks" some components to produce useful products.  These products include gasoline and other fuels, chemical feedstocks for industry, and other things.  Remember, all this ends up somewhere.  Fuels get burned.  Chemicals get made into things like indestructible plastic products.

When any kind of fuel is burned it always produces substantial amounts of carbon dioxide.  But it also can produce carbon monoxide, oxides of nitrogen, sulfur, small particles (soot) and other things.

Oil (or the products made from it) can be transported by ships which sometimes crash, trains, which sometimes crash and catch fire, trucks, which sometimes crash and catch fire, and pipelines, which sometimes rupture and spew oil all over the place.

The third member of the triad is "natural" gas.  It consists mostly of methane, a greenhouse gas that is far more potent than carbon dioxide.  For a long time the industry did not know what to do with natural gas.  There only seemed to be a small market for it.  So they "flared" most of it off.  They literally burned it up near the "oil well" that produced it as a then unwanted byproduct.

Eventually, its economic value was realized.  It is now piped, often for thousands of miles.  It is sometimes liquified and shipped in "gas tankers".  The liquid is returned to its gaseous state at the end of the voyage.

Natural gas leaks out of the earth in "gas fields".  Pipelines leak.  Gas appliances and gas powered industrial equipment leak.  So a considerable amount of it ends up in the atmosphere where it contributes to our greenhouse woes.  And, of course, lots of carbon dioxide is created when it is burned.

In the last couple of decades a lot of oil and gas has come from "fracking".  The process involves hydraulically fracturing rock.  Rock that would otherwise hold whatever oil and gas it contained tightly in place, ends up with lots of small cracks in it.  These cracks allow the previously trapped oil and natural gas to get out.  From there, they are pulled up to the surface in the usual way.

The rock is fractured by injecting a "soup" consisting of water and nasty chemicals into it.  Various techniques, high pressure, explosions, and the like, are used to force the fluid into the rock, thus creating the fractures.

Much of the soup is either left behind or injected back underground in special "disposal" wells.  Either way, it ends up deep underground where it can, for instance, pollute the Ogallala Aquifer, a vast underground reservoir of fresh water that my correspondent was worried about.  (I worry a lot about it too.)

That leaves us with modern renewables as an alternative to nuclear power.  The two popular types are "wind" and "solar".  Wind is shorthand for giant turbines, essentially high tech windmills, that harvest energy from the wind.

Solar is shorthand for large sheets of solar cells that directly turn sunlight into electricity.  There is an alternative technology that uses mirrors to heat water which, in turn, is used to turn turbines connected to electric generators.  But new facilities using this "mirror" technology are no longer being built.  The cost of solar cells has dropped so much that the "mirror" technology is no longer cost competitive.

Both wind and solar harvest the energy of the sun.  They don't do it very efficiently.  But then there is a whole lot of energy from the sun available.  And they have now both gotten efficient enough to deliver electricity at prices equal to or less than the alternatives.

The most obvious negative is that they are "intermittent" rather than "on demand" sources.  Solar farms, large installations of solar cells, can only produce electricity when the sun is shining.  They can be turned off when their production is not needed.  But they can't produce power when the sun is not shining.

The same is true of wind.  If the wind is not blowing or, paradoxically, if it is blowing too hard, they can't produce power.  Again, they can be turned off when their output is not needed.

And, in both cases, if you decide to turn the facility off when it is capable of producing power (the sun is shining or the wind is blowing) then that production can never be gotten back later.  It's not like a dam where you can just leave the water in the reservoir for use at a later time.  With these sun based power sources, there is no "later".

So far, these renewable resources constitute such a small percentage of total electrical generating capacity that this "intermittency" problem has only been a minor one.  Enough power can usually be found elsewhere relatively easily and inexpensively when they are "off line".  But, as sun-based capacity gets built out, this problem will become more and more acute.

The most obvious "fix" for this problem is to connect everything together into a single giant "electric grid".  That way, excess supply located here can get shunted across the grid to match excess demand located there.

This has, in fact, already been done.  Nearly all commercial scale electricity generation in the U. S. is tied into one of a few large regional interconnections.  So, in theory, east coast supply can be used to satisfy west coast demand, or vice versa.  The problem is that the current grid doesn't have the capacity to move the amounts of power that need to be moved around.

We are also told that we need a "smart grid".  This is just shorthand for the capability to control things in a more complex and sophisticated way.  The need here has been obvious for more than a decade now.  But little progress has been made.

These are "systemic" problems.  The current system is not set up to fund these particular kinds of improvements.  Without the necessary funding, it will be impossible to either increase the capacity of the grid or smarten it.  If something can be changed, and it certainly can if the will to do so materializes, then we know how to solve both problems.

The other "solution" is storage capacity.  A giant "battery" would be used to store power in times of surplus and then later feed it back into the grid in times of shortage.  Here the problem is primarily technological.  No one knows how to do it.  (If somebody comes up with a practical fix then we will also need to solve the funding problem, but that's for later.)

Car batteries, cellphone batteries, any kind of actual batteries are too expensive and are incapable of storing the giant amounts of power necessary.  Alternatives to a literal battery, like storing air under pressure in caves, have so far failed to materialize.  Some small scale demonstration projects have been built.  But that's it.

Then, as my correspondent noted, there is the law of unintended consequences.  We have been using all of the options except the newer types of renewables long enough to have an inkling of what their unintended consequences might be.  I described some of the unintended consequences of dams, for instance.  But both wind and solar are too new for us to have any real idea of what their unintended consequences might be.

There is a consequence of nuclear power that is much discussed.  And that's the question of what to do with the radioactive waste generated.  It can conveniently be grouped into "low level", "medium level", and "high level" waste.

COVID-19 has introduced us to the concept of PPE, Personal Protective Equipment.  PPE can pick up COVID-19 so it must be properly disposed of.  In the same way, dealing with radioactive materials responsibly involves PPE.  It too can pick up small quantities of radioactivity.

COVID-19 PPE is often burned and that is a perfectly safe method. The fire destroys the virus so that the resulting debris is perfectly safe.  That doesn't work with radioactivity.  Burning doesn't destroy it.  The standard solution is to bury it in specially designated landfills.  I see no problem with that.  The material is dangerous, but only modestly so.

I am going to skip over medium level nuclear waste for the moment and go straight to high level nuclear waste.  This is truly nasty stuff.  And there's lots of it not all that far from me.  I live in Washington State and the Hanford Reservation, the modern name for what was originally called the Hanford facility, is located in the southeast corner of my state.

As noted elsewhere, the two popular fuels for atomic bombs are Uranium and Plutonium.  The Manhattan Project manufactured "enriched" Uranium at Oak Ridge and Plutonium at Hanford.  For reasons that I don't understand, whatever nuclear garbage ended up being left behind at Oak Ridge by what took place there generates little angst.  The opposite is true of Hanford.

Hanford used something called "breeder reactors" to make Plutonium.  I am going to skip over the details but the result, at least the result of whatever they did back in the day at Hanford, was a lot of radioactive material.  The facilities built and used, mostly in the '40s and '50s, ended up with highly radioactive material all over them.

But wait.  There's more.  Lots more highly radioactive material ended up in "temporary" storage tanks.  These tanks were designed to give the operators of the facility some breathing room.  The idea was that the material would be stored there, but only temporarily.  Later, when there was time to think about it, a more permanent fix would be developed and implemented.

As a result, those tanks were not designed with long life in mind.  There were a couple of generations of tanks.  But even the later, "better" designs were supposed to be temporary.  Yet here we are, seventy or so years later, and the tanks are still in use.

The problem is that when people finally had time to figure out what a good long term solution looked like, they also figured that it would be really expensive.  And no one wanted to pony up the money.  As a result, more time passed and problems got worse.

The tanks started leaking.  And people had only a vague idea of what was in each tank.  What they did know was that the contents of each tank were unique.  And each tank contained both highly radioactive material and extremely toxic chemicals.  There was just no easy way of dealing with their contents.

Eventually it dawned on everybody that further delay would only make the problem more expensive to handle. The result was the "Hanford Cleanup Project".  It's been going on since the late '90s and has already burned through billions of dollars.  The end is nowhere in sight.

The project can be broken up into two main components.  The first component works to clean up and render reasonably safe the manufacturing and other facilities.  This part of the project has been beset with delays, overruns, and numerous accusations of incompetence and corruption.  But progress is being made.

The second component deals with the storage tanks.  There are 177 tanks that contain in total about fifty million gallons of truly nasty stuff.  And, besides being a mixture of radioactive materials and toxic chemicals,  the "stuff" is also a mixture of liquids and solids.

The idea people finally came up with was to extract all of this material from the tanks and then somehow encase it in glass.  That would work for both the radioactive and the toxic materials, or so the thinking went.  It would also work for both the solids and the liquids.  Or again, so the thinking went.

These large chunks of glass were supposed to safely and permanently contain it all.  Things have not gone well with this plan.  I am going to skip over all the problems getting the stuff out of the tanks and focus on the problem of turning it into glass.  After the expenditure of vast quantities of time, effort, and money, they are still working on how to pull that off.

And I think it's a stupid plan.  But it is what the anti-nuclear people have demanded.  They want this stuff rendered safe for ten thousand years.  That's ridiculous.  That kind of thinking led to the Yucca Mountain Nuclear Storage Facility fiasco.

A mountain in Nevada was chosen.  Why?  The theory was that Nevada, with its small population, would be able to put up only token political resistance.  And the "Nevada Test Site" was the location where more than a hundred above ground nuclear weapon tests had been performed between the late '40s and mid-'60s.  Yucca Mountain was chosen because it was close to the Nevada Test Site.

I'm not going to go through all the twists and turns.  Suffice it to say that plan was eventually abandoned.  A major factor was that scientists couldn't absolutely guarantee that no nuclear material would leak out somewhere over the 10,000 year time period that the anti-nuclear people demanded.

Since the "safe for 10,000 years" standard was the nominal justification for both the location and the design of the facility, and since Nevadans turned out to be way better at playing the politics game than expected, supporters were eventually forced to give up.

And, of course, no other location, including Hanford, Oak Ridge, or Los Alamos (also located in a state with a small population and presumably not much political clout), could be shown to be any more suitable than Yucca Mountain.  NIMBY, Not In My Back Yard, triumphed.

But why 10,000 years?  Isotopes with short half-lives like Tritium are highly radioactive.  But the amount of radioactivity declines relatively quickly.  Tritium's half-life is about 12 years, so in 120 years only about one tenth of one percent of it will be left.  Radioactive decay will have disposed of the rest.  Something like radiocarbon, with its half-life of roughly 5,700 years, will still be about 30% as radioactive as it started after 10,000 years.
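The arithmetic behind those decay figures is plain exponential decay.  Here is a quick sketch; the half-life values are the commonly quoted approximations, not anything specific to Hanford's inventory:

```python
def remaining_fraction(half_life_years, elapsed_years):
    """Fraction of a radioactive isotope remaining after elapsed_years.

    After each half-life, half of what was there is gone, so the
    remaining fraction is (1/2) raised to the number of elapsed half-lives.
    """
    return 0.5 ** (elapsed_years / half_life_years)

# Tritium: half-life about 12.3 years.
# After 120 years, roughly a tenth of a percent remains.
print(remaining_fraction(12.3, 120))    # ~0.0012

# Carbon-14 (radiocarbon): half-life about 5,730 years.
# After 10,000 years, roughly 30% remains.
print(remaining_fraction(5730, 10_000)) # ~0.30
```

The same one-liner makes the general point: short-half-life isotopes are intensely radioactive but burn themselves out quickly, while long-half-life isotopes linger but emit correspondingly less radiation per year.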

The calculations for how the amount of radioactivity changes over time for nuclear fuel are complicated.  But physicists have done them.  They can map the amount of heat generated as the fuel slowly decays in storage.

They can also map the degree of radioactivity at various times in that same time period.  The Yucca Mountain storage facility was designed to handle the heat generation for the entire period.  Supposedly, the facility was designed to hold everything safely in place and deep underground for 10,000 years.  Had it worked, that would have taken care of the radioactivity problem.

With Hanford the final product, the chunks of glass, are supposed to render it safe for people wandering around thousands of years into the future.  That may seem reasonable if you don't think about it.  But again, we are comparing something to the theoretical perfect alternative.  To see how ridiculous this is, compare this situation with a settling pond adjacent to a coal fired power plant.

Are these facilities safe for 10,000 years?  No!  Several ponds at plants that are still in operation have had the "containment" dams that keep them in place give way.  This has released millions of gallons of nasty stuff into local streams and rivers.  Any one of the hundreds of settling ponds sited all over the country contains more material than is contained in all the Hanford tanks combined.

On a gallon for gallon basis, the material in the coal ponds is far less dangerous than the Hanford material.  But it is still very dangerous.  And there is so much of it.  The Hanford Reservation is in an out of the way part of my state.  Many of these settling ponds are close to population centers.

Radioactive decay guarantees that over time the danger from the radioactive material at Hanford diminishes.  The half-life of Arsenic as a poison is infinite.  It never gets any less poisonous.  The same is true of the mercury, sulfur, and other dangerous compounds found in the typical coal plant settling pond.

Yet there are literally hundreds of large coal ponds full of nasty stuff scattered around the countryside.  As time passes, many of them are associated with plants that have shut down.  The company that once ran the plant may now be completely out of business.  That means no one really cares whether the settling pond stays safe any more.  No one!  Yet few members of the general public are concerned.

And it's not just coal.  And it's not just the fossil fuel industry.  Google "Minamata disease" or "Bhopal disaster".  Large scale incidents involving large numbers of deaths or injuries are a regular occurrence.  All have to do with routine industrial activities.  Yet there is no NIMBY or a "it has to be safe for 10,000 years" kind of thing going on.

All of this has led me to a conclusion.  There are real risks and perceived risks.  Often the two are wildly divergent.  I have concluded that perceived risks are determined by pressure groups.  If there is an effective pressure group arguing that a certain risk is high then that's what the public will believe.  If there is an effective pressure group arguing that a certain risk is low then that's what the public will believe.

There is no effective pressure group arguing that coal fired power plants (and many other things) are dangerous.  There is an effective pressure group arguing that nuclear power plants are dangerous.  

In this latter case, I confess that the pressure group is pushing on an open door.  The fact that on two separate occasions an atomic bomb leveled a city and killed 100,000 people is well known.  That scares the shit out of people, as it should.  If you couple that with decades of horror movies whose basic plot was "some nuclear thing that normal people don't understand goes horribly wrong", it is no wonder that it has been easy for the anti-nuclear people to fearmonger.

Companies spend billions of dollars on advertising every year trying to convince us that their products are safe and beneficial.  This provides a certain amount of cover for the processes necessary to manufacture and deliver those products.  "Coal keeps your lights on".  "Cigarettes make you look sexy and feel good about yourself".  People don't want to believe that a product or service they routinely consume is somehow evil.  So they tend to give industry a pass.

We can see this playing out in my neck of the woods.  We get some of our electric power from coal.  The companies providing that power get only a modest amount of pushback about the damage coal does. And there is no talk of "safe for the next 10,000 years" when it comes to what will be left behind when the last coal plant shuts down a few years from now.  In this, my neck of the woods is typical.

I think a new series of above ground tanks should be built at Hanford.  They should NOT be built to a 10,000 year standard.  The design should instead assume that they will receive regular maintenance.  It should be easy to detect leaks or other problems.  It should be easy to transfer material from one new tank to another.

And they should not be surrounded by high security.  A standard chain link fence with appropriate signage is adequate.  That is more than we do with much of the dangerous material our society creates, stores, transports, and uses.  (Google "2020 Beirut explosion", if you don't believe me.)

If some idiot wants to wander in and, as a result, gets a dose of radiation, that's on him.  We don't spend a nearly infinite amount of effort safeguarding future generations with other dangers we pass along to them.  Why should dangerous nuclear materials be treated any differently?

If we had adopted my approach to the Hanford storage tanks then we would have made a lot more progress by now.  The Hanford "Reservation" is very large.  There is plenty of room to spread around lots of relatively small tanks and keep them widely separated.  That way one can't affect another.

It is probably a good idea to have "mall cop" quality security on duty to deal with the clueless.  But if someone wants to make a determined effort to do something dangerous and stupid, let them.  Whatever harm comes their way, that's on them.  And if future generations do something stupid, that's on them too.

And that leads to what should be done with "medium" level nuclear waste.  It is more voluminous but less dangerous than the high level stuff.  Even so, there's just not that much of it.  Using a similar strategy should work fine for it.

Put up some more tanks at Hanford.  Put them in a separate area that is distinct from the tanks containing the high level waste.  Use the same type of fence and "mall cop" security.  Just change the signage on the fencing appropriately.

Even if you include material with a medium level of radioactivity, we are not talking that much material.  It takes something like a long train of hopper cars to transport enough coal to power a coal fired power plant for a day or so.  It takes one, or perhaps two, rail cars to fuel a nuclear plant for a year.  No mountaintops were removed in the making of the nuclear power industry.

One way to look at the advance of civilization is by measuring its ability to do difficult and dangerous things reliably and safely.  If the nuclear industry is allowed to take advantage of the march of technology then it is well within our capability for it to operate reliably, safely, and economically.

The anti-nuclear people have succeeded in stifling all efforts to replace old facilities with new and improved ones.  And they have driven costs up without improving safety.  Is it any wonder I have such a low opinion of them?