Friday, December 25, 2020

Gravity Waves - An Update

 This is an update to my October 2017 post on the subject.  Here's the link:  Sigma 5: Gravity Waves.  It's been more than three years.  Surely, something has changed.  Indeed it has.  But before proceeding both backward and forward, let me review the results I reported in my earlier post.

These results were produced by a "gravity wave observatory" called LIGO.  For more than a decade LIGO had nothing to report.  The reason was simple.  Its detector was just not sensitive enough.  But it went through several generations of upgrades.  The last one (Advanced LIGO) did the trick.  The data collection run, tagged "O1" for "Observation run 1", ran from September of 2015 to January of 2016.  It produced three events.  Each event was caused by two large black holes spiraling together till they merged.

The observatory was then shut down for minor upgrades.  At completion, the O2 run took place.  It ran from December 2016 to August 2017.  O1 and O2 together eventually resulted in 11 events being detected.  When I wrote my post five of them had been reported on.  Since then, another round of upgrades has been installed.  Upon completion, the O3 run was started.  It had to be shut down in the middle so it was informally broken into the O3a run (April 2019 to September 2019) and the O3b run (November 2019 to March 2020).

Altogether, 56 detection events have been identified.  And a third observatory (LIGO is actually two observatories, one located in Washington State, and the other in Louisiana) has been brought online.  VIRGO is slightly smaller than the two observatories that combine to make up LIGO, but is sensitive enough to detect many of the same events that LIGO can.  With three observatories measuring the same event, its location can be narrowed down to a much smaller slice of the sky.  And, in general, more information about the event can be collected.

LIGO is currently shut down so that still more upgrades can be installed.  The O4 run is currently slated to start in June of 2022.  And VIRGO is also getting upgraded.  And other facilities will be coming online soon.  They are scattered all over the globe.  There are even plans for "LIGO in space", a LIGO-like instrument that would be bigger than is practical for an earth based observatory.  Once those first observations proved that it was possible to detect gravity waves, funding ramped up dramatically.

But that's enough of the present and the future for the moment.  Let's go to the past.  And let's do it by asking a simple question:  what's the speed of light?

It has been possible to make observations spanning distances like ten or twenty miles for millennia.  Back then it was obvious that sound traveled at a finite speed.  You could observe an action and then note a delay measured in seconds before the sound associated with that action reached you.  That made it obvious that, if light was not instantaneous, then it at least traveled much faster than sound.

Reasonably accurate measurements of the speed of sound were successfully made hundreds of years ago.  We have had a very accurate estimate of the speed of sound for perhaps two hundred years.  And scientists were able to establish that sound worked by oscillating something.  Normally, this was air.  But it could be water.  And the speed of sound through water was higher than that of air.  And sound couldn't travel through a vacuum at all.  So, scientists have long had a good idea of how sound worked.

And the obvious thing to do was to apply what they knew about sound waves to light waves.  If the analogy held then light should have a propagation speed.  But what was it?  "Fast" just doesn't tell us much.  Efforts to determine its speed date back to at least 1629.  Experiments with cannons and the like determined that it was too fast to measure using standard methods.

That led to an effort based on astronomy.  This effort produced the first measurement that was at least in the ball park.  Romer in 1676 made detailed observations of the orbits of the moons of Jupiter.  He calculated that in order for the observations to make sense it must take about 22 minutes for light to cross from one side of the Earth's orbit around the Sun to the other.  That would have been great if astronomers of the day had known exactly how big that orbit was.  They didn't.  The best guess put the speed of light at about 140,000 miles per second.

What was important about this is that it told scientists just how small the time intervals were that they would need to be able to measure.  Say they wanted to measure the propagation time of light over a distance of 10 miles.  At 140,000 MPS light would take 0.00007 seconds to traverse that distance.  A stopwatch just wasn't going to cut it.
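
Here is that arithmetic as a minimal sketch, using Romer's rough figure from above:

    # Light travel time over a 10 mile baseline at Romer's estimated speed.
    speed_of_light = 140_000          # miles per second (the early, rough estimate)
    distance = 10                     # miles

    travel_time = distance / speed_of_light
    print(f"{travel_time:.5f} seconds")   # about 0.00007 seconds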

An early scheme depended on a rapidly rotating wheel with teeth on it.  Taking light as particles for the moment to make the explanation simple, arrange for particles of light to be shot past the wheel and along to a distant mirror.  There they are reflected back, again past the wheel and on to a detector.  If things are arranged such that the light has to travel through the part of the wheel where the teeth are, then it will be blocked when a tooth is in the way but can pass freely when it hits a gap.

Now, spin the wheel at high speed.  If the wheel is spinning at just the right speed then a particle of light can pass through one gap between teeth on the wheel, bounce off the distant mirror, and then return just in time to pass through the adjacent gap.  This setup allows time to be sliced into very small intervals very accurately.  Simple calculations suffice.  And a different sized disk or a different rotation speed can be used to fine tune the interval to whatever is necessary.

This admittedly inaccurate explanation gives you the idea.  The point is that with the proper equipment built along these lines things can be arranged so that the light passes through one gap going out and a different gap coming back.  A wide range of time delays can be accommodated.  It is simply a matter of dialing the setup in.  Once the right combination of rotation speed and disk/tooth size is found, it is a small step to translate the settings into the speed of light they represent.
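
As a sketch of how the numbers get dialed in, here is the toothed-wheel arithmetic with made-up but plausible values for the wheel and the mirror distance (none of these figures come from the actual historical experiments):

    # Toothed-wheel estimate of the speed of light, as described above.
    # The light leaves through one gap and returns through the adjacent gap,
    # so during the round trip the wheel advances by exactly one gap spacing.
    gaps = 500                  # number of gaps (and teeth) on the wheel -- hypothetical
    mirror_distance = 5.0       # one-way distance to the mirror, in miles -- hypothetical
    spin_rate = 37.2            # revolutions per second at which the light gets through

    round_trip_time = 1 / (gaps * spin_rate)          # time to advance by one gap
    speed_of_light = 2 * mirror_distance / round_trip_time
    print(f"{speed_of_light:,.0f} miles per second")  # about 186,000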

And, as a result, Foucault was able to come up with a speed of 298,000 km/s in 1862.  This is very close to the modern value of just under 300,000 km/s.  Others improved his setup and came up with similar values.  By 1887 Michelson and Morley were confident that they could measure the speed of light very accurately.

Measuring the speed of light very accurately was of secondary importance to them.  Their primary interest was in learning how fast and in what direction the Earth was moving.  To do that they needed to very accurately measure the speed of light.

In normal circumstances sound travels through air.  Air is the "medium of transmission", the thing that sound vibrates in order to move.  But what was the medium of transmission of light?  It had to be something, didn't it?

And there were all those things that weren't the medium of transmission.  After all, unlike sound, light can easily travel through a vacuum.  And since a vacuum is, by definition, nothing, all the usual suspects get immediately eliminated.  So, scientists posited the existence of something called the "luminiferous aether".

Assuming something exists just because its existence is convenient is not good enough for scientists.  They need actual proof that it does exist.  And a good place to start is by trying to measure its properties.  And the fundamental property that aether had was its ability to transmit light.

And it was assumed that, everything else being equal, the propagation speed of light in aether was constant.  That was true for sound and air.

If you kept air moving at a constant speed and kept its temperature constant, and so on, then the speed of sound through it was constant.  Conversely, you could determine some of the attributes of air by measuring the speed of sound through it.  For instance, fast moving air would result in a different measured speed of sound than slow moving air.

So, assuming the parallel held, the speed and direction the aether was moving could be inferred from a careful measurement of the speed of light in various directions and at various times.  And it was assumed that the aether didn't move.  The Earth moved through the aether.  So, differences in the measured speed of light would reveal the speed of the aether relative to the Earth, which in turn would yield the speed of the Earth through the aether.

All this was speculation piled upon speculation and scientists knew it.  But the measurements were expected to turn up something, even if it wasn't exactly what "aether theory" predicted.  And repeated measurements should lead to some ideas about aether theory being discarded and other ideas being confirmed.  That was all par for the course.  But what everybody agreed on was that careful measurements would turn up differences in the measured speed of light.

After all, it was known that the Earth traveled around the Sun at relatively high speed.  And the direction of travel changed with the season.  That amount of change alone should have been enough to change the speed of light by a measurable amount.  The apparatus had been designed to easily and unambiguously detect changes of this magnitude.  If other changes turned up as the measurement process progressed, that would just be a bonus.

The problem is that the Michelson-Morley experiment turned up a result that no one expected.  And this "unexpected result" phenomenon pops up in Science all the time.  It is a normal part of science.  Scientists expect it to happen regularly.  They just don't know when it will happen and when it won't.  And what this means is that all those "scientists reject my belief, not because it is wrong, but because it doesn't fit in with what they already believe" arguments are nonsense.

If someone provides hard evidence that current scientific thinking is wrong then scientists change their thinking.  That's what scientists were forced to do because Michelson and Morley got the result they did.  No scientist liked the result they got.  But other scientists were able to reproduce the result in well conducted experiments.  So, scientists had to find a way to live with the result, which they eventually did.  Scientists reject "unscientific" beliefs, not because they are unscientific, but because they are not backed by solid evidence.

Scientists have been forced by the results of experiment to reject all kinds of sensible ideas.  They have been forced to accept ideas that were far more weird and unnatural and unbelievable than anything outsiders have thrown at them.  Why?  Because some well done experiment or observation has forced them to.  And the Michelson-Morley result was one of many instances of this.

The Michelson-Morley result eventually led Einstein to publish his Special Relativity theory in 1905.  Their result had dealt a near-fatal blow to the idea that the luminiferous aether existed.  But it wasn't until Special Relativity that scientists bailed completely on it.  The theory worked.  It also did away with the need for aether to exist at all.  The real kick in the pants, however, didn't come until ten years later.  Einstein introduced General Relativity in 1915.  That's when things got really weird.

In 1905 Einstein had built Special Relativity around the idea that the speed of light is constant.  That's pretty weird.  In order to make things work, the theory demanded that all these other not-light things must stretch and shrink.  Plenty of things still remained unchanging.  But some things that we had thought were fixed turned out to change, in predictable ways, under the circumstances Einstein laid out.

Okay.  That's a lot to buy, but the Michelson-Morley result demanded some kind of weirdness.  And Special Relativity weirdness was pretty much the minimum amount of weirdness that would get the job done.  The problem is that General Relativity took weirdness to a whole new level.  We're now talking bat-shit-crazy weird.

You see, General Relativity requires space itself to stretch and shrink.  Space, to put it another way, is the luminiferous aether.  And it behaves in many ways like the air that sound travels through.  It's what vibrates to transmit gravity.

Newton said "objects in motion tend to continue in that motion".  Gravity works by literally warping space.  So an object thinks it is continuing to travel in a straight line.  But gravity causes space to warp, and that causes the "straight line" course of the object to bend, not because the object has changed direction, but because "straight" is no longer straight.  Like I said, bat-shit-crazy.

And this "space is wiggly" business means that there is such a thing as "gravity waves", instances of space wiggling because, you know, gravity.  And I think you can now understand why I, for one, was not having any of it.  I was not convinced that gravity waves even existed, even though lots of smart people whom I deeply respected believed that they did.  But the LIGO results did me in.  They were right and I was wrong.

And I have to admit that I am actually happy that I turned out to be wrong.  Because, as I observed three years ago, "[e]very time something previously invisible has become visible, tremendous discoveries have been made".  And it is important to understand that the first tremendous discovery has already been made.  We now know with absolute certainty that gravity waves exist.  That's a tremendous discovery if there ever was one.

Beyond that, we know that General Relativity computations about the characteristics of gravity waves work pretty well.  For instance, they get their strength about right.  Are they 100% right?  Maybe.  But we know so little about the events behind the observations at this point that we can't say with certainty.  All we know about many events comes from running LIGO data through General Relativity.  That results in, for instance, an estimated mass.  Is the estimate correct?  At this point we have no way of knowing for sure.

But even if the calculations are off by some amount they still tell us things.  Remember that first estimate for the speed of light.  It was close enough to tell us where the decimal point went.  And that was valuable information.

And we now have 56 events to go on.  The first event was scary.  Was it some kind of screw up?  Was it some kind of unusual event or was it pretty typical?  With one event it's hard to judge.  With 56 events patterns emerge.  A lot of the events are two black holes spiraling together to merge into one.  We now have some idea of how common that event is.  One of the early events was a neutron star merging with a black hole.  Scientists got very excited about that one.

There have been some events that fall outside the accepted theories for how these kinds of events are supposed to progress.  The details are complex and I don't really understand them.  But the scientists are very excited by what they are seeing.  It would be nice if everyone else was too.  But they are not.

Something the general public doesn't understand is that scientists are actually happier when the data doesn't conform to current theory.  It's just more fun and interesting to be on the hunt for a new theory to replace an old broken one.  That's as good as it gets.  It's what made Einstein famous.

Next best is to come up with a modification to an old theory.  Sometimes you don't have to throw the whole thing away.  Maybe you change parts of a theory but leave the rest alone.  If the result is that the revised version now fits all the experimental data then that is a very good result.

Changing a theory so that the new version fits the experimental data better, but still not perfectly, is progress, but it is not the best outcome.  That's an improvement, but it also is evidence that more work is needed.  Unlike many, scientists expect their theories to agree with all the data, not just most of it.

The scientists that feed off of the LIGO data have gone from not excited to very excited.  Before 2016 they had no data to work with.  That's not very exciting.  Now that they have data, and lots of it, to work with they are very excited.

Unfortunately, things have gone in the other direction in terms of general interest.  There was a flurry of press coverage back in 2016.  Although the first event LIGO observed happened in 2015 it wasn't announced until then.  For reasons I go into in the previous post the first event was suspicious.  No one wanted to make an announcement they would later have to take back.  So there was a long delay while things were checked and rechecked.

Fortunately, the second event came along pretty quickly.  That's when I and many others relaxed.  It was real.  And a third event followed shortly thereafter.  And VIRGO came on line.  This was enough to maintain interest until about the summer of 2017.  I wrote my post in October of 2017.  It turns out that interest by the press and by the public was already waning by then.  Press coverage since has been almost nonexistent.

But the data keeps pouring out.  The upgrade from the O1 setup to the O2 setup was modest.  But it was enough to increase the rate of event detection.  The modest upgrade that was sandwiched between the O2 and the O3 runs has also increased the rate of event detection.  LIGO will be down for a long time between the O3 and the O4 runs.  The currently scheduled starting date for the O4 run envisions a 27 month gap.  The gap is so long because the upgrade will be much more extensive.

Each upgrade increases the sensitivity.  That means that events similar in size to currently detectable events can be detected further out.  Since a "cube" law is involved, a 10% increase in sensitivity translates into a 33% increase in the volume covered.  Also, smaller events that happen within the old volume can now be detected.  The difference is not as dramatic, but it should result in still more events being detected.
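
A quick sketch of the arithmetic behind that "cube law" claim:

    # Detection range grows with sensitivity, and the volume surveyed grows
    # with the cube of the range.
    sensitivity_gain = 1.10               # a 10% improvement
    volume_gain = sensitivity_gain ** 3
    print(f"{(volume_gain - 1) * 100:.0f}% more volume surveyed")   # about 33%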

So LIGO started out as what appeared to be a boondoggle.  For a long time it ate lots of money while producing no science.  But the project did a one-eighty in 2016 when that spectacular discovery of the first event was announced.  The discovery of the second event didn't strike the public as nearly as spectacular.  But in many ways it was more important.  It proved that the first event wasn't a one-off.

Unfortunately, the public saw little difference between the first event and the second, so they started tuning out.  And the public was treating each new event as routine long before LIGO got to the 56th one.  And routine is not newsworthy.  So, the press has been checked out since 2018.  It is possible but unlikely that O4 will produce a result that is spectacular enough to put LIGO on the front page again.

That is sad.  The quality of the science is increasing by leaps and bounds.  A big reason for this is the large pool of events, the very thing that makes the whole enterprise boring to the public.  And the O4 run should make the buzz problem worse by producing data much more quickly than any previous run.

But more data is good for science.  Many more events means that comparisons can be made and patterns can be confirmed or disproven.  There is lots more data to use to test theories against.  Most theories will be found wanting but that's okay.  It's how science works.

The practical effect of something as exotic as gravitational waves cannot be predicted.  No one knew at the time of its development that an obscure and insanely difficult physics theory called Quantum Mechanics would prove to be the foundation upon which all the integrated circuits that power all of our modern electronic devices are built.

Some theoretical work, and at this point LIGO is all about the theoretical, never seems to lead to anything practical.  But time after time, something wildly theoretical and of no apparent practical use ends up allowing us to go from "why are people we don't care about and who live in an obscure corner in China getting sick?" to "we are now making life saving vaccines out of something that the public has never heard of called 'mRNA'."  And these mRNA vaccines are so powerful that they can stop a deadly worldwide pandemic in its tracks.  And only a year separates these two events.

Saturday, December 12, 2020

60 Years of Science - Part 23

This post is the last in a series that dates back several years.  In fact, it's been going on for long enough that several posts ago I decided to upgrade from "50 Years of Science" to "60 Years of Science".  And, if we group them together, this is the twenty-third main entry in the series.  You can go to Sigma 5: 50 Years of Science - Links for a post that contains links for all the entries in the series.  I will update that post to include a link to this entry as soon as I have posted it.

I take Isaac Asimov's book The Intelligent Man's Guide to the Physical Sciences as my baseline for the state of science when he wrote the book (1959 - 60).  In this post I will review the section titled  "Fusion Power".  This is the last section of the last chapter in the book.  But there is an appendix.  So I will finish up by taking a look at what's in it. Since there is no more to the book, there is no reason to continue the series.  To work.

"Fusion Power" addresses a potential that has remained unfulfilled to this day.  Nuclear fission potentially provides access to such large amounts of power as to be almost unimaginable.  This potential has been turned into reality in the form of fission powered electric power plants that supply a substantial portion of the electricity we consume.

For all their problems, and in spite of the fact that they have not lived up to the potential Asimov and many others saw back in 1960, facilities of this type actually exist.  And they actually produce large quantities of electric power on a routine basis.

The same can not be said for fusion based electric power production.  But before we go into how this sorry state of affairs has come to be, let's review what Asimov had to say on the subject.  He starts out by noting that at the time of the book, physicists had been dreaming of harnessing nuclear fusion for twenty years.  Why the interest?    Because fusion is the process that powers our Sun.

The Sun is at roughly the midpoint of the time it will spend as a type of star called a Yellow Dwarf.  At some time in the future it will go through a series of metamorphoses that will turn it into a type of star called a White Dwarf.  That doesn't sound so bad, but for us it is.  A White Dwarf is tiny.  And it only puts out an infinitesimal amount of the heat and light that a Yellow Dwarf star like our Sun produces.

Even so, thanks to fusion, the Sun has been able to continuously produce massive quantities of energy for billions of years.  And it will continue to be able to do so for several billion years more.  Is there any better argument for the potential represented by fusion power?

Asimov correctly concludes that "[i]f somehow we could reproduce such reactions on the earth under control, all our energy problems would be solved."  The "under control" part is important.  At that time we already knew how to build a large "H" bomb.  It used fusion to create an amount of energy that was measured in megatons.  That's far too much of a good thing.

It's not the inefficiency of fossil fuel burning that is the problem. It is the side effects, the greenhouse gasses, etc.  Other non-nuclear options have problems that I have listed elsewhere.  Fission, the other "nuclear" option, has turned out to have problems that I have also addressed elsewhere.  But, assuming it could be controlled, and assuming little or no radioactivity would be generated, a reasonable assumption, then fusion based power generation would be a wonderful thing.

Asimov opines that fusion power would produce no radioactive waste.  This is actually an open question.  Some designs produce no radioactive waste.  Others do.  But even the designs that do produce radioactive waste look like they would produce far less radioactive waste than a fission based power plant.  He also notes that pound-for-pound fusion produces 5-10 times more power than fission.  So what's the hold-up?

He postulates the development of a fusion reactor based on Deuterium.  It is far rarer than regular Hydrogen.  But, as he notes, traces can be found in regular ocean water.  If efficient extraction processes can be found or developed then the fuel supply becomes effectively unlimited.

Deuterium has long been a subject of interest to nuclear physicists.  It is much easier to induce it to fuse.  The easy way to think of the problem is in terms of temperature.  Deuterium requires super-high temperatures to induce it to fuse.  But regular Hydrogen requires ultra-high temperatures, temperatures far higher than Deuterium requires.  Both regular Hydrogen and Deuterium are non-radioactive.  Putting it all together, Deuterium seems like the smart way to go.

Asimov then goes on to practical considerations.  With fission, physicists already had a starting point when it came to figuring out how to control it.  The "nuclear pile" they had built while figuring out how to build an "A" (atomic - fission) bomb provided a working example of a small, controlled, fission environment.  The problem is that there is no pile-equivalent that was developed along the way to the creation of a successful "H" (Hydrogen - fusion) bomb.

All "H" bomb designs use an "A" bomb as the mechanism necessary to initiate the fusion reaction.  No one ever figured out a half-measure way to get the job done.  So the developers of a fusion based power plant had to start from scratch.

Asimov whined about a lack of effort when it came to fusion reactor design.  This might have been true at the time.  But the problem has since received a large and persistent amount of attention.  Asimov lays out the two big problems.

The first problem is achieving super-high temperatures.  He estimated that reaching 350 million degrees would be necessary.  That's no problem in the vicinity of an exploding "A" bomb.  But we want to be able to do it in a relatively normal building that is situated relatively close to homes and businesses.  His estimate turned out to be way too high.  But millions of degrees are certainly necessary.

The second problem is holding everything together long enough to extract the power and turn it into electricity.  An "H" bomb literally blows itself apart in a small fraction of a second.  A practical fusion power plant must be able to produce power steadily for minutes, hours, days, weeks, even years.  Asimov tackles the first problem first.

His suggestion is a magnetic bottle.  A donut shaped cavity is evacuated.  Deuterium is inserted and an extremely strong magnetic field is applied.  For reasons I am not going to get into, this mechanism can heat the Deuterium to extremely high temperatures.  This causes it to turn into a plasma.  I am also going to skip most of the differences between gases and plasmas.

I am also going to mostly skip over the fact that we took a vacuum and then added a gas.  Doesn't that ruin the vacuum?  It doesn't if only a small amount of gas is inserted.  And it turns out that a plasma acts effectively as if it is a series of wires.  So we can use magnetic fields to "pump" lots of energy into it from a short distance (a few feet) away.  That heats it up.

The fact that we are doing this in a vacuum means that, if we are clever enough, we can keep the super-hot plasma from ever touching the cold walls of the donut.  This means that the plasma can stay hot, as it is not transferring any energy from itself to the cold walls.  Conversely, the walls can be kept at something like normal temperatures because they don't make contact with the super-hot plasma.

And it turns out that all of this works.  But only to a certain extent.  No one has been able to get a device to heat a plasma to a sufficiently high temperature and then keep it there for any length of time.  One unexpected problem turns out to be plasma instability.  The plasma starts forming waves.  And those waves keep getting bigger and bigger.  They quickly get big enough to mess everything up.  The temperature crashes, or something else goes wrong, and the whole thing quickly "quenches".

Asimov then moves on to the second problem.  The trick here is that plasmas conduct electricity.  That means that you can steer them with magnetic fields.  This technique is called "magnetic confinement".  Scientists were having some success with magnetic confinement at the time the book was written.  They have since had much more success.  But "plasma instability", the "wave" business I discussed above, has limited the degree of success.  If the plasma instability problem could be fixed then magnetic confinement would work just fine.

Since the time of the book a Russian idea called a Tokamak has become the leading candidate for the best design.  To the untrained eye it looks pretty much like the donut I have discussed above.  But the subtle differences apparently help a lot.  Many design ideas have been tried since the '60s and failed.  The current leading candidate is called ITER.  It is a European led initiative that is based on a Tokamak.

Many billions of dollars have been sunk into ITER.  It is years away from completion so we are many years away from learning how well it works.  And it is a "proof of concept" project.  If it works then a "new and improved" design will be needed.

It will be based on lessons learned from the current ITER.  This follow-on device is supposed to be the first device that can actually produce electricity.  And, if that design works but each device constructed according to that design costs ten or twenty billion dollars to build, then fusion based power production may never pencil out.

There are lots of alternatives to the ITER that some laboratory or another is tinkering with.  Funders have decided to go pretty much all in with ITER.  So all these other ideas are starved for cash and operating on a relative shoestring.  So they tend to poke along.  But, if one of them happens to produce spectacular results then it may displace the ITER/Tokamak design as the front runner.  Don't hold your breath.

In Asimov's time, people were pretty optimistic that fusion power could be pulled off.  But that was sixty years ago.  Since then a lot of designs have come and gone.  And many billions of dollars have gone.  And we are still a very long way from a practical and cost effective device.  Or even one that works at all.

That's where the main part of Asimov's book ends.  So, let's finish up by looking at the appendix.  It is titled "The Mathematics of Science".  It is divided into two sections, "Gravitation", and "Relativity".  Asimov confined himself to a little simple arithmetic for the main part of the book.  Here, he relaxes that restriction somewhat.

You can understand Galileo's take on gravitation by moving on from basic arithmetic to High School algebra.  But before Asimov dives into that he steps back to make a few general observations.

He credits Galileo for the transition from a "qualitative" approach, just describing what's going on in sufficient detail for someone else to be able to recognize it, to a "quantitative" one.  In this latter approach it is important to also be able to measure things with as much precision as can be managed.

That is much easier to do now than it was then, Asimov notes.  Take time.  There were no clocks capable of more accuracy than a sundial available to Galileo.  He started out timing things by counting his pulse.  But keeping your pulse even is almost impossible to do.  Galileo knew that, so he tried to compensate by devising various water clocks.  I am going to skip over the design details but note that none of his designs was completely successful.

And he had no way to accurately measure very short time periods.  Here, he came up with a trick that worked very well.  Instead of dropping something he rolled it down a ramp.  The shallower the ramp the longer it took the object to roll down the length of the ramp.  This, in effect, slowed things down enough that he could measure things accurately with the tools at hand.

And what he found was that a ball rolling along a flat track maintained a roughly constant speed.  He attributed the minor amount of slowing to friction and decided that, in the absence of friction, the speed would be constant.  This can be represented by the simple algebraic equation "s=k".  "s" is the speed of the object and "k" is some constant that depends on circumstances.  We have now dipped our toes into algebra.

This observation later became the foundation of "Newton's first law of motion":  left alone, an object's velocity stays constant.  Newton then generalized what Galileo had done to the case where velocity changes steadily with time, which can be written "v=kt".  Here "v" is a more complicated concept than "s".  "v" (velocity) incorporates the concept of speed but it also incorporates the concept of direction.  So any change in speed, or direction, or both, means that "v", the velocity, has changed.  "k" is our old friend a constant, and "t" is time, that thing Galileo couldn't measure very accurately.

Newton postulated that, when it came to gravity, velocity would change at a constant rate as time passed.  He further postulated that there was something called a "gravitational constant".  So, when applied to velocity in a gravitational field the equation became "v=gt", where "v" and "t" are as before, but "g" is a gravitational constant.  This is still pretty simple algebra, but it is more complex than where we started.
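
As a quick numerical illustration of "v=gt" (a minimal sketch; the value of "g" used here is the familiar approximate one for the Earth's surface, in feet per second per second):

    # Velocity of an object falling from rest, using v = g * t.
    g = 32.0                      # approximate gravitational acceleration, ft/s per second
    for t in (1, 2, 3):
        print(f"after {t} second(s): v = {g * t:.0f} feet per second")
    # Prints 32, 64, and 96: the velocity grows at a constant rate, as Newton postulated.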

It turns out that the value of "g" depends on some things.  But in a lot of circumstances "g" is a specific value that doesn't change.  And, it turns out that there is a "G", that really is constant.  You mathematically combine "G" with some other things and you can calculate the value of "g" for a specific circumstance.  Asimov goes into this in some detail, but I am going to skip over it.

I will note that he ends up discussing "sine" (shortened to "sin" in many contexts) a "trigonometry function".  Trigonometry ups the ante when it comes to mathematics by quite a bit.  Trigonometry is normally studied in High School.  But only "math track" students are exposed to it.  Any serious study of the physical sciences involves a knowledge of trigonometry and the ability to use its associated functions.

Digging deeper into Galileo brings us to an equation I don't know how to accurately reproduce in a blog post.  An unusual formulation that is accurate is "d=10tt".  Now "d" is distance and "tt" just indicates "t" (our usual time) multiplied by itself.  This is usually indicated with a single "t" to which a small superscript "2" is attached.  That indicates that two "t"s should be multiplied together and that value used.  But I don't know how to get the blog software to do the superscript thing.  Anyhow, taking powers of numbers (multiplying them by themselves multiple times) is another increase in the mathematical degree of difficulty.

Asimov now completely abandons Galileo to focus on Newton.  He starts in territory that requires only High School mathematics.  "A=4 pi r r" (spaces added to improve readability) is such an equation.  Here "A" is area, "pi" is a stand in for the common symbol for the ratio between the circumference of a circle and its diameter.  But I don't know how to get my blog software to spit that symbol out.  And "r", which must be squared, in this case stands for the radius of a sphere.
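
One way around the superscript problem is to write the formulas in code, where "**" means "raised to the power of".  Here is a minimal sketch of both equations; note that the numeric coefficient in the falling-body law depends on the units chosen (feet are assumed below), so it won't necessarily match the "10" quoted above:

    import math

    g = 32.0                                  # approximate gravity at the Earth's surface, ft/s**2

    def fall_distance(t):
        # Galileo's law of falling bodies: d = (1/2) * g * t**2 (distance in feet).
        return 0.5 * g * t ** 2

    def sphere_area(r):
        # Surface area of a sphere: A = 4 * pi * r**2.
        return 4 * math.pi * r ** 2

    print(fall_distance(2))                   # 64.0 feet after 2 seconds
    print(sphere_area(1))                     # about 12.57 for a sphere of radius 1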

A more interesting equation associated with Newton is "f=ma".  "f" is force, "m" is mass, and "a" is acceleration.  But why "m" and not "w" for weight?  Because weight is the result of a gravitational field.  As the strength of the field changes, the weight changes.

Newton wanted something that was gravity independent.  If you know the mass and the details of the gravitational field you can calculate weight.  If you know weight and the details of the gravitational field you can calculate mass.  Interestingly, if you know weight and mass, you can calculate the strength of the gravitational field.

Finally, there are some situations where gravity is not involved but it is useful to know mass.  This led to an interesting question.  We measure weight when we put something on a scale.  What the scale actually measures is force.  Using the formulas discussed above we can translate that into mass.  Specifically, we can calculate the "gravitational mass" of an object in this way.

But the "f" in "f=ma" doesn't need to be a force associated with gravity.  If we know the "f" and the "a" we can calculate the "m".  In many situations not involving gravity, what we are calculating is called "inertial mass".  Einstein asked the question, "is the inertial mass of an object always the same as its gravitational mass"? 

It turns out that there is no effective difference if "im / gm = k".  In other words, if the inertial mass ("im") of an object divided by the gravitational mass ("gm") of the same object always yields the same constant then the two are indistinguishable, so we might as well assume that they are the same thing.
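
Here is a one-line way to see why a constant ratio makes the two kinds of mass interchangeable.  (This is the standard textbook argument, not something taken from Asimov's appendix.)  The force gravity exerts on an object, its weight, is "w = gm g".  Newton's "f=ma" says the resulting acceleration satisfies "w = im a".  Putting the two together, "a = (gm / im) g = g / k".  If "k" really is the same for every object, then every object falls with exactly the same acceleration, which is just what Galileo found with his ramps, and no experiment of that kind can tell "im" and "gm" apart.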

Scientists, including Einstein, have looked for instances where the "im / gm" ratio varies.  So far they haven't found any.  So, until proven otherwise, scientists assume that "im" equals "gm".  If you can find an instance when "im" does not equal "gm", it's a safe bet that there will be a Nobel Prize in your future.

Asimov doesn't move on to the next obvious topic.  High School math is adequate to cover what is called "statics", situations where everything is static, i.e. unchanging.  But what about "dynamics", situations where things are changing?  For that you need calculus.  Newton wanted to study dynamic situations, celestial bodies orbiting other celestial bodies, objects falling in a gravitational field of varying intensity, things like that.

He literally had to invent calculus in order to perform the computations and analysis he was interested in.  The calculus he invented was limited.  As soon as he had developed as much of it as he needed to be able to answer the questions he was interested in, he stopped working on calculus and moved on to other things.

Fortunately for us, a German named Leibnitz developed calculus at the same time.  His version did not suffer from the limitations that Newton's did.  He was a mathematician, so he kept adding improvements and extensions for as long as he could.  In the end his version covered a lot more mathematical territory.

Engineers often use the Newtonian version because it is simpler and well suited to many of the problems they routinely encounter.  Everybody else uses the Leibnitz version.  And it has long since been demonstrated that in the areas where they overlap, they are both completely equivalent.

On to "Relativity".  Relativity consists of "Special Relativity", the version Einstein developed in 1905, and "General Relativity", the more complicated but more all encompassing version he developed in 1915.

But before going there Asimov spends a lot of time on the Michelson-Morley experiment.  This was an experiment done in 1887 that attempted to measure the direction and speed (i.e. velocity) of the Earth as it travelled through space.  The experiment depended critically on the idea that light moved at a fixed speed through the aether, so that the Earth's own motion would show up as small differences in the measured speed of light.  At the time no one could imagine things being otherwise.

The calculations that would turn the resulting measurements into the velocity of the Earth involve some fairly complex algebra.  But they were well within the capability of a High School student who had completed the "math" track.  I am going to skip over them.  For one thing, they are complicated.  For another thing, we don't need them.  The experiment failed.  The result said that the Earth was not moving through space at all.

We know now and they knew then that "it moves", to quote Galileo on the subject.  If nothing else, it circles the Sun once per year at a distance averaging 93 million miles.  The necessary "orbital velocity" is easily calculated.  That number was far higher than the sensitivity of the experiment.
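
Here is that "easily calculated" number, as a minimal sketch using only the figures in the paragraph above (a circular orbit is assumed, which is close enough for this purpose):

    import math

    # The Earth travels a circle of radius 93 million miles once per year.
    orbit_radius_miles = 93_000_000
    seconds_per_year = 365.25 * 24 * 3600

    orbital_speed = 2 * math.pi * orbit_radius_miles / seconds_per_year
    print(f"{orbital_speed:.1f} miles per second")        # about 18.5
    print(f"{orbital_speed * 3600:,.0f} miles per hour")  # about 66,700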

The "it's not moving" result was shocking.  So lots of people tried unsuccessfully to find a flaw in the experiment's design.  And others reproduced the experiment and got the same result.  Einstein was the first one who was willing to say that what's going on here is that the measured speed of light never varies, no matter how the observer is moving.

In fact, Special Relativity follows directly from the idea that "no matter how you measure the speed of light, and no matter what circumstances you measure it in, as long as there's no acceleration involved, you will always get exactly the same answer".

Fitzgerald had already done some of the work.  He came up with a formula that calculated exactly how much things needed to "contract" to keep the measured speed of light constant.  Fitzgerald's equation included "c" the speed of light.  So the degree of contraction could be related to the number you got when you divided the speed of the object by "c", the speed of light.  Now, as a speed, "c" is very large.  It's conventionally quoted as 186,000 miles per second.

A fast car might go 100 or 200 or even 300 MPH.  That's a tiny fraction of "c".  A commercial airplane flies at just over 500 MPH. A high performance plane might go 2,000 MPH.  Both are only going a tiny fraction of "c".  The speed of pretty much anything we encounter in our day to day experience always amounts to a tiny fraction of "c".  Even a rocket going 20,000 MPH, what we would normally think of as being super-fast, is still crawling along when measured against "c".

Fitzgerald's equation said that for anything going a small percentage of the speed of light, the amount of contraction taking place would be minuscule.  Even if you were going 100,000 miles per second, roughly half the speed of light, the effect would be relatively modest.  You had to be going at 90% or 95% or 99% of the speed of light to see really large effects.  And if you could get to 99.9% or 99.99% then some really strange things would happen.

The fact that the effect was infinitesimal at "normal" speeds was why nobody noticed it, Einstein argued.  Also, note what happens if something is traveling at exactly "c".  Everything either goes to zero or infinity.  This is the basis of the statement that you can't reach the speed of light no matter how hard you try.
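
To put numbers on this, here is a minimal sketch that evaluates the Fitzgerald contraction factor, the square root of (1 - (v/c)**2), at a few of the speeds mentioned above.  The specific speeds are just the examples from the text, converted to miles per second.

    import math

    c = 186_000.0                               # speed of light, miles per second

    examples = {
        "airliner at 500 MPH": 500 / 3600,
        "rocket at 20,000 MPH": 20_000 / 3600,
        "half the speed of light": 0.5 * c,
        "99.99% of the speed of light": 0.9999 * c,
    }

    for label, v in examples.items():
        factor = math.sqrt(1 - (v / c) ** 2)
        # How far the factor falls below 1, i.e. the fractional shortening.
        print(f"{label}: lengths shrink by a fraction of about {1 - factor:.2e}")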

There is mathematics that says what might happen if you could find a way to go faster than "c".  But, if you translate the results into the real world, you get complete nonsense.  And, before you ask, if you succeeded, the things that would happen would instantly render you dead.  (They would also destroy any instrumentation or machinery too, so whatever else you might want to try wouldn't work either.)

Special Relativity works the same as Newtonian Mechanics, in terms of the math required.  You can solve static problems using High School algebra.  You need calculus to solve dynamic problems.  But calculus is all you need.

Asimov does not discuss General Relativity, the version that can handle things when accelerations are involved.  There is a good reason for this from a mathematical perspective.  It took Einstein a decade to go from Special to General Relativity.  He spent several years trying to get anywhere at all.

He finally came up with a couple of key ideas.  But he quickly realized that, if he was going to handle these ideas quantitatively,  he would need to learn a type of mathematics called Tensor Calculus.  He had to devote the best part of two years to doing this.  Fortunately for Einstein, Tensor Calculus had already been invented.  He didn't have to invent it.  He just had to learn how to do it.

All I'm going to say about Tensor Calculus is that it is way harder than regular calculus.  Imagine that you are barely scraping by in High School algebra.  If that's an accurate measure of your mathematical ability then imagine how hard it would be to learn regular calculus.  That comparison gives you a feel for the difference in difficulty that lies between regular calculus and Tensor Calculus.

But the good news is that once Einstein had mastered Tensor Calculus, he succeeded in formulating his ideas in terms of Tensor Calculus, and then using it to compute results.  And, he showed that his results matched reality.  He was able to show that General Relativity provided the solution to several puzzles that had been bedeviling Astronomers.

He even famously made a prediction involving a star appearing to move when the light from that star came close to grazing the surface of the Sun.  The star didn't move, but the "gravitational lensing" caused by the gravitational field surrounding the Sun caused the path that light from the star took to bend on its way to Earth.  And that made it appear that the star had moved.

And with that, we're done.



Saturday, November 21, 2020

60 Years of Science - Part 22

This post is the next in a series that dates back several years.  In fact, it's been going on for long enough that several posts ago I decided to upgrade from "50 Years of Science" to "60 Years of Science".  And, if we group them together, this is the twenty-second main entry in the series.  You can go to https://sigma5.blogspot.com/2017/04/50-years-of-science-links.html for a post that contains links to all the entries in the series.  I will update that post to include a link to this entry as soon as I have posted it.

I take Isaac Asimov's book The Intelligent Man's Guide to the Physical Sciences as my baseline for the state of science when he wrote the book (1959 - 60).  In this post I will review two sections, "Nuclear Power" and "Radioactivity".  Both are from the chapter "The Reactor".  This is the last chapter in the book.  So, the end of this series is nigh.

The book was written in the middle of the Cold War.  Then, MAD, Mutual Assured Destruction, the ability of either the U.S. or Russia to start a nuclear conflagration that would literally bomb both countries "back to the stone age", was something at the forefront of people's minds.  But nothing happened.  The Cold War ended peacefully with the breakup of the Soviet Empire.

And various crises have since come and gone.  And wars have come and gone or, in some cases, lingered for what seems like forever.  And countries as stable as the United Kingdom and as fringe as North Korea have gotten "the bomb".  In all this time no one has exploded a nuclear weapon in anger.  So, most people now spend little time thinking about them.

Things were different back then.  Nuclear weapons, and the possibility of nuclear war, were a pressing concern.  This scared the shit out of people, and legitimately so.  As a result there was a real yearning for an alternative, an "atoms for peace" program of one sort or another.

But Asimov starts a little earlier.  He notes that there was a legitimate race to be the first to develop an Atom Bomb.  The Nazis did have a legitimate program that Hitler hoped would produce a bomb he could use.  And no one doubted that he would use it if he had it.

It turns out that we now know that they never even got close.  But that became clear only after the War was over.  In the mean time, this legitimate concern was part of the justification for moving forward rapidly with a U.S. program, a program that eventually succeeded.  (Many other countries helped.  Principal among them was the U.K.  But the U.S. provided all of the money and most of the resources.)

Against the background that, then and now, the U.S. is the only country to ever explode a nuclear weapon in anger, there was a yearning to balance the bad with the good.  And the most obvious good was to harness the "power of the atom", in this case nuclear fission, to produce power.  This power was first used to propel ships.  But it could also be used to produce electric power.

Both of these technologies emerged from "Project Plowshare", named for the biblical quotation about "beating swords into plowshares".  But producing power that could be harnessed was not the only idea Plowshare explored.  Another was to use atomic bombs for earthmoving.  The obvious candidate was a canal from the Atlantic to the Pacific that would be dug by exploding a series of Atomic Bombs underground.

As a proof of concept a bomb was actually exploded underground in Alaska in an attempt to create an artificial harbor.  Another possibility was to use it for oil drilling.  The reason that you haven't heard of these and other ideas is that they turned out to be far more trouble than they were worth.  They were all abandoned.  Some persisted after Asimov's book was published.  But not for long.   The only Plowshares idea that turned out to have any legs was the nuclear reactor.

Demonstrator nuclear reactors of various kinds started popping up within a few years after the end of World War II.  But, as I have noted elsewhere, it costs a lot of money to come up with a design.  It costs even more money to turn the design into a working device.  That made commercial interests reluctant.  The U.S. Navy, on the other hand, was not reluctant.  As a result, the first nuclear reactor put to practical use was put to use powering a Navy submarine.

The thinking was that submarines are vulnerable on the surface but safer underwater.  And a power plant that required an ample supply of oxygen, as any kind of petroleum based engine does, demands considerable surface time.  Nuclear power requires no oxygen.   And, once a nuclear power plant was developed, a recent conventional submarine design was quickly reworked to make use of it.  The result was the "Nautilus", named for the submarine in Verne's 20,000 Leagues under the Sea.

It was so successful that almost all U.S. Navy subs that have been built since have been nuclear powered.  They can easily stay underwater for 6 months straight.  The biggest ships in the U.S. Navy's inventory were also soon adapted to nuclear power.  Since the '60s, all large Aircraft Carriers are nuclear powered.  These ships have a large fuel budget.  But it is for the planes they carry and not the ship itself.

Efforts to use nuclear power in other ship types have failed.  A nuclear powered cargo ship was built.  It was a technical success but a practical failure.  Everything worked just as it was supposed to.  But it was barred from most seaports for political reasons.  These same political reasons are the reason no other ship type has been attempted.

Most of this happened after Asimov's book was finished.  He spends some time on the Nautilus and mentions several other nuclear powered vessels.  For instance, the keel for the "Enterprise", the first nuclear powered Aircraft Carrier, had been laid down in time for that information to make it into the book.  But she had not yet entered service.

As CVN-65, she entered active service in 1961.  After over fifty years of active service, she was decommissioned in 2017.  Construction of a replacement of the same name, CVN-80, is scheduled to begin in 2022.  CVN-80 is scheduled to enter service in 2027 or 2028.

The first civilian nuclear power plant was built by the Russians in 1954.  The U.K. followed in 1956.  The U.S. joined the club in 1958.  At the time coal fired power plants were cheaper to build and cheaper to operate.  It was hoped that as nuclear power plant construction and operation moved down the learning curve, they would eventually become the cheapest option.

We now know that was never going to happen.  Outside of the Soviet sphere of influence, most designs differed little from each other.  They also differed little from the design used to power the Nautilus.  At the time Asimov wrote his book it was believed that Uranium was hard to find.  It turned out that there was a learning curve when it came to finding Uranium.

Uranium is now known to be plentiful.  It is also known to follow the same rule that applies to pretty much any commodity that is mined.  The higher the price, the more ore deposits there are that can be mined economically.  We are not going to run out of Uranium to mine any time soon.

The construction of many plants of similar design should have driven construction costs down.  But it didn't.  The anti-nuclear people got more and more effective.  They forced regulators to pile on more and more requirements that were supposed to improve safety.  They didn't.  What they did do was to keep pushing construction costs higher and higher.

Three Mile Island, followed by Chernobyl, followed by Fukushima, have caused the pressure to only increase.  I have plowed this territory extensively elsewhere so I am not going to go over it again.  Suffice it to say that nuclear power does not look to be a substantial contributor to new electric power generation now, and won't in the near future.

Asimov includes a schematic diagram of a "gas cooled" nuclear power plant.  It describes a design that is more sophisticated than the one used in most nuclear power plants operating today.  Instead of being "gas cooled", they are "water cooled".  But, other than the details of the cooling method, so little has changed since that it accurately portrays how most nuclear power plants work to this day.

A Uranium shortage was then a serious concern.  Asimov responded to this concern by noting that "breeder reactors", reactors that can convert the common U-238 isotope of Uranium into Plutonium, effectively multiply the amount of nuclear fuel available by many times.  Only minor design changes need to be made to turn a Uranium fueled design into a Plutonium fueled design.  He also discusses Thorium as a third alternative.

None of this went anywhere after the book was published.  The primary reason was the discovery that there actually was a lot of Uranium around.  Safety and proliferation issues doomed Plutonium.  It turns out to be relatively easy to harvest reactor grade Plutonium and turn it into a bomb.  The risk associated with Plutonium, and other concerns I am going to skip over, mean that breeder reactors are only used in military programs designed to create fuel for bombs.

I am not familiar with the reasons Thorium never took off.  I suspect that it too was doomed by cheap and widely available Uranium.  But I don't actually know for sure.  On to "Radioactivity".

Asimov characterizes radioactivity as a new threat.  He justifies this on the basis that naturally occurring radiation is usually of a pretty low intensity.  High intensity radioactivity he associates with new man made activities like Atom Bombs.  He is correct in the sense that "the bomb" made people acutely aware of radioactivity.

Scientists had known about it for about fifty years by then.  But outside of certain scientific circles it was pretty much unknown.  To his credit he does discuss early radiation induced deaths and illnesses.  Two early victims were Marie Curie and her daughter.  For a while X-Rays were considered completely benign.  But that slowly changed.  Now, of course, safety protocols are routinely followed in places like dentist offices.

Dots of a mixture of Radium and phosphors that would light up were applied to watch dials to make watches easier to read in the dark.  The work was done by women using small brushes.  They would often lick the brushes as they worked.  This resulted in horrible cancers of the face and mouth, and sometimes death.  This practice was outlawed but I don't know whether this happened before or after Asimov's book came out.

Asimov speculated on whether enough radiation would be unleashed to cause widespread harm.  We now know that the answer is no.  But even very small amounts of radiation can be easily measured.  This has allowed scientists to perform some very unusual "tracking" experiments.

Oceanographers have been able to accurately measure the amount of radiocarbon in ocean water.  It spiked during the short period when extensive above ground bomb testing was occurring.  The sharp edge between radiocarbon enhanced water and water containing normal amounts allows them to calculate just how "old" the water was.  That is, how long it's been since the water was at the ocean's surface.

Another interesting development was the discovery of natural nuclear reactors.  Chain reactions depend on the concentration of Uranium being unusually high.  But there are natural events that concentrate Uranium.  And in some cases, these have resulted in chain reactions taking place.  We know this because this situation leaves distinctive isotope profiles behind.

Concentrations never reached the levels necessary to cause a nuclear explosion.  But it never occurred to anyone to think that even a low level chain reaction was possible.  That is, until someone accidentally stumbled across the first one.  Since then, many more have been found.

Asimov quickly moves on to a discussion of the mechanics of radioactive decay.  These are subjects I have already covered elsewhere.  He just hits the highlights.  But a lot was known at the time and far more is now known.  But it is detail.  The main picture is clear and hasn't changed in the sixty years since the book was written.

He discusses the concept of a "decay chain".  This isotope decays into that isotope, which then decays into some other isotope.  He also notes that an isotope may decay in several ways.  But in all cases the probabilities are fixed.

He moves on to "half life", a subject which I have already discussed extensively.  From there, he goes on to note that some kinds of radiation are deadlier than other kinds.  The converse of this, which he doesn't discuss, is that it is easier to create an effective shield against some kinds of radiation than it is to create one against other kinds.
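For anyone who wants to see the arithmetic behind half-life, here's a minimal sketch.  The isotope and its thirty-year half-life are made up purely for illustration.

    def fraction_remaining(t, half_life):
        # Fraction of a radioactive sample left after time t (same units as half_life).
        return 0.5 ** (t / half_life)

    # Hypothetical isotope with a thirty-year half-life, purely for illustration.
    for years in (30, 60, 90, 300):
        print(years, "years:", round(fraction_remaining(years, 30), 4))
    # Prints 0.5, 0.25, 0.125, and roughly 0.001.  After ten half-lives only
    # about a thousandth of the original material remains.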

He then segues from the fact that everything is radioactive to the subject of "background radiation".  This is another topic I have already treated.  He notes but doesn't go into detail on the idea that background radiation can contribute to evolution.

The structure of DNA had just been discovered.  We now know that radiation can damage DNA.  This can result in mutations.  A mutation can be either beneficial or detrimental.  Over time, the beneficial mutations cause species to evolve.  But there are cellular mechanisms for repairing DNA damage, regardless of the cause.  And there are many other ways to cause damage.

Other big causes of mutations are transcription errors, reading errors, and the like.  DNA gets duplicated.  The duplication process is not 100% accurate.  Various processes "read" DNA.  As an example, the cell manufactures thousands of different proteins.

The blueprint describing the specifics of each of the many different proteins a cell manufactures is found in the DNA.  A process that is different from, but related to, the duplication process is used to read the DNA.  But the information found that way is used much differently.

Instead of being used to duplicate the DNA itself, a translation process is used to drive a protein assembly process.  DNA provides the details that determine the order and type of the subunits that snap together to make each specific protein.  An error in this process causes the wrong protein to be made.
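Here's a toy sketch of that translation step.  The "codon table" below is a tiny invented subset of the real genetic code, and the gene fragment is made up, but it shows how a single-letter error produces a different protein.  (A single-letter swap of exactly this kind lies behind real diseases such as sickle-cell anemia.)

    # Tiny made-up subset of the genetic code: DNA codon -> amino acid.
    codon_table = {"ATG": "Met", "GAG": "Glu", "GTG": "Val", "AAA": "Lys", "TAA": "STOP"}

    def translate(dna):
        # Read the sequence three letters (one codon) at a time.
        protein = []
        for i in range(0, len(dna), 3):
            amino_acid = codon_table[dna[i:i + 3]]
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)
        return "-".join(protein)

    correct = "ATGGAGAAATAA"   # invented gene fragment
    mutated = "ATGGTGAAATAA"   # same fragment with one letter changed (GAG to GTG)
    print(translate(correct))  # Met-Glu-Lys
    print(translate(mutated))  # Met-Val-Lys, a different protein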

It is not a wonder that things go wrong with these cellular processes.  What is a wonder is just how infrequently they do go wrong.  It is thought that cancer is caused by key cellular mechanisms going consistently wrong.  Scientists are attacking cancer by figuring out how to get these broken processes back on track.

Various efforts are now under way to reclassify cancers.  The current methods of classifying cancers depend on the symptoms or what organ is affected.  The new method depends on classifying what cellular mechanism goes wrong and how it goes wrong.  This may lead to a single cure that is successful against many cancers, not just one or a few.

This deeper understanding of DNA, the way radiation damages DNA, and all that follows has taken place since Asimov wrote his book.  So, let's get back to it.

He moves on to the "nuclear waste disposal" problem.  This is also something I have discussed extensively elsewhere.  Before moving on I will note that he assumes that the nuclear power industry will grow rapidly.  He also assumes it will eventually become quite large.  That would have resulted in a large amount of nuclear waste.  But the industry did not ever grow very large.  So, the waste disposal problem is actually quite modest.

And, since he overestimates the size of the problem, he ends up taking off on what now look like tangents.  One of them involves building devices that produce small amounts of power for long periods of time.  They work just fine.  But they have not gone into general use due to the public's fear of radiation.  They have only found one use.

We routinely send space missions to the outer solar system.  These missions need power.  The standard solution is solar panels.  Various Mars rovers, the International Space Station, and all manner of other space gadgets, use solar panels for power very successfully.

But the farther from the sun, the less bright sunlight is.  And that means you need giant arrays of solar panels to produce the necessary power.  Cue the RTG, the Radioisotope Thermoelectric Generator.  It is based on the SNAP device Asimov discusses.

Modern RTGs use Plutonium for fuel.  They are radioactive enough to be dangerous.  So they are often put on the end of a boom that distances them from the bulk of the spacecraft.  RTGs power both Voyager spacecraft, now the two most distant man made objects.  One powers the spacecraft that did the flyby of Pluto.  (BTW, that spacecraft is still working fine.)  Their other successes are too numerous to list.  But this application is the only one where "Isotope Power" is used routinely.

Asimov discusses various other attempts at the peaceful use of radioactive materials.  There has been some successful use of radioactive materials in medicine.  That success continues but it is modest.  The other things he discusses never ended up going anywhere.  The public fear of radioactivity eventually blocked any chance of success they might have otherwise had.

He then returns to how to dispose of radioactive material.  It would be nice if the topic had advanced productively since Asimov's day.  But it hasn't.  The same old options are still on the table.  The same arguments are still advanced against each option.  The fact that radioactivity poses no unusual danger, and the fact that we are talking about a very small volume of material, are both still being ignored.

He then moves on to radioactive fallout and the fact that very tiny amounts could be detected, even back then.  He concludes from this that "it is virtually impossible for any nation to explode a nuclear bomb in the atmosphere without detection".  That truth eventually became self evident.  It led to the "Nuclear Test Ban" treaty, which outlawed above ground testing.

At the time that left a loophole.  Countries could explode bombs in caverns below the ground.  But seismology has grown in sophistication by leaps and bounds since Asimov's time.  It was then possible to detect the underground detonation of a medium or large sized nuclear weapon.  But what about a small one?

In Asimov's time it was thought that such a detonation stood a good chance of going undetected.  But, as I said, seismology has since gotten much better.  It eventually became apparent that even the detonation of a small nuclear weapon would be detected.  There was some nonsense thrown up postulating that there were circumstances under which a detonation could still go undetected.

But the arguments were nonsense and this eventually became apparent.  There is now a treaty banning underground nuclear explosions.  But the U.S. and a number of other countries have never ratified it.  Most conspicuous among the holdouts is North Korea, which hasn't even signed.  But that hasn't stopped all of their underground nuclear tests from being detected.

No one has succeeded in concealing an underground nuclear test and no one will.  But that doesn't mean that a country won't develop a nuclear weapon and test it.  North Korea did just that.  It just means that, if they do so but try to keep it a secret, everybody will still find out what they did.

Asimov then launches into a long discussion of the isotope Strontium-90.  It is highly radioactive.  It is particularly dangerous because it is readily absorbed into the bones of growing children.  In this situation, it doesn't take a lot to constitute a dangerous amount.

Another highly radioactive isotope is Iodine-131.  It is particularly dangerous because it is taken up and concentrated by the thyroid gland in the neck.  Again, as a result it doesn't take a lot to constitute a dangerous amount.  Asimov does not discuss Iodine-131.

You will typically see a lot of press coverage of Strontium-90 and Iodine-131 whenever there is an event that releases a lot of radioactive material.  These two materials were discussed extensively in conjunction with the Fukushima nuclear disaster, for instance.  Now you know why they rightly attract so much press attention.

And on that cheery note, . . .

Thursday, November 12, 2020

Cars - The State of Play in 2020

 I like to periodically return to subjects to see how things have evolved since I last wrote about them.  The most obvious example is my long running series, "60 Years of Science".  But it is far from the only example.  This post brings together updates that are joined by the fact that they all have to do with cars.  Let me start with self-driving cars.

I last opined on this subject three years ago.  Here's the link:   https://sigma5.blogspot.com/2018/01/robot-cars-now-new-and-improved.html.  For a long time the conventional wisdom was that Autonomous Vehicles, or AVs for short, would arrive in 2020.  Well, have they arrived?  Nope! The "wisdom" referenced in the post consisted of, in part, an article in Science, the premier scientific journal published in the U.S.  (It is #2 in the world behind the British journal, Nature).

A December, 2017 article in Science opined that AVs would appear "somewhere over the rainbow".  Elsewhere in the same article the author described widespread use of AVs as "still decades away".  Ouch!  Then, as now, I find that prediction too pessimistic.  So, where do we stand now that we are at the end of 2020?

Well, then and now there are several companies working on the subject.  Waymo, the Google subsidiary, is generally assumed to be in the best shape.  But, like everybody else, it is still running "demonstration" and "pilot" projects.  One problem is that a car that was part of one of these demonstration/pilot projects managed to kill a lady in Arizona.  It wasn't part of a project run by Waymo but the death cast a pall over the whole industry.

Several people have also died while driving Tesla cars in "Autopilot" mode.  A Tesla car operating in Autopilot mode is not a full up AV.  And the drivers were allowing Autopilot to drive their cars under conditions where they were supposed to be closely monitoring it.  Instead, they adopted a "hands off" attitude, literally.  But these are technicalities that don't influence the thinking of the general public.  The public is extremely concerned about the safety of AVs.

And that has caused everybody to go slow, everybody, that is, except Elon Musk, the CEO of Tesla.  He says that the new iteration of Autopilot will be capable of autonomous operation.  Details are skimpy so nobody knows quite what he means.  The general consensus is that the Tesla Autopilot feature lacks many of the capabilities necessary for true autonomous operation.  So, most people are in "let's see what he actually delivers" mode.

And, there is a great deal of confusion.  The SAE (Society of Automotive Engineers) has defined various "levels" from fully manual to fully autonomous.  Their "level 5" is fully autonomous.  And everybody agrees with the criteria that they have laid out.  That's not the source of the confusion.

The source of confusion is that people expect a level 5 vehicle to be able to operate completely autonomously in all conditions.  They expect it to operate in the daylight and at night.  They expect it to operate in good weather conditions and bad.  They expect it to work on city streets, on freeways, on country roads, and even off road.  An AV capable of that level of autonomy truly is decades away.

But the ability to safely and consistently handle all those conditions is not necessary in order for large numbers of AVs to be on the road and operating successfully.  There is general agreement that off road is the hardest to manage.  So, it will be the last to appear.  On the other hand, there is some disagreement as to whether city driving or freeway driving is the easiest to manage.

One's first impression is that freeway driving is the easiest.  And that's true in most conditions.  But there is a famous example of an AV test car being unable to exit a Phoenix freeway.  The other drivers on the road ganged up on the test car.  They repeatedly blocked it from finding a slot it could use to move into the exit lane.

Trouble exiting, or trouble changing lanes when you need to, is a common occurrence.  If there's a lot of traffic it is hard to find an opening without some cooperation from other drivers.  The solution to a lack of cooperation is to "barge" and force an opening.  But that can be dangerous.

With a little jockeying, and possibly some hurt feelings, it can almost always be pulled off.  But it may involve a game of "chicken" and that's not "safe and sane" driving.  It often involves judging the psychology of the other drivers.  You need to pick on someone who will back off rather than remaining assertive.

It is possible to "program" that kind of behavior (all but the psychological part) into the AV system.  But companies don't want to do that.  Besides being risky, it is bad for public relations.  Self driving cars adhere to speed limits and follow all the other rules of the road.  That makes them far more timid than the average driver.  But it is also the posture that is most reassuring to the public.

I don't know if the AV companies have solved the "Exiting" problem.  The incident I heard about happened a couple of years ago so they have had time to work on it.  The solution may have been as simple as going with unmarked vehicles.

But many autonomous designs feature various pieces of distinctive hardware poking out of the roof of the car.  Such a design makes the cars easy to identify even without markings.  A car with a "taxi" bubble on the roof looks like a taxi regardless of whether it says "TAXI" on the side or not.

So, urban areas may turn out to be easier.  I'm sure the AV companies have gathered reams of data on this subject.  I expect them to first introduce AVs into whatever environment they think will be the easiest to get the cars to work in.

There have been some "fully autonomous" licenses issued by states, etc.  This allows companies to put AVs on the road in some places without having to have a test driver onboard.  But nothing of this sort has been rolled out on a scale large enough to attract the attention of the press yet.  (And it may be that COVID has been a major reason for the delay.)

I truly think that we will see some AVs in operation in limited areas by 2022.  The obvious choice is Uber/Lyft.  These companies know exactly where passengers are departing from and exactly where they want to go.  They can also give customers the option to opt in or opt out of a trip in an AV.  That allows them to keep things tightly controlled.

Uber and Lyft are very interested in AVs as they very much want to eliminate the cost of the driver.  And their dispatch technology lets them work within whatever constraints AV operations throw at them.  If the AVs can only handle a limited geographic area, that's fine.

They can handle time-of-day or weather constraints.  They can handle passenger consent issues.  And they can handle changes in any or all of these constraints.

My thinking about AVs and Uber/Lyft mirrors almost everybody else's.  But so far, neither Uber nor Lyft have started so much as a pilot project to dispatch AVs for use by the general public in a limited area and under limited conditions yet.  I'm sticking to my "by 2022" prediction, but I have no special insight into this.

And that's how I expect the AV market to evolve.  It will start out only being used in small, limited ways.  If things go well then AV use will expand into more and different environments.  Eventually, the coverage will be broad enough to encompass a large number of trips but not all trips.  That's good enough.  I do think that it will be a long time before we see off-road AVs.  But that's okay.

Tesla's experience with Autopilot tells us a lot about how fast the psychology of the public can change.  A small set of drivers pushed it far beyond its actual limits.  That got several of them killed.  In spite of this, Autopilot is very popular.  It gets a lot of use, mostly in a responsible way.

But some people still push its use beyond what is safe, in spite of the well documented risks.  With familiarity comes comfort.  And it doesn't take very long.  People quickly came to trust Autopilot, in many cases more than was wise.  Right now, almost no one has any actual experience riding in an AV.  But, once they do, one trip will be enough for most people to become comfortable with the experience.

Next, I want to introduce a closely related subject.  Cruise Control got a lot more capable during the Obama Administration.  In older cars I have owned, all Cruise Control was capable of was maintaining a constant speed as the car went up hill and down dale.  The last car I bought was capable of a lot more.  And the Tesla Autopilot I discussed above is capable of still more.  All this is part of the path to a true AV.

But, as the capability of Cruise Control systems increased by leaps and bounds, lots of people figured out that it would be a good idea for these automated systems to communicate with each other.  Maybe the car in front could see an obstacle your car couldn't see.  And it is useful to know that a close-by car is going to change lanes or speed up or slow down or whatever.

The natural progression is for each car company to independently develop its own system and then expect the other car companies to come to them.  (I saw this play out dozens of times in the computer business.)  Except that companies adopting a standard developed by a competitor was never going to happen.  (It almost never did in the computer business.)

The solution is to have a "neutral" standard that everybody would use.  That way all suitably equipped cars could talk to other suitably equipped cars regardless of the make.  The specification is called "V2V" (vehicle to vehicle) communication.

The Obama Administration quickly put together an umbrella group to help this along.  The Government wouldn't set the standard.  They would just facilitate the industry coming together to set the standard.  That way it would be an "industry" standard not a "government" standard.  Better for everybody that way.

All this was moving along nicely when we had an Administration change.  As an Obama initiative, the Trump people wanted nothing to do with it.  Also, in spite of the fact that the government wasn't setting the standard, it smelled like "government regulation" and they were averse to that sort of thing.  So they shut it down.

Caught in the same net was the V2X initiative.  The idea was to extend the V2V specification so that it didn't just cover vehicle to vehicle communication.  It would instead be "vehicle to everything".  A smart stoplight would be able to tell oncoming vehicles when the signal was going to change.  Or it could inform the vehicle that a pedestrian had requested a "walk" cycle.

Anything that might improve things could be included.  Weather alerts could be sent out.  Construction areas could signal cars what their extent was.  Warnings about ice on the road could be communicated.  Use your imagination.  This program was also effectively shut down.

We could be a lot further along at this point if standards had been adopted.  I'm sure the initiative will eventually come into being.  But the auto industry manufactures about fifteen million vehicles per year.  None of those vehicles are V2V or V2X capable at this point.  The day when most vehicles include V2V and V2X capabilities will be just that much longer in coming.  And the benefits will, therefore, take just that much longer to arrive.  So sad.  So unnecessary.

Okay.  On to my next topic, electric cars.  It turns out that I have never dedicated a blog post to this subject.  I'm pretty sure I have peripherally mentioned the subject.  But, in perusing the titles of all of the posts I have made, I don't see anything that is likely to have addressed the subject in any depth.  So, here goes.

In theory, electric cars are a great idea.  Electric motors (the exact type used in electric vehicles doesn't matter here) are capable of providing 100% torque at 0 RPM.  To translate that into English, torque is a measurement of how hard the motor is pushing in an attempt to get the car to go faster.  0 RPM is the situation where you are stopped and you want to start going.  In short, electric motors are great if you love jackrabbit starts.
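If you want to see why "torque at 0 RPM" matters, here's a rough sketch of the standard relationship between torque, RPM, and power.  The 300 newton-meter figure is made up for illustration.

    import math

    def mechanical_power_kw(torque_nm, rpm):
        # Power (watts) = torque (newton-meters) x angular speed (radians per second).
        omega = rpm * 2 * math.pi / 60
        return torque_nm * omega / 1000

    # Hypothetical motor that can hold 300 N-m from a standstill.
    for rpm in (0, 1000, 5000):
        print(rpm, "RPM:", round(mechanical_power_kw(300, rpm), 1), "kW")
    # At 0 RPM the power output is zero, but the full 300 N-m of torque is
    # already available to shove the car forward.  Hence the jackrabbit start.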

And being able to accelerate quickly (i.e. a jackrabbit start) is what people (mostly guys) use to decide whether a car is "powerful" or not.  Elon Musk made sure that Tesla cars accelerate quickly.  It was a great decision from a marketing point of view.

And there are lots of very powerful electric motors out there.  Diesel train locomotives aren't really "Diesel".  They are actually "Diesel-Electric".  The thing that is turning the wheels is actually an electric motor.  Some of the largest and heaviest moving objects in the world, giant cruise ships among them, use electric motors to turn their propellers.  So, if it is possible to build a powerful electric car, what's the problem?

The problem is the battery.  Engineers put wimpy electric motors into cars because a powerful electric motor can quickly drain the battery.  There is a direct tradeoff between a vehicle that feels powerful and responsive and a vehicle that goes a long way between recharges.  (You can try to get the driver to back off but drivers never do.)

If batteries worked well then there would be no problem using powerful electric motors in electric vehicles.  Car makers put powerful gas motors and big gas tanks into cars all the time.  That gives "gas" cars lots of power and lots of range.  Unfortunately, this approach is not possible when it comes to electric cars using current battery technology.

And it's not like we haven't known how to make electric cars until recently.  The "Baker Electric" was a popular car in the early 1900s.  But its top speed was about 12 miles per hour and it didn't go far between charges.  Why?  Because it used "lead acid" batteries.  This is the type of battery that can still be found in "gas" and diesel cars.  But it is both large and heavy relative to how much energy it can store.

Modern hybrid and all-electric cars use "lithium" batteries.  They are similar to the battery in your mobile electronic device.  They are a big improvement over a lead acid battery.  But they still suck.  Lithium batteries are also very expensive.  The "gas tank" in a car costs a few dollars.  The battery pack in a hybrid or all-electric car costs many thousands of dollars.

The battery pack is much larger than a gas tank.  It is also much heavier than even a full gas tank.  And it can't propel a vehicle nearly as far as a tank full of gas can.  The manufacturers of hybrid and all-electric vehicles are forced to make trade-offs.  And none of their options are good.

A smaller battery pack is cheaper, takes up less space, and weighs less.  The lower weight improves the electric vehicle version of fuel economy.  Performance is not determined by the size of the battery pack.  Instead, the most important factor is the size of the electric motors.  This all sounds good so what's the problem?

The problem is range.  In the same way a small gas tank restricts range, a small battery pack restricts range.  Powerful electric motors also restrict range.  They can drain the battery pack more quickly than small motors.  Of course, small motors translate to wimpy performance.
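The range arithmetic is simple enough to sketch out.  The numbers below are made up but plausible; real figures vary by vehicle and driving style.

    def range_miles(battery_kwh, kwh_per_mile):
        # Rough range: usable battery energy divided by energy used per mile.
        return battery_kwh / kwh_per_mile

    print(round(range_miles(75, 0.28)))   # big pack, gentle driving: about 268 miles
    print(round(range_miles(75, 0.45)))   # same pack, aggressive driving: about 167 miles
    print(round(range_miles(40, 0.28)))   # smaller (cheaper) pack: about 143 miles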

So vehicle makers go with the smallest battery pack (and often the smallest motors) that they think they can get away with.  Tesla cars come with a relatively large battery pack.  But that makes them expensive.  And, since Musk emphasized performance, if you drive a Tesla in "high performance" mode, it doesn't go very far before it needs a recharge.

You can get a lot of mileage between recharges with a Tesla.  But you have to put the vehicle into "economy" mode.  Then the vehicle delivers anemic performance.  And a Tesla is an expensive vehicle.  If you want to keep the cost down you have to go with a small battery pack.  That guarantees anemic performance.

And that has tilted the market towards hybrids.  These have a small gas engine and a very small battery pack.  The engine is used to recharge the battery pack on the fly.  If both the engine and battery pack are delivering power to the wheels, a hybrid can deliver so-so power.  On the other hand, if the battery pack has run down and only the power from the engine is available, then the car can barely get out of its own way.

But the combination delivers a lot of range.  Buyers have shown a marked preference for the extended range hybrids deliver over reasonably priced all-electric vehicles.  This trade off is forced by the current state of the art in battery technology.

Lithium batteries are far superior to lead-acid batteries.  But what is really needed is a battery that is as superior to a lithium battery as the lithium battery is superior to a lead-acid battery.  Unfortunately, scientists currently have no clue as to how to create such a battery.

What I find surprising is that there is a role that all-electric vehicles can fill right now.  That role is with respect to delivery vehicles and the like.  As noted above, all-electric vehicles have a limited range.  But it is more than sufficient to satisfy the needs of these vehicles.  So no technological or other improvement is needed.  Why we don't see all-electric delivery vehicles all over the place is a mystery to me.

Drivers are concerned, perhaps excessively so, with the battery running down at an inconvenient time.  But delivery vehicles are used for short trips.  They don't rack up that many miles in a single day.  If the place where they are parked at night is equipped with fast chargers (chargers that run at 220 volts rather than the 110 volts that most household plug sockets deliver) then they can be recharged overnight.
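The overnight-charging arithmetic is also simple.  The numbers below are illustrative and ignore charging losses.

    def hours_to_charge(kwh_needed, volts, amps):
        # Charger power in kilowatts is volts x amps / 1000; time is energy / power.
        return kwh_needed / (volts * amps / 1000)

    print(round(hours_to_charge(50, 220, 30), 1))   # 220-volt "fast" circuit: about 7.6 hours
    print(round(hours_to_charge(50, 110, 12), 1))   # ordinary 110-volt outlet: about 38 hours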

As long as they have enough range to handle the number of miles these vehicles put in each day, an electric vehicle should work fine.  Fleet operators know all the statistics concerning how many miles per day their vehicles rack up.  And they rack up a lot of miles in a year so fuel costs are very important.  To me, it sounds like a perfect fit.

And there is an Uber/Lyft equivalent operating in this space.  It's Amazon.  Amazon has a large delivery fleet.  It is tasked with getting packages "the last mile" from the fulfillment center to the customer's door.  Amazon is said to be working busily on an all-electric delivery vehicle.  But so far, it's all talk and no actual vehicles on the road.

I find that quite surprising.  Tesla is working on an "18-wheeler" long haul truck.  I find it difficult to believe that is practical given the current state of the art when it comes to batteries.  But it looks completely practical to do a delivery van.  Or, for that matter, any commercial vehicle that "comes home" every night, and that racks up a modest number of miles per day.

There are currently gas stations everywhere.  It takes five-ten minutes to "gas up".  More and more charging stations are being installed.  But all-electric vehicles don't recharge in five or ten minutes.  You are lucky if it can be pulled off in five hours.  That means that a successful all-electric vehicle needs to be based somewhere and not wander too far from base.

New houses now often feature a 220 volt circuit in the garage.  This is easy to do.  The circuitry is the same as that used to support electric stoves, dryers, and water heaters.  A house may come with the natural gas versions of these appliances.  But any commercial electrician knows how to string the necessary wiring.  It is easy to include in a new house.  It is harder, but usually not that much harder, to retrofit such a circuit into an existing house.

Almost all all-electric vehicles are now sold to consumers that can afford to own multiple vehicles.  They can use their all-electric vehicle for their short trips, and most trips are short trips.  When they occasionally need to go a long way then they can use one of their other vehicles.

That is not, and will never be, a large part of the consumer vehicle market.  All-electric vehicles need to sell in large numbers if the cost is to be driven down.  Short haul delivery vehicles should enlarge the market substantially.  And that's good.

There are some other market segments that all-electric vehicles should eventually be successful in.  But their success will be limited until the main problem with electric vehicles is solved, the creation of a much better battery.

Finally, I want to talk about supercars.  These have been around for something like twenty years.  They certainly didn't exist when I was a kid.  The situation in the '60s was typical of what came before and what continued for some time after.

When I was a kid the fancy car was the Cadillac.  Sure, a few Hollywood Moguls and the like drove (or were driven in) a Rolls Royce.  But that was more of a fantasy than a reality.  And here's the thing.  A Cadillac didn't cost that much more than a regular car.

My dad bought a Plymouth in the '60s.  The Plymouth was a step up from the Dodge.  The Chrysler was a step up from the Plymouth.  The Chrysler was supposed to be comparable to a Cadillac.  But there is only one Cadillac of Cadillacs.

The Cadillac was the "top of the line" in its time.  But Cadillacs didn't cost all that much more than the Chevrolet, the "economy" entry in the General Motors product line at the time.

My father's Plymouth cost a little under three thousand dollars.  A Chevrolet of the period cost about two thousand dollars.  But the Cadillac only ran four to five thousand dollars.  The "top of the line" car of the time cost between two and three times what an "economy" car cost.

If we update the numbers to what they now are then an "economy" car runs between twenty and thirty thousand dollars.  Picking the high number would mean that a "top of the line" car now costs between sixty and ninety thousand dollars. And you can drive a brand new Cadillac off the lot for sixty to ninety thousand dollars.  So that part tracks.

But ninety thousand dollars doesn't even get you in the door when it comes to super-premium cars now.  Those run hundreds of thousands of dollars.  And that doesn't even get you close to the top of the market.  Ultra-premium cars now run between a million and ten million dollars.

And it's now been a long time since Rolls Royce was as expensive as it got.  You can drive the Rolls Royce of your choice off the lot for only a few hundred thousand dollars.  It's still a super-premium car but it's not even close to being an ultra-premium car.

What happened, of course, is that the rich got a whole lot richer and these people needed something to spend their money on.  There are now lots of people who have fleets consisting of dozens of ultra-premium cars.

I don't know what they do with them.  Traffic speeds have stayed the same since the '60s.  You can easily rack up a very expensive speeding ticket in an economy car.  So, there's literally no place to take advantage of what justifies the price of these vehicles.

And they are not "lap of luxury" vehicles.  The "go to" luxury limousine is a stretched version of the Chevrolet Suburban (or any large SUV like the Ford Expedition).  These vehicles feature roomy interiors with lots of headroom.  The interiors can be customized to make them quite luxurious.  But, no matter how tricked out the interior is, you are still only talking hundreds of thousands of dollars.

All the ultra-premium vehicles, the ones that cost a million or more, are set up as sports or performance vehicles.  The interiors may be well appointed.  But they are cramped.  Few feature more than minimal storage space.  But they all look like they go super-fast.  And most of them actually do.

In the late '60s the goal of "hot car" guys was to have a car that could go two hundred miles per hour.  Few of the vehicles of the period, even custom ones like the Shelby Cobra, could touch two hundred even briefly.  And none of the "street legal" ones could sustain two hundred for any period of time.  That has changed.

The "production" versions of many of these cars are street legal and they can easily maintain a sustained speed of two hundred miles per hour.  And when lots of cars from lots of manufacturers can all do the same thing, it's time to up the ante.  This car can do two twenty.  No, that car can do two thirty.  And so on.  The goal recently became three hundred miles per hour.

And Bugatti was the first car maker to pull it off.  They were able to get a "prototype" version of one of their production cars to clock in at just over 300 MPH while conforming to all the conditions necessary to be officially credited with the record.

The car was slightly modified "for safety reasons".  I don't know if Bugatti will even sell a car set up the same way the "300 MPH" car was.  The actual production (i.e. non-prototype) version of the car comes equipped with a "limiter" that makes the car top out at 261 MPH.  And plan on laying out $3 million or more for the "limiter" version.  Who knows what the non-limiter version would cost.

Bugatti said 300 MPH was fast enough.  I don't understand that.  Because 500 KPH is only 311 MPH.  Surely, they could have gotten the car to go a measly 11 MPH faster.  And 500 KPH is a much more satisfying number.  This is especially true since most of the world runs on KPH rather than on MPH.  If it was me, I would have gone for 500 KPH.

And, apparently I'm not the only one that thinks that way.  An outfit I had never heard of before, even though it is located in Washington, the state I live in, decided to go for it.  The company name is SSC, short for Shelby SuperCars.  (And this Shelby is apparently no relation to Carroll Shelby of Shelby Cobra fame.)  They make a street legal car called the Tuatara that they thought would go 500 KPH.

They were right.  They too made a run under the appropriate conditions.  Their official speed was 508.73 KPH (316 MPH).  They claimed that the car was unmodified.  It did use "race" fuel, a type of fuel that is commonly found at drag strips and race tracks.  But that was it.
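For anyone checking the unit conversions in the last few paragraphs, here they are:

    def kph_to_mph(kph):
        # One kilometer is about 0.621371 miles.
        return kph * 0.621371

    print(round(kph_to_mph(500), 1))      # 310.7 MPH, the "measly 11 MPH" gap
    print(round(kph_to_mph(508.73), 1))   # 316.1 MPH, the Tuatara's claimed speed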

And there have since been claims of "discrepancies".  This has caused the company to promise to redo the run under the most stringent of conditions.  The entire production run is already sold out.  But, if you could order one, your very own Tuatara would only set you back a measly $1.6 million.  A bargain, wouldn't you say?

Wednesday, October 28, 2020

60 Years of Science - Part 21

This post is the next in a series that dates back several years.  In fact, it's been going on for long enough that several posts ago I decided to upgrade the title from "50 Years of Science" to "60 Years of Science".  And, if we group them together, this is the twenty-first main entry in the series.  You can go to https://sigma5.blogspot.com/2017/04/50-years-of-science-links.html for a post that contains links to all the entries in the series.  I will update that post to include a link to this entry as soon as I have posted it.

I take Isaac Asimov's book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science when he wrote the book (1959 - 60).  In this post I will review two sections, "Fission" and "The Atom Bomb".  Both are from his chapter "The Reactor".  We have now arrived at the last chapter in the book.  So the end of the series is now in sight.

In "Fission" Asimov starts with the observation that "rapid advances in technology in the twentieth century have been bought at the expense of a stupendous increase in our consumption of our earth's energy resources".  He saw this as simply a "supply" problem.  "Where will mankind find the energy supplies needed"?

This tees up a discussion of various traditional sources like timber.  I'll get back to that in a minute.  What was not widely appreciated at the time was the cost in terms of pollution.  We are now well aware of air pollution, water pollution, plastic pollution, chemical pollution, and the like.  But, except for a few outliers like Rachel Carson with her book "Silent Spring", this was not a front line issue back then.

And the whole idea of worrying about an increase in the minute amount of carbon dioxide in the air?  At the time, there weren't even any outliers worrying about this problem.  But the '50s was when the foundations for our current concerns were being laid.

Carson's book was an indictment of the effects of DDT because of its unintended side effects.  It was a very effective insecticide.  But it also killed off all kinds of animals who were not its targets.  This was one of the first analyses of the unintended consequences of various kinds of modern behavior.

Since then, we have learned of the deleterious effects of sulfur pollution spewed out by coal fired power plants (acid rain).  We have learned that Freon refrigeration coolant damages the ozone layer.  And on and on and on.

Regular measurements of carbon dioxide levels in the atmosphere (specifically at the remote Mauna Loa Observatory in Hawaii) were begun in the '50s.  Studies demonstrating the dangers of inhaling cigarette smoke were started in the '50s.  People knew that the extraction of oil, and particularly coal, made a mess of things.  But that mess was assumed to be the primary negative effect of fossil fuel extraction.

Nobody, or at least almost nobody, worried then about what the effect of the "lead" additive, put into gasoline to boost engine performance, would be when it ended up in the atmosphere, and subsequently the lungs, and later the brains, of children.

It was soon determined that the effects were so devastating that lead additives were banned from gasoline.  We later moved on to worrying about the lead that the paint we put on walls contained.  Lead based paints were eventually banned.  We now use latex based paint instead.

But all this was for the future.  Asimov gives us a brief history of the use of timber by various civilizations.  Forests were cut down in Greece, North Africa, the Near East, and in other places.  This was done for fuel and so the land could be converted to agricultural use.

Asimov pegs the beginning of this behavior at a thousand or so years ago.  It had actually started much earlier.  But Asimov was forced to rely on the "historical" (i. e. written) record of events.

The ability to read the archeological and geological (i. e. unwritten) record had not been developed sufficiently back then to shed much light on these kinds of questions.  That was to come later.  Those sources eventually pointed to a much earlier date for the changes Asimov highlights.

Asimov notes that this change not only eliminated many of the handy sources of wood, leading to a wood shortage, but also that the land cleared as a result was allowed to deteriorate.  Now, most of it is no longer in good enough shape to be appealing to farmers.  The result of this change in the characteristics of large tracts of land is a density of occupancy too low to support modern civilizations.

Instead it is populated by people Asimov describes as "ground down and backward".  We might quibble with Asimov's characterization of the worth of these people.  But it is undoubtedly true that only low density activities were possible now that the land's ability to support high intensity agriculture had been lost.

This "cut down the forests" trend continued into the middle ages in Europe.  This resulted in little remaining forested land there.  The arrival of Europeans in the Americas saw a similar transformation.  "Almost no great stands of virgin timber remain . . . except in Canada and Siberia."

Except that we now know that natives in both North and South America had a profound effect on forest structure well before Europeans arrived.  What looked like "virgin" timber to European eyes was actually anything but.  And it turns out that the situation was not as dire as Asimov painted it at the time.

Satellite imagery now allows us to accurately map the extent of forests.  There was a lot more healthy forest around than is apparent from Asimov's statements.  They were just in smaller stands, which he ignored.

But we have gone a long way toward cutting those down too in the half century since.  We have also denuded large areas of Amazonia, Asia, and other places that weren't even on Asimov's radar.

As Asimov notes, civilization writ large moved on to "fossil fuels", coal and oil.  They are resources that "cannot be replaced".  As a result, "man is living on his capital at an extravagant rate".  So, he predicted, we would reach "peak production" in the '80s.  It turns out that people were talking about this as a problem even back in the '50s.

Asimov's prediction was remarkably accurate.  He missed by a decade or so.  His miss was caused by various unanticipated events like the Arab Oil Embargo.  But we reached peak oil pretty much when he said we would.

So, why don't we now have an oil shortage?  A technology fix came along that Asimov couldn't have anticipated.  Fracking, fracturing the rock that held oil so that it could escape and be pumped to the surface, was unknown at the time the book was written.

Horizontal drilling, and other technological tricks that make fracking economically feasible were also beyond the technology of his day.  But they were also not needed back then.  The oil fields of West Texas and the Middle East were producing vast quantities of oil that was easy to extract using unsophisticated techniques.

All forecasts, including Asimov's, assumed that the technology would get better and that the price, after adjusting for inflation, would rise.  A price rise makes it economically feasible to employ more complex and more expensive technology.

And for many decades the oil industry hewed closely to those assumptions about the state of the technology and the state of the market.  (The Arab oil embargo's effect was only on the timing of price increases.)  It was only when we actually reached, or appeared to reach, peak oil that the industry became willing to try "crazy" ideas.  Horizontal drilling and fracking were two of the crazy ideas that panned out.

There has always been far more coal in the ground than there is oil.  As a result, estimates of when "peak coal" would hit have pointed to a date in the relatively far future.  Asimov's estimate of  the twenty-fifth century was in line with estimates of the time.  What has done coal in is not availability.  There is still plenty of it around.  Instead, it has been economics.

Coal has a much lower energy density than oil.  It is also much messier to make use of.  There was a substantial industry devoted to making various chemicals out of coal in the 1800s.  But when oil became readily available industry quickly switched to oil and never looked back.  Today coal is pretty much restricted to being used to make steel and electricity.

Coal is dirt that contains a lot of carbon.  But it's still dirt.  Separating the carbon from the dirt leaves a lot of  nasty, useless, stuff behind.  Coal also throws lots of nasty stuff into the air when you burn it.  For a long time the alternatives to coal were expensive enough that people put up with these disadvantages.

But inexpensive natural gas has been widely available for several decades now.  It is far cheaper to ship than coal.  All you need to ship it from here to there is a pipe from here to there that is a few inches in diameter.  Natural gas does throw a lot of carbon dioxide into the air.  But it throws far less than coal does.  And carbon dioxide is pretty much the only nasty thing it does throw into the air.

The list of nasty things that coal throws into the air over and above carbon dioxide is nearly endless.  I will just mention three.  First, there is the sulfur that I noted earlier.  Then there is arsenic.  Yes, the poison featured in countless murder mysteries.  Finally, there is Mercury.  It too is truly nasty stuff once it's airborne.

In Asimov's time, and for a couple of decades afterwards, the oil industry threw natural gas away by "flaring it off", literally burning it to get rid of it.  But eventually they caught wise.  Its low expense, convenience, and widespread availability have made it quite popular.

To make a coal fired power plant you need to build a large, dirty, expensive, and complex structure that is a terrible neighbor.  To build a natural gas fired power plant capable of producing the same amount of power you need a few modified jet engines connected to generators.

"Gas" power plants are cheap to build, cheap to maintain, require far less land, and don't make much noise or mess.  So they can be sited almost anywhere.  Natural gas fired power plants have done far more to kill off coal than everything else combined.

Asimov then moves on to the question of efficiency.  Theoretical efficiency, the best efficiency that thermodynamics allows, has been covered previously, both by Asimov and by me.  It will not be revisited.  Asimov notes that thermocouples, devices that convert heat directly to electricity, are only capable of an efficiency of 10%.  A steam generator of the time was capable of an efficiency in the  30-40% range.

We are still mining "efficiency" as a method of stretching supplies.  We now have appliances and light bulbs that are much more efficient than they used to be.  Jet engines used on airplanes are much more efficient than the ones used in 1960.  Insulating homes better increases efficiency.

Building lighter cars, and other vehicles, helps.  The Ford F-150 pickup truck now contains a lot of aluminum in order to improve its efficiency by reducing its weight.  We now have Hybrid cars.  They can get by with a much smaller and, therefore, lighter engine.  The search goes on.  Increased efficiency is helpful but not a game changer.

Asimov then mentions "renewable energy".  See!  The idea is older than you think.  He mentions wood (you can always grow more), wind, and water.  At the time hydro-electric dams were popular, especially in my neck of the woods.  The problem is that by 1960 most of the best locations in the U. S. for building a hydroelectric dam were already in use.  Not much room for growth.

And two major problems have since come to light.  The one that gets the most ink is the fact that dams screw up the ability of fish to migrate.  The less noted problem is that dams also block the silt and debris that rivers wash down to the sea.  This results in the reservoir behind the dam "silting up".  But it also interferes with that silt and debris moving downstream, where it turns out it is needed.

Beaches are not permanent.  Instead, they are maintained by being continuingly renewed by sand that is transported from upstream by rivers.  As this and other problems have emerged we have moved from building dams to tearing them down.

Wind had been a source of energy for hundreds of years by this point.  The Dutch windmill is only the most obvious example.  In 1960, however, wind power was seen as appropriate only for a few niche categories.  In fact, in the roughly 50-100 year period preceding the publication of Asimov's book, maritime commerce had converted from wind power in the form of sails to fossil fuel power in the form of engines powered by coal or oil.

Since then, high efficiency "wind turbines" have been deployed widely.  And the rate at which they continue to be rolled out is only accelerating.  The idea of using wind to generate large quantities of electricity wasn't on anybody's mind in 1960.

Asimov then moves on to sun power.  There is the direct method, using mirrors to concentrate the sun's heat.  This was only a gleam in the eye of various futurists in 1960.  Direct sun power is now widely used as the power source for desalinization plants.  It is used in a few places to make electricity.  But the technology has not caught on in any big way.

Asimov then moves on to something that has caught on in a big way, what we now call the solar cell.  Solar cells were at the "proof of concept" stage of development at the time of the book.  A few satellites used them as a power source.  That was about it.  The problems were practical.  Solar cells were too inefficient (they were only capable of capturing a few percent of the power contained in sunlight) and they were expensive to make.

Tremendous progress has been made on both fronts.  Solar cells that have efficiency ratings in the 15-20% range are widely available.  Solar cells with efficiencies in the 20-30% range, and using a number of different formulations, look to be widely available soon.   And they can now be economically manufactured literally by the acre.

The efficiencies have improved to the point that people find it cost effective to cover the roof of their house with them.  Such a setup can easily power an entire house and leave power to spare.  And this even applies to the more northerly parts of the U. S.  Commercial "solar farms" now provide power that is as cheap as or cheaper than power from traditional power plants powered by coal or natural gas.
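Here's a rough back-of-envelope version of that claim.  Every number below is an assumption for illustration; actual results depend heavily on roof size, panel quality, and location.

    # Every number here is an assumption for illustration only.
    panel_area_m2 = 40          # assumed usable roof area, square meters
    full_sun_w_per_m2 = 1000    # the standard "full sun" irradiance figure
    efficiency = 0.20           # a good modern panel
    capacity_factor = 0.16      # assumed average over night, seasons, and weather

    peak_kw = panel_area_m2 * full_sun_w_per_m2 * efficiency / 1000
    annual_kwh = peak_kw * capacity_factor * 8760   # hours in a year
    print(round(peak_kw, 1), "kW peak,", round(annual_kwh), "kWh per year")
    # Roughly 8 kW peak and about 11,000 kWh per year, in the same ballpark
    # as a typical U.S. household's annual electricity use.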

In the case of both wind and solar there is a problem to be solved.  They are "intermittent".  They either depend on the sun shining or the wind blowing.  There are a couple of ways to handle this.  They boil down to a much more capable and robust national electrical grid.  This would allow surplus power generated here to make up for shortages there.  So far, nearly zero money has been invested in this.  That is criminal.

The second approach is storage.  If we have enough power stored to run the entire national grid for two days then we should be able to smooth any dip that comes down the pike.  (Probably considerably less capacity would be sufficient.)  Here, the problem is technological.
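Here's the back-of-envelope arithmetic, using round numbers.  The 4,000 TWh figure is approximate U.S. annual electricity consumption.

    # Round numbers: U.S. electricity consumption is roughly 4,000 TWh per year.
    us_annual_twh = 4000
    average_gw = us_annual_twh * 1000 / 8760    # average draw, about 460 GW
    two_days_twh = average_gw * 48 / 1000       # storage needed to cover two days
    print(round(average_gw), "GW average draw;", round(two_days_twh), "TWh for two days")
    # About 22 TWh.  For comparison, today's grid-scale battery installations
    # are measured in mere gigawatt-hours, roughly a thousand times too small.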

There is no storage technology that scales to that quantity at a reasonable cost.  As a result, what's currently being done is to install gas fired generating plants all over the place and use them to backstop renewable sources.  It works but it is not the right solution.

And, it turns out, all the attention showered by Asimov on these various technologies is just a setup for what he actually wants to talk about.  As the title gives away, what Asimov wants to talk about is "atomic energy".

Most of what we think of when we think of energy is "chemical energy".  It comes from rearranging the bonds between electrons orbiting various atoms.  Chemists call these rearrangements "chemical reactions".  Any form of fire or explosion is chemical energy in action.  The amount of energy available may seem enormous.  But it is tiny compared to the "nuclear reactions" physicists study.

The amount of chemical energy available in a gram of material is modest.  It might amount to what you get by striking a match.  At best, it amounts to the amount of energy released by a small firecracker.  The amount of nuclear energy that can be released by the same gram of material can literally level a city.  It leveled Hiroshima and Nagasaki.

It was a slow process discovering what nuclear reactions were capable of and how they worked.  Chadwick was the first to get an inkling of how to study the nucleus in 1932.  The neutron was electrically neutral. That made it a good choice as a probe to use to study the nucleus without having to worry about pesky electrical effects.  Fermi was the first to observe that "slow" neutrons worked better than "fast" ones.

Rather than going with Asimov's explanation of slow neutrons versus fast ones, try this.  Imagine that the nucleus of an atom is a water droplet and a neutron is a particle of sand.  If the sand particle hits the nucleus at high speed it just drills through, leaving the water droplet pretty much unchanged.

If, however, the sand particle is moving very slowly it gets absorbed by the water droplet.  A slow neutron being absorbed by the nucleus of an atom allows it to interact with the other nuclear particles.  That interaction was what Fermi was looking for.  (We'll get to medium speed neutrons later.)

Physicists summarize this behavior by talking about the "cross section".  In a given set of circumstances the nucleus has a given cross section.  For a fast neutron the cross section is very small, the size of a catcher's mitt, for instance.

For a slow neutron, the nucleus has a big cross section, something like the broad side of a barn.  This led physicists to cheekily invent a unit called the "barn" to describe nuclear cross sections.  Other than noting that it is very small, I am going to leave its actual value unspecified.
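For the mathematically inclined, the cross section boils down to a simple probability estimate.  The numbers in this sketch are made up; the point is only to show how the cross section enters the formula.

    import math

    def interaction_probability(n, sigma, x):
        # Probability that a neutron interacts while crossing a slab of material:
        #   P = 1 - exp(-n * sigma * x)
        # n = target nuclei per cubic centimeter, sigma = cross section in
        # square centimeters, x = slab thickness in centimeters.
        return 1 - math.exp(-n * sigma * x)

    # Made-up numbers, not data for any real material:
    n = 5e22    # nuclei per cubic centimeter
    x = 1.0     # a one-centimeter slab
    print(interaction_probability(n, 1e-24, x))   # small cross section: about 5%
    print(interaction_probability(n, 1e-22, x))   # 100x larger cross section: about 99%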

Atomic nuclei are very complex beasts.  We know a lot more about them now than we did in 1960.  But, for our purposes, we are going to ignore all that and just assume that atomic nuclei consist of a bunch of protons and neutrons somehow all stuck together.  We are going to dive into nuclear chemistry only far enough to note that changing the number of protons or neutrons in a nucleus changes the nature of the beast.

Since neutrons are neutral, if we change the number of neutrons we don't change what kind of element we are dealing with.  Changing the number of protons does change that.  There is literally a one-to-one correspondence between the number of protons and the type of element.  But changing the number of neutrons does make a difference.  Often it changes the degree to which the atom is radioactive.

In the simplest case, Hydrogen with no neutrons is not radioactive.  Neither is Hydrogen with one neutron (deuterium).  But Hydrogen with two neutrons (tritium) is radioactive.  If an isotope is highly radioactive it has a good chance of decaying, blowing up into two or more assemblies, each containing an assortment of protons and neutrons.

Several scientists set out to bombard Uranium with slow neutrons.  They thought that the neutron would be absorbed and somehow turned into a proton.  This would result in the creation of a small amount of whatever element 93 was.

Fermi took a crack at it.  Hahn and Meitner took a crack at it.  (Meitner had to stop work and flee because this was '30s Europe and she was Jewish.)   Strassmann replaced her and the work continued.

Eventually they figured out what had happened and what had happened was a big surprise.  What had happened was what we now call "nuclear fission".  (Surprises happen all the time in science.)  As you might have guessed by now, instead of absorbing the neutron and staying intact, the Uranium nucleus had undergone fission.  It had broken into pieces and one of those pieces was a Radium atom.  

Except that turned out to be wrong too.  What had actually been created was an atom of Barium.  Marie Curie's daughter Irene and Irene's associate Savitch were among the most prominent to go down the Radium rabbit hole and end up with nothing to show for it.

It was actually Hahn and Strassmann's chemical analysis that pointed to Barium.  And it was Meitner (yes, the same Meitner), writing with her nephew Otto Frisch in "Nature" in 1939, who explained what had happened.  Their insight was later confirmed by a number of groups.  They also were the ones who named the process "fission".  On to "The Atom Bomb".

This chapter is really just a continuation of the previous one.  Asimov continues the story without missing a beat.  He starts out by noting that the fission of a Uranium nucleus, specifically a U-235 nucleus (he skips over this detail), produces about two neutrons.

If both of these neutrons each end up causing the fissioning of an additional Uranium nucleus then we have the makings of a "chain reaction".  And, since each fission results in the release of a tremendous amount of energy, we have the makings of a source of a whole lot of energy.
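Here's a minimal sketch of why a chain reaction runs away so fast.  Using exactly two neutrons per fission, with no losses, is an idealization; in a real device many neutrons escape or are absorbed.

    def fissions_in_generation(k, generation):
        # If each fission triggers k new fissions, generation g contains k**g fissions.
        return k ** generation

    for g in (10, 40, 80):
        print("generation", g, ":", f"{fissions_in_generation(2, g):.2e}", "fissions")
    # By generation 80 the count is about 1.2e24, roughly the number of atoms
    # in half a kilogram of Uranium, which is why the whole thing is over in a
    # tiny fraction of a second.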

Numerous physicists saw the possibilities of nuclear chain reactions.  The fissioning of a single ounce of Uranium produces the same amount of energy as burning 90 tons of coal, or 2,000 gallons of fuel oil, or 600 tons of TNT, according to the calculations Asimov publishes.

If Asimov's numbers are to be believed, the Hiroshima bomb would have been the result of fissioning about a pound of Uranium.  I believe Asimov's figures deliberately understate the amount of energy produced by fissioning an ounce of Uranium, possibly because the correct value was classified at the time.

As Asimov notes, this discovery was made in 1939, on the eve of World War II, a War that everybody could see coming well before it arrived.  The military applications of nuclear fission were obvious and troubling.  Most scientists viewed the possibility of Nazi Germany making use of this information with alarm.

Asimov then goes on to review the story of the mostly American effort that resulted in the development of the Atomic Bomb and its use against Japan.  Szilard went to Einstein who, in turn, wrote a letter to FDR.  FDR then authorized the "Manhattan Project", so named because it was run by General Leslie Groves, director of the Manhattan Engineer District of the U. S. Army Corps of Engineers.  Groves recruited Robert Oppenheimer as lead scientist.

Asimov goes into some detail about both how an atomic bomb works and the difficulties involved in building one.  I am going to skip over most of that.  If you are interested, check out "The Making of the Atomic Bomb" by Richard Rhodes.  It is excellent.  In lieu of what Asimov and Rhodes have to say on the subject, here are some observations.

The Manhattan Project was, by far, the biggest, most expensive, and most difficult project undertaken anywhere in the world up until that time.  It involved constructing massive facilities at Hanford, Washington (primarily Plutonium production), Oak Ridge, Tennessee (primarily enriched Uranium production), and, to a lesser extent, Los Alamos, New Mexico (research and development, final bomb assembly).  Although it was a mostly U. S. effort, it involved a great deal of help and support from the United Kingdom, as well as substantial help and support from Canada and several other countries.

As we all now know, the U. S. succeeded in building three working devices: the test bomb that was exploded at the "Trinity" site near Alamogordo, New Mexico, and the two production bombs, one of which was exploded over Hiroshima, Japan, and the other over Nagasaki, Japan.  Fortunately for all of us, the German effort ended up going nowhere.  If you want to know more about this very interesting story, I recommend "Heisenberg's War" by Thomas Powers.  Back to Asimov.

As he notes, the U. S. had a monopoly on the Atomic Bomb for only four years.  Unbeknownst to most, the Russian intelligence agencies had completely penetrated the Manhattan Project.  They managed to spirit away all the information they needed to build their own device.  Kurchatov, their chief scientist, took no shortcuts, however.  So the Russians were able to develop a robust program that was soon able to move beyond just the cloning of American designs.

No doubt, the intelligence the Russians collected sped things up.  But, once everyone knew such a thing was possible, duplicating the feat was simply a matter of devoting the necessary resources.  The Russians succeeded in 1949.

The British succeeded in '52.  The French succeeded in '60.  The Chinese succeeded in '64.  Since then, several other countries have succeeded.  The newest member of the "nuclear club" is North Korea.  South Africa is unique in having developed the expertise necessary to join the club, but then shutting everything down and walking away from it.

He then moves on to what was at one time called the "Super", a bomb based not on nuclear fission but on nuclear fusion.  Again, I am going to skip over the details.  If you are interested, I would suggest reading "Dark Sun" by Richard Rhodes.  Here too I will confine myself to some observations.

In the immediate aftermath of the War many nuclear scientists were not even sure that what later came to be called a "Hydrogen Bomb" could be built.  The nuclear part of it was well understood. Smash together two Hydrogen nuclei under appropriate circumstances, and they will fuse to become a single Helium nucleus.  And that fusion will release a tremendous amount of energy.
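
To put a number on "tremendous", here is the standard mass-defect calculation for the fusion reaction usually quoted, Deuterium plus Tritium yielding Helium plus a neutron (a slightly more specific reaction than the one Asimov sketches):

```python
# E = m * c^2, done in atomic mass units.  Masses below are standard values.
U_TO_MEV = 931.494          # energy equivalent of one atomic mass unit, in MeV

m_deuterium = 2.014102      # Hydrogen-2
m_tritium   = 3.016049      # Hydrogen-3
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV
print(f"mass defect: {mass_defect:.6f} u -> {energy_mev:.1f} MeV per fusion")
# -> about 17.6 MeV per fusion, millions of times the energy released by a
#    typical chemical reaction.
```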

The problem was creating those "appropriate circumstances".  It turns out that X-Rays were the key ingredient.  An Atomic Bomb can be designed to release tremendous amounts of the right kind of X-Rays.  It was then determined that a suitable "mirror" arrangement could be used to focus those X-Rays on a container of Hydrogen located off to the side.  With these two ideas in hand, the problem of how to build an H-bomb, as it came to be known, was solved.

This turned out to be all you needed to build a "proof of concept" device.  But the design was not practical as a weapon. "Mike", the only bomb built using this design, weighed something like 42 tons and was too big to fit into an airplane. But then another idea came along: combining the Hydrogen with Lithium in a solid compound, Lithium Deuteride.  That enabled a "miniaturized" design that was practical for use as a weapon.  The rest, as they say, is history.

Again, the U. S. was first, but not for long.  The U. S. set "Mike" off in 1952 and had working miniaturized devices shortly thereafter.  The Russians were not far behind.  They first succeeded in '53.  They later went on to set off the largest H-bomb ever exploded, in 1961.  It was a 100 Megaton (million tons of TNT equivalent) design that had been deliberately downrated to only 50 Megatons.

It still holds the record for the largest H-bomb ever set off, but not because bigger bombs can't be built.  It's because anything over 10 Megatons is a complete waste.  Large H-bombs blow the top of the atmosphere off.  This causes a "stovepipe" effect.  All the energy flows up the stovepipe and out into space.  Viewed from anywhere but space, all large H-bombs behave just like a 10 Megaton H-bomb.  Such is the strange logic of nuclear warfare.
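
There is also a simpler way to see the diminishing returns, independent of the "stovepipe" picture: the radius at which a given level of blast damage occurs grows only as the cube root of the yield.  A quick sketch, assuming that standard cube-root rule:

```python
# Blast-damage radius scales roughly as yield**(1/3) (the standard cube-root rule).
baseline_mt = 10  # compare everything to a 10 Megaton bomb

for yield_mt in (1, 10, 50, 100):
    radius_ratio = (yield_mt / baseline_mt) ** (1 / 3)
    print(f"{yield_mt:>3} Mt -> blast radius {radius_ratio:.2f}x that of a 10 Mt bomb")
# A 100 Mt bomb only reaches about 2.15 times as far as a 10 Mt bomb:
# ten times the yield for roughly twice the reach.
```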

Asimov finished off his discussion with the fission-fusion-fission bomb.  I am going to skip it.  Instead, I am going to leave you with an optimistic thought.  Everyone who seriously contemplated nuclear weapons during this period (say 1935-1965), from science fiction writers to scientists and philosophers, came to the conclusion that "if it can be built then it will be used".  But no nuclear weapon has been used in battle since 1945, seventy-five years ago.  And the chances of that streak continuing indefinitely keep getting better and better.  Peace out.