Tuesday, September 11, 2012

50 Years of Science - part 3

This is the third post in the series.  The first one can be found at http://sigma5.blogspot.com/2012/07/50-years-of-science-part-1.html.  Taking the Isaac Asimov book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science as it was when he wrote the book (1959 - 1960), I am examining what has changed since.  For this post I am continuing with the chapter Asimov titled "The Birth of the Universe".

In part 2 I discussed the age of the Earth.  In discussing the age of the Earth Asimov broaches the subject of "the solar paradox".  Cutting to the chase, Lord Kelvin did calculations in the late 1800s that indicated that the Sun could be no more than a few tens of millions of years old.  Why?  Because there was no known energy source that could keep it burning any longer.  The two main candidates, "it's all coal" and gravitational contraction, couldn't provide enough energy to explain the steady output of the Sun for any longer than that.  The discovery of radioactivity in 1896 pointed to a new kind of energy source powerful enough to save the day.  Nuclear processes (in the Sun's case, we now know, fusion) could provide enough energy to keep the Sun shining at its current level for billions of years.  Over the next forty years subsequent scientific progress allowed scientists to conclude that the Earth and Sun were each about 5 billion years old, very close to the modern figure of about 4.6 billion years.  (Modern cosmology posits that the Sun and all the planets, including the Earth, were created at almost the same time.)

In examining the question of the age of the universe as a whole Asimov gives us a nice description of the Doppler effect.  Let's say you are driving on a road and an emergency vehicle is coming the other way.  Before it reaches you the siren will have a slightly higher than normal pitch.  After it has passed the siren will have a slightly lower pitch.  This shifting of the pitch as a result of motion is called the Doppler effect.  There are many references, including Asimov's book, that can give you more detail.  But the bottom line is that this change in pitch can be used to calculate the speed of the other object.

Doppler, the physicist the phenomenon is named after, realized in 1842 that this effect could be used to calculate the speed of celestial objects toward or away from the earth by examining the "spectrum" of these objects.  For reasons that were not well understood until Quantum Mechanics was developed around 1930, when you heat something to an appropriate temperature it will glow.  The intensity of the various colors in this glow is called the spectrum of the object.  An individual spectrum will contain features.  At some frequencies the intensity will be particularly bright (emission features) and at other frequencies the intensity will be particularly dim (absorption features).  Objects with the same composition and temperature will always have the same spectrum, with the same emission and absorption features.  And the combination of the temperature and the atomic and molecular composition precisely determines the details of these spectral features.  In short, from the spectrogram of an object you can determine its precise composition and temperature.  The process may be very complicated for objects with a complex composition but that's the idea.

Note that I indicated above that an object's spectrum depends solely on its temperature and its composition.  But if the object is moving at a speed that is a noticeable percentage of the speed of light (and the speed that qualifies as "noticeable" keeps dropping as scientific instruments keep getting better), the spectral features will shift.  If the object is traveling toward the earth the frequency will shift higher and the wavelength will shift lower.  Astronomical shorthand for this is "blue shift".  If the object is traveling away from the earth the frequency will shift lower and the wavelength will shift higher.  The astronomical shorthand for this is "red shift".  The amount of shift allows the relative speed to be calculated precisely.  Astronomers make very precise measurements of the spectrum of an object, identify well known features in the spectrum, and calculate how far and in which direction (higher or lower) each feature has shifted.  From this information a simple calculation yields the speed at which the object is moving and whether it is moving toward or away from the earth.
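That last calculation really is simple.  Here is a minimal sketch in Python (the wavelength numbers are made up for illustration, not real measurements), using the non-relativistic approximation that works at speeds well below the speed of light:

```python
# Radial velocity from a spectral line shift (non-relativistic Doppler).
C_KM_PER_S = 299_792.458  # speed of light in km/s

def radial_velocity(rest_wavelength_nm, observed_wavelength_nm):
    """Velocity in km/s; positive means receding (red shift)."""
    z = (observed_wavelength_nm - rest_wavelength_nm) / rest_wavelength_nm
    return C_KM_PER_S * z

# Hypothetical example: a hydrogen line with rest wavelength 656.3 nm
# observed at 657.6 nm -- shifted toward the red, so the object is receding.
v = radial_velocity(656.3, 657.6)
print(f"{v:.0f} km/s")  # about 594 km/s, moving away
```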

Now if astronomical objects moved randomly you would expect that about half would show red shift and half would show blue shift.  But it turns out that almost every astronomical object shows a red shift.  Almost everything is moving away from us.  An astronomer named Vesto Slipher was the first to notice this, in 1914.  Before going on let me discuss the issue of "standard candles".

How do you figure out how far away something is?  Well, the simplest and most reliable method is to simply pace it off and measure it.  But what if the distance involved is too great to measure directly?  For longer distances there is a trigonometry based technique called parallax.  Again assume you are in a car.  You are driving down a straight rural road and staring sideways out the window.  This is OK because you are a passenger, not the driver.  Notice that the sections of fence near the road whiz by quickly.  But if you look out across a field at a house or barn it will move by slowly as you drive along.  Finally, if you look at a mountain a long way off on the horizon it doesn't appear to move at all.  That's the basic idea behind parallax.  You need to dress it up with trigonometry and careful measurements, but if you measure the distance you travel down the road and the change in the angle to the barn or house you can calculate the exact distance it is from the road.  Taking the basic idea and applying the proper measurements and trigonometry is how astronomers measure distances across space.  But before continuing let me take a second digression and talk about astronomical distances.

People really don't understand how big space is.  Say you get in a car and drive for an hour on a straight road at 50 miles per hour.  (I know, I know, no road is straight for that long, but work with me on this.)  Everyone has done something like this and it gives them some emotional idea of how far 50 miles is.  Now imagine driving at 50 miles per hour (I have picked this speed because it makes the math easier) for ten hours straight.  You have now gone 500 miles.  Now most people who are stuck in a car for ten hours straight tend to daydream a good part of the time, even if they are the driver.  So even a distance of 500 miles, while intellectually comprehensible in terms of our ten hour trip at 50 miles an hour, loses a lot of its sense of concreteness.  I contend that 500 miles is about as far as people can realistically have a concrete feel for.  It is possible to get in an airplane and go thousands of miles.  But you get in the plane.  You may even look out the window for the whole trip.  But a plane ride is emotionally like using a teleporter with a couple of hours of delay thrown in.  You don't get a real sense of the distance involved.

Now imagine a trip around the world at the equator, a distance of 25,000 miles.  In our car this would require 50 days of 10 hours per day driving.  If people tend to zone out in one 10 hour drive there is no way they are going to be paying attention every day for 10 hours for 50 days in a row.  So I contend that 25,000 miles, the circumference of the earth, is such a great distance that it is not really comprehensible.  But 25,000 miles is infinitesimal in terms of typical astronomical distances.  So all astronomical distances blur together and become "so large as to be unimaginable" in concrete terms to a person.  Scientists can do the math but the numbers are so large as to be meaningless to us.  And since we can't in any real sense comprehend these numbers we make really wild mistakes all the time.  Some numbers are really a lot bigger than other numbers.  But they are all so large that our emotions misread them and we think of them as being nearly the same size or we get it wrong as to which is really the larger and which is really the smaller.  Back to the subject at hand, namely parallax.

Parallax works well enough to estimate distances within the solar system with a reasonable degree of accuracy.  The most useful "baseline" for measuring these kinds of distances is the orbit of the earth around the Sun.  Its radius is about 100 million miles.  Compare this to the circumference of the earth at 25 thousand miles, a distance I said was too great to be emotionally comprehensible.  Well, this distance is 4,000 times as great.  It seems inconceivably large.  But it is actually quite small.  And things get slightly better.  The earth goes all the way around the Sun.  So at one point it is 100 million miles this way and six months later it is 100 million miles that way.  So the distance between the extremes is 200 million miles, a number that is twice as big.

If we want to use the parallax technique to figure out how far away something is then what we want to do is wait for the earth to be on one side of the Sun and then carefully measure the angle to the "something".  Then we wait 6 months and measure again.  We are now 200 million miles away from where we started, so the angle should change a lot, right?  Well, this is where our intuition goes wrong, because we are comparing these giant numbers.  The closest star to us that is not the Sun is Proxima Centauri.  Most people think it's Alpha Centauri because that's what a lot of people say.  Alpha Centauri and Proxima Centauri are very close together in the sky, and Alpha Centauri is a lot brighter, so people usually go with it.  But Proxima Centauri is actually a little closer.

Anyhow, with this giant baseline of 200 million miles it should be a piece of cake to do the parallax thing to find out how far away it is.  And the parallax trick actually works for Proxima Centauri (and Alpha Centauri too) but just barely.  The reason is that the star nearest our own is actually a very long way away.  Let's see how long "very long" is.  To do this I am going to figure distances in "light minutes".  A light minute is the distance traveled by a photon of light in a minute.  Trust me, it's a very big number.  Now the light from the Sun takes a little over 8 minutes to get here from there.  So a hundred million miles is about 8 light minutes.  And 200 million miles is about 16 light minutes.
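The 8-light-minute figure is easy to check with round numbers (93 million miles for the earth-Sun distance, 186,000 miles per second for light):

```python
# Converting the earth-Sun distance into light minutes.
LIGHT_SPEED_MILES_PER_SEC = 186_000   # speed of light, miles per second
EARTH_SUN_MILES = 93_000_000          # radius of the earth's orbit, roughly

light_minutes = EARTH_SUN_MILES / (LIGHT_SPEED_MILES_PER_SEC * 60)
print(f"{light_minutes:.1f} light minutes")   # about 8.3

# The full parallax baseline (the diameter of the orbit) is twice that:
baseline_light_minutes = 2 * light_minutes    # about 16.7
```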

Now Proxima Centauri is 4.25 light years away (the distance light goes in 4.25 years).  Again, this is a really big number if we put it in terms of miles.  But let's put it in terms of light minutes.  It still turns out to be a pretty big number.  Proxima Centauri is about 2.2 million light minutes away.  So to do the parallax thing to figure out how far away Proxima Centauri is we create a triangle.  One side of the triangle is 16 light minutes long.  The other two sides are 2.2 million light minutes long.  In geometry there is a concept called "similar triangles".  By using similar triangles we can throw all the "million" parts away.  So imagine a triangle with one side that is 16 inches long while the other two sides are 2.2 million inches long.  It turns out that the 2.2 million inch sides are each nearly 35 miles long.  Now to get the parallax thing to work we need to measure the tiny angle between the two 35 mile long sides.  Remember, on one end they meet and on the other end they are 16 inches apart.  It is a brilliant piece of work that astronomers have actually been able to measure that super tiny angle.
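For the curious, the size of that super tiny angle can be worked out from the numbers above.  For a triangle this skinny, the small-angle approximation (angle in radians equals the short side divided by the long side) is essentially exact:

```python
import math

# The text's triangle: baseline of 16 light minutes, long sides of
# 2.2 million light minutes (the distance to Proxima Centauri).
BASELINE = 16.0          # light minutes (diameter of the earth's orbit)
DISTANCE = 2_200_000.0   # light minutes

shift_rad = BASELINE / DISTANCE                 # small-angle approximation
shift_arcsec = math.degrees(shift_rad) * 3600   # radians -> arcseconds

print(f"{shift_arcsec:.2f} arcseconds")  # 1.50
# Astronomers conventionally quote the parallax over half the baseline,
# about 0.75" here -- close to Proxima's measured parallax of roughly 0.77".
```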

Now let's try to do the parallax technique on a star that is twice as far away.  That means that we need to measure the angle between two sides that are now nearly 70 miles long.  Remember that they meet at one end and are separated by the same 16 inches on the other end.  Astronomers have been able to use the parallax technique to measure the distance to only a few of the nearest stars.  I think you now understand why.

So if the parallax technique only works for a few very close stars, what do we do about the rest?  The answer finally gets us back to the "standard candle" technique that I mentioned a long time ago.  Imagine having a 100 watt light bulb.  Now measure how bright it is from 100 yards away.  There is a standard mathematical formula, the inverse square law, that tells us exactly how bright it will be when viewed from 200 yards away or a thousand yards away.  So if we know we are looking at our 100 watt light bulb (so we know exactly how bright it is) and we can very accurately measure how bright it appears to be (called its "apparent brightness") then we can calculate how far away it is.  That's the idea behind a "standard candle".  If we know how bright something is from close up, its "intrinsic" brightness, and we can measure its apparent brightness, and we know that everything is clear between it and us, then we can calculate how far away it is.
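The light-bulb arithmetic can be sketched in a few lines (the brightness numbers are invented for illustration).  Apparent brightness falls off as one over the distance squared, so distance scales as the square root of the brightness ratio:

```python
import math

# Standard candle distance via the inverse square law:
# brightness ~ 1/d^2, so d = d_ref * sqrt(b_ref / b_observed).
def standard_candle_distance(ref_distance, ref_brightness, observed_brightness):
    """Distance at which a source of known brightness appears this dim."""
    return ref_distance * math.sqrt(ref_brightness / observed_brightness)

# Calibration: our 100 watt bulb at 100 yards reads 1.0 in some arbitrary
# brightness unit.  Later the same bulb reads 0.01 -- 100 times dimmer:
d = standard_candle_distance(100, 1.0, 0.01)
print(d)  # 1000.0 -- a hundred times dimmer means ten times farther
```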

Now most of space is pretty empty.  So it conforms to the "everything is clear" requirement of the technique.  Sometimes this is not true.  There are dust clouds and other things that get in the way.  And these present real problems for some measurements scientists would like to make.  But in a lot of cases there appears to be no obstruction, and in other cases scientists have come up with techniques that allow them to adjust for the amount of obscuration.  So a lot of the time the "everything is clear" requirement is met.  That leaves the problem of knowing the intrinsic brightness of what you are looking at.

A solution to this problem is discussed by Asimov.  It involves the use of Cepheid variables.  These are a kind of variable star.  The brightness of the star varies in a predictable way.  What makes this important is that astronomers came to understand enough about how Cepheid variables worked that they could predict the intrinsic brightness of the star based on the specifics of its pattern of variability, chiefly its period.  Early work determined that Cepheids with the same period all had the same intrinsic brightness.  This allowed the development of a relative distance scale.  This item is twice as far away as that item, that sort of thing.  Soon a large number of relative distances were known.  But to turn the relative distance scale into an absolute distance scale it was only necessary to determine the actual distance to one Cepheid.  A precise calibration was only achieved recently, when high precision measurements using the Hubble Space Telescope and other techniques became available.

At the time Asimov wrote the book only relative distances were known for sure.  Astronomers used a number of techniques to estimate the intrinsic brightness of Cepheids, with more or less success.  At the time the book was written there was still a lively discussion as to what the correct value for the intrinsic brightness was.  This resulted in a number of respected astronomers using a number of different estimates of intrinsic brightness.  As time went by scientists also determined that there were several classes of Cepheids, and members of each class displayed a different intrinsic brightness than apparently similar members of a different class.  General agreement as to how to place a specific Cepheid into the right class, and as to the correct intrinsic brightness for each class, has now pretty much been reached.  But bringing everything into alignment was not completed until many years after Asimov's book was written.  Astronomers were very aware that there were problems with Cepheids at the time the book was written.  But there was no better way of determining distances at the time.  And astronomers of the time were careful to acknowledge these kinds of issues.

Also, at the time Asimov wrote the book Cepheid variables were the brightest standard candle available.  But for really large distances they are too dim to work.  Since then astronomers have developed another standard candle called the "Type Ia supernova".  As a supernova it is far brighter than an ordinary star like a Cepheid, so it works for much greater distances.  The details of how the intrinsic brightness of a Type Ia supernova is worked out are all different.  But the general idea is the same.  Certain attributes that can be measured from far away allow the intrinsic brightness to be determined.  There have been problems to work through with the Type Ia supernova as a standard candle.  But astronomers think they have things worked out pretty well at present.  Now back to the main line of the story.

In 1929 Edwin Hubble, who had been studying galaxies, published Hubble's Law.  Using Cepheids as standard candles Hubble had found that if you ignored a few nearby galaxies it appeared that the farther away a galaxy was, the faster it was moving away from the earth.  He posited that there was a single "Hubble Constant" that was the ratio between the recession speed and the distance from the earth.  Due to the problems with the Cepheid standard candle he couldn't establish the specific value for the Hubble Constant, but he established that it appeared to be constant across the range of relative distances he could measure.

This turned out to be a very remarkable observation.  Using Hubble's Law one could run the expansion backward: if every galaxy's speed is proportional to its distance, then at some definite time in the past everything must have been crowded together in one place.  This in turn meant that the universe had a specific age.  This idea was shocking.  Before, scientists had not spent much time thinking about the age of the universe.  They knew it was vastly older than the 6,000 or 10,000 years that biblical scholars had calculated.  Other than that most thought, when they thought about it at all, that the universe was either of a vast unspecified age or that it had always been there in something similar to its current state.  Hubble's ideas ushered in the modern era of cosmology.
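The specific age falls out of a one-line calculation: if v = H0 × d, then every galaxy was on top of us a time d / v = 1 / H0 ago.  A sketch using a typical modern value for the Hubble Constant (Hubble's own 1929 value was far off, for the Cepheid calibration reasons discussed above):

```python
# Back-of-envelope "Hubble age".  Running the expansion backward at a
# constant rate, everything coincided a time 1 / H0 ago.  This assumes
# the expansion rate never changed, which is only roughly true.
H0 = 70.0                 # km/s per megaparsec, a typical modern value
KM_PER_MPC = 3.086e19     # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

hubble_time_s = KM_PER_MPC / H0             # (km/Mpc) / (km/s/Mpc) = seconds
hubble_time_gyr = hubble_time_s / SECONDS_PER_YEAR / 1e9
print(f"{hubble_time_gyr:.1f} billion years")  # about 14 -- close to 13.7
```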

As these ideas spread, first among the astronomical community and then to the broader scientific community, speculation soon settled down into two main competing theories.  The "steady state" theory was championed by, among others, the British astronomer Fred Hoyle.  It stated that the universe had always looked pretty much as it does now.  The competing theory, given by Hoyle the most ridiculous name he could think of, was called the "big bang" theory.  In the years after Asimov's book was written the evidence against "steady state" became overwhelming.  So "big bang" won out.

It didn't take scientists long to note some convenient features of Quantum Mechanics in their efforts to flesh out the big bang theory.  The most relevant item was something known as the Heisenberg Uncertainty Principle.  Most simply (and I don't want to get into yet another diversion, so this is all you get), the Principle says that there is a fundamental uncertainty about the universe, and that the smaller the thing you are studying the more uncertain its characteristics are.  Cosmologists latched on to this and posited an extremely small piece of space.  It was so small that its energy content was vastly uncertain.  This was taken as the seed out of which the whole universe would explode.  As the universe exploded it would cool (that's what gases naturally do as they expand) and eventually the temperature would drop to what we see today and the size of the universe would grow to the size we see today.  That was roughly the state of the big bang theory at the time Asimov wrote his book.

You are probably thinking that seems inherently implausible.  Scientists slowly came to the same conclusion.  And the big bang theory has evolved considerably from the humble roots I outlined above.  The biggest change is to add something called "inflation".  The subject is complex and I again want to avoid digression.  But the basic idea is that from its early tiny seed (which may have looked greatly different than our tiny exploding point) the universe inflated to a fantastic size in a fantastically short period of time.  This may sound even weirder than the very weird original big bang theory I outlined above.  But it turns out that there is actually some evidence for inflation.  Yet again in an attempt to avoid another large diversion I will note that the most compelling of this evidence consists of the measured variations in something called the Cosmic Microwave Background and leave it at that.

Asimov does a nice job of going into Hubble's work and that of subsequent scientists up to the time he wrote the book.  Given all the uncertainties scientists supported age estimates for the universe ranging from about 11 billion years to 42 billion years.  Since then the uncertainties have been greatly reduced and the consensus number today is 13.7 billion years.

Since then another startling development has transpired.  It looks like the Hubble Constant is not constant.  There is evidence that the rate of expansion of the universe has changed over time.  There have also been related developments in scientists' views on the constitution of the universe.  At the time Asimov wrote the book what astronomers could see were bright things like stars.  Generally this is referred to as baryonic matter.  A couple of decades ago astronomers noticed a problem.  They could roughly weigh a galaxy by doing some calculations based on the light the galaxy generated.  They could then use Newton's theory of gravitation to predict how fast portions of the galaxy should rotate.  Everything came out wrong.  Eventually astronomers decided that there was a large amount of what they dubbed "dark matter" surrounding galaxies.  They still have no idea what dark matter is but there seems to be a lot of it.  The recent measurements that led to the idea that the Hubble Constant is not constant have also led scientists to posit something called "dark energy".  They know even less about dark energy than they do about dark matter.  But their current thinking is that the universe consists of about 4% baryonic matter, 26% dark matter, and 70% dark energy.  So scientists in 1960 knew about only 4% of the mass that current day scientists think the universe actually contains.
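The rotation problem can be illustrated with plain Newtonian gravity.  Outside most of a galaxy's visible mass, orbital speed should fall off as the square root of the radius; measured rotation curves instead stay roughly flat.  The galaxy mass below is a rough illustrative figure, not a measurement:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 2e41   # kg -- rough, illustrative visible mass of a big galaxy
KPC = 3.086e19     # meters in one kiloparsec

def keplerian_speed(radius_m):
    """Predicted orbital speed if all the mass sits inside the orbit."""
    return math.sqrt(G * M_VISIBLE / radius_m)

for r_kpc in (10, 20, 40):
    v_km_s = keplerian_speed(r_kpc * KPC) / 1000
    print(f"{r_kpc:3d} kpc: {v_km_s:.0f} km/s")
# The prediction falls by sqrt(2) with each doubling of radius; observed
# curves stay nearly constant -- the discrepancy attributed to dark matter.
```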

And this leads me to my final subject for this post.  Scientists in 1960 envisioned three basic fates for the universe.  The first option was that the universe would explode (big bang), expand for a while, then collapse back on itself.  This was dubbed the "cyclic universe" theory.  At the other extreme, the universe would explode then keep on growing.  It would get bigger and bigger.  Everything would spread farther and farther apart until each component of the universe was an island so far away from any other component as to be completely isolated.  The third option was the happy medium one.  The universe would explode and expansion would gradually slow down due to gravity, but everything would be on a balance point.  It wouldn't expand forever but it wouldn't collapse back either.  Which would be the fate of the universe?  Well, it all depended on the density of the universe.  If it was too dense it would expand then collapse.  If it was not dense enough then it would expand forever.  And if the density was just right it would end up on the balance point.
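The balance point corresponds to what cosmologists call the critical density, given by the standard formula rho_c = 3 H0^2 / (8 pi G).  A sketch, assuming a Hubble Constant of 70 km/s/Mpc:

```python
import math

# Critical density of the universe: rho_c = 3 * H0^2 / (8 * pi * G).
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
M_PER_MPC = 3.086e22           # meters in one megaparsec
H0 = 70.0 * 1000 / M_PER_MPC   # 70 km/s/Mpc converted to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"{rho_c:.1e} kg/m^3")   # about 9e-27: a few hydrogen atoms per cubic meter
```

Densities above this value mean eventual collapse; below it, expansion forever.  That the measured total sits so close to this tiny number is the puzzle described below.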

In 1960 these calculations had been done and it appeared that the universe had very nearly the right density to end up at the balance point.  But scientists were completely at a loss as to why the density of the universe was so close to exactly right.  Even a little too much or a little too little would tip the universe one way or the other.  Since then we have had this whole dark matter / dark energy thing going.  Factoring everything in, baryonic matter, dark matter, and dark energy, the universe seems to have exactly the correct density.  But current measurements indicate that the density is so ridiculously close to exactly the correct amount that scientists are even more puzzled by the whole thing than they were in 1960.

And that gets us to the end of the chapter.  
