Thursday, September 27, 2012

Education Reform

This post is about Kindergarten through High School education, generally referred to as K-12.  The U.S. has a reputation for providing more elite post-secondary (e.g. college and grad school) institutions than any other country in the world.  It also has a very good reputation for its non-elite schools in this category.  So the general consensus is that this category doesn't need fixing.  K-12, however, is a different matter.  This category has been argued over for generations and has been a political punching bag for at least a generation.  I have no special expertise in this area.  But that's not going to stop me from pitching my two cents in anyhow.

I am, of course, a consumer of this product.  I received my K-12 education in the U.S.  There are probably few people whose trajectory through this system is exactly typical.  Mine isn't completely typical, but it's not very different and it is a common one.  For my first eight years I was educated in a Parochial School run by the Roman Catholic parish where my parents attended church.  Then I attended public schools for the last four years.  In my opinion I got a good education.  And I am in a position to personally compare the Parochial School experience with the Public School experience.  The comparison is very enlightening.

Parochial School was a "stick to basics", "no frills" experience.  The curriculum was completely standard except for the addition of a one hour religion class each day.  When I hit Public School I found I was well prepared.  I could read well.  My mathematics was up to snuff.  My social studies skills were up to snuff.  I can't say what kind of shape I would have been in had I attended Public School for those eight years, but I see no reason to believe that it would have been much different.

But physically the experience was different from what I would have experienced in a Public School.  There were no shop classes.  There were no PE (Physical Education) classes.  There were no music classes.  I remember that one day a teacher brought in a Chemistry Set.  It was the kind a parent would buy for a child.  No particular use was made of this.  All of these types of amenities cost money.  You need a gym and showers for PE.  My school had an auditorium but no showers.  You need musical instruments to do music.  My school had no musical instruments.  The school had no lab space suitable for biology or chemistry.  The physical plant of the school consisted of the aforementioned auditorium, classrooms, and a playground with a couple of basketball hoops and a few tether ball poles.  No other athletic equipment was provided.

But wait, there's more.  I do not remember attending a class with fewer than thirty other students.  One teacher was supposed to maintain discipline, teach all the classes, and provide whatever one-on-one attention students needed.  In short this school did a number of things that are supposed to be exactly the wrong way to educate students.  There were too many students in the classroom.  There was little or no "enrichment" (e.g. music, shop, athletics).  But I got an excellent education anyhow.  And I was not an anomaly.  Many of my fellow students followed the same path.  They did a number of years in Parochial School, transitioned to Public School, and did just fine.  I was a typical exemplar of students turned out by Parochial School, not an outlier.

And this is generally true.  Parochial Schools have a very good reputation in the US.  It is so good that many Parochial Schools have a large percentage of their students coming from non-Catholic families.  For many families, Parochial Schools are the only financially feasible alternative to the Public School option.  Parochial Schools have a reputation for providing a high quality, low cost alternative to the standard Public School.  And they prove that a lot of the conventional wisdom about how to fix the public school system is bunk.

I certainly enjoyed my time in Public School and felt I got a good education there too.  But it is important to remember that this school district was in a well-off suburban area.  The school system had and still has a very good reputation, but it also has more money and a more stable social environment than many public school systems.

So if many of the standard nostrums for fixing the public school system are wrong then what's right?  Here I am very disappointed with what the Bill and Melinda Gates Foundation have come up with so far.  You can read a position paper from them here:  http://www.gatesfoundation.org/postsecondaryeducation/Documents/nextgenlearning.pdf.  I didn't think much of it.  It is a mishmash of jargon, almost completely devoid of clear thinking or any kind of data driven foundation for what little it contains.

One of the ideas that the Gates Foundation and others push is Charter Schools.  Charter Schools have been around long enough so that if they did a substantially better job of educating kids we should see some clear data to support this.  But what little data I have seen indicates that Charter Schools perform about on a par with Public Schools.  Another is reducing class size.  This too has been tried a lot.  I have seen no strong evidence that this works particularly well either.  Another idea is technology in the classroom.  As a computer guy I should be all for this.  But again there is no strong evidence supporting the idea that this makes a big difference.

Parochial Schools are not much different from Charter Schools.  My Parochial School experience argues against expecting much from reduced class sizes or introducing a lot of technology.  It also argues against the great benefit of a richer experience (e.g. sports, music, labs, etc.).  Now some of you may be about to argue that I have just contradicted myself.  If Charter Schools are a lot like Parochial Schools then they should be working well.  And they should.  But the data says they don't.  And I remember seeing a "60 Minutes" (I think it was 60 Minutes) episode where they talked to a Parochial School Principal.  She said she would not be able to do even as well (her school was rated noticeably better than the surrounding Public Schools) if she had to follow the same rules and regulations as the Public School administrators did.  So what do I think works?

One of the problems with most of the analysis of what's wrong stems from looking in the wrong places.  If you are not looking in the right places the answer to your question contains a high percentage of noise and little or no pattern will emerge.  I think the wrong things are being measured to try to find the success factors.  Here's my list of success factors:

1.  The most important thing is whether a kid comes to school willing and able to learn.  Key to this is whether the kid thinks it is important to learn.  And key to this is whether the parent(s) think education is important.
2.  If we have met the first criterion, the second is a good teacher who is allowed to do her job the way she thinks it should be done.
3.  The school environment must be safe from violence and from bullying and other activities that discourage a kid that wants to learn from learning.
These are the keys.  Of lesser importance are:
*  A good and safe physical plant (e.g. no peeling paint or broken windows, lights that work, etc.)
*  Smaller classes.
*  A richer experience.
And so on.

Someone somewhere has succeeded in teaching whatever "unteachable" group you can think of.  It's not the native intelligence of the kid.  In most cases parents seek out these experiences once a program attains a good reputation.  And that is important.  Because if a kid has parents (or parent) that care how well the kid does in school then they keep on top of how the kid is doing.  This results in the kid being expected to learn.  If the kid fails the parent(s) get on him or her to improve.  In this environment most kids most of the time end up willing to learn.  And almost all kids are able to learn.  These "success story" situations also usually involve good teachers and a safe environment.  I think you can find successes that lack one or more of the remaining criteria.

How about this for an idea?  As far as I know it has never been tried.  Evaluate kids and pick out the underperforming ones.  Then look at the home environment.  Look for kids with parents who are uninvolved or have low expectations.  Try to educate the parents to be better at motivating and monitoring their kids.  Then there are homes where the parents would like to do the right thing educationally by their kids.  But they can't due to poverty, language skills, violence, etc.  Here it would be nice to provide help.  But except in the language situation this should be a job for social services, not schools.  As far as I can tell low or non-existent educational attainment of parents (e.g. parental illiteracy) is not a factor.  Kids of illiterate parents do very well even if the parents never learn to read.

With this as a foundation it is instructive to look at efforts to improve education.  Little or no effort is devoted to my most important item.  There have been many efforts devoted to item number two but they usually involve more not less interference with the teacher doing things the way she would prefer.  In fact there is a large industry dedicated to standards, tests, evaluations, etc., all of which have the effect of telling the teacher how to teach.  Efforts to address item number three are sporadic and not up to the task.  Instead most effort is devoted to the lower priority items or to items I didn't even list.

It is also important to pay attention to what ought not to be done.  One idea usually associated with liberals is to pass students whose work does not merit it.  I think this is a mistake.  If a kid is not doing fourth grade work or eighth grade work he should be flunked until he can demonstrate the proper proficiency.  Unqualified students are a distraction to both teachers and other students.  In the long run passing them along doesn't even do the kid any good.  For a short period of time his self-esteem is left undamaged.  But society and the kid eventually figure out that the kid is not a high school graduate in terms of what he is capable of, and things go rapidly downhill from there.

And there is the issue of focus.  The more things you try to do the less likely you are to do a good job of all of them.  I think schools and school districts should be focused completely on education.  In the same way that they should provide an honest evaluation of how educated a student is by not passing him along, they should not be responsible for public safety or caring for the needs of people with physical or mental problems.  If a kid is misbehaving then he should be kicked out of school and turned over to the juvenile justice or police system.  Let schools teach and let public safety organizations provide for the public safety.  Similarly, I am sympathetic to the plight of people with physical or mental handicaps.  But at some point they should become social service problems not problems to be solved by our educational systems.

Years ago the standard was to keep people with physical or mental handicaps out of sight.  This was wrong, particularly for people with mild handicaps.  The standard flipped over to "mainstreaming" everyone.  I believe this is an overreaction.  I think it is good to mainstream people with mild handicaps.  It's good for the individuals themselves.  It is also good for the other students and staff.  It broadens their experience base and makes them more tolerant, which is good.  But it is not the job of school systems to deal with all of these people, particularly those with severe problems.  This means a line needs to be drawn.  How do you decide who is mild and who is severe?  I am not confident I have the right answer.  But I do have an answer:  how much does it cost to place and maintain a particular individual in a Public School environment?

I think you set a financial threshold.  My suggestion is six times.  This is a completely arbitrary line and it may be that some other cut off would work better.  But it is the one I have come up with.  If the additional cost is no more than six times the cost of a normal student then the handicapped individual should be placed in school and become the responsibility of the school district.  If the expense will be higher then the responsibility should rest with social services.  If social services can come to an agreement with the school district and pay the school district whatever the additional cost is over that of a normal student, then it may be a good idea to place the individual in school.

And this is the usual "bright line" rule.  This may result in the school people and the social service people (or others) trying to game the system to get the individual over or under the threshold.  Bright line rules always result in these kinds of issues.  I am perfectly willing to entertain a "sliding degree of responsibility" where there is a sliding scale of financial and other responsibility for the individual.  But I think my idea is a good place to start the discussion from.

There is also an elephant in the room that I have not brought up yet.  That's politics.  Educational policy is a bigger political football than it has ever been before.  This is very bad for education.  It draws energy and money away from the current system.  In almost all cases more resources are better than fewer resources.  So whatever resources are eaten by the squabbling end up being taken away from where they are needed.  And if the fight gets hot enough a consensus may develop to starve the beast.  Certainly the time and effort that a good teacher spends dealing with paperwork, bureaucracy, and politics is time and effort that cannot be applied to teaching.  And good people don't like working in a politically charged atmosphere.  It's just not worth the aggravation.

Now let's look at Teach for America.  Teach for America is well intentioned.  It is designed to help solve a very real problem.  Our society depends heavily on what is generally called STEM:  Science, Technology, Engineering, and Mathematics.  So our educational system needs to do STEM well.  But there is a large shortage of STEM qualified teachers.  And, as I said, this is a real problem.  Teach for America attempts to address this problem by finding STEM qualified individuals, running them through a fast "teaching boot camp", and putting them into the classroom.

It is better than nothing.  But it is a "paper it over" solution rather than an effort to address the problem directly.  The direct way to address the problem is to have teachers who are STEM qualified.  How do we do this?  Money!  If we paid teachers with STEM skills more money then we would find teachers getting STEM training and we would find STEM trained people getting teaching degrees.  But that would cost too much money.  So we have Teach for America.  The way you know that Teach for America is not the right solution is by looking at retention.  Do people who get in the program stay in teaching?  No!  Large numbers of them are gone after two years or less.  And remember the job market currently sucks.

I just don't believe that finding the right solutions to the education problem is that hard a problem.  So why aren't we well on the way to solving it?  Because the fundamental problem is money.  We need to spend more money and we need to spend it more effectively.  No one wants to spend what it would take to solve the problem so it becomes a big political football.  And in a highly politicized environment what money gets spent where becomes all the more important.  And those with the most political power, not those with the best ideas, tend to win the fights.  The overall result is that more and more money and effort is invested in making and enforcing rules, and in the kind of bureaucracy that grows up in a highly politicized environment.  This leaves less and less money to actually do what works.  And so things get worse and we start another round of political fighting that eventually makes things even worse.

Let me make a final observation on unions.  There is a large group of people invested in the idea that unions are evil, wasting lots of money, and standing in the way of education reform.  In the current environment what are teachers supposed to do?  They are buffeted by every "trend du jour" and generally speaking no one cares what they think about anything.  In that environment a strong union with a mission to make teachers impossible to fire makes a lot of sense.  I think teachers unions have stood in the way of a lot of educational reform.  But it is hard to get angry at them.  There are a lot of others trying to mess up the educational system and harass teachers.  Perhaps if teachers didn't feel so much like the football in a Super Bowl game they would be willing to be more flexible.

One thing Bill Gates has come to believe is that you can tell whether a teacher is going to be a good teacher by seeing how they do in their first three years on the job.  If this is true then all you need to do is put in a "three year probation" rule.  If the teacher makes it through the first three years (assuming the evaluation procedure is a good one - not the teacher's responsibility) then it makes no sense to worry very much about how to get rid of teachers who have more than three years on the job.  They should be good teachers in almost all cases.  It is probably cheaper to carry the few "dead wood" teachers who make it past three years than to put a lot of effort into a procedure for terminating experienced teachers.  And this would have a great benefit.  The many good teachers who made it past their three year probation could relax and focus on teaching for the rest of their career.  The morale boost would more than compensate for whatever it cost to keep the dead wood.

Wednesday, September 19, 2012

50 Years of Science - part 4


This is the fourth in a series.  The first one can be found at  http://sigma5.blogspot.com/2012/07/50-years-of-science-part-1.html. Taking the Isaac Asimov book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science as it was when he wrote the book (1959 - 1960) I am examining what has changed since. For this post I am starting with the chapter Asimov titled "The Death of the Sun".

Again Asimov starts with a review of thought on the subject, starting with Aristotle.  He starts out with a general discussion of whether the sky as a whole is unchanging.  He notes several instances of changes in the sky that would have been visible to the naked eye and, therefore, noticeable to the ancients.  The Greeks either didn't notice them or decided to ignore the changes.  But other ancients did notice some of these changes.  This leads to a quite general discussion of several stellar phenomena.  He then starts moving toward a discussion of our nearest star, the Sun.  As part of this discussion he introduces the Hertzsprung-Russell diagram and the concept of the "main sequence".

The reason for this is that these ideas form the basis for understanding how stars evolve.  This, in turn, allows us to predict the life history and eventual fate of stars.  In short, large stars burn brightly and don't last very long.  Small stars burn much more dimly but last a very long time.  Our Sun is in the middle.  It is in the middle in terms of how bright it is and also in terms of how long it will last.  The H-R diagram also allows us to predict how our Sun will age.

According to this analysis our Sun is middle aged and will stay that way for several more billions of years.  Then it will become a Red Giant, a very large, very cool star.  Asimov then relates recent (relative to 1960) developments.  Stars burn Hydrogen to make Helium.  But then they can burn Helium to make Carbon.  This chain can continue so that stars can create large amounts of Oxygen and Neon.  Asimov also reports that Magnesium, Silicon, and Iron can be created in the heart of a star.  If a star explodes (e.g. in a Supernova) then these elements can be spread throughout space.  This was the start of solving the problem of where these other elements come from, since only Hydrogen, Helium, and a very small amount of Lithium are created in the Big Bang.  But it did not solve the whole mystery, because this mechanism cannot create any of the elements heavier than Iron.  Research that took place after Asimov's book came out suggests that the Supernova explosion itself creates those heavier elements.

Once most of the Hydrogen is burned the evolution of a star speeds up tremendously.  All the other stages happen very quickly compared to the billions of years the Hydrogen stage takes for a star the size of the Sun.  And once a star hits the Iron stage it quickly runs out of energy.  A star like the Sun goes from a Red Giant to a White Dwarf in the blink of an eye at that point.  Asimov then moves to the Chandrasekhar limit.  A star with a mass below the limit (1.4 times the mass of the Sun) will relatively gently settle into the role of a White Dwarf.  Those above the limit, however, explode, as the star that produced the Crab Nebula did.  That supernova explosion was observed in 1054.  But current estimates put the nebula between 5 and 8 thousand light years away.  That means the supernova actually occurred somewhere between roughly 4,000 BC and 7,000 BC.  The best guess is that it exploded about 4,300 BC.
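The conversion from distance to explosion date is just light travel time arithmetic.  Here is a minimal Python sketch of it, assuming nothing beyond the 5 to 8 thousand light year distance estimates quoted above (the numbers are illustrative, not precise measurements).

```python
# Light-travel-time arithmetic for the Crab supernova (illustrative only).
OBSERVED_YEAR = 1054  # AD, the year the supernova was seen from Earth

for distance_ly in (5_000, 8_000):
    # Light from an object N light years away left it N years before we saw it.
    explosion_year = OBSERVED_YEAR - distance_ly
    era = "BC" if explosion_year < 0 else "AD"
    print(f"{distance_ly:>5} light years  ->  exploded around {abs(explosion_year)} {era}")
```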

Asimov wraps the chapter up with the observation that White Dwarfs last tens of billions of years.  So the Sun will be a White Dwarf for much longer than it will look the way it currently does.

Missing from the discussion are Black Holes and Neutron Stars.  These existed at the time as theoretical speculation.  A few years after the book was published Astronomers concluded that Cygnus X-1, an X-ray source in the constellation Cygnus, was a black hole.  There are still no direct observations of Black Holes.  But our understanding of them has continued to improve.  Many likely Black Holes are now known.  And there is a class of Black Holes whose existence was not even suspected at the time of Asimov's book.  Astronomers now believe that many galaxies, including our own Milky Way and our nearest large neighbor galaxy, Andromeda, contain supermassive Black Holes.  These Black Holes weigh in at millions to billions of times the mass of our Sun.  It is early days in terms of our understanding of these entities.  But they seem closely bound up in the formation and evolution of galaxies.

And in 1967 something magical was found.  A radio beacon was flashing once every 1.33 seconds.  No natural source of such a bright and quickly changing entity occurred to the Astronomers who discovered it.  So they initially christened it LGM-1, the first signal from what might be Little Green Men or, more formally, space aliens.  As other sources were detected the name was changed to Pulsars.  Pulsars are Neutron Stars.  They are small enough that they can rotate once every 1.33 seconds without violating the laws of physics.  So if they have an energy source somewhere on their surface it can flash like the rotating beacon in a lighthouse.  What makes it possible to have a very small, very energetic object is the collapse of a star.

When a sufficiently massive star collapses, gravity crushes its matter so completely that the atomic nuclei end up jammed right up against each other, with no surrounding cloud of electrons to keep them far apart.  If this happens you end up with what Astronomers have come to call a Neutron Star.  Such a star would be only a few miles in diameter but it would weigh more than the Sun.  It is easy to imagine such a small object rotating in a full circle in about a second.

So why were Neutron Stars, Pulsars, and Black Holes not discovered by the time Asimov wrote his book?  The answer is that a lot of the evidence for these objects cannot be gathered from the surface of the Earth.  You have to put a satellite into orbit.  From there it becomes possible to observe the many kinds of electromagnetic radiation that are blocked by the earth's atmosphere.  Much of the early evidence for the existence of these objects, and for the data that resulted in insight into their structure, came from satellites launched in the '60s after the book was written.  Since then we have launched more sophisticated satellites that have been able to gather more and better data.  We have also improved our ability to make ground based observations.  We have learned how to tie multiple radio telescopes together.  We have even succeeded in tying multiple optical telescopes together in some cases.

Tuesday, September 11, 2012

50 Years of Science - part 3

This is the third in the series.  The first one can be found at http://sigma5.blogspot.com/2012/07/50-years-of-science-part-1.html.  Taking the Isaac Asimov book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science as it was when he wrote the book (1959 - 1960) I am examining what has changed since.  For this post I am continuing with the chapter Asimov titled "The Birth of the Universe".

In part 2 I discussed the age of the Earth.  In discussing the age of the Earth Asimov broaches the subject of "the solar paradox".  Cutting to the chase, Lord Kelvin did a calculation in the late 1800s that indicated that the Sun could be no more than 50,000 years old.  Why?  Because there was no known energy source that could keep it burning any longer.  Neither of the two main candidates, chemical burning ("it's all coal") and gravitational collapse, could provide enough energy to explain the steady output of the Sun for any longer than that.  The discovery of radioactivity in 1896 provided an alternate energy source powerful enough to save the day.  Radioactive decay could provide enough energy to keep the Sun shining at its current level for billions of years.  Over the next forty years subsequent scientific progress allowed scientists to conclude that the Earth and Sun were each about 5 billion years old, very close to the modern figure of 4.7 billion years.  (Modern cosmology posits that the Sun and all the planets, including Earth, were created at almost the same time.)

In examining the question of the age of the universe as a whole Asimov gives us a nice description of the Doppler effect.  Let's say you are driving on a road and an emergency vehicle is coming the other way.  Before it reaches you the siren will have a slightly higher than normal pitch.  After it has passed the siren will have a slightly lower pitch.  This shifting of the pitch as a result of motion is called the Doppler effect.  There are many references, including Asimov's book that can give you more detail.  But the bottom line is that this change in pitch can be used to calculate the speed of the other object.

Doppler, the physicist the phenomenon is named after, proposed in 1842 that this effect could be used to calculate the speed toward or away from the earth of celestial objects by examining the "spectrum" of these objects.  For reasons that were not well understood until Quantum Mechanics was developed around 1930, when you heat something to an appropriate temperature it will glow.  The pattern of intensities of the various colors in this glow is called the spectrum of the object.  An individual spectrum will contain features.  At some frequencies the intensity will be particularly bright (emission features) and at other frequencies the intensity will be particularly dim (absorption features).  Objects with the same composition and temperature will always have the same spectrum with the same emission and absorption features.  And the combination of the temperature and the atomic and molecular composition precisely determines the details of these spectral features.  In short, from the spectrogram of an object you can determine its precise composition and temperature.  The process may be very complicated for objects with a complex composition but that's the idea.

Note that I indicated above that an object's spectrum depends solely on its temperature and its composition.  But if the object is moving with a speed that is a noticeable percentage of the speed of light (and the amount of speed that is needed to qualify as "noticeable" keeps dropping as scientific instruments keep getting better), the spectral features will shift.  If the object is traveling toward the earth the frequency will shift higher and the wavelength will shift shorter.  Astronomical shorthand for this is "blue shift".  If the object is traveling away from the earth the frequency will shift lower and the wavelength will shift longer.  The astronomical shorthand for this is "red shift".  The amount of shift allows the relative speed to be calculated precisely.  Astronomers make very precise measurements of the spectrum of an object.  Then they identify well known features in the spectrum.  Then they calculate how far and in which direction (higher or lower) the feature has shifted.  From this information a simple calculation yields the speed at which the object is moving and whether it is moving toward or away from the earth.
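For readers who want to see the "simple calculation", here is a minimal Python sketch.  It assumes the non-relativistic approximation (speed equals the speed of light times the fractional wavelength shift), which is fine for modest speeds; the wavelength numbers below are made up purely for illustration.

```python
# Non-relativistic Doppler calculation: v ~= c * z, where
# z = (observed wavelength - rest wavelength) / rest wavelength.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def radial_velocity_km_s(rest_nm: float, observed_nm: float) -> float:
    """Positive result means the object is receding (red shift), negative means approaching."""
    z = (observed_nm - rest_nm) / rest_nm
    return C_KM_PER_S * z

# Hypothetical example: a spectral line with rest wavelength 656.3 nm observed at 658.5 nm.
print(radial_velocity_km_s(656.3, 658.5))   # ~1,005 km/s, moving away from us
```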

Now if astronomical objects moved randomly you would expect that about half would show a red shift and half would show a blue shift.  But it turns out that almost every astronomical object shows a red shift.  Almost everything is moving away from us.  An astronomer named Slipher was the first to notice this, in 1914.  Before going on let me discuss the issue of "standard candles".

How do you figure out how far away something is?  Well the simplest and most reliable method is to simply pace it off and measure it.  But what if the distance involved is too great to measure directly?  For longer distances there is a trigonometry based technique called parallax.  Again assume you are in a car.  You are driving down a straight rural road and staring sideways out the window.  This is OK because you are a passenger, not the driver.  Notice that the sections of fence near the road whiz by quickly.  But if you look out across a field at a house or barn it will move slowly as you drive along.  Finally if you look at a mountain a long ways away on the horizon it doesn't move at all.  That's the basic idea behind parallax.  You need to dress it up with trigonometry and careful measurements but if you measure the distance you travel down the road and the change in the angle of the barn or house you can calculate the exact distance it is from the road.  Taking the basic idea and applying the proper measurements and trigonometry is how astronomers can measure distances across space.  But before continuing let me take a second digression and talk about astronomical distances.

People really don't understand how big space is.  Say you get in a car and drive for an hour on a straight road at 50 miles per hour.  (I know, I know, no road is straight for that long but work with me on this).  Everyone has done something like this and it gives them some emotional idea of how far 50 miles is.  Now imagine driving at 50 miles per hour (I have picked this speed because it makes the math easier) for ten hours straight.  You have now gone 500 miles.  Now most people who are stuck in a car for ten hours straight tend to day dream a good part of the time even if they are the driver.  So even a distance of 500 miles, while intellectually comprehensible in terms of our ten hour trip at 50 miles an hour, loses a lot of its sense of concreteness.  I contend that 500 miles is about as far as people can realistically have a concrete feel for.  It is possible to get in an airplane and go thousands of miles.  But you get in the plane.  You may even look out the window for the whole trip.  But a plane ride is emotionally like using a teleporter but with a couple of hours of delay thrown in.  You don't get a real sense of the distance involved.

Now imagine a trip around the world at the equator, a distance of 25,000 miles.  In our car this would require 50 days of 10 hours per day driving.  If people tend to zone out in one 10 hour drive there is no way they are going to be paying attention every day for 10 hours for 50 days in a row.  So I contend that 25,000 miles, the circumference of the earth, is such a great distance that it is not really comprehensible.  But 25,000 miles is infinitesimal in terms of typical astronomical distances.  So all astronomical distances blur together and become "so large as to be unimaginable" in concrete terms to a person.  Scientists can do the math but the numbers are so large as to be meaningless to us.  And since we can't in any real sense comprehend these numbers we make really wild mistakes all the time.  Some numbers are really a lot bigger than other numbers.  But they are all so large that our emotions misread them and we think of them as being nearly the same size or we get it wrong as to which is really the larger and which is really the smaller.  Back to the subject at hand, namely parallax.

Parallax works well enough to be able to estimate distances within the solar system with a reasonable degree of accuracy.  The most useful "baseline" for measuring these kinds of distances is the orbit of the earth around the Sun.  It is about 100 million miles.  Compare this to the circumference of the earth at 25 thousand miles, a distance I said was too great to be emotionally comprehensible.  Well this distance is 4,000 times as great.  It seems inconceivably large.  But it is actually quite small.  And things get a little bit better.  The earth goes all the way around the Sun.  So at one point it is 100 million miles this way and six months later it is 100 million miles that way.  So the distance between the extremes is 200 million miles, a number that is twice as big.

If we want to use the parallax technique to figure out how far away something is then what we want to do is wait for the earth to be on one side of the Sun and then carefully measure the angle to the "something".  Then we wait 6 months and measure again.  We are now 200 million miles away from where we started so the angle should change a lot, right?  Well, this is where our intuition goes wrong because we are comparing these giant numbers.  The closest star to us that is not the Sun is Proxima Centauri.  Most people think it's Alpha Centauri because that's what a lot of people say.  Alpha Centauri and Proxima Centauri are very close together but Alpha Centauri is a lot brighter so people usually go with it.  But Proxima Centauri is actually a little closer.

Anyhow with this giant baseline of 200 million miles it should be a piece of cake to do the parallax thing to find out how far away it is.  And the parallax trick actually works for Proxima Centauri (and Alpha Centauri too) but just barely.  The reason is that the star nearest our own is actually a very long way away.  Let's see how long "very long" is.  To do this I am going to figure distances in "light minutes".  A light minute is the distance traveled by a photon of light in a minute.  Trust me, it's a very big number.  Now the light from the Sun takes a little over 8 minutes to get here from there.  So a hundred million miles is about 8 light minutes.  And 200 million miles is about 16 light minutes.

Now Proxima Centauri is 4.25 light years away (the distance light goes in 4.25 years).  Again this is a really big number if we put it in terms of miles.  But let's put it in terms of light minutes.  It still turns out to be a pretty big number.  Proxima Centauri is about 2.2 million light minutes away.  So to do the parallax thing to figure out how far away Proxima Centauri is we create a triangle.  One side of the triangle is 16 light minutes long.  The other two sides are 2.2 million light minutes long.  In geometry there is a concept called "similar triangles".  Using similar triangles we can shrink the whole thing down by swapping each light minute for an inch.  So imagine a triangle with one side that is 16 inches long and the other two sides 2.2 million inches long.  It turns out that the 2.2 million inch sides are each over 35 miles long.  Now to get the parallax thing to work we need to measure the tiny angle between the two 35 mile long sides.  Remember on one end they meet and on the other end they are 16 inches apart.  It is a brilliant piece of work that Astronomers have actually been able to measure that super tiny angle.
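Here is a small Python sketch that redoes the triangle arithmetic above and converts it into an angle.  The only inputs are the figures already quoted (a 16 light minute baseline and Proxima Centauri at 4.25 light years); everything else is straightforward geometry, so treat it as an illustration rather than a precise astronomical calculation.

```python
import math

MINUTES_PER_YEAR = 365.25 * 24 * 60

baseline_lmin = 16.0                        # diameter of the earth's orbit, ~16 light minutes
distance_lmin = 4.25 * MINUTES_PER_YEAR     # Proxima Centauri, ~2.2 million light minutes

# Shrink the triangle so that one light minute becomes one inch.
long_side_miles = distance_lmin / 63_360    # 63,360 inches per mile
print(f"long sides of the scaled triangle: {long_side_miles:.1f} miles each")   # ~35 miles

# For such a long skinny triangle, the angle at the far vertex is roughly
# baseline / distance radians.
angle_arcsec = math.degrees(baseline_lmin / distance_lmin) * 3600
print(f"angle to be measured: about {angle_arcsec:.1f} arcsecond")              # ~1.5 arcseconds
```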

Now let's try to do the parallax technique on a star that is twice as far.  That means that we need to measure the angle between two sides that are now 70 miles long.  Remember that they meet at one end and are separated by the same 16 inches on the other end.  Astronomers have been able to use the parallax technique to measure the distance to only a relatively small number of nearby stars.  I think you now understand why.

So if the parallax technique only works for a few very close stars what do we do about the rest?  The answer finally gets us back to the "standard candle" technique that I mentioned a long time ago.  Imagine having a 100 watt light bulb.  Now measure how bright it is from 100 yards away.  There is a standard mathematical formula that tells us exactly how bright it will be when viewed from 200 yards away or a thousand yards away.  So if we know we are looking at our 100 watt light bulb (so we know exactly how bright it is) and we can very accurately measure how bright it appears to be (called its "apparent brightness") then we can calculate how far away it is.  That's the idea behind "standard candle".  If we know how bright something is from close up, its "intrinsic" brightness, and we can measure its apparent brightness, and we know that everything is clear between it and us, then we can calculate how far away it is.
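The "standard mathematical formula" is the inverse square law: apparent brightness falls off as the square of the distance.  Here is a minimal Python sketch of it using the 100 watt light bulb example; the units and numbers are just for illustration.

```python
import math

# Inverse square law: apparent brightness b = L / (4 * pi * d^2),
# so if we know L (intrinsic) and measure b (apparent), then d = sqrt(L / (4 * pi * b)).
def distance_m(luminosity_w: float, apparent_brightness_w_per_m2: float) -> float:
    return math.sqrt(luminosity_w / (4 * math.pi * apparent_brightness_w_per_m2))

bulb_watts = 100.0
d_100_yards = 91.44                                    # 100 yards in meters
b = bulb_watts / (4 * math.pi * d_100_yards ** 2)      # brightness measured at 100 yards

print(distance_m(bulb_watts, b))        # recovers ~91.4 m (100 yards)
print(distance_m(bulb_watts, b / 4))    # a quarter of the brightness puts it twice as far, ~182.9 m
```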

Now most of space is pretty empty.  So it conforms to the "everything is clear" requirement to use the technique.  Sometimes this is not true.  There are dust clouds and other things that get in the way.  And these present real problems for some measurements scientists would like to make.  But in a lot of cases there appears to be no obstruction and in other cases scientists come up with techniques to allow them to adjust for the amount of obscuring going on.  So a lot of the time this "everything is clear" requirement is met.  That leaves the problem of knowing the intrinsic brightness of what you are looking at.

A solution to this problem is discussed by Asimov.  It involves the use of Cepheid variables.  These are a kind of variable star.  The brightness of the star varies in a predictable way.  What makes this important is that Astronomers came to understand enough about how Cepheid variables worked that they could predict the intrinsic brightness of the star based on the specifics of its pattern of variability.  Early work determined that specific types of Cepheids all had the same intrinsic brightness.  This allowed the development of a relative distance scale.  This item is twice as far away as that item, that sort of thing.  Soon a large number of relative distances were known.  But to turn the relative distance scale into an absolute distance scale it was only necessary to determine the actual distance to one Cepheid.  That was only achieved recently, when high precision measurements using the Hubble Space Telescope and other techniques became available.

At the time Asimov wrote the book only relative distances were known for sure.  Astronomers used a number of techniques to estimate the intrinsic brightness of Cepheids with more or less success.  At the time the book was written there was still a lively discussion as to what the correct value for the intrinsic brightness was.  This resulted in a number of respected Astronomers using a number of different estimates of intrinsic brightness.  As time went by scientists also determined that there were several classes of Cepheids, and that members of each class displayed a different intrinsic brightness from apparently similar members of a different class.  How to place a specific Cepheid into the right class, and the correct intrinsic brightness for each class, are now pretty much sorted out.  But bringing everything into alignment was not completed until many years after Asimov's book was written.  Astronomers were very aware that there were problems with Cepheids at the time the book was written.  But there was no better way of determining distances at the time.  And Astronomers of the time were careful to acknowledge these kinds of issues.

Also at the time Asimov wrote the book Cepheid variables were the brightest standard candle available.  But for really large distances they are too dim to work.  Since then Astronomers have developed another standard candle called a "Type 1a Supernova".  As a supernova it is way brighter than a standard star like a Cepheid so it works for much greater distances.  The details of how the intrinsic brightness of a Type 1a Supernova is worked out are all different.  But the general idea is the same.  Certain attributes that can be measured from far away allow the intrinsic brightness to be determined.  There have been problems to work through with the Type 1a Supernova as a standard candle.  But Astronomers think they have things worked out pretty well at present.  Now back to the main line of the story.

In 1929 Edwin Hubble, who had been studying Galaxies, published Hubble's Law.  Using Cepheids as standard candles Hubble had found that, if you ignored a few close-in Galaxies, it appeared that the farther away a Galaxy was the faster it was moving away from the earth.  He posited that there was a single "Hubble Constant" that was the ratio between the recession speed and the distance from the earth.  Due to the problems with the Cepheid standard candle he couldn't establish the specific value for the Hubble Constant, but he established that it appeared to be constant across the range of relative distances he could measure.
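Hubble's Law is just a proportionality: recession velocity equals the Hubble Constant times distance.  Here is a minimal Python sketch of it.  The value of the constant used below (about 70 km/s per megaparsec) is a modern-ish figure I am assuming for illustration, not a number from Hubble or from Asimov's book.

```python
# Hubble's Law: v = H0 * d.  H0 below is an assumed modern-ish value for illustration.
H0_KM_S_PER_MPC = 70.0
KM_PER_MPC = 3.086e19
SECONDS_PER_YEAR = 3.156e7

def recession_speed_km_s(distance_mpc: float) -> float:
    return H0_KM_S_PER_MPC * distance_mpc

print(recession_speed_km_s(100))    # a galaxy 100 megaparsecs away recedes at ~7,000 km/s

# A naive age for the universe: at a constant expansion rate, everything was
# together 1 / H0 ago (this ignores any change in the expansion rate over time).
hubble_time_years = (KM_PER_MPC / H0_KM_S_PER_MPC) / SECONDS_PER_YEAR
print(f"1 / H0 is roughly {hubble_time_years / 1e9:.0f} billion years")
```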

This turned out to be a very remarkable observation.  Using Hubble's Law one could run the expansion backwards to a time when everything was packed together, or project forward to the point where galaxies would be receding from each other at the speed of light.  This in turn meant that the universe had a specific age.  This idea was shocking.  Before, scientists had not spent much time thinking about the age of the universe.  They knew it was vastly older than the 6,000 or 10,000 years that biblical scholars had calculated.  Other than that, most thought, when they thought about it at all, that the universe was either of a vast unspecified age or that it had always been there in something similar to its current state.  Hubble's ideas ushered in the modern era of cosmology.

As these ideas spread, first among the Astronomical community and then to the broader scientific community, speculation soon settled down into two main competing theories.  The "steady state" theory was championed by, among others, Einstein and a British Astronomer named Fred Hoyle.  It stated that the universe had always looked pretty much as it does now.  The competing theory, which Hoyle saddled with the most ridiculous name he could think of, was called the "big bang" theory.  By the time Asimov's book was written the evidence against "steady state" had become overwhelming.  So "big bang" was winning by default.

It didn't take scientists long to note some convenient features of Quantum Mechanics in their efforts to flesh out the big bang theory.  The most relevant item was something known as the Heisenberg Uncertainty Principle.  Most simply (and I don't want to get into yet another diversion so this is all you get) the Principle said that there was a fundamental uncertainty about the universe and that the smaller the thing you were studying the more uncertain its characteristics were.  Astronomers latched on to this and posited an extremely small piece of space.  It was so small that the energy content was vastly uncertain.  This was taken as the seed out of which the whole universe would explode.  As the universe exploded it would cool (that's what gasses naturally do as they expand) and eventually the temperature would drop to what we see today and the size of the universe would grow to the size we see today.  That was roughly the state of the big bang theory at the time Asimov wrote his book.

You are probably thinking that seems inherently implausible.  Scientists slowly came to the same conclusion.  And the big bang theory has evolved considerably from the humble roots I outlined above.  The biggest change is to add something called "inflation".  The subject is complex and I again want to avoid digression.  But the basic idea is that from its early tiny seed (which may have looked greatly different than our tiny exploding point) the universe inflated to a fantastic size in a fantastically short period of time.  This may sound even weirder than the very weird original big bang theory I outlined above.  But it turns out that there is actually some evidence for inflation.  Yet again in an attempt to avoid another large diversion I will note that the most compelling of this evidence consists of the measured variations in something called the Cosmic Microwave Background and leave it at that.

Asimov does a nice job of going into Hubble's work and that of subsequent scientists up to the time he wrote the book.  Given all the uncertainties scientists supported age estimates for the universe ranging from about 11 billion years to 42 billion years.  Since then the uncertainties have been greatly reduced and the consensus number today is 13.7 billion years.

Since then another startling development has transpired.  It looks like the Hubble Constant is not constant.  There is evidence that the rate of expansion of the universe has changed over time.  There have also been related developments in scientists' views on the constitution of the universe.  At the time Asimov wrote the book what Astronomers could see were bright things like stars.  Generally this is referred to as Baryonic matter.  A couple of decades ago Astronomers noticed a problem.  They could roughly weigh a galaxy by doing some calculations based on the light the galaxy generated.  They could then use Newton's theory of gravitation to predict how fast portions of the galaxy should rotate.  Everything came out wrong.  Eventually Astronomers decided that there was a large amount of what they dubbed "dark matter" surrounding galaxies.  They still have no idea what dark matter is but there seems to be a lot of it.  The recent measurements that have led to the idea that the Hubble Constant is not constant have also led scientists to posit something called "dark energy".  They know even less about dark energy than they do about dark matter.  But their current thinking is that the universe consists of about 4% Baryonic matter, 26% dark matter, and 70% dark energy.  So scientists in 1960 knew about only 4% of the mass that current day scientists think the universe actually contains.

And this leads me to my final subject for this post.  Scientists in 1960 envisioned three basic fates for the universe.  The first option was that the universe would explode (big bang), expand for a while, then collapse back on itself.  This was dubbed the "cyclic universe" theory.  At the other extreme the universe would explode then keep on growing.  It would get bigger and bigger.  Everything would spread farther and farther apart until each component of the universe was an island so far away from any other component as to be completely isolated.  The third option was the happy medium one.  The universe would explode and expansion would gradually slow down due to gravity, but everything would be on a balance point.  It wouldn't expand forever but it wouldn't collapse back either.  Which would be the fate of the universe?  Well it all depended on the density of the universe.  If it was too dense it would expand then collapse.  If it was not dense enough then it would expand forever.  And if the density was just right it would end up on the balance point.
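The "just right" density is what cosmologists call the critical density, and it follows from the Hubble Constant and Newton's gravitational constant: rho_c = 3 H0^2 / (8 pi G).  Here is a small Python sketch of that calculation, again assuming a modern-ish H0 of about 70 km/s per megaparsec (my assumption for illustration, not a figure from the book).

```python
import math

# Critical density of the universe: rho_c = 3 * H0^2 / (8 * pi * G).
G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.086e22      # 70 km/s per megaparsec converted to 1/s

rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"critical density: {rho_c:.1e} kg per cubic meter")            # ~9e-27 kg/m^3
print(f"about {rho_c / 1.67e-27:.1f} hydrogen atoms per cubic meter")
```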

In 1960 these calculations had been done and it appeared that the universe had exactly the right density to end up at the balance point.  But scientists were completely at a loss as to why the density of the universe was exactly right.  Even a little too much or a little too little would tip the universe one way or the other.  Since then we have had this whole dark matter / dark energy thing going.  Factoring everything in, Baryonic matter, dark matter, dark energy, the universe seems to have exactly the correct density.  But current measurements indicate that the density is so ridiculously close to exactly the correct amount that scientists are even more puzzled by the whole thing than they were in 1960.

And that gets us to the end of the chapter.