Tuesday, March 29, 2016

50 Years of Science - part 6

This is the sixth in a series.  The first one can be found at  http://sigma5.blogspot.com/2012/07/50-years-of-science-part-1.html. Part 2 can be found in the August 2012 section of this blog.  Parts 3 and 4 can be found in the September 2012 section. Part 5 can be found in the March 2016 section.  I take the Isaac Asimov book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science as it was when he wrote the book (1959 - 1960).  More than 50 years have now passed but I am going to stick with the original title anyhow even though it is now slightly inaccurate.  In these posts I am reviewing what he reported and examining what has changed since.  For this post I am starting with the chapter Asimov titled "The Birth of the Solar System" and then moving to "Of Shape and Size".  Both chapters are in his "The Earth" section.

The first chapter under discussion doesn't even mention the Earth.  It reviews various theories about the formation of the Solar System.  If you want to know what science looks like when Science has only a vague idea of what it is talking about this is a good chapter.  This chapter was written at the dawn of the space age.  I have talked about the best device for studying the heavens at that time, the 200" Hale telescope, elsewhere.  Frankly it was not up to the task of studying the Solar System in the detail necessary to understand it the way we do now.  Scientists of that time knew the size and orbital parameters of all the planets.  Asimov lists some very important observations that scientists had picked up on by that time.

Nearly all the planets had circular orbits.  (Pluto, the only exception, had not yet been demoted from planet-hood back then.)  All the planets orbited in a counterclockwise direction (when looking down from a great height above the Earth's North Pole).  Nearly all the planets and nearly all the moons known at the time rotated in a counterclockwise direction around axes that were roughly vertical.  (There were a few exceptions but they could be explained away as "exceptions that proved the rule".)  And moving from one planet to the next (again excepting Pluto), the sizes of adjacent orbits fell in or near simple, pleasing ratios.

There seemed to be a system to the Solar System.  But scientists were pretty much stumped as to what that system was.  The best theory at that time was one by Weizsacker.  His 1944 theory had serious problems but it was the best anyone had come up with.  So what, in the most general sense, was the problem?

There were two problems.  The obvious one is the one I have already alluded to.  They didn't have much data.  The Hale telescope was better than nothing but not that good at making the necessary precision measurements.  There were a few satellites in orbit but none of them had a telescope or other good instruments for studying the Solar System.  There may have been probes launched toward Venus or Mars (I didn't check) but, if so, they were very primitive fly-by missions.  And pretty much nothing was known about the gas giant planets.  The first great exploration missions to them, Voyager I and II, would not even launch until 1977.  The same was true for missions to the rocky inner planets (Mercury, Venus, and Mars).  And the greatest instrument of them all, the Hubble Space Telescope, did not launch until 1990 and it took a couple of years more to fix it.  So scientists lacked data.

They also lacked analytical tools.  There were some computers around in 1960 but they were small and slow by modern measures, and also few and far between.  I personally own three desktop computers.  Any one of them was more powerful in terms of speed, RAM, and disk space than all the computers in existence in 1960.  This meant that scientists were stuck with literally not much more than the back of an envelope when it came to thoroughly investigating a theory or trying to assess its ramifications.

A good thing to use as an illustration is the orbital spacing I mentioned above.  The spacing of the orbits of the planets was known to a medium degree of accuracy.  But the idea of "resonance" was not well understood.  Imagine two bodies orbiting the same larger body.  And imagine their orbits are such that one takes exactly twice as long as the other.  This means that over the course of one of the slower orbits each and every combination of relative position will happen exactly twice because the faster body will have orbited twice around and both bodies will end up exactly where they started with respect to each other.  Now imagine that there is a certain configuration where the bodies tug on each other, pulling each other in one direction.  It turns out because of the complete symmetry of the situation that there is another configuration where they pull in exactly the opposite direction.  So over the course of one slow orbit everything exactly balances out.  This is called a 2:1 resonance.

Now assume the resonance is slightly more than or less than 2:1.  Then a net pull can develop over the course of a complete slow orbit that slows one planet down a little or speeds it up a little.  This means that over time the planets will be pulled a little closer together or a little farther apart.  In other words, in our 2:1 resonance case the orbits are stable (they don't change at all over time) but in the other case they evolve.  Now there are other resonances like 3:2 or 4:3 or whatever.  With lots of cheap computer power astronomers are now able to run sophisticated, long-running simulations to discover exactly how things would evolve.  The current state of the art now permits very complicated situations to be thoroughly analyzed and understood.
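A toy calculation (in Python, with made-up periods; not a real orbital simulation) shows why the exact 2:1 case is special: after one full orbit of the slow body the pair is back in exactly its starting configuration, while an off-resonance pair ends up rotated, and that leftover offset is what accumulates orbit after orbit.

```python
import math

def relative_angle_after_slow_orbit(period_fast, period_slow):
    """Relative angle (radians) between two orbiting bodies after exactly
    one orbit of the slower one, measured from their starting line-up."""
    theta_fast = 2 * math.pi * period_slow / period_fast  # angle swept by the fast body
    theta_slow = 2 * math.pi                              # one full orbit by definition
    return (theta_fast - theta_slow) % (2 * math.pi)

# Exact 2:1 resonance: the configuration repeats perfectly, so the tugs cancel
print(relative_angle_after_slow_orbit(1.0, 2.0))   # 0.0

# Slightly off resonance: a residual offset remains after every slow orbit
print(relative_angle_after_slow_orbit(1.0, 2.1))
```

Real resonance analysis has to track the actual forces over millions of orbits, which is exactly the part that demands cheap computer power.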

And with this ability astronomers found that there were only a few stable resonances.  The rest of the time the planets (or moons) get pulled around, often in complicated ways that could not have been predicted by looking at the equations and doing some simple analysis.  The simulations showed that they kept getting pulled around until they hit a stable resonance point, a point that may only have been arrived at after the simulation had covered millions of simulated years.  And guess what?  The current orbits of the planets are predicted by this resonance point analysis.  It was literally impossible to do this kind of analysis before abundant computer power was available cheaply.

The other problem is data.  As we sent space missions like Voyager out we were able to gather tons of data that was much more accurate and complete than that available in 1960.  This was the information needed to do the resonance point analysis with enough accuracy to give meaningful results.  It was literally impossible to make the kinds of detailed calculations necessary unless the parameters put into the simulation were known to a very high degree of accuracy.  Those highly accurate values were not known until we had sent spacecraft out exploring.

And this led directly to one of the things that astronomers got wrong at the time.  That was the question of the origin of the asteroid belt.  The asteroid belt (there are actually several but I am going to concentrate on the main one) consists of a bunch of sub-planet-sized rocks.  The leading theory of the time was that something had torn up a small planet.  We now know that the asteroid belt is a side effect of these resonances.  But the reason we now know this is because we have a lot more data and the data is of much higher quality.  We can also simulate the creation or destruction of a single larger body.  The simulations can't be made to produce the outcome we now see.  But a simulation of a bunch of rocks shows them drifting into the area now occupied by the asteroid belt then getting stuck there.

It wouldn't work if it was just one rock of whatever size.  A single rock would be pulled either toward Jupiter or toward Saturn.  But a flock of rocks can be stable over long periods of time.  At any one time some rocks are pulled in and others are pulled out.  But on average and over time, they just stay within the band that is the belt.  The average can maintain a behavior (stability) that no individual component is capable of.  Individually their orbits are all slightly unstable but this leads to stability at the group level. 

And now we have the instruments to study the individual asteroids in the asteroid belt in considerable detail.  NASA has recently inserted a space probe directly into the middle of things.  The Dawn mission put a spacecraft into orbit around Vesta, a large asteroid.  After studying Vesta for several months the probe was moved to Ceres, the largest asteroid.  Dawn has returned a massive amount of data about the asteroid belt to supplement what earlier probes discovered.

A couple of decades after 1960 scientists thought they had a good handle on the formation of the Solar System.  The idea was that the Sun condensed out of a cloud of gas.  This happened precisely 4.567 billion years ago.  (They have good reason to believe they know the Sun's age that accurately.)  They also think the rest of the Solar System formed very quickly and a very short time later.  It only took a hundred million years, give or take.  They are very certain that it was very quick but exactly how quick is not nailed down very well.

And they had a theory which sounded very good about why the various planets ended up where they did and with the compositions they have.  The theory was that the planets formed in roughly the locations you now see them.  The heat of the Sun's radiation was enough to blow the gas out of the inner solar system and into the outer solar system.  So you had rocky planets (Mercury, Venus, Earth, Mars) in the inner solar system and gas giants (Jupiter, Saturn, Uranus, Neptune) in the outer solar system.  Pluto was assumed to be an asteroid-like thing that got knocked around until it ended up where it ended up.  This theory sounded reasonable to everybody and seemed to work very well.

Then it became possible to discover exo-planets, planets orbiting some star other than our Sun.  The Kepler spacecraft has found literally thousands of them.  And the solar systems around these other suns don't look at all like our solar system does.  There are gas giants in the inner solar system all over the place.  Lots of gas giants have been found with orbits that are smaller even than Mercury's.  What astronomers now know for sure is that they don't know.

The common theory for the moment is a hybrid one.  Planets formed in their traditional locations.  Rocky planets formed close in (they are hard to see if they are orbiting another star so it is no surprise that very few have been discovered).  Gas giants formed further out.  Then the gas giants migrated (see the discussion of resonance above) into the inner solar system of these other stars.  In this theory the fate of the rocky inner planets is unknown.  But frankly this is a theory like the ones discussed in this chapter of Asimov's book.  It has problems but it is the best scientists currently have.  This means scientists expect the theory to undergo drastic modification or even be discarded completely for a quite different one.

"Of Shape and Size" starts with a discussion of the shape of the Earth.  It has been known to be roughly spherical for several hundred years now.  Some noticed evidence supporting the idea of a spherical shape much further back, but for a long time their evidence did not carry the day.  By Newton's time (roughly 350 years ago) it was generally accepted that the Earth was spherical in shape.  But Newton calculated that gravitational effects should distort it into an oblate spheroid.  The French in the 1700s tried and eventually succeeded in confirming Newton's idea.  The best number for exactly how far out of round the Earth was in 1960 was 26.7 miles.  That is not far off the current number.

We now have much more accurate ways of measuring distance.  So we can very accurately measure the distance from a fixed point on the Earth to a satellite.  A bunch of these measurements yields a very accurate description of the exact shape of the Earth.  It is an oblate spheroid with a number of lumps and bumps on it.  The actual shape, even after you smooth out mountains and oceans, is very complicated and I am not going to go into it.  And, of course, we have turned the whole "satellite distance" thing around to create the GPS system.  GPS satellites need, among other things, a mathematical model of the shape of the Earth.  They use a moderately sophisticated one that works well enough to keep our navigation systems on track almost all of the time.
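As a concrete illustration (a sketch, not GPS-grade code), here is the standard geocentric-radius formula applied to the WGS-84 reference ellipsoid, the smoothed-Earth model the GPS system is built around.  The equator-to-pole difference it produces lines up nicely with the roughly 26.7 mile out-of-round figure quoted above.

```python
import math

# WGS-84 reference ellipsoid radii (published values)
A = 6378137.0          # equatorial radius, meters
B = 6356752.314245     # polar radius, meters

def geocentric_radius(lat_deg):
    """Distance from Earth's center to the ellipsoid surface at a given
    geodetic latitude, in meters (standard ellipsoid radius formula)."""
    phi = math.radians(lat_deg)
    c, s = math.cos(phi), math.sin(phi)
    return math.sqrt(((A * A * c) ** 2 + (B * B * s) ** 2) /
                     ((A * c) ** 2 + (B * s) ** 2))

# Out-of-roundness expressed as a difference in *diameters*, in miles
out_of_round = 2 * (geocentric_radius(0) - geocentric_radius(90)) / 1609.344
print(round(out_of_round, 1))  # about 26.6 miles
```

The lumps and bumps are handled separately, as corrections layered on top of this smooth ellipsoid.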

One of the things the French effort brought out, Asimov tells us, is the fact that at the time there was no agreed upon standard of length.  Everybody knew approximately how long a yard was but no one knew precisely how long it was.  This led to the creation of the "Meter" (French spelling:  Metre).  It was the distance between two very precisely marked lines on a specific piece of metal.  Eventually the "Metric standard" was adopted around the world.  Now even the Yard is defined in terms of the Meter.  An "Inch" is now exactly 2.54 hundredths of a Meter, which makes a Meter about 39.37 inches.  A "Yard" is 36 inches.  It's clumsy but it works.
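In code form the modern chain of definitions is almost embarrassingly simple.  The inch has been fixed at exactly 2.54 centimeters since 1959, and everything else in the customary system hangs off of it:

```python
# The customary units are now just metric units in disguise.
INCH = 0.0254        # meters, exact by definition (since 1959)
YARD = 36 * INCH     # 0.9144 meters exactly
MILE = 1760 * YARD   # 1609.344 meters exactly

print(round(YARD, 4))      # 0.9144
print(round(MILE, 3))      # 1609.344
print(round(1 / INCH, 2))  # 39.37 -- inches in a meter
```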

And this "two marks on a piece of metal" definition of the Meter worked well for more than a hundred years.  But scientists kept getting better and better at accurately measuring distances.  Soon a more precise specification was required.  In 1960 the Meter was redefined as a specific number of wavelengths of a certain kind of light as measured under certain very specific conditions.  Then the laser made it possible to measure the properties of light very precisely.  And Einstein said "the speed of light is always and everywhere the same".  In 1983 scientists took advantage of this to define the Meter as the distance light travels in a vacuum in exactly 1/299,792,458 of a second.  Now a properly equipped laboratory can measure a Meter far more accurately than was possible at any time during the "Meter bar" era.

And this idea of very precisely specifying all the basic units like those of time, weight (actually mass), etc. caught on.  The French developed an entire "Metric" system with seconds (a carry over from the old system), Kilograms (a replacement for the pound), etc.  Now there is a complex system called the "International System of Units".  It is abbreviated as SI based on the French terminology.  It also includes things like Volts, Watts, Ohms, etc. for electricity, Joules, Newtons, etc. for forces and work (to replace things like "pounds force", horsepower, etc.), Celsius (originally Centigrade - to replace Fahrenheit degrees of temperature), and so on.

Returning to the problem of the shape of the earth:  A trick used then and still in use now was to observe a pendulum.  In this case it was used to accurately measure gravity.  If gravity was stronger than normal the pendulum would swing too fast.  If gravity was weaker than normal the pendulum would swing too slow.  This made it possible to measure and map "gravitational anomalies".  We are using instruments that can do the job far more accurately now but the mapping of gravitational anomalies is a booming business these days.  Geologists can tell a lot from gravitational anomalies (e.g. where there's oil) but there are numerous other applications I am going to skip getting into.
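For small swings the period of a pendulum depends only on its length and on local gravity, so measuring the period lets you back out gravity.  A sketch of the arithmetic (the numbers are illustrative):

```python
import math

def local_gravity(length_m, period_s):
    """Infer local gravitational acceleration from a simple pendulum.
    Small-swing pendulum law: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2*L/T^2."""
    return 4 * math.pi ** 2 * length_m / period_s ** 2

# A 1-meter pendulum with a measured 2.006-second period
print(round(local_gravity(1.0, 2.006), 2))  # about 9.81 m/s^2

# The same pendulum swinging slightly fast means gravity is slightly stronger
print(local_gravity(1.0, 2.000) > local_gravity(1.0, 2.006))  # True
```

Tiny changes in the measured period translate directly into a map of gravitational anomalies.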

Asimov ends this particular portion of the discussion by noting that prior to 1960 the distance between New York and London was only known to within plus or minus a mile.  The techniques I mentioned above (measuring the locations of satellites) were just coming into use as a "by hand" version of GPS.  And at the time a lot of the results of this procedure were classified.  Why?

After the USSR fell in 1991 it turned out that popular maps issued by the Communists showed the locations of their major cities incorrectly.  A typical "error" was, say, 25 miles.  It was not that they were bad at making accurate maps.  It was thought instead that they had purposely introduced the errors as an attempt to throw off the aim of western ICBMs.  Of course, the US had long since switched to the "GPS by hand" method described above for deciding where to point its ICBMs.

Asimov then moves on to related problems.  If you know the precise shape of the Earth you can accurately calculate its volume.  Then, if you know its weight (or, more correctly, its mass) you can calculate its density.  But the problem is figuring out its weight.  And this is a good time to explain why scientists use mass instead of weight.

If you stand on a scale what is actually being measured is force.  A certain amount of force bends a spring a certain amount and that can be used to move a needle a certain distance.  But it is the force that is being measured.  And the force depends on how strongly gravity is pulling.  Scientists wanted to get gravity out of the process.  So they decided that matter has an inherent property called "mass".  The force generated in a specific gravitational field depends on the mass and on the strength of the field.  This let scientists split things into a question of the amount of mass, an amount that is independent of what gravity is or isn't doing, and gravitational force, something that is independent of mass and only depends on what is happening with gravity.

If you are standing still on the surface of the earth then weight and mass can seem like pretty much the same thing.  But let's say you are in a car and you haven't fastened your seat belt and your car crashes into a brick wall.  Lots of force is involved and it is likely to get you killed if you aren't extremely lucky.  But this force has nothing to do with gravity.  It has to do with two things.  One of them is how fast you are slammed to a stop (very fast).  The other is your mass.  Remember, gravity is not part of the process so "weight" is irrelevant.  But mass is mass is mass.  It can be accelerated by the force of gravity, which varies depending on altitude, gravitational anomalies, etc.  Or by a car being forced to come to a stop extremely quickly using a process that doesn't involve gravity at all.  By going with mass, which is the same no matter what else is going on (I'm ignoring relativity here), scientists can plug the right number for mass on the one hand and force on the other hand into their calculations and end up with the right result.
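The split is easy to see in Newton's second law, f = m * a: the same mass shows up whether the acceleration comes from gravity or from a brick wall.  A sketch with illustrative numbers:

```python
def force(mass_kg, acceleration_ms2):
    """Newton's second law: force (newtons) = mass * acceleration."""
    return mass_kg * acceleration_ms2

driver_mass = 80.0  # kg -- the same number everywhere in the universe

# Weight is just the gravitational force on that mass, so it varies
# with the local field strength...
print(round(force(driver_mass, 9.81), 1))   # 784.8 N on Earth
print(round(force(driver_mass, 1.62), 1))   # 129.6 N on the Moon

# ...but crash forces don't involve gravity at all.  Stopping from
# 15 m/s in a tenth of a second is a deceleration of 150 m/s^2:
print(force(driver_mass, 150.0))  # 12000.0 N -- only mass matters here
```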

Back to the mass (or, for civilians, weight) of the earth.  The problem is that gravity is everywhere.  How do you get outside it so you can measure it?  Newton came up with a formula that looked helpful:  f = (G * m-1 * m-2) / d**2.  If you knew the value of "f" (a force) and "d" (a distance), and if you knew the value of "G" (the "gravitational constant") and the value of m-1 (the mass of one object, say the moon), you could calculate the value of m-2 (the mass of another object, say the earth).  This does not look promising.  We don't seem to know the value of several of those things.  But the formula applies everywhere.  So let's go into the laboratory.  Here we can measure force ("f") using a spring scale.  We can use a ruler to measure distance ("d").  And we can just weigh m-1 and m-2 and use that to calculate the mass of each.  That leaves just "G".  But the formula then lets us calculate its value.  The problem is that "G" turns out to be a very small number.  There is so little gravitational force to measure between two normal objects you can find in a laboratory that it seems impossible to do so.
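Just how small is easy to check once you have a modern value of G (about 6.674e-11 in SI units; Cavendish, of course, was working the problem in the other direction).  A sketch:

```python
G = 6.674e-11  # gravitational constant, N * m^2 / kg^2 (modern measured value)

def gravitational_force(m1_kg, m2_kg, d_m):
    """Newton's law of gravitation: f = G * m1 * m2 / d^2."""
    return G * m1_kg * m2_kg / d_m ** 2

# Two 10 kg laboratory balls with centers 20 cm apart...
print(gravitational_force(10.0, 10.0, 0.2))  # about 1.7e-07 newtons

# ...versus the Earth pulling on one of those balls (its "weight")
print(round(gravitational_force(10.0, 5.97e24, 6.371e6), 1))  # about 98.2 newtons
```

A force of a ten-millionth of a newton between the lab balls, swamped by a pull a billion times stronger from the planet underneath: that is the measurement problem Cavendish faced.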

The first one to make a serious and successful run at the problem was Henry Cavendish.  If we take a thin wire that is say a foot long and fasten one end to the ceiling we can twist the other end and measure the amount of force involved to twist it say 30 degrees.  It's not much but if we use a sensitive spring balance we can measure it.  Now it turns out that if we instead use a 30 foot long piece of the same wire it takes only a thirtieth as much force to twist it the same 30 degrees.  That's the basic idea.

Cavendish took a long piece of very fine wire that was very easy to twist and performed the appropriate measurements on shorter pieces so he could calculate the force necessary to bend it through a relatively large angle like 30 degrees.  Then he put a mirror on it near the bottom and bounced a light off of it from a long ways away (say 50 feet).  This allowed him to measure very small changes in twist.  Then he put a fairly heavy ball on each end of a rod and hooked the rod to the end of the wire.  He made the balls as heavy as he could get away with and he made the rod as long as he could get away with.  By connecting the center of the rod to the wire he could balance everything so that the wire would hold it all up.

Then he took two really big weights.  They could be very large because the thin wire did not need to hold them up.  They could sit on heavy carts on the floor of the laboratory.  He brought each ball very close to one of the hanging balls.  One heavy ball was on the near side of one hanging ball.  The other heavy ball was on the far side of the other hanging ball.  He brought them very close but did not let them touch.  There should be a small gravitational pull between the heavy ball on the floor and its matching ball hanging on the wire.  And there should be a similar gravitational pull pulling in the same direction in the case of the other pair of balls, and this should cause the wire to twist.  It did, by a very small amount.  But it was enough for Cavendish to measure it and to come up with an accurate value for "G".

As another side note:  The University of Washington has been at the forefront of doing these kinds of Cavendish experiments for some time now.  Things get very complicated when you are trying to measure "G" very accurately.  But they have found ways to overcome these complexities.  They have succeeded in measuring "G" more accurately than anyone else, even themselves in previous experiments, several times now.

So if we know "G" don't we still have a problem?  At this point we know neither m-1 nor m-2 so aren't we still in a pickle?  Theoretically yes but actually no.  What if we put a hundred-pound satellite into circular orbit around the earth?  A little calculus (which I am going to skip) tells us what "f" must be.  And we can measure "d".  So that leaves our two "m"s.  But not really.  We can calculate "m-1 * m-2" because we have all the other values in the formula.  But we also know m-1.  It's the mass equivalent of a hundred pounds.  And that leaves only m-2, the mass of the earth, as an unknown.  Plugging all the other numbers in gives us the value of m-2.  If we know the mass of the earth we can go through the same process and get the mass of the moon.  We can also use the same process to get the mass of the Sun.  To get a rough number we just ignore the moon and the other planets.  We need to make adjustments for each celestial body's effect to get a more accurate value.
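Here is the satellite version of that bootstrap in miniature.  For a circular orbit, setting Newton's gravity formula equal to the centripetal force and doing a little algebra gives M = 4*pi^2*r^3 / (G*T^2), where r is the orbit radius and T the orbital period.  The orbit below (roughly ISS-like) is illustrative:

```python
import math

G = 6.674e-11  # gravitational constant, N * m^2 / kg^2

def central_mass(orbit_radius_m, period_s):
    """Mass of the body being orbited, from one of its satellites.
    Circular-orbit balance:  G*M*m/r^2 = m*v^2/r  =>  M = 4*pi^2*r^3/(G*T^2).
    Note that the satellite's own mass m cancels out entirely."""
    return 4 * math.pi ** 2 * orbit_radius_m ** 3 / (G * period_s ** 2)

# An ISS-like orbit: roughly 400 km up (r ~ 6771 km), one lap every ~92.4 minutes
earth_mass = central_mass(6.771e6, 92.4 * 60)
print(f"{earth_mass:.2e}")  # close to the accepted ~5.97e24 kg
```

The same function with the Earth-Sun distance and a one-year period would hand back the mass of the Sun, which is the bootstrapping idea in action.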

The adjustments can get complicated but astronomers have figured out how to do it so I am going to leave it there.  And we can keep going.  With the mass of the Sun we can calculate the mass of Jupiter or Saturn or, . . .  It's just a matter of using the basic process then applying the necessary adjustments.  The math is complex if you want to get an accurate answer but all we need to know is that it can be done.  We can use the mass of one celestial body to "bootstrap" us to the mass of other celestial bodies.  These techniques certainly work for the planets.  With asteroids there are so many bodies close at hand that in most cases only a rough number can be calculated.  This is also true in some other "many body" problems.  But as computer power increases more and more complex situations can be handled.  Back to Asimov.

He gives us the answer for the density of the Earth.  It is 5.5 times as dense as water, on average.  If we didn't already know, this would allow us to conclude that the Earth is not composed exclusively of pure water.  Okay.  Is it of uniform density?  The answer to that question was already known in 1960.  The answer is NO.  How did we know this back in 1960?  From earthquakes.  And that's Asimov's segue into the next chapter.  And that's my cue to end this post.
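The density arithmetic itself fits in a few lines.  Treating the Earth as a sphere of its mean radius (good enough for this purpose):

```python
import math

EARTH_MASS = 5.97e24      # kg
EARTH_RADIUS = 6.371e6    # mean radius, meters
WATER_DENSITY = 1000.0    # kg per cubic meter

volume = (4 / 3) * math.pi * EARTH_RADIUS ** 3   # sphere volume
density = EARTH_MASS / volume                    # average density, kg/m^3
print(round(density / WATER_DENSITY, 1))         # 5.5 -- Asimov's figure
```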

Sunday, March 20, 2016

50 Years of Science - part 5

This is the fifth in a series.  The first one can be found at  http://sigma5.blogspot.com/2012/07/50-years-of-science-part-1.html. Part 2 can be found in the August 2012 section of this blog.  Parts 3 and 4 can be found in the September 2012 section.  Taking the Isaac Asimov book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science as it was when he wrote the book (1959 - 1960), more than 50 years have now passed but I am going to stick with the original title anyway.  In these posts I am reviewing what he reported and examining what has changed since. For this post I am starting with the chapter Asimov titled "The Windows of the Universe".  This is the last chapter in his "The Universe" section.

Asimov starts the chapter with yet another reference to the 200" Hale telescope situated on Mt. Palomar in California.  That gives me a chance to digress into some telescope basics.  The telescope was invented about 1609 and popularized by Galileo.  Before that astronomical observations were made by eye.  About 3,500 stars are visible in the northern hemisphere with the naked eye.  We now know that this is literally a drop in the ocean compared to the actual number of stars in the sky.  Even in this pre-telescope period, it turns out, what was most important to astronomers was the ability to measure angles as accurately as possible.

To address this problem people devised instruments.  These included the quadrant, octant, and later the sextant commonly used by sailors for hundreds of years.  This process culminated with the efforts of Tycho Brahe.  He developed the most precise instruments ever devised to make human eye angle measurements.  He died in 1601 just before the telescope was invented.  His super-precise (for the time) measurements uncovered problems with the Ptolemaic astronomical system that had been in use for centuries.  But before people could digest this and decide on an appropriate response the observations of Galileo started becoming known and they really threw a monkey wrench into the works.  This diverted attention from what Brahe had found but the Brahe measurements ultimately bolstered the anti-Ptolemy side of the argument.  Back to telescopes.

So what's the point?  What does the telescope bring to the table?  The first and most obvious thing a telescope does is magnify things.  This instantly makes it possible to measure angles far more accurately than was possible with the naked eye.  It also makes it possible to view things that are too small to see with just the naked eye.  Galileo discovered the "mountains of the moon".  He observed shadows cast by what appeared to be mountains when observing the edge between the bright and dark parts of the face of the moon.  These features are too small to be seen reliably with the naked eye.  He also was able to make out moons around Jupiter.  Again, these are too small to see with the naked eye.  Galileo saw a lot more things that upset the traditional authorities but that's enough for my purposes.

But being able to see the moons of Jupiter leads to another characteristic of telescopes.  They can make dim things brighter.  It is hard to see the moons of Jupiter not only because they are small but because they are dim.  Galileo was able to easily make them out because his telescope made them brighter.  This is generally referred to as "light gathering power".  The previous property (making things bigger) is generally referred to as "magnification".  And these are the two primary characteristics that the telescope brings to the table.
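Light gathering power is easy to put on a quantitative footing: it scales with the area of the main lens or mirror, and so with the square of its diameter.  A quick sketch (the pupil size is an illustrative round number):

```python
def light_gathering_ratio(diameter_a, diameter_b):
    """How much more light aperture A collects than aperture B.
    Collected light scales with area, i.e. with diameter squared."""
    return (diameter_a / diameter_b) ** 2

# The 200-inch Hale mirror versus the earlier 100-inch mirror
print(light_gathering_ratio(200, 100))  # 4.0 -- four times the light, in theory

# A modest 2-inch telescope versus a roughly 0.3-inch dark-adapted pupil
print(light_gathering_ratio(2, 0.3))    # dozens of times the light of the eye
```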

Now let's take a quick look at telescope design.  The telescope we are most familiar with is the sailor's telescope prominently on display in swashbuckler movies.  It consists of a tube with lenses at each end.  It does both of the things associated with telescopes.  The lens at the front is bigger than the human eye and gathers more light, making the image brighter.  The two lenses work together to magnify the image.  This makes it easy to see the other guy's ship in detail when it is many miles away on the horizon.  And this is the design Galileo used.  But it has a problem.

Lenses work because the material they are made of has a different index of refraction than the material around them.  All you need to know is that it is a property of things and that it is easily measured.  A vacuum has an index of refraction.  Air has a slightly different index of refraction.  Water has a still different index of refraction as does the glass telescope lenses are made of.  As light passes from material with one index of refraction to material with a different index it bends.  Clever design allows this idea to be turned into a telescope.  So what's the problem?

It turns out that the index of refraction depends on frequency.  So lens glass bends red light through a different angle than it bends blue light.  This isn't much of a problem with a sailor's telescope but it soon became a big problem with the large very precise telescopes that people built to observe the heavens with.  This problem is called "chromatic aberration".  It affects any optical device that passes light through lenses and processes more than one frequency of light.  But there is a solution.  And the man that found it was Isaac Newton.  Talk about genius.
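Snell's law makes the problem easy to see in numbers.  Glass has a slightly higher index of refraction for blue light than for red, so the two colors refract through slightly different angles and come to a focus in slightly different places.  A sketch (the indices are illustrative round numbers for a typical crown glass):

```python
import math

def refraction_angle(n_in, n_out, incidence_deg):
    """Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)."""
    theta_in = math.radians(incidence_deg)
    return math.degrees(math.asin(n_in * math.sin(theta_in) / n_out))

n_red, n_blue = 1.514, 1.528   # illustrative indices for red vs. blue light

angle_red = refraction_angle(1.0, n_red, 30.0)
angle_blue = refraction_angle(1.0, n_blue, 30.0)
print(round(angle_red, 2), round(angle_blue, 2))
print(angle_blue < angle_red)  # True: blue bends harder, so the colors separate
```

A fraction of a degree of difference sounds small, but in a long precise telescope it smears a white star into a little rainbow.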

By now the general idea should be obvious.  Don't pass light through lenses.  But how could a telescope be made without lenses?  Mirrors also change the direction light travels.  Curved mirrors can bend light in just the way necessary to make a telescope.  And that's what Newton (yes -- same guy) did.  He put a big for the time (6") mirror at the bottom of a tube.  Then he put a small (compared to the big mirror) flat mirror near the top of the tube.  This mirror was angled at 45 degrees so that the light could come out a hole in the side of the telescope.  That's where he put his eye.  The eye contains a lens but the amount of chromatic aberration is still much smaller than with the old design (called a refracting telescope).  This general class of designs is called a reflecting telescope as light reflects off the mirror.  Pretty much all professional astronomical telescopes now use this reflecting design but with a modification.

A "cassegrain" telescope uses a small mirror at the front that faces straight back down the tube instead of one angled at 45 degrees.  The idea is to bounce the light back down the tube and through a relatively small hole in the center of the primary mirror.  The astronomer's eye (then -- now sophisticated instruments) sits slightly behind the main mirror.  The Hubble Space Telescope, for instance, is a cassegrain telescope.  And for roughly its first 250 years the telescope depended on the naked eye to make observations.  But starting in the nineteenth century, then more and more often as the twentieth century advanced, photographic plates, often a foot on a side, were substituted.  Since about 1970 the CCD or charge-coupled device has been taking over from photographic plates.  CCDs are the electronic equivalent of the photographic plate and are what the Hubble uses.

Now let me circle back to the Hale telescope.  It was the largest in the world for about thirty years.  Why?  Isn't bigger always better?  Maybe not.  The point of a telescope is to make things larger and brighter.  Let's take each in turn, starting with magnification.  For a while there was a telescope race as people came up with new designs to increase magnification.  But the race didn't last long.  Remember the "twinkle, twinkle, little star" nursery rhyme?  Stars do actually twinkle.

We now know that this is caused by small irregularities in the air.  Some air is slightly thicker than average and some air is slightly thinner than average.  Winds move these thicker and thinner chunks around continuously.  And it turns out that thicker air has a very slightly different index of refraction than thinner air.  This causes these chunks of air to act like lenses and change the path of light.  This means that sometimes the light from a star is being directed into your eye and sometimes it isn't.  The star twinkles.

It's kind of pretty when you are gazing fondly at the evening sky.  It is really annoying when you are trying to observe something with a telescope.  The greater the magnification, the greater the twinkle effect.  Telescopes were quickly built that were all twinkle and no observation.  So there was a limit to the amount of useful magnification a telescope could employ.

But none of this messes with the idea of increasing brightness, right?  Unfortunately, the twinkle effect causes problems here too.  We want to focus all the light of a small star onto a very small point so we get a sharp image of it.  But the twinkle effect moves the apparent location of the star around.  And that results in blur.  And if the star is dim enough you never end up with enough light in any one place to be able to see it at all.  So there is also a limit to the amount of useful light gathering.  The 200" telescope superseded an earlier 100" model.  Doubling the diameter theoretically gave the mirror four times the light gathering ability, but in actuality it didn't work four times better.  And there was another problem: gravity.
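
The "four times" arithmetic is simple enough to sketch: a mirror's collecting area grows with the square of its diameter.  Here is a quick illustration using the 100" and 200" mirrors mentioned above (theoretical area only; as noted, the atmosphere erodes the gain in practice):

```python
import math

def gathering_area(diameter_inches):
    """Collecting area of a circular mirror, in square inches."""
    return math.pi * (diameter_inches / 2) ** 2

# Doubling the diameter quadruples the area -- and thus, in theory,
# the light-gathering ability of the telescope.
ratio = gathering_area(200) / gathering_area(100)
print(ratio)  # 4.0
```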

The mirror needs to have an extremely specific shape.  If not then the light from a star won't all land at exactly the same place.  It turns out polishing the mirror to the extreme level of precision necessary to give it the right shape was relatively easy.  The problem was to make the mirror keep its shape.  To do this it was made extremely stiff.  And that was achieved by using a large, thick piece of glass.  And that made it very heavy.

And as you use the telescope you are moving it around.  That means gravity is pulling on the mirror from one angle now and another angle later.  The Hale design worked.  But all the calculations that went into it said it wouldn't work if you made the mirror a lot bigger, say 400".  And all this weight made the telescope hard to operate.  You needed very big motors to move it around.  And those motors had to position the mirror very accurately or everything was a big waste of time.  So for a long time it looked like 200" was as big as it was practical to go.

But modern telescopes are much bigger.  Astronomers use the metric system, so they call the Hale not a 200" telescope but a 5 meter one (200" is almost exactly 5 meters).  There are now lots of telescopes with larger mirrors.  The Keck telescopes in Hawaii have 10 meter mirrors.  The Europeans are currently building a roughly 40 meter telescope and designs have been proposed for even larger ones.  The James Webb Space Telescope will house a mirror larger than that of the Hale, but in space.  Something obviously changed, but what?

The easiest to understand change is that of going from one single main mirror to a main mirror made up of several segments.  Each individual mirror segment is much smaller than the mirror as a whole.  This involved solving the math problem of determining the specific shape each mirror segment needed to be.  Then the harder problem of making mirror segments in these odd shapes needed to be solved but it was.  So what's the advantage of segmenting the mirror?  It means that the stiffness problem is much easier to solve.  One edge of the 200" Hale mirror has to be maintained in a very precise relationship with the other edge.  But if the edges are now say 50" apart this becomes much easier to do.  So the mirror glass can be much thinner and still be stiff enough.  And that saves a lot of weight.

The other design change was to dial way back on the whole "stiff" thing.  Instead of depending on the mirror's built-in strength to maintain its shape, what if we take very precise measurements, determine what corrections need to be made, and then bend the glass until it is in the proper shape?  Computer power and lasers made it possible to perform the measurements and calculate the corrections to the necessary accuracy.  Then it was a simple process to put a gadget (actually several gadgets) on the back of each mirror segment to bend it the right amount to get the shape right.

It now became important to make the glass thin enough that it could be bent appropriately.  This in turn took a lot more weight out and allowed everything to be lighter and cheaper.  So this new "bend on demand" approach made it possible to have a mirror whose effective size was much larger than before but which still had the right shape.  But what about the twinkle problem?

It turns out that this "bend on demand" capability came to the rescue here too.  With even more computing power (now cheap) it was possible to calculate exactly how much the irregularities in the atmosphere were messing things up.  This made it possible to calculate how to bend the mirror out of what would normally be the "correct" shape just enough to undo the twinkling the atmosphere was causing.  The corrections would have to be calculated and applied frequently (about 100 times per second) but it meant that it was possible to take a much sharper picture of the sky from the bottom of the atmosphere.
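
The measure-correct-remeasure cycle can be sketched as a feedback loop.  This is a deliberately toy model -- a real system measures a full two-dimensional wavefront across many actuators, and all the numbers below are invented -- but it shows why repeating the correction about 100 times per second drives the error down:

```python
# Toy adaptive-optics loop: each cycle we measure the residual error the
# atmosphere introduces and command the mirror to bend by a fraction of
# the opposite amount (a simple proportional controller).

def correction_cycle(atmosphere_error, mirror_shape, gain=0.5):
    """One update; a real system runs roughly 100 of these per second."""
    residual = atmosphere_error + mirror_shape  # what the sensor actually sees
    mirror_shape -= gain * residual             # bend the mirror to cancel it
    return mirror_shape, residual

mirror = 0.0
for _ in range(20):
    mirror, residual = correction_cycle(atmosphere_error=1.0, mirror_shape=mirror)
print(abs(residual) < 0.01)  # True -- the residual error shrinks toward zero
```

The residual halves every cycle here; in practice the atmosphere keeps changing, which is exactly why the loop must run continuously.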

This process is called "adaptive optics" and all the big telescopes now have adaptive optics systems.  The last thing you need to make it work is a "guide star".  Measuring it allows the distortions and corrections to be calculated.  The guide star can also be used to verify that the right correction was applied.  If there is a bright star handy close to the portion of the sky you want to point your telescope at then it can be used.  Otherwise, synthetic guide stars can be created using lasers and other tricks.  And with that, let me return to Asimov's book.

Remember that chromatic aberration I was talking about?  And remember the old saw about "turning a problem into an opportunity"?  That's the first subject Asimov gets into.  If you introduce a piece of glass into the path of the light from your telescope it will bend the light.  And it will bend the light by different amounts depending on the frequency of the light.  The piece of glass used for this purpose is called a "prism".  A well designed prism will accentuate this phenomenon.  Why do it?  Because this allows us to study each frequency of, say, the light from a specific star separately.  And that allows us to learn a lot about the star.  Studying the various frequencies is called spectroscopy; a device for spreading those frequencies out so that each can be examined individually is called a spectroscope; and the resulting pattern is called a spectrum.  (And this whole business of studying the spectrum of light also goes back to Isaac Newton.)  So what can we learn from the spectrum of a star?
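
The frequency-dependent bending can be put in rough numbers.  Glass's index of refraction rises toward the blue end of the spectrum, and Cauchy's empirical formula approximates how.  The coefficients below are typical crown-glass values chosen for illustration, not tied to any particular instrument:

```python
# Cauchy's approximation: n(wavelength) = A + B / wavelength**2,
# with wavelength in micrometers.  A and B are illustrative values
# in the ballpark of common crown glass.

def refractive_index(wavelength_um, A=1.5046, B=0.00420):
    return A + B / wavelength_um ** 2

n_red = refractive_index(0.700)     # red light, 700 nm
n_violet = refractive_index(0.400)  # violet light, 400 nm
print(n_violet > n_red)  # True: violet bends more, so the colors fan out
```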

"White" light contains all frequencies, and the intensity of each frequency follows a specific pattern.  But real light always has bands that are either brighter or darker than they are supposed to be.  Fraunhofer first reported this in 1814.  The dark lines are "absorption" lines, where something has absorbed a particular frequency as the light passed through it.  The brighter lines are "emission" lines.  Something, say a candle, has emitted extra light in particular frequencies.  It didn't take long for scientists to speculate that specific elements caused specific lines.  It turned out that they were right.

Early work identified the then-new elements Cesium and Rubidium.  Then in 1868 Helium was identified in the spectrum of sunlight.  Cesium and Rubidium were rare but could be found on earth if you looked hard.  At that time no one had found any Helium anywhere on earth.  (It was later discovered to be a trace component of natural gas and can also be found in even smaller quantities elsewhere.)  The use of spectroscopy to discover solar Helium was a big deal and really put the technique on the map.

These early spectroscopy studies were done "by eye".  The first major step in moving away from this was the invention of the daguerreotype, an early photographic method, in 1839.  As the century progressed photographic techniques improved and photographs became more common in astronomy.  And photography made it possible to use telescopes in a different way.  The standard way is to peer very closely at a small part of the sky.  But this means you can't do a broad "survey" of a larger portion of the sky.  Then in the 1930's Schmidt came up with a telescope design that could do surveys.  But you could not do it with your eye.  You had to use photography.  This was an early example of moving on beyond what the naked eye could show us.

But spectroscopy turned out to have very down-to-earth uses.  Around 1800 Herschel moved a thermometer beyond the visible part of the spectrum.  He detected a warming.  He had discovered infrared light, light with a frequency lower than that of red.  Additional investigation by others led to the discovery of ultraviolet, light with a frequency higher than that of violet.  The "electromagnetic" spectrum has since been expanded to include (from highest to lowest) gamma rays and x-rays (usually subdivided into hard (higher frequency) and soft (lower frequency)) on the high end and various forms of radio waves (from highest to lowest: microwave, shortwave, and long wave) on the low end.
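
The high-to-low ordering works the same whether you rank by frequency or, in reverse, by wavelength, because the two are tied together by c = f × λ.  A quick check with order-of-magnitude wavelengths (the specific values are illustrative):

```python
# Frequency and wavelength are linked by the speed of light: f = c / wavelength.

C = 299_792_458  # speed of light, meters per second

def frequency_hz(wavelength_m):
    return C / wavelength_m

gamma = frequency_hz(1e-12)    # a gamma ray, ~1 picometer
visible = frequency_hz(5e-7)   # green light, ~500 nanometers
microwave = frequency_hz(1e-2) # a microwave, ~1 centimeter
print(gamma > visible > microwave)  # True: shorter wavelength, higher frequency
```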

It turns out gamma rays, x-rays, and most infrared waves are very effectively blocked by our atmosphere.  But radio waves are not.  In 1933 Jansky detected radio waves coming from the sky.  This was the start of what is now a booming field of endeavor, radio astronomy.  Radio astronomy was in its infancy when Asimov wrote his book.  Big dish antennas were just coming into being.  And the now famous giant dish at Arecibo in Puerto Rico was not completed until 1963.  Nor had Lasers been invented yet.  But the predecessor to the Laser, the Maser, had been built.  The idea is the same as that behind the Laser, but a Maser uses radio waves while a Laser uses light waves.  It was easier to pull off the necessary engineering with radio waves, so the Maser came first by about ten years.  And although the Maser had been invented before the book was written, astronomers were still figuring out how to make the best use of it, so Asimov does not even mention it.

Another technique that is now in common use is radio interferometry.  This is the process of combining signals from two or more widely separated radio antennas to create a "synthetic aperture" that is as large as the distance between the antennas.  A problem Asimov does address is the fact that some radio wavelengths are very long.  This means you need very large equipment to do anything with them.  The synthetic aperture scheme got around this problem and allowed radio astronomers to make effective use of low frequency (long wavelength) radio waves.  But this came well after Asimov's book.  And the synthetic aperture scheme has now been adapted for use with light telescopes.  There are actually two Keck telescopes in Hawaii.  They can be operated together in such a way that they behave like one single telescope with a mirror diameter of 85 meters (almost 300 feet).
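
The payoff of a larger aperture, synthetic or otherwise, is angular resolution.  For a diffraction-limited telescope the standard rule of thumb is θ ≈ 1.22 λ/D.  The sketch below uses a 2.2 micron near-infrared wavelength as an illustrative value; the 10 and 85 meter figures are the single-mirror and combined apertures mentioned above:

```python
# Diffraction-limited angular resolution: theta = 1.22 * wavelength / diameter,
# in radians.  Smaller theta means finer detail can be resolved.

def resolution_arcsec(wavelength_m, diameter_m):
    radians = 1.22 * wavelength_m / diameter_m
    return radians * 206265  # convert radians to arcseconds

wavelength = 2.2e-6  # near-infrared, in meters (illustrative)
single = resolution_arcsec(wavelength, 10)    # one 10 m Keck mirror
combined = resolution_arcsec(wavelength, 85)  # the 85 m synthetic aperture
print(single / combined)  # 8.5 -- resolution improves by the baseline ratio
```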

One issue Asimov does get into is mapping the Milky Way.  As Asimov put it, "[i]n a sense, the galaxy hardest for us to see is our own."  Radio astronomy has been a big help.  Light is blocked by clouds of dust like the "coal sack" in the southern hemisphere.  But radio waves can penetrate dust easily.  This led to the mapping of the Orion, Perseus, and Sagittarius arms of the Milky Way.  But these arms are only partially mapped and there has not been a lot of progress since Asimov's book.  And the existence of a giant black hole (it weighs millions of times as much as our Sun) in the center of our galaxy was totally unsuspected at that time.  As was the Cosmic Microwave Background, the single biggest discovery in the field of radio astronomy.  It was discovered less than a decade after the book was written.

Asimov does list a number of achievements that had been racked up by radio astronomy.  These include a number of bright radio sources in the sky, the fact that sunspots emit radio waves, the fact that the atmospheres of both Venus and Jupiter are turbulent enough to emit radio waves (as does the cloud surrounding the Crab Nebula), the fact that galaxies collide, and others.  And then there is that workhorse of spectroscopy, the Doppler effect.  As I have indicated elsewhere, it can be used to measure the speed with which stars (and any other object bright enough to allow a detailed spectrum to be taken) are moving toward or away from the Earth.  Asimov reports the results of early Doppler work.
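
The Doppler measurement itself is a one-line formula at everyday astronomical speeds: v = c × Δλ/λ, where Δλ is how far a spectral line has shifted from its laboratory rest wavelength.  The hydrogen-alpha numbers below are illustrative, not taken from Asimov:

```python
# Non-relativistic Doppler formula for radial (line-of-sight) velocity.

C = 299_792.458  # speed of light, km/s

def radial_velocity_km_s(rest_nm, observed_nm):
    """Positive = receding (redshift), negative = approaching (blueshift)."""
    return C * (observed_nm - rest_nm) / rest_nm

# A hydrogen-alpha line (rest wavelength 656.28 nm) observed at 656.50 nm:
v = radial_velocity_km_s(656.28, 656.50)
print(v)  # roughly 100 km/s, receding
```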

At first it might seem like this is a story of scientific results being overturned, but that is not so.  In fact, what is on display is progress toward more and better information.  This does result in some scientific theories being overturned.  But that's why scientific theories are theories.  The possibility always exists that new data will come along that will show them to be wrong in some important way.  But before scientists move on to a new theory they make sure it accounts for all the old observations.

And scientists are pretty good at figuring out how solid their theories are.  Scientists see it as part of their job to theorize before all the data is in.  But they admit it.  They even have a name for this sort of thing.  It's called a WAG, a Wild-Assed Guess.  As more data comes in this might be replaced by a SWAG, a Scientific Wild-Assed Guess.  Neither rises to the level of a "theory", which must have more support.  And some theories are "tentative" while others are "pretty solid".  From there it can move on to being a "well tested" or "foundational" theory or not.  It depends on the data.

And mostly what we see are new discoveries made possible by new and better tools.  The new information does not overturn the old.  It supplements it or "opens new vistas".  Large though that black hole is, there were no tools in 1960 that could have detected it at the center of our galaxy.  Scientists knew roughly where the center of our galaxy was ("somewhere in the Sagittarius Constellation") but were the first to admit they knew little to nothing about what was there.