This is the sixth in a series. The first one can be found at http://sigma5.blogspot.com/2012/07/50-years-of-science-part-1.html. Part 2 can be found in the August 2012 section of this blog. Parts 3 and 4 can be found in the September 2012 section. Part 5 can be found in the March 2016 section. I take the Isaac Asimov book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science as it was when he wrote the book (1959 - 1960). More than 50 years have now passed but I am going to stick with the original title anyhow even though it is now slightly inaccurate. In these posts I am reviewing what he reported and examining what has changed since. For this post I am starting with the chapter Asimov titled "The Birth of the Solar System" and then moving to "Of Shape and Size". Both chapters are in his "The Earth" section.
The first chapter under discussion doesn't even mention the Earth. It reviews various theories about the formation of the Solar System. If you want to know what science looks like when science has only a vague idea of what it is talking about, this is a good chapter. This chapter was written at the dawn of the space age. I have talked about the best device for studying the heavens at that time, the 200" Hale telescope, elsewhere. Frankly it was not up to the task of studying the Solar System in the detail necessary to understand it the way we do now. Scientists of that time knew the sizes and orbital parameters of all the planets. Asimov lists some very important observations that scientists had picked up on by that time.
Nearly all the planets had nearly circular orbits. (Pluto, the only exception, had not yet been demoted from planet-hood back then.) All the planets orbited in a counterclockwise direction (when looking down from a great height above the Earth's North Pole). Nearly all the planets and nearly all the moons known at the time rotated in a counterclockwise direction around axes that were roughly vertical. (There were a few exceptions but they could be explained away as "exceptions that proved the rule".) And, again excepting Pluto, the ratios between the sizes of adjacent orbits fell at or near simple, pleasing values.
There seemed to be a system to the Solar System. But scientists were pretty much stumped as to what that system was. The best theory at that time was one by Weizsacker. His 1944 theory had serious problems but it was the best anyone had come up with. So what, in the most general sense, was the problem?
There were two problems. The obvious one is the one I have already alluded to. They didn't have much data. The Hale telescope was better than nothing but not that good at making the necessary precision measurements. There were a few satellites in orbit but none of them had a telescope or other good instruments for studying the Solar System. There may have been probes launched toward Venus or Mars (I didn't check) but, if so, they were very primitive fly-by missions. And pretty much nothing was known about the gas giant planets. The first great exploration missions to them, Voyager I and II, would not even launch until 1977. The same was true for missions to the rocky inner planets (Mercury, Venus, and Mars). And the greatest instrument of them all, the Hubble Space Telescope, did not launch until 1990 and it took a couple of years more to fix it. So scientists lacked data.
They also lacked analytical tools. There were some computers around in 1960 but they were small and slow by modern measures, and also few and far between. I personally own three desktop computers. Any one of them was more powerful in terms of speed, RAM, and disk space than all the computers in existence in 1960. This meant that scientists were stuck with literally not much more than the back of an envelope when it came to thoroughly investigating a theory or trying to assess its ramifications.
A good thing to use as an illustration is the orbital spacing I mentioned above. The spacing of the orbits of the planets was known to a medium degree of accuracy. But the idea of "resonance" was not well understood. Imagine two bodies orbiting the same larger body, and imagine their orbits are such that one takes exactly twice as long as the other. Over the course of one slow orbit the faster body goes around exactly twice, so at the end both bodies are right back where they started with respect to each other. Now imagine that there is a certain configuration where the bodies tug on each other, pulling each other in one direction. Because of the complete symmetry of the situation there is another configuration where they pull in exactly the opposite direction. So over the course of one slow orbit everything exactly balances out. This is called a 2:1 resonance.
Now assume the ratio is slightly more or less than 2:1. Then a net pull can develop over the course of a complete slow orbit that slows one planet down a little or speeds it up a little. This means that over time the planets will be pulled a little closer together or a little farther apart. In other words, in our 2:1 resonance case the orbits are stable (they don't change at all over time) but in the other case they evolve. There are other resonances like 3:2 or 4:3 or whatever. With lots of cheap computer power astronomers are now able to run sophisticated, long running simulations to discover exactly how things would evolve. The current state of the art permits very complicated situations to be thoroughly analyzed and understood.
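Kepler's third law ties a resonance's period ratio directly to the ratio of the orbit sizes, which is where those "pleasant ratios" in the spacing come from. Here is a tiny Python sketch of the arithmetic (the resonances listed are just the common examples):

```python
# Kepler's third law: (period)^2 is proportional to (orbit size)^3, so for a
# p:q resonance the outer orbit is (p/q)**(2/3) times the size of the inner one.
for p, q in [(2, 1), (3, 2), (4, 3)]:
    size_ratio = (p / q) ** (2 / 3)
    print(f"{p}:{q} resonance -> outer orbit is {size_ratio:.3f} times the size of the inner orbit")
```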
And with this ability astronomers found that there were only a few stable resonances. The rest of the time the planets (or moons) get pulled around, often in complicated ways that could not have been predicted by looking at the equations and doing some simple analysis. The simulations showed that they kept getting pulled around until they hit a stable resonance point, a point that may only have been arrived at after the simulation had covered millions of simulated years. And guess what? The current orbits of the planets are predicted by this resonance point analysis. It was literally impossible to do this kind of analysis before abundant computer power was available cheaply.
The other problem is data. As we sent space missions like Voyager out we were able to gather tons of data that was much more accurate and complete than that available in 1960. This was the information needed to do the resonance point analysis with enough accuracy to give meaningful results. It was literally impossible to make the kinds of detailed calculations necessary unless the parameters put into the simulation were known to a very high degree of accuracy. Those highly accurate values were not known until we had sent spacecraft out exploring.
And this led directly to one of the things that astronomers got wrong at the time: the question of the origin of the asteroid belt. The asteroid belt (there are actually several but I am going to concentrate on the main one) consists of a bunch of sub-planet-sized rocks. The leading theory of the time was that something had torn up a small planet. We now know that the asteroid belt is a side effect of these resonances. But the reason we now know this is that we have a lot more data and the data is of much higher quality. We can also simulate the breakup of a single larger body, and those simulations can't be made to produce the pattern we now see. But a simulation of a bunch of rocks shows them drifting into the region now occupied by the asteroid belt and then getting stuck there.
It wouldn't work if it was just one rock of whatever size. A single rock would be pulled either toward Jupiter or toward Saturn. But a flock of rocks can be stable over long periods of time. At any one time some rocks are pulled in and others are pulled out. But on average and over time, they just stay within the band that is the belt. The average can maintain a behavior (stability) that no individual component is capable of. Individually their orbits are all slightly unstable but this leads to stability at the group level.
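To give a feel for what such a simulation looks like in miniature, here is a stripped-down Python sketch: a single massless test asteroid orbiting the Sun, nudged by Jupiter on an assumed circular orbit, with its orbit size tracked over simulated millennia. Every number in it (starting distance, step size, run length) is an illustrative choice; a real study would use far better integrators and thousands of test rocks.

```python
import math

# Units: AU, years, solar masses, so G*M_sun = 4*pi**2 by Kepler's third law.
GM_SUN = 4 * math.pi ** 2
GM_JUP = GM_SUN / 1047.0                     # Jupiter is roughly 1/1047 of a solar mass
A_JUP = 5.2                                  # Jupiter's orbit radius in AU (treated as circular)
OMEGA_JUP = math.sqrt(GM_SUN / A_JUP ** 3)   # Jupiter's angular speed, radians per year

def accel(x, y, t):
    """Acceleration on the test asteroid from the Sun (at the origin) plus Jupiter."""
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM_SUN * x / r3, -GM_SUN * y / r3
    jx, jy = A_JUP * math.cos(OMEGA_JUP * t), A_JUP * math.sin(OMEGA_JUP * t)
    dx, dy = x - jx, y - jy
    d3 = (dx * dx + dy * dy) ** 1.5
    return ax - GM_JUP * dx / d3, ay - GM_JUP * dy / d3

# Start the asteroid on a circular orbit at 3.28 AU, near the 2:1 resonance with Jupiter.
x, y = 3.28, 0.0
vx, vy = 0.0, math.sqrt(GM_SUN / x)
dt, t = 0.002, 0.0                           # time step in years

for step in range(2_500_000):                # about 5,000 simulated years (slow in plain Python)
    ax, ay = accel(x, y, t)                  # leapfrog (kick-drift-kick) integration step
    vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay
    x, y = x + dt * vx, y + dt * vy
    ax, ay = accel(x, y, t + dt)
    vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay
    t += dt
    if step % 250_000 == 0:
        r = math.hypot(x, y)                 # orbit size from the vis-viva equation
        a = 1.0 / (2.0 / r - (vx * vx + vy * vy) / GM_SUN)
        print(f"t = {t:7.0f} yr   orbit size a = {a:.3f} AU")
```

Watching the printed orbit size wander (or hold steady) is, in crude form, exactly the kind of long-run behavior the professional simulations track.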
And now we have the instruments to study the individual asteroids in the asteroid belt in considerable detail. NASA has recently inserted a space probe directly into the middle of things. The Dawn mission put a spacecraft into orbit around Vesta, a large asteroid. After studying Vesta for about a year the probe was moved to Ceres, the largest object in the belt (now classified as a dwarf planet). Dawn has returned a massive amount of data about the asteroid belt to supplement what earlier probes discovered.
A couple of decades after 1960 scientists thought they had a good handle on the formation of the Solar System. The idea was that the Sun condensed out of a cloud of gas. This happened precisely 4.567 billion years ago. (They have good reason to believe they know the Sun's age that accurately.) They also think the rest of the Solar System formed very quickly and a very short time later. It only took a hundred million years, give or take. They are very certain that it was very quick but exactly how quick is not nailed down very well.
And they had a theory which sounded very good about why the various planets ended up where they did, each with the composition it has. The theory was that the planets formed in roughly the locations you now see them. The heat of the Sun's radiation was enough to blow the gas out of the inner solar system and into the outer solar system. So you had rocky planets (Mercury, Venus, Earth, Mars) in the inner solar system and gas giants (Jupiter, Saturn, Uranus, and Neptune) in the outer solar system. Pluto was assumed to be an asteroid-like thing that got knocked around until it ended up where it ended up. This theory sounded reasonable to everybody and seemed to work very well.
Then it became possible to discover exo-planets, planets orbiting some star other than our Sun. The Kepler spacecraft has found literally thousands of them. And the solar systems around these other suns don't look at all like our solar system does. There are gas giants in the inner solar system all over the place. Lots of gas giants have been found with orbits that are smaller even than Mercury's. What astronomers now know for sure is that they don't know.
The common theory for the moment is a hybrid one. Planets formed in their traditional locations. Rocky planets formed close in (they are hard to see if they are orbiting another star so it is no surprise that very few have been discovered). Gas giants formed further out. Then the gas giants migrated (see the discussion of resonance above) into the inner solar system of these other stars. In this theory the fate of the rocky inner planets is unknown. But frankly this is a theory like the ones discussed in this chapter of Asimov's book. It has problems but it is the best scientists currently have. This means scientists expect the theory to undergo drastic modification or even be discarded completely for a quite different one.
"Of Shape and Size" starts with a discussion of the shape of the Earth. It has been known to be roughly spherical for several hundred years now. But some noticed evidence supporting the idea of a spherical shape much further back. But for a long time their evidence did not carry the day. By Newton's time (400 years ago) it was generally accepted that the Earth was spherical in shape. But Newton calculated that gravitational effects should distort it into an oblate spheroid. The French in the 1800s tried and eventually succeeded in confirming Newton's idea. The best number for exactly how far out of round the Earth was in 1960 was 26.7 miles. That is not far off the current number.
We now have much more accurate ways of measuring distance. So we can very accurately measure the distance from a fixed point on the Earth to a satellite. A bunch of these measurements yields a very accurate description of the exact shape of the Earth. It is an oblate spheroid with a number of lumps and bumps on it. The actual shape, even after you smooth out mountains and oceans, is very complicated and I am not going to go into it. And, of course, we have turned the whole "satellite distance" thing around to create the GPS system. GPS satellites need, among other things, a mathematical model of the shape of the Earth. They use a moderately sophisticated one that works well enough to keep our navigation systems on track almost all of the time.
One of the things the French effort brought out, Asimov tells us, is the fact that at the time there was no agreed upon standard of length. Everybody knew approximately how long a yard was but no one knew precisely how long it was. This led to the creation of the "Meter" (French spelling: Metre). It was the distance between two very precisely marked lines on a specific piece of metal. Eventually the "Metric standard" was adopted around the world. Now even the Yard is defined in terms of the Meter. An "Inch" is one 39.37th of a Meter. A "Yard" is 36 inches. It's clumsy but it works.
And this "two marks on a piece of metal" definition of the Meter worked well for more than a hundred years. But scientists kept getting better and better at accurately measuring distances. Soon a more precise specification was required. The laser made it possible to measure the properties of light very precisely. And Einstein said "the speed of light is always and everywhere the same". In 1983 scientists took advantage of this to define a Meter as a certain specific number of oscillations of a certain kind of light as measured under certain very specific conditions. Now a properly equipped laboratory can measure a Meter far more accurately than was possible at any time during the "Meter bar" era.
And this idea of very precisely specifying all the basic units like those of time, weight (actually mass), etc. caught on. The French developed an entire "Metric" system with seconds (a carry over from the old system), Kilograms (a replacement for the pound), etc. Now there is a complex system called the "International System of Units". It is abbreviated as SI based on the French terminology. It also includes things like Volts, Watts, Ohms, etc. for electricity, Joules, Newtons, etc. for forces and work (to replace things like "pounds force", horsepower, etc.), Celsius (originally Centigrade - to replace Fahrenheit degrees of temperature), and so on.
Returning to the problem of the shape of the earth. A trick used then and still in use now was to observe a pendulum. In this case it was used to accurately measure gravity. If gravity was stronger than normal the pendulum would swing too fast. If gravity was weaker than normal the pendulum would swing too slow. This made it possible to measure and map "gravitational anomalies". We are using instruments that can do the job far more accurately now but the mapping of gravitational anomalies is a booming business these days. Geologists can tell a lot from gravitational anomalies (e.g. where there's oil) but there are numerous other applications I am going to skip getting into.
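The physics behind the pendulum trick is the small-swing period formula: the period depends only on the pendulum's length and the local strength of gravity. A minimal Python sketch with made-up survey numbers:

```python
import math

# For small swings, period T = 2*pi*sqrt(L/g), so g = 4*pi^2*L / T^2.
# Timing many swings of a pendulum of known length therefore gives local gravity;
# a slightly fast pendulum means slightly stronger gravity at that spot.
length = 1.000              # pendulum length in meters (assumed known precisely)
measured_period = 2.006     # seconds per full swing (illustrative measurement)

g = 4 * math.pi ** 2 * length / measured_period ** 2
print(f"local gravity = {g:.4f} m/s^2")   # about 9.81 for these numbers
```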
Asimov ends this particular portion of the discussion by noting that prior to 1960 the distance between New York and London was only known to within plus or minus a mile. The techniques I mentioned above (measuring the locations of satellites) were just coming into use as a "by hand" version of GPS. And at the time a lot of the results of this procedure were classified. Why?
After the USSR fell in 1991 it turned out that popular maps issued by the Communists showed the locations of their major cities incorrectly. A typical "error" was say 25 miles. It was not that they were bad at making accurate maps. It was thought instead that they had purposely introduced the errors as an attempt to throw off the aim of western ICBMs. Of course, the US had long since switched to the "GPS by hand" method described above for deciding where to point its ICBMs.
Asimov then moves on to related problems. If you know the precise shape of the Earth you can accurately calculate its volume. Then, if you know its weight (or, more correctly, its mass) you can calculate its density. But the problem is figuring out its weight. And this is a good time to explain why scientists use mass instead of weight.
If you stand on a scale what is actually being measured is force. A certain amount of force bends a spring a certain amount and that can be used to move a needle a certain distance. But it is the force that is being measured. And the force depends on how strongly gravity is pulling. Scientists wanted to get gravity out of the process. So they decided that matter has an inherent property called "mass". The force generated in a specific gravitational field depends on the mass and on the strength of the field. This let scientists split the problem in two: the amount of mass, which is independent of what gravity is or isn't doing, and the strength of the gravitational field, which is independent of the object being weighed and depends only on what is happening with gravity.
If you are standing still on the surface of the earth then weight and mass can seem like pretty much the same thing. But let's say you are in a car, you haven't fastened your seat belt, and your car crashes into a brick wall. Lots of force is involved and it is likely to get you killed if you aren't extremely lucky. But this force has nothing to do with gravity. It has to do with two things. One of them is how fast you are slammed to a stop (very fast). The other is your mass. Remember gravity is not part of the process so "weight" is irrelevant. But mass is mass is mass. It can be accelerated by the force of gravity, which varies depending on altitude, gravitational anomalies, etc. Or by a car being forced to come to a stop extremely quickly using a process that doesn't involve gravity at all. By going with mass, which is the same no matter what else is going on (I'm ignoring relativity here), scientists can plug the right number for mass on the one hand and force on the other into their calculations and end up with the right result.
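To put rough numbers on the crash example (every figure below is an illustrative assumption), Newton's second law ties force to mass and deceleration with no reference to gravity at all:

```python
# Weight is the force gravity exerts on a mass (W = m*g); crash forces come
# from deceleration (F = m*a) and have nothing to do with gravity.
mass = 80.0                  # driver's mass in kg (illustrative)
g = 9.81                     # strength of Earth's gravity, m/s^2

weight = mass * g            # the force a bathroom scale would feel, in newtons
print(f"weight on a scale: {weight:.0f} N")

speed = 50 / 3.6             # 50 km/h converted to meters per second
stop_time = 0.1              # seconds to stop against the wall (illustrative)
crash_force = mass * (speed / stop_time)
print(f"average crash force: {crash_force:.0f} N")   # roughly 14 times the weight
```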
Back to the mass (or, for civilians, weight) of the earth. The problem is that gravity is everywhere. How do you get outside it so you can measure it? Newton came up with a formula that looked helpful: f = (G * m1 * m2) / d^2. If you knew the value of "f", a force, and "d", a distance, and if you knew the value of "G", the "gravitational constant", and if you knew the value of m1 (the mass of one object, say the moon), you could calculate the value of m2 (the mass of another object, say the earth). This does not look promising. We don't seem to know the value of several of those things. But the formula applies everywhere. So let's go into the laboratory. Here we can measure force ("f") using a spring scale. We can use a ruler to measure distance ("d"). And we can just weigh m1 and m2 and use that to calculate the mass of each. That leaves just "G". But the formula then lets us calculate its value. The problem is that "G" turns out to be a very small number. There is so little gravitational force to measure between two normal objects you can find in a laboratory that it seems impossible to do so.
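To see just how small that laboratory force is, here is the arithmetic with the modern value of G and arbitrary lab-scale masses:

```python
# Newton's law of gravitation: f = G * m1 * m2 / d^2
G = 6.674e-11      # gravitational constant, N*m^2/kg^2 (modern value)
m1 = 1.0           # kg, one small lab weight
m2 = 1.0           # kg, another small lab weight
d = 0.10           # meters between their centers

f = G * m1 * m2 / d ** 2
print(f"gravitational pull between them: {f:.2e} N")   # about 6.7e-9 N
# For comparison, the weight of a single grain of sand is thousands of times larger.
```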
The first one to make a serious and successful run at the problem was Henry Cavendish. If we take a thin wire that is say a foot long and fasten one end to the ceiling we can twist the other end and measure the amount of force involved to twist it say 30 degrees. It's not much but if we use a sensitive spring balance we can measure it. Now it turns out that if we instead use a 30 foot long piece of the same wire it takes only a thirtieth as much force to twist it the same 30 degrees. That's the basic idea.
Cavendish took a long piece of very fine wire that was very easy to twist and performed the appropriate measurements on shorter pieces so he could calculate the force necessary to twist it through a relatively large angle like 30 degrees. Then he put a mirror on it near the bottom and bounced a light off of it from a long way away (say 50 feet). This allowed him to measure very small changes in twist. Then he put a fairly heavy ball on each end of a rod and hooked the rod to the end of the wire. He made the weights as heavy as he could get away with and he made the rod as long as he could get away with. By connecting the center of the rod to the wire he could balance everything so that the wire would hold it all up.
Then he took two really big weights. They could be very large because the thin wire did not need to hold them up. They could sit on heavy carts on the floor of the laboratory. He brought each big ball very close to one of the hanging balls. One heavy ball was on the near side of one hanging ball. The other heavy ball was on the far side of the other hanging ball. He brought them very close but did not let them touch. There should be a small gravitational pull between the heavy ball on the floor and its matching ball hanging on the wire. And there should be a similar gravitational pull, twisting in the same direction, in the case of the other pair of balls, and this should cause the wire to twist. It did, by a very small amount. But it was enough for Cavendish to measure it and to come up with an accurate value for "G".
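Here is the Cavendish arithmetic in miniature. Every "measured" number below is an illustrative stand-in, not Cavendish's actual data, but it shows how G falls out once the twist has been measured:

```python
# At equilibrium the wire's restoring torque (kappa * theta) balances the
# gravitational torque.  Each small ball feels a force G*m*M/d^2 at a lever arm
# of half the rod length; the two pairs add, so the total gravitational torque is
# (G*m*M/d^2) * rod_length, and therefore G = kappa*theta*d^2 / (m*M*rod_length).
kappa = 5.5e-5       # torsion constant of the wire, N*m per radian (from calibration)
theta = 0.0050       # measured twist in radians (read via the mirror and light beam)
d = 0.225            # meters between the centers of each big/small ball pair
m = 0.73             # kg, each small hanging ball
M = 158.0            # kg, each big ball on the floor
rod_length = 1.8     # meters, end to end

G = kappa * theta * d ** 2 / (m * M * rod_length)
print(f"G = {G:.2e} N*m^2/kg^2")   # comes out near the modern value of 6.67e-11
```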
As another side note: The University of Washington has been at the forefront of doing these kinds of Cavendish experiments for some time now. Things get very complicated when you are trying to measure "G" very accurately. But they have found ways to overcome these complexities. They have succeeded in measuring "G" more accurately than anyone else, several times now, each time improving even on their own previous results.
So if we know "G" don't we still have a problem? At this point we know neither m-1 nor m-2 so aren't we still in a pickle? Theoretically yes but actually no. What if we put a hundred pound satellite into circular orbit around the earth. A little calculus (which I am going to skip) tells us what "f" must be. And we can measure "d". So that leaves our two "m"s. But not really. We can calculate "m-1 * m-2" because we have all the other values in the formula. But we also know m-1. It's the mass equivalent of a hundred pounds. And that leaves only m-2, the mass of the earth, as an unknown. Plugging all the other numbers in gives us the value of m-2. If we know the mass of the earth we can go through the same process and get the mass of the moon. We can also use the same process to get the mass of the Sun. To get a rough number we just ignore the moon and the other planets. We need to make adjustments for each celestial body's effect to get a more accurate value.
The adjustments can get complicated but astronomers have figured out how to do it so I am going to leave it there. And we can keep going. With the mass of the Sun we can calculate the mass of Jupiter or Saturn or, . . . It's just a matter of using the basic process then applying the necessary adjustments. The math is complex if you want to get an accurate answer but all we need to know is that it can be done. We can use the mass of one celestial body to "bootstrap" us to the mass of other celestial bodies. These techniques certainly work for the planets. With asteroids there are so many bodies close at hand that in most cases only a rough number can be calculated. This is also true in some other "many body" problems. But as computer power increases more and more complex situations can be handled. Back to Asimov.
He gives us the answer for the density of the Earth. It is 5.5 times as dense as water, on average. If we didn't already know, this would allow us to conclude that the Earth is not composed exclusively of pure water. Okay. Is it of uniform density? The answer to that question was already known in 1960. The answer is NO. How did we know this back in 1960? From earthquakes. And that's Asimov's segue into the next chapter. And that's my cue to end this post.
Sunday, March 20, 2016
50 Years of Science - part 5
This is the fifth in a series. The first one can be found at http://sigma5.blogspot.com/2012/07/50-years-of-science-part-1.html. Part 2 can be found in the August 2012 section of this blog. Parts 3 and 4 can be found in the September 2012 section. I take the Isaac Asimov book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science as it was when he wrote the book (1959 - 1960). More than 50 years have now passed but I am going to stick with the original title anyway. In these posts I am reviewing what he reported and examining what has changed since. For this post I am starting with the chapter Asimov titled "The Windows of the Universe". This is the last chapter in his "The Universe" section.
Asimov starts the chapter with yet another reference to the 200" Hale telescope situated on Mt. Palomar in California. That gives me a chance to digress into some telescope basics. The telescope was invented about 1609 and popularized by Galileo. Before that astronomical observations were made by eye. About 3,500 stars are visible in the northern hemisphere with the naked eye. We now know that this is literally a drop in the ocean compared to the actual number of stars in the sky. Even in this period it turns out that what was most important to astronomers was the ability to measure angles as accurately as possible.
To address this problem people devised instruments. These included the quadrant, octant, and later the sextant commonly used by sailors for hundreds of years. This process culminated with the efforts of Tycho Brahe. He developed the most precise instruments ever devised to make human eye angle measurements. He died in 1601 just before the telescope was invented. His super-precise (for the time) measurements uncovered problems with the Ptolemaic astronomical system that had been in use for centuries. But before people could digest this and decide on an appropriate response the observations of Galileo started becoming known and they really threw a monkey wrench into the works. This diverted attention from what Brahe had found but the Brahe measurements ultimately bolstered the anti-Ptolemy side of the argument. Back to telescopes.
So what's the point? What does the telescope bring to the table? The first and most obvious thing a telescope does is magnify things. This instantly makes it possible to measure angles far more accurately than was possible with the naked eye. It also makes it possible to view things that are too small to see with just the naked eye. Galileo discovered the "mountains of the moon". He observed shadows cast by what appeared to be mountains when observing the edge between the bright and dark parts of the face of the moon. These features are too small to be seen reliably with the naked eye. He also was able to make out moons around Jupiter. Again, these are too small to see with the naked eye. Galileo saw a lot more things that upset the traditional authorities but that's enough for my purposes.
But being able to see the moons of Jupiter leads to another characteristic of telescopes. They can make dim things brighter. It is hard to see the moons of Jupiter not only because they are small but because they are dim. Galileo was able to easily make them out because his telescope made them brighter. This is generally referred to as "light gathering power". The previous property (making things bigger) is generally referred to as "magnification". And these are the two primary characteristics that the telescope brings to the table.
Now let's take a quick look at telescope design. The telescope we are most familiar with is the sailor's telescope prominently on display in swashbuckler movies. It consists of a tube with a lens at each end. It does both of the things associated with telescopes. The lens at the front is bigger than the human eye and gathers more light, making the image brighter. The two lenses work together to magnify the image. This makes it easy to see the other guy's ship in detail when it is many miles away on the horizon. And this is the design Galileo used. But it has a problem.
Lenses work because the material they are made of has a different index of refraction than the air around them. All you need to know is that it is a property of materials and that it is easily measured. A vacuum has an index of refraction. Air has a slightly different index of refraction. Water has a still different index of refraction, as does the glass telescope lenses are made of. As light passes from material with one index of refraction to material with a different index it bends. Clever design allows this idea to be turned into a telescope. So what's the problem?
It turns out that the index of refraction depends on frequency. So lens glass bends red light through a different angle than it bends blue light. This isn't much of a problem with a sailor's telescope but it soon became a big problem with the large very precise telescopes that people built to observe the heavens with. This problem is called "chromatic aberration". It affects any optical device that passes light through lenses and processes more than one frequency of light. But there is a solution. And the man that found it was Isaac Newton. Talk about genius.
By now the general idea should be obvious. Don't pass light through lenses. But how could a telescope be made without lenses? Mirrors also change the direction light travels. Curved mirrors can bend light in just the way necessary to make a telescope. And that's what Newton (yes -- same guy) did. He put a curved mirror at the bottom of a tube. Then he put a small (compared to the main mirror) flat mirror near the top of the tube. This mirror was angled at 45 degrees so that the light could come out a hole in the side of the telescope. That's where he put his eye. The eye contains a lens but the amount of chromatic aberration is still much smaller than with the old design (called a refracting telescope). This general class of designs is called a reflecting telescope as light reflects off the mirror. Pretty much all professional astronomical telescopes now use this reflecting design but with a modification.
A "cassegrain" telescope uses a perpendicular mirror at the front instead of a 45 degree one. The idea is to bounce the light back down the tube and through a relatively small hole in the center of the primary mirror. The astronomer's eye (then -- now sophisticated instruments) sits slightly behind the main mirror. The Hubble Space Telescope, for instance, is a cassegrain telescope. And for the first 400 years telescopes depended on the naked eye to make observations. But starting in the nineteenth century then more and more often as the twentieth century advanced photographic plates, often a foot on a side, were substituted. Since about 1970 the CCD or charged-couple device has been taking over from photographic plates. CCDs are the electronic equivalent of the photographic plate and are what the Hubble uses.
Now let me circle back to the Hale telescope. It was the largest in the world for about thirty years. Why? Isn't bigger always better? Maybe not. The point of a telescope is to make things larger and brighter. Let's take each in turn, starting with magnification. For a while there was a telescope race as people came up with new designs to increase magnification. But the race didn't last long. Remember the "twinkle, twinkle, little star" nursery rhyme? Stars do actually twinkle.
We now know that this is caused by small irregularities in the air. Some air is slightly thicker than average and some air is slightly thinner than average. Winds move these thicker and thinner chunks around continuously. And it turns out that thicker air has a very slightly different index of refraction than thinner air. This causes these chunks of air to act like lenses and change the path of light. This means that sometimes the light from a star is being directed into your eye and sometimes it isn't. The star twinkles.
It's kind of pretty when you are gazing fondly at the evening sky. It is really annoying when you are trying to observe something with a telescope. The greater the magnification the greater the twinkle effect. Telescopes were quickly built that were all twinkle and no observation. There was a limit to the amount of useful magnification a telescope could employ.
But none of this messes with the idea of increasing brightness, right? You have to work harder but the twinkle causes problems here too. We want to focus all the light of a small star onto a very small point so we get a sharp image of it. But the twinkle effect moves the apparent location of the star around, and that results in blur. And if the star is dim enough you never end up with enough light in any one place to be able to see it at all. So there is a limit to the amount of light gathering that is useful. The 200" telescope superseded an earlier 100" model. Doubling the diameter theoretically gave the mirror four times the light gathering ability but in actuality it didn't work four times better. And there was another problem: gravity.
The mirror needs to have an extremely specific shape. If not then the light from a star won't all land at exactly the same place. It turns out polishing the mirror to the extreme level of precision necessary to give it the right shape was relatively easy. The problem was to make the mirror keep its shape. To do this it was made extremely stiff. And that was achieved by using a large, thick piece of glass. And that made it very heavy.
And as you use the telescope you are moving it around. And that means gravity is coming at it from one angle now and another angle later. It worked. But all the calculations that went into the design said it wouldn't work if you made the mirror a lot bigger, say 400". And all this weight made the telescope hard to operate. You needed very big motors to move it around. And these motors had to position the mirror very accurately or everything was a big waste of time. So for a long time it looked like 200" was as big as it was practical to go.
But modern telescopes are much bigger. Astronomers use the metric system so astronomers call the Hale not a 200" telescope but a 5 meter one (200" is almost exactly 5 meters). There are now lots of telescopes with larger mirrors. The Keck telescopes in Hawaii have 10 meter mirrors. The Europeans are currently building a 40 meter telescope and designs have been proposed for even larger ones. The James Webb Space Telescope will house a mirror larger than that of the Hale but in space. Something obviously changed but what?
The easiest to understand change is that of going from one single main mirror to a main mirror made up of several segments. Each individual mirror segment is much smaller than the mirror as a whole. This involved solving the math problem of determining the specific shape each mirror segment needed to be. Then the harder problem of making mirror segments in these odd shapes needed to be solved but it was. So what's the advantage of segmenting the mirror? It means that the stiffness problem is much easier to solve. One edge of the 200" Hale mirror has to be maintained in a very precise relationship with the other edge. But if the edges are now say 50" apart this becomes much easier to do. So the mirror glass can be much thinner and still be stiff enough. And that saves a lot of weight.
The other design change was to dial way back on the whole "stiff" thing. Instead of depending on the mirror's built-in strength to maintain its shape, what if we take very precise measurements, determine what corrections need to be made, and then bend the glass until it is in the proper shape? Computer power and lasers made it possible to perform the measurements and calculate the corrections to the necessary accuracy. Then it was a simple process to put a gadget (actually several gadgets) on the back of each mirror segment to bend it the right amount to get the shape right.
It now became important to make the glass thin enough that it could be bent appropriately. This in turn took a lot more weight out and allowed everything to be lighter and cheaper. So this new "bend on demand" approach made it possible to have a mirror whose effective size was much larger than before but which still had the right shape. But what about the twinkle problem?
It turns out that this "bend on demand" capability came to the rescue here too. With even more computing power (now cheap) it was possible to calculate exactly how much the irregularities in the atmosphere were messing things up. This made it possible to calculate how to bend the mirror out of what would normally be the "correct" shape just enough to undo the twinkling the atmosphere was causing. The corrections would have to be calculated and applied frequently (about 100 times per second) but it meant that it was possible to take a much sharper picture of the sky from the bottom of the atmosphere.
This process is called "adaptive optics" and all the big telescopes now have adaptive optics systems. The last thing you need to make it work is a "guide star". Measuring it allows the distortions and corrections to be calculated. The guide star can also be used to verify that the right correction was applied. If there is a bright star handy close to the portion of the sky you want to point your telescope at then it can be used. Otherwise, synthetic guide stars can be created using lasers and other tricks. And with that, let me return to Asimov's book.
Remember that chromatic aberration I was talking about? And remember the old saw about "turning a problem into an opportunity"? That's the first subject Asimov gets into. If you introduce a piece of glass into the path of the light from your telescope it will bend the light. And it will bend the light by different amounts depending on the frequency of the light. The piece of glass used for this purpose is called a "prism". A well designed prism will accentuate this phenomenon. Why do it? Because this allows us to study each frequency of, say, the light from a specific star separately. And that allows us to learn a lot about the star. Studying the various frequencies is called spectroscopy and a device for spreading those frequencies out so that each frequency can be examined individually is called a spectroscope and the resulting pattern a spectrograph. (And this whole business of studying the spectrum of light also goes back to Isaac Newton.) So what can we learn from the spectrum of a star?
"White" light will contain all frequencies and the intensity of each frequency will follow a specific pattern. But real light always has bands that are either brighter or darker than they are supposed to be. Franhoffer first reported this in 1814. The dark lanes are "absorption" lines where something has absorbed a particular frequency as the light passes through it. The brighter lines are "emission" lines. Something, say a candle, has emitted extra light in particular frequencies. It didn't take long for scientists to speculate that specific elements caused specific lines. It turned out that they were right.
Early work identified the then-new elements Cesium and Rubidium. Then in 1868 Helium was identified in the spectrum of sunlight. Cesium and Rubidium were rare but could be found on earth if you looked hard. At that time no one had found any Helium anywhere. (It was later discovered to be a trace component of natural gas and can also be found in even smaller quantities elsewhere.) The use of spectroscopy to discover solar Helium was a big deal and really put the technique on the map.
These early spectroscopy studies were done "by eye". The first major step in moving away from this was the invention of the daguerreotype, an early photographic method, in 1839. As the century progressed photographic techniques improved and photographs became more common in astronomy. And photography made it possible to use telescopes in a different way. The standard way is to peer very closely at a small part of the sky. But this means you can't do a broad "survey" of a larger portion of the sky. Then in the 1930's Schmidt came up with a telescope design that could do surveys. But you could not do it with your eye. You had to use photography. This was an early example of moving on beyond what the naked eye could show us.
But spectroscopy turned out to have very down to earth uses. Around 1800 Herschel moved a thermometer beyond the visible part of the spectrum. He detected a warming. He had discovered infrared light, light with a frequency lower than that of red. Additional investigation by others led to the discovery of ultraviolet, light with a frequency higher than violet. The "electromagnetic" spectrum has since been expanded to include (from highest to lowest) gamma rays and x-rays (usually subdivided into hard (higher frequency) and soft (lower frequency)) on the high end and various forms of radio waves (from highest to lowest: microwave, shortwave, and long wave) on the low end.
It turns out gamma rays, x-rays, and infrared waves are all very effectively blocked by our atmosphere. But radio waves are not. In 1933 Jansky detected radio waves coming from the sky. This was the start of what is now a booming field of endeavor, radio astronomy. Radio astronomy was in its infancy when Asimov wrote his book. Big dish antennas were just coming into being. And the now famous giant dish at Arecibo in Puerto Rico was not completed until 1963. Nor had Lasers been invented yet. But the predecessor to the Laser, the first Maser, had been built. The idea is the same as that behind the Laser but a Maser uses radio waves and a Laser uses light waves. It was easier to pull the necessary engineering off with radio waves so the Maser came first by about ten years. And, although radio astronomers invented the Maser and had done so before the book was written, they were still figuring out how to make the best use of Masers so Asimov does not even mention them.
Another technique that is now in common use is radio interferometry. This is the process of combining signals from two or more widely separated radio antennas to create a "synthetic aperture" that is as large as the distance between the antennas. A problem Asimov does address is the fact that some radio wavelengths are very long. This means you need very large equipment to do anything with them. The synthetic aperture scheme got around this problem and allowed radio astronomers to make effective use of low frequency (long wavelength) radio waves. But this came well after Asimov's book. And this synthetic aperture scheme has now been adapted for use with light telescopes. There are actually two Keck telescopes in Hawaii. They can be operated together in such a way as to behave like they are one single telescope with a mirror diameter of 85 meters (almost 300 feet).
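What a bigger aperture (real or synthetic) buys is angular resolution, which is set by the diffraction limit of roughly 1.22 times the wavelength divided by the aperture. A rough Python comparison for the Keck pair, with the observing wavelength chosen purely as an illustration:

```python
import math

# Diffraction limit: smallest resolvable angle ~ 1.22 * wavelength / aperture.
wavelength = 2.2e-6        # meters, a near-infrared wavelength (illustrative)

for label, aperture in [("single 10 m Keck mirror", 10.0),
                        ("85 m Keck-Keck baseline", 85.0)]:
    theta = 1.22 * wavelength / aperture                  # radians
    milliarcsec = math.degrees(theta) * 3600 * 1000
    print(f"{label}: ~{milliarcsec:.1f} milliarcseconds")
```

The same formula is why long radio wavelengths demand such enormous (real or synthetic) dishes in the first place.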
One issue Asimov does get into is mapping the Milky Way. As Asimov put it, "[i]n a sense, the galaxy hardest for us to see is our own." Radio astronomy has been a big help. Light is blocked by clouds of dust like the "coal sack" in the southern hemisphere. But radio waves can penetrate dust easily. This led to the mapping of the Orion, Perseus, and Sagittarius arms of the Milky Way. But these arms are only partially mapped and there has not been a lot of progress since Asimov's book. And the existence of a giant black hole (it weighs millions of times as much as our Sun) in the center of our galaxy was totally unsuspected at that time. As was the Cosmic Microwave Background, the single biggest discovery in the field of radio astronomy. It was discovered less than a decade after the book was written.
Asimov does list a number of achievements that had been racked up by radio astronomy. These include a number of bright radio sources in the sky, the fact that sunspots emit radio waves, the fact that the atmospheres of both Venus and Jupiter are turbulent enough to emit radio waves (as does the cloud surrounding the Crab Nebula), the fact that galaxies collide, and others. And then there is that workhorse of spectroscopy, the Doppler effect, which radio astronomers also put to good use. As I have indicated elsewhere, it can be used to measure the speed with which stars (and any other object bright enough to allow a detailed spectrograph to be taken) are moving toward or away from the Earth. Asimov reports the results of early Doppler work.
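The relationship behind those speed measurements is simple as long as the speed is well below the speed of light: the fractional shift of a spectral line equals the speed toward or away from us divided by the speed of light. A small sketch with an illustrative shift:

```python
# Non-relativistic Doppler shift: v = c * (observed - rest) / rest.
# A positive result means the source is receding (the wavelength is stretched).
c = 299_792_458                 # speed of light, m/s
rest_wavelength = 656.28        # hydrogen-alpha line, nanometers
observed_wavelength = 656.50    # illustrative measured value

v = c * (observed_wavelength - rest_wavelength) / rest_wavelength
print(f"source receding at about {v / 1000:.0f} km/s")   # ~100 km/s
```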
At first it might seem like this is a story of scientific results being overturned but that is not so. In fact, what is on display is a progress toward more and better information. This does result in some scientific theories being overturned. But that's why scientific theories are theories. The possibility always exists that new data will come along that will show them to be wrong in some important way. But before scientists move on to a new theory they make sure it accounts for all the old observations.
And scientists are pretty good at figuring out how solid their theories are. Scientists see it as part of their job to theorize before all the data is in. But they admit it. They even have a name for this sort of thing. It's called a WAG, a Wild-Assed Guess. As more data comes in this might be replaced by a SWAG, a Scientific Wild-Assed Guess. Neither rises to the level of a "theory", which must have more support. And some theories are "tentative" while others are "pretty solid". From there it can move on to being a "well tested" or "foundational" theory or not. It depends on the data.
And mostly what we see are new discoveries made possible by new and better tools. The new information does not overturn the old. It supplements it or "opens new vistas". Large though it is, there were no tools in 1960 that could have detected the giant black hole at the center of our galaxy. Scientists knew roughly where the center of our galaxy was ("somewhere in the Sagittarius Constellation") but were the first to admit they knew little to nothing about what was there.
Asimov starts the chapter with yet another reference to the 200" Hale telescope situated on Mt. Palomar in California. That gives me a chance to digress into some telescope basics. The telescope was invented about 1609 and popularized by Galileo. Before that astronomical observations were made by eye. About 3,500 stars are visible in the northern hemisphere with the naked eye. We now know that this is literally a drop in the ocean compared to the actual number of stars in the sky. Even in this period it turns out that what was most important to astronomers was the ability to measure angles as accurately as possible.
To address this problem people devised instruments. These included the quadrant, octant, and later the sextant commonly used by sailors for hundreds of years. This process culminated with the efforts of Tycho Brahe. He developed the most precise instruments ever devised to make human eye angle measurements. He died in 1601 just before the telescope was invented. His super-precise (for the time) measurements uncovered problems with the Ptolemaic astronomical system that had been in use for centuries. But before people could digest this and decide on an appropriate response the observations of Galileo started becoming known and they really threw a monkey wrench into the works. This diverted attention from what Brahe had found but the Brahe measurements ultimately bolstered the anti-Ptolemy side of the argument. Back to telescopes.
So what's the point? What does the telescope bring to the table? The first and most obvious thing a telescope does is magnify things. This instantly makes it possible to measure angles far more accurately than was possible with the naked eye. It also makes it possible to view things that are too small to see with just the naked eye. Galileo discovered the "mountains of the moon". He observed shadows cast by what appeared to be mountains when observing the edge between the bright and dark parts of the face of the moon. These features are too small to be seen reliably with the naked eye. He also was able to make out moons around Jupiter. Again, these are too small to see with the naked eye. Galileo saw a lot more things that upset the traditional authorities but that's enough for my purposes.
But being able to see the moons of Jupiter leads to another characteristic of telescopes. They can make dim things brighter. It is hard to see the moons of Jupiter not only because they are small but because they are dim. Galileo was able to easily make them out because his telescope made them brighter. This is generally referred to as "light gathering power". The previous property (making things bigger) is generally referred to as "magnification". And these are the two primary characteristics that the telescope brings to the table.
Now let's take a quick look at telescope design. The telescope we are most familiar with is the sailor's telescope prominently on display in swashbuckler movies. It consists of a tube with mirrors at each end. It does both of the things associated with telescopes. The lens at the front is bigger than the human eye and gathers more light making the image brighter. The two lenses work together to magnify the image. This makes it easy to see the other guy's ship in detail when it is many miles away on the horizon. And this is the design Galileo used. But it has a problem.
Lenses work because the material they are made of has a different index of refraction. All you need to know is that it is a property of things and that it is easily measured. A vacuum has an index of refraction. Air has a slightly different index of refraction. Water has a still different index of refraction as does the glass telescope lenses are made of. As light passes from material with one index of refraction to material with a different index it bends. Clever design allows this idea to be turned into a telescope. So what's the problem?
It turns out that the index of refraction depends on frequency. So lens glass bends red light through a different angle than it bends blue light. This isn't much of a problem with a sailor's telescope but it soon became a big problem with the large very precise telescopes that people built to observe the heavens with. This problem is called "chromatic aberration". It affects any optical device that passes light through lenses and processes more than one frequency of light. But there is a solution. And the man that found it was Isaac Newton. Talk about genius.
By now the general idea should be obvious. Don't pass light through lenses. But how could a telescope be made without lenses? Mirrors also change the direction light travels. Curved mirrors can bend light in just the way necessary to make a telescope. And that's what Newton (yes -- same guy) did. He put a big for the time (6") mirror at the bottom of a tube. Then he put a small (compared to the big mirror) flat mirror near the top of the tube. This mirror was angled at 45 degrees so that the light could come out a hole in the side of the telescope. That's where he put his eye. The eye contains a lens but the amount of chromatic aberration is still much smaller than with the old design (called a refracting telescope). This general class of designs is called a reflecting telescope as light reflects off the mirror. Pretty much all professional astronomical telescopes now use this reflecting design but with a modification.
A "cassegrain" telescope uses a perpendicular mirror at the front instead of a 45 degree one. The idea is to bounce the light back down the tube and through a relatively small hole in the center of the primary mirror. The astronomer's eye (then -- now sophisticated instruments) sits slightly behind the main mirror. The Hubble Space Telescope, for instance, is a cassegrain telescope. And for the first 400 years telescopes depended on the naked eye to make observations. But starting in the nineteenth century then more and more often as the twentieth century advanced photographic plates, often a foot on a side, were substituted. Since about 1970 the CCD or charged-couple device has been taking over from photographic plates. CCDs are the electronic equivalent of the photographic plate and are what the Hubble uses.
Now let me circle back to the Hale telescope. It was the largest in the world for about thirty years. Why? Isn't bigger always better. Maybe not. The point of a telescope is to make things larger and brighter. Lets take each in turn starting with magnification. For a while there was a telescope race as people came up with new designs to increase magnification. But the race didn't last long. Remember the "twinkle, twinkle, little star" nursery rhyme? Stars do actually twinkle.
We now know that this is caused by small irregularities in the air. Some air is slightly thicker than average and some air is slightly thinner than average. Winds move these thicker and thinner chunks around continuously. And it turns out that thicker air has a very slightly different index of refraction than thinner air. This causes these chunks of air to act like lenses and change the path of light. This means that sometimes the light from a star is being directed into your eye and sometimes it isn't. The star twinkles.
It 's kind of pretty when you are gazing fondly at the evening sky. It is really annoying when you are trying to observe something with a telescope. The greater the magnification the greater the twinkle effect. Telescopes were quickly built that were all twinkle and no observation. There was a limit to the amount of useful magnification a telescope could employ.
But none of this messes with the idea of increasing brightness, right? You have to work harder but it makes problems here too. We want to focus all the light of a small star onto a very small point so we get a sharp image of it. But the twinkle effect moves the location of the star around. And that results in blur. And if the star is dim enough you never end up with enough light in any one place to be able to see it at all. So this means there is a limit to the amount of light amplification that is useful. The 200" telescope superseded an earlier 100" model. Doubling the diameter theoretically gave the mirror four times the light gathering ability but in actuality it didn't work four times better. And there was another problem, gravity.
The mirror needs to have an extremely specific shape. If not then the light from a star won't all land at exactly the same place. It turns out polishing the mirror to the extreme level of precision necessary to give it the right shape was relatively easy. The problem was to make the mirror keep its shape. To do this it was made extremely stiff. And that was achieved by using a large, thick piece of glass. And that made it very heavy.
And as you use the telescope you are moving it around. And that means gravity is coming at it from one angle now and another angle later. It worked. But all the calculations that went into the design said it wouldn't work if you made the mirror a lot bigger, say 400". And all this weight made the telescope hard to operate. You needed very big motors to move it around. And these motors had to position the mirror very accurately or everything was a big waste of time. So for a long time it looked like 200" was as big as it was practical to go.
But modern telescopes are much bigger. Astronomers use the metric system so astronomers call the Hale not a 200" telescope but a 5 meter one (200" is almost exactly 5 meters). There are now lots of telescopes with larger mirrors. The Keck telescopes in Hawaii have 10 meter mirrors. The Europeans are currently building a 40 meter telescope and designs have been proposed for even larger ones. The James Web Space Telescope will house a mirror larger than that of the Hale but in space. Something obviously changed but what?
The easiest to understand change is that of going from one single main mirror to a main mirror made up of several segments. Each individual mirror segment is much smaller than the mirror as a whole. This involved solving the math problem of determining the specific shape each mirror segment needed to be. Then the harder problem of making mirror segments in these odd shapes needed to be solved but it was. So what's the advantage of segmenting the mirror? It means that the stiffness problem is much easier to solve. One edge of the 200" Hale mirror has to be maintained in a very precise relationship with the other edge. But if the edges are now say 50" apart this becomes much easier to do. So the mirror glass can be much thinner and still be stiff enough. And that saves a lot of weight.
The other design change was to dial way back on the whole "stiff" thing. Instead of depending on the mirror's built-in stiffness to maintain its shape, what if we take very precise measurements, determine what corrections need to be made, and then bend the glass until it is in the proper shape? Computer power and lasers made it possible to perform the measurements and calculate the corrections to the necessary accuracy. Then it was a simple matter to put a gadget (actually several gadgets) on the back of each mirror segment to bend it by the right amount to get the shape right.
It now became important to make the glass thin enough that it could be bent appropriately. This in turn took a lot more weight out and allowed everything to be lighter and cheaper. So this new "bend on demand" approach made it possible to have a mirror whose effective size was much larger than before but which still had the right shape. But what about the twinkle problem?
It turns out that this "bend on demand" capability came to the rescue here too. With even more computing power (now cheap) it was possible to calculate exactly how much the irregularities in the atmosphere were messing things up. This made it possible to calculate how to bend the mirror out of what would normally be the "correct" shape just enough to undo the twinkling the atmosphere was causing. The corrections would have to be calculated and applied frequently (about 100 times per second) but it meant that it was possible to take a much sharper picture of the sky from the bottom of the atmosphere.
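Schematically, that correction cycle is just measure, compute, bend, repeat, roughly 100 times per second. The sketch below is a toy illustration of the loop, not anything resembling a real observatory's control software; every name in it is hypothetical:

```python
import time

CORRECTION_RATE_HZ = 100  # corrections applied roughly 100 times per second, per the text
NUM_ACTUATORS = 36        # hypothetical number of "bend on demand" gadgets behind a segment

def measure_wavefront_error():
    """Stand-in for measuring how the atmosphere is distorting the guide star right now."""
    return [0.0] * NUM_ACTUATORS

def compute_corrections(errors):
    """Turn measured distortions into 'bend this much here' commands (here, simply negate)."""
    return [-e for e in errors]

def apply_to_mirror(commands):
    """Stand-in for pushing on the actuators behind the mirror segment."""
    pass

def adaptive_optics_loop(cycles):
    for _ in range(cycles):
        apply_to_mirror(compute_corrections(measure_wavefront_error()))
        time.sleep(1.0 / CORRECTION_RATE_HZ)  # wait for the next correction cycle

adaptive_optics_loop(cycles=5)
```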
This process is called "adaptive optics" and all the big telescopes now have adaptive optics systems. The last thing you need to make it work is a "guide star". Measuring it allows the distortions and corrections to be calculated. The guide star can also be used to verify that the right correction was applied. If there is a bright star handy close to the portion of the sky you want to point your telescope at then it can be used. Otherwise, synthetic guide stars can be created using lasers and other tricks. And with that, let me return to Asimov's book.
Remember that chromatic aberration I was talking about? And remember the old saw about "turning a problem into an opportunity"? That's the first subject Asimov gets into. If you introduce a piece of glass into the path of the light from your telescope it will bend the light. And it will bend the light by different amounts depending on the frequency of the light. A piece of glass designed for this purpose is called a "prism", and a well designed prism will accentuate the phenomenon. Why do it? Because this allows us to study each frequency of, say, the light from a specific star separately. And that allows us to learn a lot about the star. Studying the various frequencies is called spectroscopy, a device for spreading those frequencies out so that each can be examined individually is called a spectroscope, and the resulting recorded pattern is a spectrum. (And this whole business of studying the spectrum of light also goes back to Isaac Newton.) So what can we learn from the spectrum of a star?
"White" light will contain all frequencies and the intensity of each frequency will follow a specific pattern. But real light always has bands that are either brighter or darker than they are supposed to be. Franhoffer first reported this in 1814. The dark lanes are "absorption" lines where something has absorbed a particular frequency as the light passes through it. The brighter lines are "emission" lines. Something, say a candle, has emitted extra light in particular frequencies. It didn't take long for scientists to speculate that specific elements caused specific lines. It turned out that they were right.
Early work identified the then-new elements Cesium and Rubidium. Then in 1868 Helium was identified in the spectrum of sunlight. Cesium and Rubidium were rare but could be found on earth if you looked hard. At that time no one had found any Helium anywhere. (It was later discovered to be a trace component of natural gas and can also be found in even smaller quantities elsewhere.) The use of spectroscopy to discover solar Helium was a big deal and really put the technique on the map.
These early spectroscopy studies were done "by eye". The first major step in moving away from this was the invention of the daguerreotype, an early photographic method, in 1839. As the century progressed photographic techniques improved and photographs became more common in astronomy. And photography made it possible to use telescopes in a different way. The standard way is to peer very closely at a small part of the sky. But this means you can't do a broad "survey" of a larger portion of the sky. Then in the 1930's Schmidt came up with a telescope design that could do surveys. But you could not do it with your eye. You had to use photography. This was an early example of moving on beyond what the naked eye could show us.
But spectroscopy turned out to have very down to earth uses. Around 1800 Herschel moved a thermometer just beyond the red end of the visible spectrum. He detected a warming. He had discovered infrared light, light with a frequency lower than that of red. Additional investigation by others led to the discovery of ultraviolet, light with a frequency higher than violet. The "electromagnetic" spectrum has since been expanded to include (from highest frequency to lowest) gamma rays and x-rays (usually subdivided into hard (higher frequency) and soft (lower frequency)) on the high end and various forms of radio waves (from highest to lowest: microwave, shortwave, and long wave) on the low end.
It turns out gamma rays, x-rays, and infrared waves are all very effectively blocked by our atmosphere. But radio waves are not. In 1933 Jansky detected radio waves coming from the sky. This was the start of what is now a booming field of endeavor, radio astronomy. Radio astronomy was in its infancy when Asimov wrote his book. Big dish antennas were just coming into being. And the now famous giant dish at Arecibo in Puerto Rico was not completed until 1963. Nor had Lasers been invented yet. But the predecessor to the Laser, the Maser, had been built. The idea is the same as that behind the Laser, but a Maser uses radio waves and a Laser uses light waves. It was easier to pull the necessary engineering off with radio waves, so the Maser came first by about ten years. And although the Maser had been invented before the book was written, radio astronomers were still figuring out how to make the best use of Masers, so Asimov does not even mention them.
Another technique that is now in common use is radio interferometry. This is the process of combining signals from two or more widely separated radio antennas to create a "synthetic aperture" that is as large as the distance between the antennas. A problem Asimov does address is the fact that some radio wavelengths are very long. This means you need very large equipment to do anything with them. The synthetic aperture scheme got around this problem and allowed radio astronomers to make effective use of low frequency (long wavelength) radio waves. But this came well after Asimov's book. And this synthetic aperture scheme has since been adapted for use with light telescopes. There are actually two Keck telescopes in Hawaii. They can be operated together in such a way that, in terms of resolution, they behave like one single telescope with a mirror 85 meters (almost 300 feet) across.
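A rough way to see what that 85 meter figure buys you is the diffraction limit, where angular resolution improves in direct proportion to the aperture (or baseline). This is back-of-the-envelope only, using the standard 1.22 λ/D approximation, not a description of how interferometer data is actually combined:

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Approximate smallest resolvable angle, 1.22 * wavelength / aperture, in arcseconds."""
    return math.degrees(1.22 * wavelength_m / aperture_m) * 3600

GREEN_LIGHT = 550e-9  # meters, roughly the middle of the visible band

print(diffraction_limit_arcsec(GREEN_LIGHT, 10))  # one 10 meter Keck mirror on its own
print(diffraction_limit_arcsec(GREEN_LIGHT, 85))  # the two Kecks used as an 85 meter baseline
```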
One issue Asimov does get into is mapping the Milky Way. As Asimov put it, "[i]n a sense, the galaxy hardest for us to see is our own." Radio astronomy has been a big help. Light is blocked by clouds of dust like the "coal sack" in the southern hemisphere. But radio waves can penetrate dust easily. This led to the mapping of the Orion, Perseus, and Sagittarius arms of the Milky Way. But these arms are only partially mapped and there has not been a lot of progress since Asimov's book. And the existence of a giant black hole (it weighs millions of times as much as our Sun) in the center of our galaxy was totally unsuspected at that time. As was the Cosmic Microwave Background, the single biggest discovery in the field of radio astronomy. It was discovered less than a decade after the book was written.
Asimov does list a number of achievements that had been racked up by radio astronomy. These include the discovery of a number of bright radio sources in the sky, the fact that sunspots emit radio waves, the fact that the atmospheres of both Venus and Jupiter are turbulent enough to emit radio waves (as does the cloud surrounding the Crab Nebula), the fact that galaxies collide, and others. And then there is that workhorse of spectroscopy, the Doppler effect, which radio astronomers also put to good use. As I have indicated elsewhere, it can be used to measure the speed with which stars (and any other object bright enough to allow a detailed spectrum to be taken) are moving toward or away from the Earth. Asimov reports the results of early Doppler work.
At first it might seem like this is a story of scientific results being overturned but that is not so. In fact, what is on display is a progress toward more and better information. This does result in some scientific theories being overturned. But that's why scientific theories are theories. The possibility always exists that new data will come along that will show them to be wrong in some important way. But before scientists move on to a new theory they make sure it accounts for all the old observations.
And scientists are pretty good at figuring out how solid their theories are. Scientists see it as part of their job to theorize before all the data is in. But they admit it. They even have a name for this sort of thing. It's called a WAG, a Wild-Assed Guess. As more data comes in this might be replaced by a SWAG, a Scientific Wild-Assed Guess. Neither rises to the level of a "theory", which must have more support. And some theories are "tentative" while others are "pretty solid". From there it can move on to being a "well tested" or "foundational" theory or not. It depends on the data.
And mostly what we see are new discoveries made possible by new and better tools. The new information does not overturn the old. It supplements it or "opens new vistas". Large though it is, there were no tools in 1960 that could have detected the giant black hole at the center of our galaxy. Scientists knew roughly where the center of our galaxy was ("somewhere in the Sagittarius Constellation") but were the first to admit they knew little to nothing about what was there.
Saturday, February 27, 2016
Obama Bad for Blacks?
The idea that President Obama has been bad for blacks has been popping up recently. I suspect this is news to people who depend on conservative sources for their news. But it is now an actual thing in some circles. And most people would label the proposition preposterous. But is it?
It certainly is to conservatives. There he is seen as the unpatriotic President, the anti-white President, the Muslim "president" who was born somewhere else (Kenya, Indonesia, wherever but definitely not in the USA), and, most damningly, THE BLACK PRESIDENT. Given all this he positively must have been good for blacks, right?
I don't buy any of that (except the "Black President" part - that is unarguably true). So I was skeptical when I first heard the idea raised. But there is an actual case there. And I have found that when an idea like this appears "out of nowhere" it almost always turns out that it came from a specific somewhere. I was mystified as to where that somewhere might have been until I saw a mini-review of a book called "Democracy in Black" by Eddie S. Glaude, Jr. in The New Yorker. Mr. Glaude is the head of Princeton's African-American Studies program. The review indicates that the book contains "a scathing critique of the Obama Presidency" and "describes the 'devastation' suffered by black communities".
I am not positive that this is the one true source for this idea but it is definitely one of them. That's good enough for the purposes of this post. So is Mr. Glaude some ivory tower academic with a screw loose, an axe to grind, and a plan to raise his visibility? I haven't read the book so I can't definitively answer the question. But my strong suspicion is that he is not. To understand why let me instead ask a slightly different question: Have blacks done well during the Obama years? It turns out that this question is easy to answer and the answer is a resounding NO!
By every measure of economic success blacks are worse off than they were a few years ago. The number of blacks in the middle class is down. The percentage of employed blacks is down. The average income of blacks is down. And blacks have been hurt more than whites over the period in question. The middle class as a whole is smaller. But the black middle class has shrunk more than the white middle class. Black unemployment rates are much higher than white unemployment rates. The gap between black income and white income has increased. Blacks have done badly in both the absolute sense and the relative sense. Mr. Glaude lays the case out in considerable detail (at least that's what the reviews I have read say).
So blacks have done badly under President Obama. Is it his fault? There is this theory that the President is responsible for everything, whether good or bad, that happens on his watch. If you buy this theory then the answer is yes -- President Obama is responsible. But this theory is inconsistently applied. To pick just one example: Jeb Bush has frequently stated that his brother George W. Bush "kept us safe". Liberals have long said "wait -- what about 9/11" but this observation was ignored until Donald Trump started repeating it.
Let's take a more thorough look at the "blame" question. Mr. Glaude apportions a generous share of the blame to the President. Then he moves on to what I feel are more deserving recipients. But before moving on let's take a deeper look at Obama's role.
Most of the damage was done by the economic crash. That definitely did not happen on Obama's watch. It happened on Bush's watch. This inconvenient truth is ignored by conservatives and Republicans, who have invented any number of fanciful excuses for why it is actually Obama's fault. None of them hold water. But what definitely did happen on Obama's watch was his response. And the first major action he took was the "stim", his 800+ billion dollar stimulus package.
Conservatives have argued that it was too big. Liberals have argued that it was too small. The Obama Administration has argued, rightly I believe, that it was as big as they could make it and still get it through Congress. If the package had been smaller (or not passed at all) it would have resulted in more damage to black economic interests than was the actual result. So the "stim" was helpful to blacks. But it was also race neutral. It was directed neither toward nor away from blacks. So it was not really an initiative intended to directly help or hurt blacks. So its effect was best described as neutral.
There was, however, a secondary effect stemming from the "stim". In my opinion it was poorly constructed. This was done in what turned out to be a futile effort to attract Republican support. The "stim" was composed of roughly one third temporary tax cuts (many of which were later made permanent), one third one time subsidies to state and local governments, and one third spending, primarily on infrastructure. As predicted, the first two thirds were not very effective in stimulating the economy. But the last third is the part I want to focus on.
Infrastructure projects often include a ribbon cutting ceremony and these ceremonies now commonly feature a giant "check" so that the local TV stations will have a good visual to put on the evening news. And, of course, there is always a smiling politician standing next to the check taking credit for the funding and, by inference, for his ability to bring home the bacon. Many of these ceremonies featured Republicans who had voted against the "stim". And in absolutely every case the Republican in question was careful to ensure that voters were kept ignorant of the fact that the money to pay for the project came from the "stim". They were thus able to portray themselves as effective at roping in Federal money while simultaneously decrying the hated "stim" as useless spending. This was one of many tactics Republicans used to rack up big wins in midterm elections in 2010.
And Republicans used the 2010 election win as leverage with which to gut programs that directly and indirectly affected the financial wellbeing of black people. So by letting the Republicans get away with this trick the "stim" indirectly hurt the economic wellbeing of black people.
Conservatives also railed against Eric Holder, a black man and Obama's first Attorney General. Holder was supposed to have some kind of diabolical pro-black agenda whose details now escape me. But Holder spent most of his time in his first few years dealing with the financial crisis. I can think of no Holder/Obama initiatives on the traditional "war on crime" / "war on drugs" front during this period.
And one thing he did, or more accurately failed to do, was anything effective with respect to Wall Street. One or perhaps two low level people went to jail and eventually a number of large fines were levied and collected. But this was seen at the time as largely ineffective and nothing has happened since to change this judgment. Since Wall Street is almost exclusively a white enclave (the exception being Asians employed in technical rather than leadership positions) an argument can be made that this was anti-black. But this is another example of an essentially color blind policy where one can argue that a secondary effect hit blacks harder than whites. But this same secondary effect hit a lot of whites very hard too.
Another example of a color blind policy where it could be argued that blacks suffered disproportionately more harm was in dealing with the "Foreclosure Mess" aspect of the Financial Crisis. As the economy melted down a lot of people were put into foreclosure. There is a lot of blame here but I am going to focus on the actual Foreclosure process as that happened almost entirely on Obama's watch. A lot of just plain bad execution went into the process. But a lot of criminal behavior was documented too. There was illegal "robo-signing". Houses were foreclosed even though the owners were in negotiation with the mortgage holder. There were even cases where the wrong house was foreclosed on. But here again few if any were prosecuted and sent to jail. Many otherwise law abiding and stable black families were swept up in this, more than the standard distributions and statistics would predict. But the disproportionate impact on black people was a secondary effect. It was not an intended result of Obama's or Holder's policies.
So in all these cases the effect was unintended and fairly modest. There were no Obama administration programs that were designed to disadvantage blacks. And, as I indicated above, the early Holder years included no activity on the crime/drugs front. In fact the Obama Administration maintained a hard line on the War on Drugs until well into his second term when he finally slightly loosened up on Marijuana.
On the crime front, there was no policy push at all in the first term. And nothing was done with respect to specific cases like the Trayvon Martin case. The shooting happened in February of 2012 and the case played out in the months following with essentially no Federal involvement. At this point we are just short of four years in. It was only with the Michael Brown shooting in Ferguson, Missouri, in August of 2014, roughly six years in, that the Justice Department made any pro-black move. They eventually became deeply involved. The Justice Department has since been involved in a number of cases. Blacks are right to fault Obama and the Justice Department for being somewhere between neutral and hostile when it came to blacks and the criminal justice system. Claims to the contrary by Republicans are just wrong with respect to roughly the first six years of Obama's time in office.
I have made what I would consider a very weak case for the claim of Glaude and others. He presumably made a much better one. And, for the sake of argument, let's assume the case has been convincingly made. What does Glaude suggest? He makes what I consider two sensible suggestions and one idiotic one. The idiotic suggestion is his "blank-out" one. He suggests that black voters should vote for no one in the 2016 Presidential election. The idea is to punish Democrats for taking the black vote for granted. This is a truly idiotic suggestion.
A blank-out vote is effectively a Republican vote. We have tried the experiment. Blacks and others turned out in large numbers in 2008 and 2012, the years Obama was running. In those years, particularly in 2008, Democrats did well. Did they then work hard to advance a pro-black agenda? No. But look at what happened in 2010 and 2014. Blacks stayed home, they effectively blanked-out in Senatorial, Congressional, and state elections. And Republicans did very well and picked up lots of seats.
Unlike Democrats or the Obama Administration, Republicans have been actively hostile to a pro-black agenda. To take the simplest example, they have put in place draconian voter-ID laws in many states. These laws are effective at their intended purpose, namely denying blacks the vote. Republicans at the state level (with an assist from the Supreme Court) have often not implemented the Medicaid component of Obamacare. This has denied poor people (disproportionately black) access to medical care they can afford. There are other examples. So recent history has conclusively shown that a blank-out vote by blacks is an anti-black vote. So what should blacks do instead?
Frankly, they should do what Republicans do and have been doing for a couple of decades now. And Glaude advocates that in his other two points, the ones I agree with. One reason Obama has been so late to the party and why what successes he has had have been so modest is a lack of engagement by his supporters. This includes but is not limited to blacks. Blacks vote every four years and then they stay home the rest of the time and expect Obama to work miracles all by himself.
Republicans have many powerful, effective, and well organized pressure groups. The religious right is only one of many. Karl Rove's ability to mobilize and turn out the religious right in larger numbers than Democrats could imagine is widely credited for Bush's victory over Kerry in 2004. And this one group has blackmailed the Republican party into its current rabid hostility to abortion. Abortion is low on most voters' priority lists, but any Republican officeholder who is not fiercely anti-abortion lives in fear of being primaried.
Opinions on abortion have not changed much in recent years. But the country has made a rapid shift from being strongly anti-gay to being modestly pro-gay. But being fiercely anti-gay is another litmus test within the GOP. Very few Republicans who hold office or are contemplating running for office try to buck the prevailing wind on this issue. Other constituencies within the Republican party guard other issues and agendas fiercely. And the way they keep their elected officials in line is not by staying home. They do it by threatening primary challenges. This does not, at least in theory, jeopardize control of the seat.
But there are few Democratic pressure groups who reliably deliver votes if Democratic candidates toe the line and credibly threaten primary challenges if they don't. Take the classic black pressure group, the NAACP. The NAACP almost went out of business in the '90s. Then Republicans started making anti-black moves like voting restrictions. This breathed some life back into the NAACP so it is still around. But it cannot turn out large blocs of voters, so it has little actual power. Theoretically there are pro-women's groups, and anti-war groups, and so on. But they are not as effective as their Republican equivalents. If they were, Republicans would be unable to consistently clean up in off year elections and would not have achieved a near-lock on state government. Glaude advocates that blacks get way more active and organized and I agree with him.
Glaude also advocates that blacks be more willing to "disturb the peace". Theoretically, this level of activism is counterproductive. But gays used it effectively. They went from being invisible to visible to effective in advocating for their agenda. We have seen the same thing with the Black Lives Matter movement. There have been people within the black community advocating for years. But they were not heard. The "in your face" tactics of many in the Black Lives Matter movement have resulted in attitude and policy changes. In the years before Black Lives Matter these issues were not a priority of this or previous administrations. But consistent pressure exerted over a broad front and for a substantial period of time has garnered results.
And there is no doubt that this is a tried and true and effective tactic of conservatives. We have gotten to the point where gun tragedies are completely routine. Yet the NRA is famous for its "take no prisoners" tactics. It is also notorious for its ability to set the legislative agenda. It has been completely successful at the national level and only slightly less successful at the state and local level. The anti-abortion movement has a similar track record. Clinics have been firebombed. Employees and patients have been threatened. Doctors have been murdered. Yet the NRA and anti-abortion movements are seen as legitimate. And, more importantly, they are effective.
And this is a broader problem. Republicans decided at the beginning of the Obama administration to blindly and categorically impede any Obama initiative. They have been ably supported by their pressure groups. This has allowed them to maintain this stance for a long time and simultaneously be successful at the ballot box. Obama has advanced many initiatives that are broadly popular but have gone nowhere. This is because he has generally been on his own. He has had little or no active support from pressure groups. More telling is the fact that he has been routinely subjected to vicious personal attacks. "He is not Christian." "He is anti-American." "He is not an American citizen." "He is exceeding his authority." Any similar attack launched against a Republican from outside the Republican party would result in a fierce counter-attack. But there is little or no response from outside pressure groups to these attacks on Obama.
There has generally been no sustained and vocal groundswell of support from the constituencies these initiatives are designed to support. This is true of black issues but also of issues pretty much up and down the line. Obama is expected to deliver on his own. An exception is the gay movement. There has been a lot of outside pressure and Obama has been able to get results. But this means that in most cases a Republican attack is free of cost. This has effectively tied Obama's hands behind his back. It should be no surprise that he has been less effective and slower to the starting line than many constituencies would like. On the other hand, Republicans in states have been very successful in pushing their agendas. Their pressure groups have been active and vocal. Groups that oppose these policies, on the other hand, have been pretty quiet.
We may now be on the brink of more "same old same old". We are seeing record turnouts for Republican caucuses and primaries. Blacks (and others) should be shocked and appalled by the promises being made by Trump and the rest of the GOP crowd. They should be pleased and comforted by the promises being made by Clinton and Sanders. But people are not turning out on the Democratic side, even for Sanders who has made increased turnout a cornerstone of his campaign. Sanders people seem to be up for turning out for a rally but not up for showing up on election/caucus day. Turnout so far on the Democratic side is substantially below 2008 levels. Blacks and other groups that traditionally support Democrats should be concerned. If we get a general election that looks more like 2010/2014 than 2008/2012 we will get results very like 2010/2014. And it will not be pretty for blacks.
The tank was supposed to revolutionize fighting in World War I. In its first couple of battles it did not. But it became a game changer when tactics were changed from "tanks can replace infantry" to "tanks and infantry can support each other". Blacks and others need to vote and to organize and to be in our faces. Voting is necessary to provide persuadable elected officials. But people also need to actively participate in the efforts of pressure groups working on their behalf. This is the tactic of tanks (pressure) plus infantry (voting) that worked in World War I, works for Republicans, and can work for blacks and other constituencies. We have lived through the political equivalent of "trench warfare", when infantry (voting) alone could not get the job done. Democrats need to go back to the methods they used successfully several decades ago. Then elected officials were beholden to active and vocal pressure groups. Politicians were expected to deliver results but in exchange they could depend on the pressure groups delivering the votes necessary to keep them in office.
Obama has a remarkable record of success given what he has been up against. Imagine what he could have achieved had he been properly supported.
Saturday, February 20, 2016
Digital Privacy
A story has recently broken about a fight between Apple Computer and the FBI. The context is the San Bernardino massacre which resulted in 14 deaths and many injuries. The perpetrators, Syed Farook and Tashfeen Malik, were dead within hours. So the "who" in "who done it" has been known for some time now. The only open questions have to do with how much help they got and from whom. There has been a lot of progress on that front too.
Enrique Marquez, a friend and neighbor, has been arrested. Among other things, he purchased some of the guns that were used by the perps. Literally hundreds of searches have been done and mountains of evidence have been seized. Online accounts of all kinds have been scrutinized. Even after all this effort there is more to be learned. A few days before I wrote this Syed's brother's house was searched. This was only the latest in a series of searches of his house.
As a result of all this effort the story is pretty much known. All that is left is to fill in some details. It is possible that a new major development could be unearthed in the future, say substantial participation by overseas terrorist groups. But the chances are small. And that brings us to the phone.
A tiny part of the mountain of seized evidence is a smart phone that belonged to one of the perps. It has been in FBI custody for some time now. But that hasn't stopped the FBI from being frustrated, literally. The phone is encrypted. The FBI has not been able to break or get around the encryption so they have not been able to access the contents of the phone. This literal frustration has not been for want of trying. At least that's the story from both the FBI and Apple Computer. The FBI has asked Apple for assistance and Apple has provided it. But the FBI now says Apple must take that assistance to a new level. And that's what the fight is about.
Before proceeding let me stop to make what I believe is an observation of monumental importance: properly implemented encryption actually works. Not even the FBI can break it.
Why is this so important? Have you seen an "action" movie or TV show any time in say the last 50 years? These shows often feature a scenario where encrypted data is critical, frequently a matter of life and death. Sometimes a good guy is trying to decrypt the bad guy's secret plan. Sometimes it is a bad guy trying to decrypt the good guy's security system so he can steal the secret formula or the invasion plans or whatever. Regardless, the scene is always handled in the same way.
A geek types away furiously while "action" visuals play out on screen and dramatic music (cue the "Mission Impossible" theme) plays underneath so we will know that important things are happening. This goes on for about 20 seconds of screen time which may represent perhaps a few hours or days of "mission" time. But we always have the "Aha" moment when the geek announces that the encryption has been cracked. And it never takes the geek more than a week to crack it. In fact it is common for the geek to only need a few minutes.
This is the pop culture foundation for a belief that is widespread and grounded in things that are a lot more solid than a TV script. We've all seen it over and over so it must be true. Any encryption system can be broken. All you need is the genius geek and perhaps a bunch of really cool looking equipment. People in the real world support this idea often enough for one reason or another that most people have no reason to doubt its veracity. But it is not true. And we know it is not true because the FBI has just told us. Let's look at why.
It starts with the fact that the FBI has publicly said that it has been unsuccessful in breaking Apple's encryption. This is in spite of the fact that they have had weeks in which to try and they have had a considerable amount of cooperation from Apple. But wait, there's more. Which government agency is the one with the most skill, equipment, and experience with encryption? The NSA (National Security Agency). It's literally what they do.
Before 9/11 it was possible to believe that the FBI and the NSA did not talk to each other. It was possible to believe this because it was true. But in the post-9/11 era those communications barriers were broken down and there is now close cooperation between the two agencies, especially on terrorism cases like this one. It is literally unbelievable that the FBI has not consulted with the NSA on this problem. And that means the NSA has also not been able to crack Apple's encryption either.
Let's say they had. Then the FBI could easily have covered this up by claiming that their own people had cracked the phone. Even if this was not believed it provides the standard "plausible deniability" that is commonly used in these situations. It doesn't matter if the official line is credible. It only matters that there is an official line that officials can pretend to believe. This is why I believe the NSA failed too. (For a counter-argument see below).
There is actually a lot of evidence that encryption works but it is the boring stuff that the media ignores. It gets dismissed as a "dog bites man" story. I worked in the computer department of a bank for a long time. They treated computer problems that could screw up data very seriously. "We are messing with people's money and people take their money very seriously." I then worked for a company that ran hospitals and clinics. After observing the culture there I remarked "If you want to see people who treat computer problems seriously, talk to bankers. They deal with money. Around here we only deal with life and death and that's not as serious." That's a cute way of highlighting that people take money very seriously. And every aspect of handling money now depends critically on encryption.
If even one of the common encryption systems used in the money arena could be cracked there is a lot of money to be made. Look at the amount of noise generated by people stealing credit card information. It has finally caused the credit card industry in the US to move from a '60s style magnetic stripe technology to a modern chip based (EMV) one. The important takeaway is that the hackers have never broken into a system by breaking the encryption. They have used what is generally referred to as a "crib". One of the most successful cribs goes by the name of Social Engineering. You call someone up (or email them or whatever) and talk them out of information you are not entitled to, like say a high powered user id and password. You use this information to break into the system.
Important data has been encrypted for many decades now. The DES standard was developed and implemented in the '70s. It is considered weak by modern standards, mostly because its 56 bit key is now short enough to be brute forced, but I know of no successful attempt to crack the algorithm itself. Still, the long voiced idea that "it might be crackable soon" has been enough to cause everybody to move on. Something called triple-DES was shown to be harder to crack after double-DES was shown to provide essentially no improvement. We have since moved on to other encryption standards.
A common one in the computer business is "Secure Sockets Layer" (SSL, since succeeded by TLS). Any web site with a prefix of HTTPS uses it. It is now recommended for general use instead of being restricted to use only in "important" situations. The transition has resulted in some variation of a "show all content" message popping up with annoying frequency. That's because the web page is linking to a combination of secure (HTTPS) and insecure (HTTP) web sites.
If the basic algorithm (computer formula) is sound the common trick is to make the key bigger. DES used a 56 bit key. The triple-DES algorithm can be used with keys that are as long as 168 bits. Behind the scenes, HTTPS has been doing the same thing. Over time the keys it uses have gotten longer and longer. And a little additional length makes a big difference. Every additional bit literally doubles the number of keys that need to be tested in a "brute force" (try all the possible combinations) attack.
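To make the "every extra bit doubles the work" point concrete, here is a quick calculation. The attack rate is an arbitrary round number picked for illustration, not a real-world benchmark:

```python
def worst_case_years(key_bits, keys_per_second=1_000_000_000):
    """Time to try every possible key at the assumed attack rate, in years."""
    return 2 ** key_bits / keys_per_second / (3600 * 24 * 365)

for bits in (56, 57, 80, 128):
    print(f"{bits} bit key: {worst_case_years(bits):.3g} years")
```

Going from 56 to 57 bits doubles the time; going all the way to 128 bits takes it from a couple of years to something wildly longer than the age of the universe.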
So piling on the bits fixes everything, right? No! It gets back to that crib thing. Let's say I have somehow gotten hold of your locked cell phone. What if I call you and say "I'm your mother and I need the key for your phone." Being a dutiful child you always do what your mother says so you give me the key. At this point it literally doesn't matter how long the key you use is. Actually no one would fall for so transparent a ploy but it illustrates the basic idea of Social Engineering. It boils down to tricking people into giving you information that you can use to get around their security.
If I can get your key I have effectively reduced your key length to zero. Cribs can be very complex and sophisticated but a good way to think of them is in terms of ways to reduce the effective key length. If I can find a crib that reduces the effective key length to ten bits that means a brute force attack only needs to try a little over a thousand keys to be guaranteed success. I once used a brute force approach to figure out the combination of a bicycle lock. The lock could be set to a thousand different numbers but only one opened it. It took a couple of hours of trying each possibility in turn but I eventually succeeded in finding that one number. Under ideal circumstances a computer can try a thousand possibilities in less than a second.
And Apple is well aware of this. So they added a delay to the process. It takes about a twelfth of a second to process a key. This means that no more than a dozen keys can be tried in a second. And the Apple key is more than ten bits in length. But wait. There's more. After entering a certain number of wrong keys in a row (the number varies with iPhone model and iOS version) the phone locks up. Under some circumstances the phone will even go so far as to wipe everything clean if too many wrong keys are tried in a row.
The FBI is not releasing the details of what they have tried so far. And Apple has not released the details of what assistance they have rendered so far. But this particular iPhone as currently configured is apparently impervious to a brute force attack. Whatever else the FBI has tried is currently a secret. So what the FBI is asking from Apple is for changes to the configuration. Specifically, they want the twelfth second delay removed and they want the "lock up" and "wipe after a number of failed keys" features disabled. That, according to the FBI, would allow a medium speed brute force attack to be applied. Some combinations of iPhone and iOS version use relatively short key lengths so this would be an effective approach if the phone in question is one of them.
But Apple rightly characterizes this as a request by the FBI to build a crib into their phones. Another name for this sort of thing is a "back door". And we have been down this path before. In the '90s the NSA released specifications for something called a "Clipper chip". It was an encryption / decryption chip that appeared to provide a high level of security. It used an 80 bit key. That's a lot bigger than the 56 bit key used by DES so that's good, right? The problem is that the Clipper chip contained a back door that was supposed to allow "authorized security agencies" like the NSA to crack it fairly easily. The NSA requested that a law be passed mandating exclusive use of the Clipper chip. After vigorous push back on many fronts the whole thing was dropped a couple of years later without being implemented broadly.
We can also look to various statements made by current and former heads of various intelligence and law enforcement agencies. The list includes James Clapper (while he was Director of National Intelligence and since), former NSA director Keith Alexander, and others. They have all railed against encryption unless agencies like theirs are allowed back doors. Supposedly all kinds of horrible things will happen if these agencies can't read everything terrorists are saying. But so far there is no hard evidence that these back doors would be very helpful in the fight against terrorism. What they would be very helpful for is making it easy to invade the privacy of everybody. Pretty much nothing on the Internet was encrypted in the immediate post-9/11 period. Reading messages was helpful in some cases but the bad guys quickly learned how to make their messages hard to find and hard to read.
These agencies have swept up massive amounts of cell phone data. Again, mass data collection has not been shown to be important to thwarting terrorist plots. After they are on to a specific terrorist then going back and retrospectively reviewing who they have been in contact with has been helpful. And, by the way, that has already been done in the San Bernardino Massacre case. But the FBI argues that even after all these other things have been done they still desperately need to read the contents of this one cell phone. We have been told for more than a decade that the "collect everything" programs are desperately needed and are tremendously effective. The FBI's current request indicates that they are not all that effective and that means they were never needed as badly as we were told they were.
The FBI also argues that this will be a "one off" situation. Apple argues that once the tool exists its use will soon become routine. If cracking a phone is difficult, time consuming, and expensive after the tool exists then the FBI may have a case. But if it is then what's to stop the FBI from demanding that Apple build a new tool that is easier, quicker and cheaper to use. Once the first tool has been created the precedent has been set.
The fundamental question here is whether a right to privacy exists. The fourth amendment states:
A plain reading of the language supports the idea that a privacy right exists and that the mass collection of phone records, whether "metadata" or the contents of the actual conversation, is unconstitutional. The Supreme Court has so far dodged its responsibility by falling back on a "standing" argument. I think the standing argument (which I am not going to get into) is bogus but I am not a Supreme Court justice. And the case we are focusing on is clearly covered by the "probable cause . . ." language. The FBI can and has obtained a search warrant. The only problem they are having is the practical one of making sense of the data they have seized.
The problem is not with this specific case. It is with what other use the capability in question might be put to. We have seen our privacy rights almost completely obliterated in the past couple of decades. Technology has enabled an unprecedented and overwhelming intrusion into our privacy. It is possible to listen in on conversations in your home by bouncing a laser off a window. A small GPS tracking device can be attached to your car in such a way that it is nearly undetectable. CCTV cameras are popping up everywhere allowing your public movements to be tracked. Thermal imaging cameras and other technology can tell a lot about what is going on inside your house even if you are not making any noise and they can do this from outside your property line.
And that ignores the fact that we now live in a highly computerized world. Records of your checking, credit card, and debit card activity, all maintained by computer systems, make your life pretty much an open book. Google knows where you go on the Internet (and probably what you say in your emails). And more and more of us run more and more of our lives from our smart phones. Imagine comparing what you can find out from a smart phone with what you could have found out 200 years ago by rifling through someone's desk (their "papers"). Then a lot of people couldn't read. So things were done orally. And financial activity was done in cash so no paper record of most transactions existed. The idea that the contents of a smartphone should not be covered under "persons, papers, and effects" is ridiculous. Yet key loggers and other spyware software are available for any and all models of smart phones.
Apple was one of the first companies to recognize this. They were helped along by several high profile cases where location data, financial data, and other kinds of private data were easily extracted from iPhones. They decided correctly that the only solution that would be effective would be to encrypt everything and to do so with enough protections that the encryption could not be easily avoided. The FBI has validated the robustness of their design.
Technology companies have been repeatedly embarrassed in the last few years by revelations that "confidential" data was easily being swept up by security agencies and others. They too decided that encryption was the way to cut this kind of activity off. Hence we see the move to secure (HTTPS) web sites and to companies encrypting data as is moves across the Internet from one facility to another.
Security agencies and others don't like this. It makes it at least hard and possibly impossible to tap into these data streams. And, according to agency heads this is very dangerous. But these people are known and documented liars. And they have a lot of incentive to lie. It makes the job of their agency easier and it makes it easier for them to amass bureaucratic power. Finally, given that lying does not put them at risk for criminal sanctions (none of them have even been charged) and can actually enhance their political standing, why wouldn't they?
Here's a theory for the paranoid. Maybe the FBI/NSA successfully cracked the phone. But they decided that they could use this case to leverage companies like Apple into building trap doors into their encryption technology. The Clipper case shows that this sort of thinking exists within these agencies. And agency heads are known to be liars. So this theory could be true. I don't think it is true but I can't prove that I am right. (I could if agency heads could actually be compelled to tell the truth when testifying under oath to Congress but I don't see that happening any time soon.)
The issue is at bottom about a trade off. The idea is that we can have more privacy but be less secure or we can have less privacy but be more secure. In my opinion, however, the case that we are more secure is weak to nonexistent and the case that we have lost a lot of valuable privacy and are in serious danger of losing even more is strong. I see the trade off in theory. But I don't see much evidence that as a practical matter the trade off actually exists in the real world. Instead I see us giving up privacy and getting nothing, as in no increase in security, back. In fact, I think our security is diminished as others see us behaving in a sneaky and underhanded way. That causes good people in the rest of the world to be reluctant to cooperate with us. That reduction in cooperation reduces our security. So I come down on the side of privacy and support Apple's actions.
In the end I expect some sort of deal will be worked out between the FBI and Apple. It will probably not be one that I approve of. It will erode our privacy a little or a lot and I predict that whatever information is eventually extracted from the phone will turn out to be of little or no value. And, as Tim Cook, the CEO of Apple, has stated, once the tool is built it will always exist for the next time and the time after that, ad infinitum. That is too high a cost.
Enrique Marquez, a friend and neighbor, has been arrested. Among other things, he purchased some of the guns that were used by the perps. Literally hundreds of searches have been done and mountains of evidence have been seized. Online accounts of all kinds have been scrutinized. Even after all this effort there is more to be learned. A few days before I wrote this, Syed's brother's house was searched. This was only the latest in a series of searches of his house.
As a result of all this effort the story is pretty much known. All that is left is to fill in some details. It is possible that a new major development could be unearthed in the future, say substantial participation by overseas terrorist groups. But the chances are small. And that brings us to the phone.
A tiny part of the mountain of seized evidence is a smart phone that belonged to one of the perps. It has been in FBI custody for some time now. But that hasn't kept the FBI from being frustrated. The phone is encrypted. The FBI has not been able to break or get around the encryption, so they have not been able to access the contents of the phone. This frustration has not been for want of trying. At least that's the story from both the FBI and Apple. The FBI has asked Apple for assistance and Apple has provided it. But the FBI now says Apple must take that assistance to a new level. And that's what the fight is about.
Before proceeding let me stop to make what I believe is an observation of monumental importance.
ENCRYPTION WORKS
Why is this so important? Have you seen an "action" movie or TV show any time in say the last 50 years? These shows often feature a scenario where encrypted data is critical, frequently a matter of life and death. Sometimes a good guy is trying to decrypt the bad guy's secret plan. Sometimes it is a bad guy trying to decrypt the good guy's security system so he can steal the secret formula or the invasion plans or whatever. Regardless, the scene is always handled in the same way.
A geek types away furiously while "action" visuals play out on screen and dramatic music (cue the "Mission: Impossible" theme) plays underneath so we will know that important things are happening. This goes on for about 20 seconds of screen time, which may represent perhaps a few hours or days of "mission" time. But we always get the "Aha" moment when the geek announces that the encryption has been cracked. And it never takes the geek more than a week to crack it. In fact it is common for the geek to only need a few minutes.
This is the pop culture foundation for a belief that is widespread and grounded in things that are a lot more solid than a TV script. We've all seen it over and over so it must be true. Any encryption system can be broken. All you need is the genius geek and perhaps a bunch of really cool looking equipment. People in the real world support this idea often enough for one reason or another that most people have no reason to doubt its veracity. But it is not true. And we know it is not true because the FBI has just told us. Let's look at why.
It starts with the fact that the FBI has publicly said that it has been unsuccessful in breaking Apple's encryption. This is in spite of the fact that they have had weeks in which to try and they have had a considerable amount of cooperation from Apple. But wait, there's more. Which government agency is the one with the most skill, equipment, and experience with encryption? The NSA (National Security Agency). It's literally what they do.
Before 9/11 it was possible to believe that the FBI and the NSA did not talk to each other. It was possible to believe this because it was true. But in the post-9/11 era those communications barriers were broken down and there is now close cooperation between the two agencies, especially on terrorism cases like this one. It is literally unbelievable that the FBI has not consulted with the NSA on this problem. And that means the NSA has also not been able to crack Apple's encryption either.
Let's say the NSA had succeeded. Then the FBI could easily have covered up the NSA's involvement by claiming that their own people had cracked the phone, and there would be no need for a public fight with Apple. Even if this was not believed, it provides the standard "plausible deniability" that is commonly used in these situations. It doesn't matter if the official line is credible. It only matters that there is an official line that officials can pretend to believe. This is why I believe the NSA failed too. (For a counter-argument see below.)
There is actually a lot of evidence that encryption works but it is the boring stuff that the media ignores. It gets dismissed as a "dog bites man" story. I worked in the computer department of a bank for a long time. They treated computer problems that could screw up data very seriously. "We are messing with people's money and people take their money very seriously." I then worked for a company that ran hospitals and clinics. After observing the culture there I remarked "If you want to see people who treat computer problems seriously, talk to bankers. They deal with money. Around here we only deal with life and death and that's not as serious." That's a cute way of highlighting that people take money very seriously. And every aspect of handling money now depends critically on encryption.
If even one of the common encryption systems used in the money arena could be cracked there is a lot of money to be made. Look at the amount of noise generated by people stealing credit card information. It has finally caused the credit card industry in the US to move from a '60s style magnetic stripe technology to a modern chip based (EMV) one. The important takeaway is that the hackers have never broken into a system by breaking the encryption. They have used what is generally referred to as a "crib". One of the most successful cribs goes by the name of Social Engineering. You call someone up (or email them or whatever) and talk them out of information you are not entitled to, like, say, a high powered user ID and password. You use this information to break into the system.
Important data has been encrypted for many decades now. The DES standard was developed and implemented in the '70s. It is considered weak by modern standards, mostly because its 56 bit key is short: by the late '90s it had been publicly brute forced (the EFF's purpose-built machine found a key in a matter of days). But nobody has ever demonstrated a practical shortcut attack on the algorithm itself. Even so, the worry was enough to cause everybody to move on. Something called triple-DES took over after double-DES was shown to provide essentially no improvement, and we have since moved on to other encryption standards.
A common one in the computer business is SSL, the "Secure Sockets Layer" (since superseded by TLS). Any web site with a prefix of HTTPS uses it. It is now recommended for general use instead of being restricted to use only in "important" situations. The transition has resulted in some variation of a "show all content" message popping up with annoying frequency. That's because the web page is pulling in a mix of secure (HTTPS) and insecure (HTTP) content.
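For the curious, here is a minimal sketch (in Python, using only the standard library) of what is happening under the hood when a browser talks to an HTTPS site: before any page content moves, the two sides negotiate a protocol version and a cipher. The host name is just a placeholder; nothing about it is specific to this discussion.

# Minimal sketch: see which TLS protocol and cipher an HTTPS site negotiates.
# "example.com" is only a placeholder host; any HTTPS site should work.
import socket
import ssl

host = "example.com"
context = ssl.create_default_context()  # sane defaults: certificate checking, modern protocol versions

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
        print("Negotiated cipher:", tls_sock.cipher())      # (cipher name, protocol, key bits)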
If the basic algorithm (computer formula) is sound the common trick is to make the key bigger. DES used a 56 bit key. The triple-DES algorithm can be used with keys that are as long as 168 bits. Behind the scenes, HTTPS has been doing the same thing. Over time the keys it uses have gotten longer and longer. And a little additional length makes a big difference. Every additional bit literally doubles the number of keys that need to be tested in a "brute force" (try all the possible combinations) attack.
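To make the doubling concrete, here is a back-of-the-envelope calculation in Python. The trillion-guesses-per-second figure is just an assumption I picked to make the attacker look scary; the point is how quickly the numbers get absurd as the key grows.

# How key length blows up a brute force search.
# The guesses-per-second number is an illustrative assumption, not a measurement.
guesses_per_second = 1e12  # pretend the attacker can test a trillion keys per second

for bits in (10, 56, 57, 128, 168):
    keys = 2 ** bits                          # every extra bit doubles this count
    years = keys / guesses_per_second / (3600 * 24 * 365)
    print(f"{bits:3d}-bit key: {keys:.2e} possible keys, about {years:.2e} years to try them all")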
So piling on the bits fixes everything, right? No! It gets back to that crib thing. Let's say I have somehow gotten hold of your locked cell phone. What if I call you and say "I'm your mother and I need the key for your phone." Being a dutiful child you always do what your mother says so you give me the key. At this point it literally doesn't matter how long the key you use is. Actually no one would fall for so transparent a ploy but it illustrates the basic idea of Social Engineering. It boils down to tricking people into giving you information that you can use to get around their security.
If I can get your key I have effectively reduced your key length to zero. Cribs can be very complex and sophisticated but a good way to think of them is in terms of ways to reduce the effective key length. If I can find a crib that reduces the effective key length to ten bits that means a brute force attack only needs to try a little over a thousand keys to be guaranteed success. I once used a brute force approach to figure out the combination of a bicycle lock. The lock could be set to a thousand different numbers but only one opened it. It took a couple of hours of trying each possibility in turn but I eventually succeeded in finding that one number. Under ideal circumstances a computer can try a thousand possibilities in less than a second.
And Apple is well aware of this. So they added a delay to the process. It takes about a twelfth of a second to process a key. This means that no more than a dozen keys can be tried in a second. And the Apple key is more than ten bits in length. But wait. There's more. After entering a certain number of wrong keys in a row (the number varies with iPhone model and iOS version) the phone locks up. Under some circumstances the phone will even go so far as to wipe everything clean if too many wrong keys are tried in a row.
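Here is a quick sketch of why that combination is so effective. The twelve-tries-per-second rate comes from the roughly one-twelfth-of-a-second delay just described; the ten-guess lockout threshold is simply an illustrative assumption, since the real limit varies by model and iOS version.

# Why the per-try delay plus the lockout defeats a brute force attack.
tries_per_second = 12      # from the ~1/12 second it takes the phone to process each key
lockout_after = 10         # assumed threshold; the real number varies by model and iOS version

def hours_to_try_all(bits):
    """Hours needed to try every key at the throttled rate, ignoring the lockout."""
    return (2 ** bits) / tries_per_second / 3600

for bits in (10, 20, 40):
    print(f"{bits}-bit key: about {hours_to_try_all(bits):,.2f} hours to try every key")

print(f"...and after only {lockout_after} wrong guesses the phone locks up (and in some configurations wipes itself).")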
The FBI is not releasing the details of what they have tried so far. And Apple has not released the details of what assistance they have rendered so far. But this particular iPhone as currently configured is apparently impervious to a brute force attack. Whatever else the FBI has tried is currently a secret. So what the FBI is asking from Apple is for changes to the configuration. Specifically, they want the twelfth-of-a-second delay removed and they want the "lock up" and "wipe after a number of failed keys" features disabled. That, according to the FBI, would allow a medium speed brute force attack to be applied. Some combinations of iPhone and iOS version use relatively short key lengths so this would be an effective approach if the phone in question is one of them.
But Apple rightly characterizes this as a request by the FBI to build a crib into their phones. Another name for this sort of thing is a "back door". And we have been down this path before. In the '90s the NSA released specifications for something called a "Clipper chip". It was an encryption/decryption chip that appeared to provide a high level of security. It used an 80 bit key. That's a lot bigger than the 56 bit key used by DES so that's good, right? The problem is that the Clipper chip contained a back door that was supposed to allow "authorized security agencies" like the NSA to crack it fairly easily. The NSA requested that a law be passed mandating exclusive use of the Clipper chip. After vigorous pushback on many fronts the whole thing was dropped a couple of years later without being implemented broadly.
We can also look to various statements made by current and former heads of various intelligence and law enforcement agencies. The list includes James Clapper (while he was Director of National Intelligence and since), former NSA director Keith Alexander, and others. They have all railed against encryption unless agencies like theirs are allowed back doors. Supposedly all kinds of horrible things will happen if these agencies can't read everything terrorists are saying. But so far there is no hard evidence that these back doors would be very helpful in the fight against terrorism. What they would be very helpful for is making it easy to invade the privacy of everybody. Pretty much nothing on the Internet was encrypted in the immediate post-9/11 period. Reading messages was helpful in some cases but the bad guys quickly learned how to make their messages hard to find and hard to read.
These agencies have swept up massive amounts of cell phone data. Again, mass data collection has not been shown to be important to thwarting terrorist plots. Once investigators are on to a specific terrorist, going back and retrospectively reviewing who that person has been in contact with has been helpful. And, by the way, that has already been done in the San Bernardino Massacre case. But the FBI argues that even after all these other things have been done they still desperately need to read the contents of this one cell phone. We have been told for more than a decade that the "collect everything" programs are desperately needed and are tremendously effective. The FBI's current request indicates that they are not all that effective, and that means they were never needed as badly as we were told they were.
The FBI also argues that this will be a "one off" situation. Apple argues that once the tool exists its use will soon become routine. If cracking a phone remains difficult, time consuming, and expensive even after the tool exists then the FBI may have a case. But if it does, what's to stop the FBI from demanding that Apple build a new tool that is easier, quicker, and cheaper to use? Once the first tool has been created the precedent has been set.
The fundamental question here is whether a right to privacy exists. The Fourth Amendment states:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
A plain reading of the language supports the idea that a privacy right exists and that the mass collection of phone records, whether "metadata" or the contents of the actual conversation, is unconstitutional. The Supreme Court has so far dodged its responsibility by falling back on a "standing" argument. I think the standing argument (which I am not going to get into) is bogus but I am not a Supreme Court justice. And the case we are focusing on is clearly covered by the "probable cause . . ." language. The FBI can and has obtained a search warrant. The only problem they are having is the practical one of making sense of the data they have seized.
The problem is not with this specific case. It is with what other use the capability in question might be put to. We have seen our privacy rights almost completely obliterated in the past couple of decades. Technology has enabled an unprecedented and overwhelming intrusion into our privacy. It is possible to listen in on conversations in your home by bouncing a laser off a window. A small GPS tracking device can be attached to your car in such a way that it is nearly undetectable. CCTV cameras are popping up everywhere allowing your public movements to be tracked. Thermal imaging cameras and other technology can tell a lot about what is going on inside your house even if you are not making any noise and they can do this from outside your property line.
And that ignores the fact that we now live in a highly computerized world. Records of your checking, credit card, and debit card activity, all maintained by computer systems, make your life pretty much an open book. Google knows where you go on the Internet (and probably what you say in your emails). And more and more of us run more and more of our lives from our smart phones. Imagine comparing what you can find out from a smart phone with what you could have found out 200 years ago by rifling through someone's desk (their "papers"). Back then a lot of people couldn't read, so things were done orally. And financial activity was done in cash so no paper record of most transactions existed. The idea that the contents of a smartphone should not be covered under "persons, papers, and effects" is ridiculous. Yet key loggers and other spyware are available for any and all models of smart phones.
Apple was one of the first companies to recognize this. They were helped along by several high profile cases where location data, financial data, and other kinds of private data were easily extracted from iPhones. They decided correctly that the only solution that would be effective would be to encrypt everything and to do so with enough protections that the encryption could not be easily avoided. The FBI has validated the robustness of their design.
Technology companies have been repeatedly embarrassed in the last few years by revelations that "confidential" data was easily being swept up by security agencies and others. They too decided that encryption was the way to cut this kind of activity off. Hence we see the move to secure (HTTPS) web sites and to companies encrypting data as it moves across the Internet from one facility to another.
Security agencies and others don't like this. It makes it at least hard and possibly impossible to tap into these data streams. And, according to agency heads, this is very dangerous. But these people are known and documented liars. And they have a lot of incentive to lie. It makes the job of their agency easier and it makes it easier for them to amass bureaucratic power. Finally, given that lying does not put them at risk of criminal sanctions (none of them have even been charged) and can actually enhance their political standing, why wouldn't they?
Here's a theory for the paranoid. Maybe the FBI/NSA successfully cracked the phone. But they decided that they could use this case to leverage companies like Apple into building trap doors into their encryption technology. The Clipper case shows that this sort of thinking exists within these agencies. And agency heads are known to be liars. So this theory could be true. I don't think it is true but I can't prove that I am right. (I could if agency heads could actually be compelled to tell the truth when testifying under oath to Congress but I don't see that happening any time soon.)
The issue is at bottom about a trade-off. The idea is that we can have more privacy but be less secure or we can have less privacy but be more secure. In my opinion, however, the case that we are more secure is weak to nonexistent and the case that we have lost a lot of valuable privacy and are in serious danger of losing even more is strong. I see the trade-off in theory. But I don't see much evidence that as a practical matter the trade-off actually exists in the real world. Instead I see us giving up privacy and getting nothing, as in no increase in security, back. In fact, I think our security is diminished as others see us behaving in a sneaky and underhanded way. That causes good people in the rest of the world to be reluctant to cooperate with us. That reduction in cooperation reduces our security. So I come down on the side of privacy and support Apple's actions.
In the end I expect some sort of deal will be worked out between the FBI and Apple. It will probably not be one that I approve of. It will erode our privacy a little or a lot and I predict that whatever information is eventually extracted from the phone will turn out to be of little or no value. And, as Tim Cook, the CEO of Apple, has stated, once the tool is built it will always exist for the next time and the time after that, ad infinitum. That is too high a cost.
Wednesday, January 27, 2016
Distracted Drivers
I try to pick subjects where I can either provide new (or at least not well known) information or a different perspective. The bulk of this post will be rehash but I do have something new to add. It's down at the bottom.
There is a computer term: multitasking. Over time the term has moved beyond its initial use only in a purely computer context to now frequently being applied more broadly. As we will eventually see, it now applies even to the subject at hand. But let's start with it in its original context. Old computers were really slow compared to their modern counterparts. But there was still a problem. Generally speaking data would be pulled into RAM and processed. It was then spit back out, typically in a new form. But where was it pulled from or pushed to?
The old technical term was "peripheral devices", gadgets that were connected up to the "computer" part of the computer. For modern computers these might be a disk drive, network card, etc. Or they could be a human interface device: something like a keyboard, mouse, or touch screen. Consider for a moment a keyboard. A good typist is capable of typing 60 words per minute. That's one word per second. And, for the purposes of measuring these sorts of things, a "word" was 5 characters and a space. So in our example the computer would expect to see a new character every sixth of a second.
But even a very slow computer can perform a million instructions (computations) per second. This means that the computer can perform over 150,000 instructions while waiting for the next character to appear. You ought to be able to do something useful with 150,000 instructions so it seems like a waste for the computer to sit around idly doing nothing useful while it waits for the next character to come in. The solution was to have the computer work on two or more things "at once". That way it could beaver away on problem #2 while waiting for the next character aimed at problem #1.
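Here is the arithmetic behind those two paragraphs, written out as a tiny Python sketch so you can see where the "over 150,000 instructions" figure comes from.

# The arithmetic behind the idle-computer problem, using the numbers from the text.
words_per_minute = 60                 # a good typist
chars_per_word = 6                    # 5 letters plus a space
instructions_per_second = 1_000_000   # a very slow computer by modern standards

chars_per_second = words_per_minute * chars_per_word / 60      # 6 characters per second
seconds_between_chars = 1 / chars_per_second                   # one sixth of a second
idle_instructions = instructions_per_second * seconds_between_chars

print(f"A new character arrives every {seconds_between_chars:.3f} seconds")
print(f"Instructions the CPU could have executed in that gap: {idle_instructions:,.0f}")
# Roughly 167,000 -- i.e. "over 150,000" as claimed above.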
Now pretty much every peripheral device can handle data faster than a keyboard. But the above example illustrates the problem. And early computers were fantastically expensive. They cost several millions of dollars each. Wasting even a little computer time amounted to wasting a lot of money. So various techniques were devised to allow the computer to have multiple tasks available and to be able to quickly switch from one task to another.
Most of the time any one particular problem (or task) is waiting for some I/O (Input/Output - a read or a write) operation to complete before it can continue on with the job at hand (executing instructions). This "fill in the otherwise idle time with useful work" idea is the driving force behind many of the early networking efforts. If several terminals are hooked up to the same computer then the computer can switch from working on the task associated with one terminal to working on the task associated with another terminal whenever the first task is hung up waiting for I/O to complete.
So the benefit was obvious. Multitasking could keep the computer busy doing useful work more of the time. But there was a cost. In the very early days of computers RAM was very hard to make and, therefore, expensive. A reasonable amount of RAM might cost a million dollars so every effort was made to keep RAM requirements to a minimum. You needed a lot of RAM to have enough to keep the critical pieces of several tasks (say one for each terminal) resident in RAM at the same time. But the price of RAM dropped and the cost of getting enough RAM to enable multitasking soon became manageable. But there was another cost.
The computer doesn't really do multiple things at once, or at least computers couldn't in the old days. So a piece of software called a "task switcher" was necessary. This software kept track of all the tasks currently loaded into RAM. It also kept track of what "state" each was in. A task could be "running" or "waiting to run", or "waiting for a specific I/O request to complete" (the most common state). The running task would go along until it needed to perform an I/O operation. Then it would turn things over to the task switcher. The task switcher would schedule the I/O, put the task to sleep, then look around for another task that was waiting to run. If it found one it would wake that task up and turn things over to it.
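To make the mechanism a little more concrete, here is a toy task switcher in Python. It is purely illustrative: each "task" runs until it pretends to need I/O, the switcher puts it on a waiting list for a few simulated clock ticks, and meanwhile it hands the processor to whatever task is ready to run.

# Toy sketch of a task switcher: tasks are "running", "waiting to run", or
# "waiting for I/O to complete", and the switcher picks the next ready task
# whenever the current one blocks. Purely illustrative, not a real scheduler.
from collections import deque

def task(name, io_waits):
    """A task 'computes' then blocks; each yielded value is how many ticks its I/O takes."""
    for step, ticks in enumerate(io_waits):
        print(f"{name}: compute step {step}, then wait {ticks} ticks for I/O")
        yield ticks

ready = deque([("A", task("A", [2, 1])), ("B", task("B", [1, 3]))])  # waiting to run
waiting = []                                                         # waiting for I/O

while ready or waiting:
    # Wake any task whose simulated I/O has finished.
    still_waiting = []
    for ticks_left, name, gen in waiting:
        if ticks_left <= 1:
            ready.append((name, gen))
        else:
            still_waiting.append((ticks_left - 1, name, gen))
    waiting = still_waiting

    if ready:
        name, gen = ready.popleft()            # give the CPU to the next ready task
        try:
            io_ticks = next(gen)               # it runs until it needs I/O
            waiting.append((io_ticks, name, gen))
        except StopIteration:
            print(f"{name}: finished")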
It turns out that it takes the execution of a lot of computer instructions to do a task switch. I have skipped over a lot of detail so you'll just have to take my word for it. The modern "task switch" process eats up a lot of instructions doing its job. So what's the point? The point is that these instructions are not available for use by running tasks. An old and slow computer from my past would often dedicate up to 45% of all instruction executions to task switching and other overhead processes. If, without task switching, you can only keep the computer busy 10% or 20% of the time, this overhead is a good deal, but it is still expensive.
And that cost associated with multitasking applies to other contexts like people. Most young people multitask all the time. They might be sitting in class and simultaneously monitoring Twitter feeds and updating Facebook posts. They are doing the same thing computers do. They devote a small slice of their attention to one thing, say the lecture. But then they quickly switch to focusing on their Twitter feed, but again not for long. Because they almost immediately switch to the Facebook post they are creating.
If you ask them they will say that what is happening is something akin to what happened on computers in the olden days. They are able to task switch quickly enough and often enough, and efficiently enough that their net productivity goes up. The problem is that everyone who has actually studied the net productivity of multitasking people finds that their net productivity goes down, sometimes by a lot. They think they are doing multiple things well at the same time. But the studies show that they are actually doing multiple things badly at the same time. So how does all this apply to driving?
It turns out that it is directly on point. This same model of trying to do multiple things at the same time means you do them all badly. And you especially drive badly. People think they can task switch quickly enough and efficiently enough that nothing important happens on the road in the small time their focus is away from driving. But every study says they are wrong.
And it turns out that there is a model for what is going on: drunk driving. Drunk drivers aren't multitasking. But they are doing the same thing multitaskers do. They fail to sufficiently focus on their driving. In the drunk driver's case their mind tends to wander. They are not switching from one productive task to another. They are switching from a productive task to a non-productive one, effectively day dreaming. This behavior pattern and its impact on driving was recognized decades ago, and one response has been Mothers Against Drunk Driving. An emphasis on getting drunks off the road has cut down on crashes.
But with the advent of the cell phone things changed. You didn't have to be drunk to drive badly. If you were switching your focus between the road and your phone the results could be similar to driving drunk. Lots of drunk drivers believe they can drive well while drunk. Similarly, lots of cell phone users believe they can drive well while using their phone. And it turns out that to some extent they are right. What?
Put simply, there are times when driving requires laser focus and times when it doesn't. If you are driving on a straight road in good weather and there is no one else on the road, driving does not require much intellectual effort. On the other hand, let's say there is a lot of other traffic on the road. And the speed of the traffic changes drastically and frequently. And say your sight lines are impaired (or the weather is bad) so that things can "come out of nowhere": a driver pulling out of a parking space, a pedestrian popping out from between cars outside of a crosswalk, a bicyclist cutting in and out of traffic. Then driving requires a great deal of attention and it requires it pretty much continuously.
The basic question to ask is "how many decisions per second need to be made?" Coupled with this are "how much time is there for each decision?" and "how many items must be factored into the decision?" If the number of decisions per second is low, the amount of time permitted for decision making is long, and the number of items is small, then a great deal of "free time" is available without a significant diminution in your quality of driving.
Looked at this way we can see why the first scenario is easy. Few decisions per second are required as not much is going on. The good sight lines mean that there is a relatively long time within which to make the decision. And few factors, perhaps one or two, need to be allowed for. This results in few decisions needing to be made and not much effort being required to reach the correct decision and implement it. And what this means is that in this situation a lot of time can be spent with your focus away from driving without risking any harmful consequences. With a little discipline a cell phone conversation can safely take place under these conditions.
And also looked at this way we can see why the second scenario is hard. This scenario involves a much higher potential decision rate. And that's really the same as a high decision rate. Deciding there is nothing that needs to be done right now is a special kind of decision. And there are a lot of moving pieces to monitor. The car in front is not far away (there is a car in front because there is a lot of traffic and it's not far away also because there is a lot of traffic) so it needs to be carefully and continuously monitored. If it's a multilane road then cars in the other lanes need to be monitored for potential lane changes. Blind spots need to be monitored for the unexpected. And if something changes a decision needs to be made and acted upon quickly. And it is possible, even probable, that several things will change at approximately the same time. So the decision may not be a simple "slow down"/"speed up" decision. Perhaps a swerve needs to also be thrown in so it's complex.
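Just to put some (entirely made up) numbers on that mental model, here is a toy calculation. The specific values are assumptions chosen only to illustrate the contrast between the easy scenario and the hard one; nothing here comes from an actual study.

# Toy "attention load" model: more decisions per second, less time per decision,
# and more moving pieces to track all drive the load up. All numbers are invented.
def attention_load(decisions_per_second, seconds_per_decision, items_to_track):
    return decisions_per_second * items_to_track / seconds_per_decision

empty_straight_road = attention_load(decisions_per_second=0.1, seconds_per_decision=5.0, items_to_track=1)
dense_city_traffic = attention_load(decisions_per_second=2.0, seconds_per_decision=0.5, items_to_track=6)

print(f"Empty straight road: {empty_straight_road:.2f}")  # tiny -- plenty of attention to spare
print(f"Dense city traffic:  {dense_city_traffic:.2f}")   # vastly higher -- none to spare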
In the first situation, if we are switching our focus from driving to the cell phone for short periods of time we will probably still be ok. Something may have changed while our attention was away. But the change will be simple and we should still have plenty of time in which to decide what to do and to do it. In the second scenario the chances that something will go wrong while our focus is elsewhere are much greater. In this environment the time and effort to task switch away from something else and back to driving is a luxury we can't afford.
I think that if we are being honest all of us would agree that talking on the phone while driving is a bad idea. But a lot of us think we can get away with it because we are good at task switching and we will be careful and only do it at "appropriate" (situation 1) times. But people usually give themselves too much credit in these situations. Let's move on.
The "fix" to the above situation is to only permit talking on the phone while driving only if we are using a "hands free" device. There is something to this but not much. The hand's free device allows us to keep our hands on the wheel and our eyes on the road. This is an improvement but not much of one. The decision to allow hands free cell phone use was a political one that was based on no scientific research. The broad availability of blue tooth hands free devices meant that many cell phone users had already gone hands free. The industry could see sales of hands free accessories increasing so they decided to go along. But the benefits of hands free are small.
The problem that hands free does not solve is the focus problem. Where is your attention focused? Theoretically hands free makes switching focus back to the road quicker. But lots of people drive one handed so the fact that you are using one hand to hold the phone to your ear rather than doing something else with it doesn't really change things. And it is easy to talk into a hand held phone while keeping your eyes on the road. So there is no real difference during the conversation part of cell phone use. Hands free plus voice activation does help with dialing and hanging up but that's only a small portion of a typical call. In the end there is very little difference between hands on and hands free.
The discussion of talking while driving started when cell phones became common. But a couple of generations of phones later a new threat arose: texting while driving. You pretty much need to look at the device and use both hands to text. This removes your focus, your hands, and your eyes from where they are supposed to be. It also means that the length of time you spend with your focus switched away from driving is much longer. It requires more of your brain to text than it does to talk, and it takes longer to type a phrase than to say it, even if you are using all the cute texting shortcuts. That means there is a much longer continuous interval where you are not monitoring the driving environment. This is worse, much worse, and the statistics bear this out. Texting while driving is way more dangerous than talking while driving. And unfortunately there is a significant population that thinks they can get away with doing it anyhow. They are a danger to us all.
So far I have mostly covered ground that everyone is familiar with. Okay, I might have thrown in some computer stuff that is unfamiliar to most of us. But for the most part people have gotten to the same conclusion I have by one means or another. So where's the original content? Coming right up.
I bought a new car about six months ago. I purposely got a car with all the new "electronic assist" goodies. I wanted a backup camera. I wanted a blind spot monitor. My car came with those and lots of other goodies. If the road has decent lane markers my car will warn me if I start to drift across the lane markings without first putting on my turn signal. It also has an alert that warns me that I might not have noticed the car in front of me slowing down. It has another alert that warns me that the car in front of me has started moving and I haven't. It has a bunch more alerts too but I think you get the idea.
My old car was a mid-price '99. It had some gadgets on it but nothing like what my new car has. You could set my old car so the headlights stayed on for a while after you exited the car. It had a compass built into the rear view mirror and cruise control. That was pretty fancy for the time. But my new car leaves all those old gadgets in the dust. My new car has an "automatic" setting on the headlights. They turn on and off automatically based on how much daylight there is. And, of course, it automatically delays shutting the headlights off when you exit the car. My new car has a deluxe cruise control system and a full up navigation system to complement the compass in the rear view mirror. I'm not trying to brag here. There are lots of other cars that come equipped with a similar (or perhaps an even more extensive) set of gadgets. I am just pointing out that my new car has gadget after gadget after gadget. And it's not just the sheer number of gadgets. Each individual gadget is much more complex.
Let me give you an example. I now leave my headlights set on "automatic". But once the setting got changed without my noticing it. So there I was, driving around in the dark without my headlights on. After a while I figured that out. But now the headlight control is surrounded by a bunch of other controls. So there was no way I was going to be able to fix the problem and drive at the same time. Once I got where I was going I turned the cabin light on and spent about thirty seconds getting the setting fixed. But while I was doing this I accidentally turned the high beams on. Again it took me a while to figure this out. But that led to the "how do I turn the high beams off" problem. I am so used to everything on my new car working differently than it did on my old car that the obvious solution of doing the same thing I would have done on my old car literally did not occur to me. Someone had to suggest it to me and you'll be happy to know it worked fine.
I want to make two points. First of all my new car is regularly alerting me to something or another. Some of these are wonderful. It will alert me to cross traffic when I am backing out of a parking slot. With my old car I often couldn't see a thing until I was in the middle of the street. By the time I was far enough out of the slot to see it was too late. The crash would have already happened. With my new car I get an alert, frequently before I have even started backing up. The alert tells me when the traffic has finished passing by and I can back out safely. So that alert is all to the good.
The other alerts are more of a mixed bag. With the stopping alert I am most likely on top of what is going on. I just want to start stopping a little later than the car does. The car, for good reason, is quite conservative about when it thinks I should start to slow down. Remember, it needs time to warn me, decide I am not going to heed its warning, and after that still have time to slow the car down on its own. With the starting alert what's usually happening is that I can see something (e.g. a pedestrian) that the car doesn't, so there is a good reason why I haven't immediately imitated the car in front of me. And so on. The point is that my car now fairly frequently makes some noise that breaks my concentration and thus interferes with my focus. So that's one thing.
The other thing is illustrated by the headlight story. Things that used to be simple to do are now often much more complicated. A couple of weeks ago I was using the navigation system to get me home from an unfamiliar location. The trip involved two parts. The first part was "get from the starting point to the freeway". For that part the navigation system was invaluable. It did a great job of navigating me along a complicated path on unfamiliar streets. But once I was on the freeway I no longer needed or wanted the navigation system. All I wanted to do was turn it off. But that is not a simple process.
It involved using a touch screen. That involved taking my eyes (and one hand) away from their driving tasks and working my way through the process, all while driving in such a way that I did not run off the road or crash into anybody. It turned out that what I did didn't turn it completely off. I just put it to sleep. So twenty minutes later it woke up and started alerting me to the fact that I should not take an exit that I already knew not to take. So I had to fiddle with it some more to get it actually turned off. And again it was important not to drive off the road or crash into anybody.
I'm sure I will get better at this sort of thing as time goes by and I get more familiar with all these new gadgets. And I'm sure that some of my fellow drivers thought I was nuts while I was distracted dealing with my navigation system. But then we all see people doing nutty things on the road all the time now. Why? Because they are on the phone or texting or whatever. In any case no one thought what I was doing merited a honk so I guess my behavior fell into the range of what now passes for relatively normal driving.
So my new contribution to the subject is to be the first to identify a new source of distraction that drivers can be subjected to: new cars with a lot of electronic gadgets. With my old car I was pretty much on my own. I knew it and acted accordingly. I didn't expect the car to be of much help but on the other hand the car did not routinely engage in distracting behavior. And you could do simple things like turn the headlights on or change the channel on the radio without having to take your focus off of driving for longer than a brief moment.
With new cars things are quite different. The car will routinely engage in distracting behavior. Sometimes this is a totally good thing. I love being distracted by a cross traffic alert when I am backing out of a parking stall. In other cases the advantage is less clear cut. Most but not all of the time the car is alerting me to a situation I am already aware of. If I am already aware of it then it is a needless distraction that did not exist before the advent of modern "driver assist" electronics. And I can't get a car with the long list of goodies my new car has without also getting a car that has much more complicated controls than cars like my old one.
And it's not just me. A few months ago Tesla put out an update to the software on their Model S. The update provided an extremely sophisticated cruise control. You could take your hands completely off the wheel for long periods of time. The car would automatically compensate for traffic conditions. It would read the speed limit signs along the side of the road and act accordingly. It would even change lanes if that seemed appropriate. Way cool, right? Apparently too cool. Almost immediately Tesla released a new download that dialed way back on how much the car would do on its own. Apparently it wasn't quite as ready for prime time as Tesla thought.
Finally, let me fold a "robot car" update in here. Back when my old car was new we had a simple situation. The driver drove the car with little or nothing in the way of intelligent assistance from the car. That put the responsibility squarely on the shoulders of the driver. Once we get to robot cars we will again be in a simple situation. Then the driving responsibility will be squarely on the shoulders of the car. Where we are now is some kind of a middle zone. Driving responsibility rests primarily with the driver. But the car is making a significant contribution. This makes the situation more complicated than either pure extreme. And, as the Tesla experience demonstrates, navigating this middle ground is not going to be 100% smooth sailing.
And the general public has already picked up on this. Surveys show that a large majority of people are not ready for a completely robotic car yet. People definitely want a person to be able to take control back from the "robot" any time they feel the need. That means it is going to be a while before there are cars on the market that do not include a steering wheel, brake pedal, etc. As far as I can determine the Tesla problem resulted in a few scares but no actual accidents. Tesla rightly dialed things back as soon as problems became apparent. And I expect that is how things are going to continue to evolve for some time.
Development of robot cars is proceeding apace. There are now multiple well funded groups with access to deep pools of technical expertise actively working on the problem. It won't be long before someone finds a way to get robot cars into the hands of ordinary consumers. And this will happen in the very near future (2020 or sooner). Then consumers will be in a position to make an informed decision. From Frankenstein to Jurassic World, all of us have long been exposed to "technology gone horribly wrong" themed movies. The public is very attuned to the possibility that robot cars will somehow go horribly wrong. So people want to take it slow for now, and with good reason. But once ordinary consumers get their hands on these cars they will either work well in the real world or they won't. I think before long they will work and work well. At that point the situation will switch from movie plot to real world experience. And at that point I expect the public to quickly become comfortable sharing the road with robot cars. Time will tell.
There is a computer term: multitasking. Over time the term has moved beyond its initial use only in a purely computer context to now frequently being applied more broadly. As we will eventually see, it now applies even to the subject at hand. But let's start with it in its original context. Old computers were really slow compared to their modern counterparts. But there was still a problem. Generally speaking data would be pulled into RAM and processed. It was then spit back out, typically in a new form. But where was it pulled from or pushed to?
The old technical term was "peripheral devices", gadgets that were connected up to the "computer" part of the computer. For modern computers these might be a disk drive, network card, etc. Or they could be a human interface device; something like a keyboard, mouse, or touch screen. Consider for a moment a keyboard. A good typist is capable of typing 60 words per minute. That's one word per second. And, for the purposes of measuring these sort of things, a "word" was 5 characters and a space. So in our example the computer would expect to see a new character every sixth of a second.
But even a very slow computer can perform a million instructions (computations) per second. This means that the computer can perform over 150,000 instructions while waiting for the next character to appear. You ought to be able to do something useful with 150,000 instructions so it seems like a waste for the computer to sit around idly doing nothing useful while it waits for the next character to come in. The solution was to have the computer work on two or more things "at once". That way it could beaver away on problem #2 while waiting for the next character aimed at problem #1.
Now pretty much every peripheral device can handle data faster than a keyboard. But the above example illustrates the problem. And early computers were fantastically expensive. They cost several millions of dollars each. Wasting even a little computer time amounted to wasting a lot of money. So various techniques were devised to allow the computer to have multiple tasks available and to be able to quickly switch from one task to another.
Most of the time any one particular problem (or task) is waiting for some I/O (Input/Output - a read or a write) operation to complete before it can continue on with the job at hand (executing instructions). This "fill in the otherwise idle time with useful work" idea is the driving force behind many of the early networking efforts. If several terminals are hooked up to the same computer then the computer can switch from working on the task associated with one terminal to working on the task associated with another terminal whenever the first task is hung up waiting for I/O to complete.
So the benefit was obvious. Multitasking could keep the computer busy doing useful work more of the time. But there was a cost. In the very early days of computers RAM was very hard to make and, therefore, expensive. A reasonable amount of RAM might cost a million dollars so every effort was made to keep RAM requirements to a minimum. You needed a lot of RAM to have enough to keep the critical pieces of several tasks (say one for each terminal) resident in RAM at the same time. But the price of RAM dropped and the cost of getting enough RAM to enable multitasking soon became manageable. But there was another cost.
The computer doesn't really do multiple things at once, or at least computers couldn't in the old days. So a piece of software called a "task switcher" was necessary. This software kept track of all the tasks currently loaded into RAM. It also kept track of what "state" each was in. A task could be "running" or "waiting to run", or "waiting for a specific I/O request to complete" (the most common state). The running task would go along until it needed to perform an I/O operation. Then it would turn things over to the task switcher. The task switcher would schedule the I/O, put the task to sleep, then look around for another task that was waiting to run. If it found one it would wake that task up and turn things over to it.
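Here is a minimal sketch of that idea in roughly the terms just described: a few task states and a switcher that parks a task when it starts an I/O operation, wakes it when the I/O completes, and picks another runnable task in the meantime. This is purely illustrative; the names are made up and real operating system schedulers are far more elaborate.

```python
# A minimal sketch of a task switcher, just to illustrate the idea.
from collections import deque

RUNNING, READY, WAITING_IO = "running", "waiting to run", "waiting for I/O"

class Task:
    def __init__(self, name):
        self.name = name
        self.state = READY

ready_queue = deque([Task("terminal 1"), Task("terminal 2"), Task("terminal 3")])
waiting = []   # tasks parked until their I/O completes

def start_io(task):
    """The running task hit a read or write; park it and pick someone else."""
    task.state = WAITING_IO
    waiting.append(task)
    return pick_next()

def io_complete(task):
    """The device signals the I/O is done; the task becomes runnable again."""
    waiting.remove(task)
    task.state = READY
    ready_queue.append(task)

def pick_next():
    """Find a task that is ready to run, if any."""
    if ready_queue:
        task = ready_queue.popleft()
        task.state = RUNNING
        return task
    return None   # nothing runnable: the computer really is idle
```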
It turns out that it takes the execution of a lot of computer instructions to do a task switch. I have skipped over a lot of detail so you'll just have to take my word for it. The modern "task switch" process eats up a lot of instructions doing its job. So what's the point? The point is that these instructions are not available for use by running tasks. An old and slow computer from my past would often dedicate up to 45% of all instruction executions to task switching and other overhead processes. If, without task switching, you can only keep the computer busy 10% or 20% of the time, then paying this overhead is a good trade. But it is still expensive.
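To see why paying that overhead could still be a win, here is the same kind of rough arithmetic, using the illustrative 45% overhead figure and a 15% utilization figure in the range mentioned above. Again, these are not measurements of any particular machine.

```python
# Rough arithmetic on task-switch overhead, using the illustrative numbers above.

busy_without_multitasking = 0.15    # say the machine does useful work 15% of the time
overhead_with_multitasking = 0.45   # 45% of instructions burned on switching and other overhead

# With multitasking, assume the machine is kept (nearly) always busy,
# but 45% of that activity is overhead rather than useful work.
useful_with_multitasking = 1.0 - overhead_with_multitasking   # 55%

print(f"Useful work without multitasking: {busy_without_multitasking:.0%}")
print(f"Useful work with multitasking:    {useful_with_multitasking:.0%}")
# 55% beats 15%, which is why the overhead was worth paying -- but it is not free.
```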
And the cost associated with multitasking carries over to other contexts, like people. Most young people multitask all the time. They might be sitting in class while simultaneously monitoring Twitter feeds and updating Facebook posts. They are doing the same thing computers do. They devote a small slice of their attention to one thing, say the lecture. Then they quickly switch to focusing on their Twitter feed, but again not for long, because they almost immediately switch to the Facebook post they are creating.
If you ask them they will say that what is happening is something akin to what happened on computers in the olden days: they are able to task switch quickly enough, often enough, and efficiently enough that their net productivity goes up. The problem is that everyone who has actually studied the net productivity of multitasking people finds that it goes down, sometimes by a lot. They think they are doing multiple things well at the same time. The studies show they are actually doing multiple things badly at the same time. So how does all this apply to driving?
It turns out to be directly on point. The same model applies: trying to do multiple things at the same time means you do them all badly. And you especially drive badly. People think they can task switch quickly enough and efficiently enough that nothing important will happen on the road in the short time their focus is away from driving. But every study says they are wrong.
And it turns out that there is a model for what is going on: drunk driving. Drunk drivers aren't multitasking. But they are doing the same thing multitaskers do. They fail to focus sufficiently on their driving. In the drunk driver's case the mind tends to wander. They are not switching from one productive task to another. They are switching from a productive task to a non-productive one, effectively daydreaming. This behavior pattern and its impact on driving were recognized decades ago, and the response was Mothers Against Drunk Driving. An emphasis on getting drunks off the road has cut down on crashes.
But with the advent of the cell phone things changed. You didn't have to be drunk to drive badly. If you were switching your focus between the road and your phone the results could be similar to driving drunk. Lots of drunk drivers believe they can drive well while drunk. Similarly, lots of cell phone users believe they can drive well while using their phone. And it turns out that to some extent they are right. What?
Put simply, there are times when driving requires laser focus and times when it doesn't. If you are driving on a straight road in good weather and there is no one else on the road, driving does not require much intellectual effort. On the other hand, let's say there is a lot of other traffic on the road. The speed of the traffic changes drastically and frequently. And your sight lines are impaired (or the weather is bad) so that things can "come out of nowhere": a driver pulling out of a parking space, a pedestrian stepping out from between cars away from a crosswalk, a bicyclist cutting in and out of traffic. Then driving requires a great deal of attention, and it requires it pretty much continuously.
The basic question to ask is "how many decisions per second need to be made?" Coupled with this are "how much time is there to make each decision?" and "how many items must be factored into the decision?" If the number of decisions per second is low, the time permitted for decision making is long, and the number of items is small, then a great deal of "free time" is available without any significant diminution in the quality of your driving.
Looked at this way we can see why the first scenario is easy. Few decisions per second are required because not much is going on. The good sight lines mean there is a relatively long time within which to make each decision. And few factors, perhaps one or two, need to be allowed for. The result is that few decisions need to be made, and not much effort is required to reach the correct decision and implement it. What this means is that in this situation a lot of time can be spent with your focus away from driving without risking any harmful consequences. With a little discipline a cell phone conversation can safely take place under these conditions.
Also looked at this way, we can see why the second scenario is hard. It involves a much higher potential decision rate, and a high potential decision rate is really the same as a high decision rate, because deciding that nothing needs to be done right now is itself a kind of decision. And there are a lot of moving pieces to monitor. The car in front is not far away (there is a car in front because there is a lot of traffic, and it's not far away for the same reason), so it needs to be carefully and continuously monitored. If it's a multilane road then cars in the other lanes need to be monitored for potential lane changes. Blind spots need to be monitored for the unexpected. If something changes, a decision needs to be made and acted upon quickly. And it is possible, even probable, that several things will change at approximately the same time. So the decision may not be a simple "slow down"/"speed up" decision. Perhaps a swerve needs to be thrown in as well, so it's complex.
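One way to picture the difference between the two scenarios is to turn those three questions into a crude "attention load" number. The figures below are entirely made up for illustration; the only point is that the load in the second scenario dwarfs the load in the first.

```python
# A purely illustrative comparison of the two driving scenarios.
# None of these numbers are measurements; they only show the shape of the argument.

def attention_load(decisions_per_minute, seconds_per_decision, factors_per_decision):
    """Crude stand-in for how much attention driving demands: more decisions,
    less time per decision, and more factors all push the load up."""
    return decisions_per_minute * factors_per_decision / seconds_per_decision

empty_straight_road = attention_load(decisions_per_minute=2,
                                     seconds_per_decision=10,
                                     factors_per_decision=1)

dense_city_traffic = attention_load(decisions_per_minute=30,
                                    seconds_per_decision=1,
                                    factors_per_decision=4)

print(f"Empty straight road: {empty_straight_road:.1f}")   # 0.2 -- lots of spare attention
print(f"Dense city traffic:  {dense_city_traffic:.1f}")    # 120.0 -- essentially no spare attention
```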
In the first situation, if we switch our focus from driving to the cell phone for short periods of time we will probably still be okay. Something may have changed while our attention was away, but the change will be simple and we should still have plenty of time in which to decide what to do and to do it. In the second scenario the chances that something will go wrong while our focus is elsewhere are much greater. In this environment the time and effort it takes to task switch away from something else and back to driving is a luxury we can't afford.
I think that if we are being honest all of us would agree that talking on the phone while driving is a bad idea. But a lot of us think we can get away with it because we are good at task switching and we will be careful and only do it at "appropriate" (situation 1) times. But people usually give themselves too much credit in these situations. Let's move on.
The "fix" to the above situation is to only permit talking on the phone while driving only if we are using a "hands free" device. There is something to this but not much. The hand's free device allows us to keep our hands on the wheel and our eyes on the road. This is an improvement but not much of one. The decision to allow hands free cell phone use was a political one that was based on no scientific research. The broad availability of blue tooth hands free devices meant that many cell phone users had already gone hands free. The industry could see sales of hands free accessories increasing so they decided to go along. But the benefits of hands free are small.
The problem that hands free does not solve is the focus problem. Where is your attention focused? Theoretically hands free makes switching focus back to the road quicker. But lots of people drive one handed so the fact that you are using one hand to hold the phone to your ear rather than doing something else with it doesn't really change things. And it is easy to talk into a hand held phone while keeping your eyes on the road. So there is no real difference during the conversation part of cell phone use. Hands free plus voice activation does help with dialing and hanging up but that's only a small portion of a typical call. In the end there is very little difference between hands on and hands free.
The discussion of talking while driving started when cell phones became common. But a couple of generations of phones later a new threat arose: texting while driving. You pretty much need to look at the device and use both hands to text. This removes your focus, your hands, and your eyes from where they are supposed to be. It also means that the length of time you spend with your focus switched away from driving is much longer. It requires more of your brain to text than it does to talk, and it takes longer to type a phrase than to say it, even if you are using all the cute texting shortcuts. That means there is a much longer continuous interval where you are not monitoring the driving environment. This is worse, much worse, and the statistics bear this out. Texting while driving is far more dangerous than talking while driving. And unfortunately there is a significant population that thinks they can get away with doing it anyhow. They are a danger to us all.
So far I have mostly covered ground that everyone is familiar with. Okay, I might have thrown in some computer stuff that is unfamiliar to most of us. But for the most part people have come to the same conclusion I have by one means or another. So where's the original content? Coming right up.
I bought a new car about six months ago. I purposely got a car with all the new "electronic assist" goodies. I wanted a backup camera. I wanted a blind spot monitor. My car came with those and lots of other goodies. If the road has decent lane markings my car will warn me if I start to drift across them without first putting on my turn signal. It also has an alert that warns me that I might not have noticed the car in front of me slowing down. It has another alert that warns me that the car in front of me has started moving and I haven't. It has a bunch more alerts too, but I think you get the idea.
My old car was a mid-priced '99. It had some gadgets on it but nothing like what my new car has. You could set my old car so the headlights stayed on for a while after you exited the car. It had a compass built into the rear view mirror and cruise control. That was pretty fancy for the time. But my new car leaves all those old gadgets in the dust. My new car has an "automatic" setting on the headlights. They turn on and off automatically based on how much daylight there is. And, of course, it automatically delays shutting the headlights off when you exit the car. My new car has a deluxe cruise control system and a full navigation system to complement the compass in the rear view mirror. I'm not trying to brag here. There are lots of other cars that come equipped with a similar (or perhaps an even more extensive) set of gadgets. I am just pointing out that my new car has gadget after gadget after gadget. And it's not just the sheer number of gadgets. Each individual gadget is much more complex.
Let me give you an example. I now leave my headlights set on "automatic". But at one point the setting got changed without my noticing it, so I was driving around in the dark without my headlights on. After a while I figured that out. But the headlight control is now surrounded by a bunch of other controls, so there was no way I was going to be able to fix the problem and drive at the same time. Once I got where I was going I turned the cabin light on and spent about thirty seconds getting the setting fixed. But while I was doing this I accidentally turned the high beams on. Again it took me a while to figure this out. And that led to the "how do I turn the high beams off" problem. I am so used to everything on my new car working differently than it did on my old car that the obvious solution, doing the same thing I would have done on my old car, literally did not occur to me. Someone had to suggest it to me, and you'll be happy to know it worked fine.
I want to make two points. First of all, my new car is regularly alerting me to something or other. Some of these alerts are wonderful. It will alert me to cross traffic when I am backing out of a parking slot. With my old car I often couldn't see a thing until I was in the middle of the street; by the time I was far enough out of the slot to see, it was too late. The crash would have already happened. With my new car I get an alert, frequently before I have even started backing up. The alert tells me when the traffic has finished passing by and I can back out safely. So that alert is all to the good.
The other alerts are more of a mixed bag. With the stopping alert I am most likely already on top of what is going on. I just want to start stopping a little later than the car does. The car, for good reason, is quite conservative about when it thinks I should start to slow down. Remember, it needs time to warn me, decide I am not going to heed its warning, and after that still have time to slow the car down on its own. With the starting alert, what's usually happening is that I can see something (e.g. a pedestrian) that the car doesn't, so there is a good reason why I haven't immediately imitated the car in front of me. And so on. The point is that my car now fairly frequently makes some noise that breaks my concentration and thus interferes with my focus. So that's one thing.
The other thing is illustrated by the headlight story. Things that used to be simple to do are now often much more complicated. A couple of weeks ago I was using the navigation system to get me home from an unfamiliar location. The trip involved two parts. The first part was "get from the starting point to the freeway". For that part the navigation system was invaluable. It did a great job of navigating me along a complicated path on unfamiliar streets. But once I was on the freeway I no longer needed or wanted the navigation system. All I wanted to do was turn it off. But that is not a simple process.
It involved using a touch screen. That meant taking my eyes (and one hand) away from their driving tasks and working my way through the process, all while driving in such a way that I did not run off the road or crash into anybody. It turned out that what I did didn't turn it completely off; I just put it to sleep. So twenty minutes later it woke up and started alerting me to the fact that I should not take an exit that I already knew not to take. So I had to fiddle with it some more to get it actually turned off. And again it was important not to drive off the road or crash into anybody.
I'm sure I will get better at this sort of thing as time goes by and I get more familiar with all these new gadgets. And I'm sure that some of my fellow drivers thought I was nuts while I was distracted dealing with my navigation system. But then we all see people doing nutty things on the road all the time now. Why? Because they are on the phone or texting or whatever. In any case no one thought what I was doing merited a honk so I guess my behavior fell into the range of what now passes for relatively normal driving.
So my new contribution to the subject is to be the first to identify a new source of distraction that drivers can be subjected to: new cars with a lot of electronic gadgets. With my old car I was pretty much on my own. I knew it and acted accordingly. I didn't expect the car to be of much help but on the other hand the car did not routinely engage in distracting behavior. And you could do simple things like turn the headlights on or change the channel on the radio without having to take your focus off of driving for longer than a brief moment.
With new cars things are quite different. The car will routinely engage in distracting behavior. Sometimes this is a totally good thing. I love being distracted by a cross traffic alert when I am backing out of a parking stall. In other cases the advantage is less clear cut. Most but not all of the time the car is alerting me to a situation I am already aware of. If I am already aware of it then it is a needless distraction that did not exist before the advent of modern "driver assist" electronics. And I can't get a car with the long list of goodies my new car has without also getting a car that has much more complicated controls than cars like my old one.
And it's not just me. A few months ago Tesla put out an update to the software on their Model S. The update provided an extremely sophisticated cruise control. You could take your hands completely off the wheel for long periods of time. The car would automatically compensate for traffic conditions. It would read the speed limit signs along the side of the road and act accordingly. It would even change lanes if that seemed appropriate. Way cool, right? Apparently too cool. Almost immediately Tesla released a new download that dialed way back on how much the car would do on its own. Apparently it wasn't quite as ready for prime time as Tesla thought.
Finally, let me fold a "robot car" update in here. Back when my old car was new we had a simple situation. The driver drove the car with little or nothing in the way of intelligent assistance from the car. That put the responsibility squarely on the shoulders of the driver. Once we get to robot cars we will again be in a simple situation. Then the driving responsibility will be squarely on the shoulders of the car. Where we are now is some kind of a middle zone. Driving responsibility rests primarily with the driver. But the car is making a significant contribution. This makes the situation more complicated than either pure extreme. And, as the Tesla experience demonstrates, navigating this middle ground is not going to be 100% smooth sailing.
And the general public has already picked up on this. Surveys show that a large majority of people are not ready for a completely robotic car yet. People definitely want a person to be able to take control back from the "robot" any time they feel the need. That means it is going to be a while before there are cars on the market that do not include a steering wheel, brake pedal, etc. As far as I can determine the Tesla problem resulted in a few scares but no actual accidents. Tesla rightly dialed things back as soon as problems became apparent. And I expect that is how things are going to continue to evolve for some time.
Development of robot cars is proceeding apace. There are now multiple well funded groups with access to deep pools of technical expertise actively working on the problem. It won't be long before someone finds a way to get robot cars into the hands of ordinary consumers. And this will happen in the very near future (2020 or sooner). Then consumers will be in a position to make an informed decision. From Frankenstein to Jurassic World, all of us have long been exposed to "technology gone horribly wrong" themed movies. The public is very attuned to the possibility that robot cars will somehow go horribly wrong. So people want to take it slow for now, and with good reason. But once ordinary consumers get their hands on these cars they will either work well in the real world or they won't. I think before long they will work, and work well. At that point the situation will switch from movie plot to real world experience, and I expect the public to quickly become comfortable sharing the road with robot cars. Time will tell.