I have the deluxe 10 disk boxed set of the trilogy. I watched them again recently. It got me to thinking.
"The Matrix", the original film in what eventually turned out to be a trilogy, was a trend setter and a huge box office success. Its success was what enabled the subsequent two movies to be made. Although they made a lot of money, chapters 2 and 3 (go ahead, try to remember their titles) did not have the impact the first movie did. Why?
The Matrix movies fall into the general category of "Action Movie". As such there are three components: the plot, the babe, and the action. A good action movie will have all three in the appropriate balance. The long running "James Bond" franchise is a classic example of the action genre. So I will use the Bond films to examine these components in a little more detail before moving on to the Matrix movies.
The Bond series highlighted the role of the babe. There is such a thing as a "Bond Girl". I even have a book devoted solely to Bond Girls. Most Bond movies have only one but several have two or three. Being a Bond Girl has been the high point of many an actress's career. Ursula Andress, one of the early ones, is almost entirely unknown outside of her Bond Girl persona. And the quality, for lack of a better term, of the Bond Girls in the various films has been erratic.
The Bond series is also noted for its "set piece" action sequences. Typically a Bond movie opens with one. In one movie Bond flees what turns out to be a mountain chalet and is chased on skis down a steep mountain slope by the bad guys. Finally he literally skis off a giant cliff. After what seems like an eternity watching our hero fall through space, a parachute patterned after the British Union Jack finally opens and we cut to the opening credits. For obvious reasons, this scene is still burned into my retinas even though the movie came out decades ago. Other action sequences are peppered throughout each movie. One standard set piece type is the chase. It can be in cars, under water in SCUBA gear, in at least one case in space, and on skis as it was in my example. Another action sequence type is the fight between Bond and the villain or his henchmen. Finally, most Bond movies end with a giant explosion or series of explosions, which destroy the villain's lair.
Finally, there is the plot. Early Bond films had elaborate plots. Later the series settled on a standard plot. The villain is trying to take over the world. Bond first discovers this, then tracks the villain to his lair, and finally blows the place up, foiling the plot. Usually Bond and at least one Bond Girl are thrown together and romance ensues. Since it is important to be able to move on to the next Bond Girl in the next movie, a depressing number of Bond Girl characters are killed off.
The early Bond films were very successful and eventually the Bond films became the prototype that most action movies followed. The "take away" for Hollywood was that the plot was not very important, the babe needed to be beautiful but was otherwise disposable, and action was king. So the Bond films maintained a high standard in their action sequences throughout the series. The Bond Girls were beautiful for the most part, but generally unmemorable. They were usually given little to do but hang around looking beautiful. And the plots were allowed to deteriorate. You had the villain du jour implementing the "plot to take over the world" du jour. No one, not even fans, paid any attention to the plots of the later movies.
This formula worked very well for many years. The first Bond movie came out in the early '60s and the films were reliable money makers for decades. But by 1999, when "The Matrix" came out, the formula looked vulnerable. A lot of action movies with a non-existent plot, a great babe, and the usual number of well executed action sequences were no longer doing well at the box office. And, since the action sequences were expensive to create, action movies needed a large box office to be profitable. For several years, Hollywood could count on foreign revenue to close the gap. Action movies, with little dialog to translate and not much in the way of story that might not go over well in a foreign culture, did well in the foreign market. You might have to cut back on the violence and/or sex to cater to a specific foreign market segment but that was easy to do. Eventually the foreign market saturated too, leaving some to believe that the action genre had run its course. And then along came "The Matrix" in 1999.
I am going to delay talking about plot and talk about the second component first. In the case of the Matrix movies the babe was "Trinity", played by Carrie-Anne Moss. Some commentators say she was not beautiful enough but I disagree. And for the "Matrix" parts of the movies (you know what I am talking about if you have seen any of them, and as for the rest of you . . .) she was dressed in a shiny bondage style outfit. She made a real and positive impact on me and that's what the babe is supposed to do. And, unlike the Bond movies, the Matrix movies stuck with the same babe through all three movies. And Carrie-Anne turned in solid performances in all three. I know of no action movie that has been made or broken solely by the babe. If Carrie-Anne had been weak in the second and third movies, she would have been just as weak in the first one. So she wasn't the deciding factor in why the first movie has a much better reputation than the other two.
Moving on to action, here I see a decided difference between the first movie and the last two. (It should be noted that the last two were made at the same time and are best seen as two parts of the same movie.) "The Matrix" came out in 1999. This was a period of great advances in CGI (Computer Generated Imagery) capability. A lot of the action in "The Matrix" was created or augmented by CGI. This allowed not only new effects but a seamless integration of CGI effects with "practical" effects, effects using camera tricks and effects based on mechanical devices. An example of a camera trick was "bullet time". This was done by positioning a hundred or more still cameras and then setting them off in a precisely timed sequence. When the still pictures were assembled a frame at a time into the movie it was as if a fast moving movie camera had been used. This resulted in several dramatic "swoop around" and slow-motion scenes that were highly effective. For mechanical devices think R2-D2. For many scenes there was a man inside the "robot" operating the various appendages. With good editing this resulted in a very lifelike R2.
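For readers who want the camera trick spelled out, here is a minimal sketch in Python of the assembly step. Everything in it (the camera count, the timings, the Still record) is invented for illustration; the point is just the core idea of turning an arc of precisely timed still photographs into consecutive movie frames.

```python
# Toy sketch of "bullet time": a ring of still cameras fires in a precisely
# timed sequence, and the stills are assembled, one per movie frame, so the
# finished shot plays back as if a camera had swooped around the subject.
from dataclasses import dataclass

@dataclass
class Still:
    camera_angle_deg: float  # where this camera sits on the arc
    fire_time_s: float       # when its shutter fired

def assemble_bullet_time(stills, fps=24):
    """Order the stills by firing time; each one becomes one frame of film."""
    ordered = sorted(stills, key=lambda s: s.fire_time_s)
    for frame, still in enumerate(ordered):
        print(f"frame {frame:3d}: camera at {still.camera_angle_deg:6.1f} degrees "
              f"(shutter fired at t={still.fire_time_s:.4f}s)")
    real = ordered[-1].fire_time_s - ordered[0].fire_time_s
    print(f"{real:.2f}s of real time becomes {len(ordered) / fps:.1f}s of screen time")

# 120 hypothetical cameras spanning a half circle, all fired within a tenth
# of a second of "real" time.
stills = [Still(camera_angle_deg=180 * i / 119, fire_time_s=0.1 * i / 119)
          for i in range(120)]
assemble_bullet_time(stills)
```

Played back at 24 frames per second, that tenth of a second of real time stretches into five seconds of screen time, which is exactly the slow-motion "swoop around" effect.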
By 1999 the cost of an elaborate CGI sequence had plunged. CGI takes vast amounts of computer processing power. By 1999 the CGI people had figured out how to hook together a large number of relatively cheap computer workstations. It might take hours to do the computations necessary to create one frame of film. But by distributing the work across many machines it became possible to finish many frames in one overnight run. Cheap processing power also made it possible to move to digital editing. There is a limit to how many separate components can be used in one frame if you use traditional film techniques. If you look at the original "Star Wars" movie, not the cleaned up reissue, there are several places where you can see lighter or darker squares where a space ship image is laid into a complex scene. So many images were combined that it was not possible to maintain a uniform black background. Digital processing does not have this problem. You can combine a virtually unlimited number of images into one frame without anyone being able to tell where the component from one source meets a component from another source.
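The distribution idea is simple enough to sketch. Here is a toy Python version, with processes standing in for render machines and a trivial function standing in for hours of per-frame computation; it illustrates the scheme, not anyone's actual render farm.

```python
# Frames are independent of each other, so they parallelize perfectly:
# hand each idle worker the next frame until the sequence is done.
from multiprocessing import Pool

def render_frame(frame_number):
    # Stand-in for hours of real rendering work per frame.
    return f"frame {frame_number:04d} rendered"

if __name__ == "__main__":
    frames = range(240)                  # ten seconds of film at 24 fps
    with Pool(processes=8) as pool:      # eight "workstations"
        for result in pool.imap(render_frame, frames):
            print(result)
```

With eight workers the wall-clock time drops by roughly a factor of eight, which is why a room full of cheap machines can finish overnight what one machine would grind on for months.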
The Wachowski brothers did a brilliant job of understanding that these new capabilities allowed action sequences to be taken up a notch in "The Matrix". They designed and implemented a number of memorable sequences. They also integrated the action sequences in a seamless manner. In the early days of sound the typical Musical would move along doing the usual dialog and story thing. Then it would stop and do a musical number. Then it would go back to story and plot. Action movies often used the same structure. You could almost see the transitions between the normal movie and the action sequence. With CGI advances and digital editing the Wachowski brothers were able to integrate CGI effects into what appeared to be the normal part of the movie. So there was no sharp boundary between the "normal" part of the movie and the "action" part of the movie. So one of the things that made "The Matrix" such a success was the outstanding action sequences that did not stand out from the rest of the movie.
The two sequels that completed the trilogy were released in 2003. With the success of the original the Wachowskis were given a boatload of money. The state of the art in CGI also advanced. Computer workstations continued to get cheaper and more powerful. So the amount of computer power that could be deployed in support of the second and third movies was far greater than what was available for the first one. And again the Wachowskis designed and implemented action sequences that took advantage of this additional capability. They raised the bar. The action sequences are more complex and more elaborate than the ones in the first movie. But in spite of, and I would argue because of, these very advances the action sequences in the second and third movies are less satisfying than the now primitive looking action sequences found in the first one. Why is this?
One problem with the sequences in the latter movies is that they are generally "more" rather than "better". The villain in the first movie is called Agent Smith and is played brilliantly by Hugo Weaving. Smith is also carried forward to the second and third movies. In the first movie Neo, played woodenly by Keanu Reeves in all three movies, battles one Smith. In the second movie he battles several, then hundreds, of identical Smiths. By the time of the grand finale at the end of the third movie he is battling one Smith while thousands of Smith clones watch, presumably ready to step in to help if the "Hero Smith" needs it.
There is so much going on in the action sequences in the last two movies that it is hard to follow them and they seem to go on forever. In the second movie there is a big chase scene. It starts out in a nightclub. Then it moves to a garage. Then it moves to outside streets. Then it moves to a freeway. On the freeway we dodge between cars while people shoot at each other. Then there is the Samurai sword fight on the top of a truck. Then there is a motorcycle chase. Then there is a big scene where two "18 wheeler" trucks slam into each other head on. It's just too long and complex. We start out excited. Then we become worn out. Then we just become bored. When can we get back to the plot? It's a bad sign when you are waiting for an action scene to end because you are bored.
In the third movie there is an epic battle for the dock (it doesn't matter what the dock is). In this case we have the usual rag tag band of good guys. But there are about 250,000 bad guys. I'm not going to run you through another "first this happened then that happened" description of the scene. Instead let me do some math. There are a bunch of good guys shooting at the bad guys with machine gun-like weapons. Let's say they are collectively landing 250 rounds per second and they manage to hit a bad guy with every single round. Now this is a stretch, even for a world in which the good guys are inevitably good shots and the bad guys are inevitably bad shots, but stick with me. At this rate it would take 1,000 seconds, or almost 17 minutes, to kill all the bad guys. Just how long can you stay interested in a good guy grimacing and going rat-a-tat-a-tat? In my case, and I imagine in yours too, it's far less than 17 minutes. While the titanic battle is going on the movie cuts back and forth to another scene. But it's the usual "can the good guys make it through the gate before the bad guys get them" stuff.
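For the skeptical, here is the same arithmetic in runnable form, using just the round numbers from the paragraph above.

```python
# Round numbers from the text: 250,000 bad guys, and the good guys
# collectively landing 250 lethal rounds every second.
bad_guys = 250_000
kills_per_second = 250

seconds = bad_guys / kills_per_second
print(f"{seconds:.0f} seconds, or about {seconds / 60:.1f} minutes of rat-a-tat-tat")
# -> 1000 seconds, or about 16.7 minutes
```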
Now from a technical point of view both scenes are brilliantly done. It is wondrous how the zillions of bad guys are realized in the "dock" stuff. It really looks like there could be 250,000 of them. But that's just too many. Similarly, there are too many bad guys chasing the other group. And there is only so much "swoop and shimmy" as they are chased around obstacle after obstacle. And just how many parts can you knock off a vehicle as you cut it too close time after time, and still believe the vehicle will not be put out of action? It's all just too much of a good thing.
There's another way to look at it. In the first movie most of the action is fights. And most of the fights are mano a mano, say Neo against one Smith. In the other fights it's one good guy against a few bad guys, say three, or a small number of good guys against a roughly equal number of bad guys. All this is human scale. We can develop a rooting interest in our hero. And in the early part of the first movie it's established that the bad guys are more powerful than the good guys. So you have a number of scenes where a good guy will end up fighting a bad guy. Then when things start going badly the good guy will break off and run away. All this creates dramatic tension. Now let me create some dramatic tension myself by breaking off and talking about plot.
Frankly, one of the things a good plot does is justify the action. The plot should logically force the hero to come into opposition to the villain and be forced to fight him (or chase or be chased or blow stuff up). That's kind of the minimum the plot is required to do. In the later James Bond movies we took it as a given that Bond was a hero and the bad guy was a villain and that Bond's job was to stop him. So the minimum requirements were barely met. Unconsciously we knew the plot was just going through the motions and that diminished the whole endeavor, which in turn made watching the movie less of a pleasure, which finally resulted in diminished box office grosses.
The plot of "The Matrix" was not any kind of minimalist effort. It was inherently interesting and it did a great job of justifying the action. The core of the plot was of all things a philosophical question: What is reality? In "The Matrix" it turns out that what appears to be the real world is actually a computer construct. But it is so cunningly constructed that it is essentially impossible to tell that it is a construct, the Matrix of the title. It turns out that if the construct is done cunningly enough it is literally impossible to show that it is not actual reality. Of course the Matrix is flawed in small ways, allowing the good guys to know that it is a construct. If the Matrix was perfectly constructed we wouldn't have a movie. And the reason why any one or any thing would feel the need to construct such an elaborate illusion is a complete joke. Supposedly human beings make great batteries. In reality the laws of Thermodynamics require that human beings make lousy batteries. You end up putting many in times the energy in the form of food into them than the amount of energy you could possibly get out of them. But that's nit picking. A cool movie always demands a certain amount of suspension of disbelief.
Since the Matrix is artificial, if you are in the know you don't have to follow those pesky laws of physics. Instead you can have fun. And specifically, you can be a way cool Kung Fu fighter. And, again for reasons that are best put into the "suspension of disbelief" bucket, the best way to defeat the bad guys is to be a much better and cooler Kung Fu fighter than your normal bad guy. So there's our justification for lots of cool Kung Fu fighting. And this "you can bend the laws of physics" thing permits and justifies all kinds of jumps across impossibly large distances, action with cool automatic weapons, action with helicopters, in short, lots of really cool mayhem.
And a number of other small philosophical conundrums are thrown in, each handled in a very entertaining manner. There is a really nice short bit where Neo knocks over a vase. The philosophical problem results from the fact that Neo would probably not have knocked over the vase if the Oracle (another character) had not said "watch out for the vase" first. So there is a nice "cause and effect" puzzle pulled off in about 30 seconds of film time. These bits add gravitas to the movie.
Of course the whole thing is a bit of "I only read Playboy for the articles". Hugh Hefner was smart enough to realize that putting those articles side by side with pictures of pretty unclothed ladies gave his magazine some gravitas, which in turn gave young males some measure of cover for buying more copies of it. To see what I mean let's take the core of the Matrix premise seriously for the moment.
Neo is a computer hacker. The Matrix is a computer construct. By applying his knowledge that it's not real, combined with the fact that it is computer generated, combined with his computer skills, Neo should be able to seriously bend the rules. The problem is: what rules to bend. Let's take a quick journey into the land of computers using Unix as our example. (Don't panic -- this is not going to get very technical). Unix uses something called "shells". Shells fall into three general categories in terms of power. The shell with the least power is called a "restricted shell". It is only allowed to do a few things and the whole idea is that a restricted shell is not supposed to be able to break out into the wide world of the full Unix environment. As the name implies, a "standard shell" has normal powers. It can navigate through the wide world of the full Unix environment but it is not supposed to be able to get into the guts of the system and break it. The "root shell" is all powerful. It can do anything it wants to do, including change or destroy any or all of the system. The root shell exists so the system itself has enough power to build and maintain itself.
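Here is a toy Python model of those three tiers. The action names are invented and this is not real Unix security code, just an illustration of the idea of graduated privilege.

```python
# What each tier of shell is allowed to do, per the description above.
ALLOWED = {
    "restricted": {"read_own_files"},
    "standard":   {"read_own_files", "browse_filesystem", "run_programs"},
    "root":       {"read_own_files", "browse_filesystem", "run_programs",
                   "modify_system", "destroy_system"},
}

def attempt(shell, action):
    verdict = "ok" if action in ALLOWED[shell] else "denied"
    print(f"{shell:10s} -> {action:17s}: {verdict}")

attempt("restricted", "browse_filesystem")  # denied: can't leave its sandbox
attempt("standard",   "browse_filesystem")  # ok: sees the whole environment
attempt("standard",   "modify_system")      # denied: can't break the guts
attempt("root",       "modify_system")      # ok: all powerful
```

In these terms, the movies are about whether Neo can climb from the first row of that table to the last.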
In the Matrix world Neo starts out as a restricted shell. He doesn't have enough power to see the real system so he presents no threat to it. When Neo breaks out into the "real" (as opposed to the artificial "Matrix") world it's like he has graduated from being a restricted shell to being a standard shell. He is not powerful enough to destroy the core system (called the kernel in Unix-speak) but he can at least see it. Any hacker worth his salt who is given standard shell capability tries to find a "back door" that gets him "root shell" power. There is some business with the "Keymaker" in the second and third movies that is analogous to this. If you go through the right door (a back door, perhaps) you disappear into the guts of the system. In the movie this is represented by a hallway that is invisible to the normal Matrix environment.
In the movie Neo spends a lot of time doing cool stuff (e.g. Kung Fu, playing with cool guns, etc.) rather than going straight for the "root". This makes the movie much more fun for the audience. We get to see cool fights, chases, etc. But those are not the thoughts and actions of a true hacker. So the cool stuff (Kung Fu fights, playing with cool guns, etc.) is the Matrix equivalent of Playboy's pictures of pretty unclothed girls whereas the philosophy stuff is the equivalent of the Playboy articles. Now in my callow youth I used to read Playboy. I read the articles and I looked at the pictures. And I enjoyed both. I probably even enjoyed the pictures more because the articles were there. But the articles without the pictures? No! I wouldn't have read that magazine.
And this is demonstrated by the other two movies. The plot of the other two movies has to do with saving Zion. It's a classic "save the town (western) or the neighborhood (modern cop movie) or the world (James Bond movie) from the bad guys" plot. It's just not that interesting. We've seen this plot enough times that we know that the "whatever" will be saved in the nick of time just before the closing credits. And there's another problem with the second and third movies.
By the end of the first movie Neo has effectively become Superman. In fact he flies using the exact same "right fist pumped in the air" style made famous in the Superman movies. There's even a direct reference to Superman in the dialog. The problem with being Superman is that taking on normal baddies is just not fair or interesting. As numerous writers of comic books, TV shows, and movies have learned, if you have a Superman as a good guy you need a super-villain as a bad guy. Early in the second movie we learn that Smith has had "upgrades". That, combined with the fact that there are now Smith clones all over the place, is supposed to make for a super-villain. But it just doesn't work. But wait, there's more.
The problem with super-anything is how do you kill it? This shows up most clearly in the climactic fight at the end of the third movie. Neo beats the crap out of Smith. Smith beats the crap out of Neo. But, since they are supermen, this does not kill or even seriously injure either of them. So how do you wrap things up? Well, first you have another action sequence that goes on for far too long while they try unsuccessfully to kill each other. Then finally they decide that it's not important who wins the fight. In a big letdown Neo ultimately gets inside Smith and all of his copies, explodes them from the inside, and then dies himself.
To wrap it all up, it's harder to make a good action movie than it once was. In fact, it's just plain hard. The first Matrix movie was and still is a great action movie. That's because it gets the balance right. It has a truly interesting plot. The plot has ideas and a great justification for the action. It has a great babe, at least in my opinion. The romance never works for me. I think that's a result of weak writing of the romantic components combined with a wooden performance by Keanu Reeves. But Carrie-Anne more than makes up for this by being a great action babe. She looks great doing jumps and fights in the action stuff and is easy on the eyes the rest of the time. Finally, the action scenes are great. They are human scale, draw the audience in, and cause you to root for the good guys. They are also very creatively done. With all three components in balance the movie rocks. The other two movies in the series are not so good. The most obvious defect is the plot. It's just not that interesting. The babe/romance is no better but also no worse in these movies. And finally, in spite of the fact that they are technically far superior, the action sequences in the other movies are inferior to the first movie's as entertainment. The Wachowski brothers spent too much effort putting "more" into the sequences and not enough effort making them entertaining and human scale so they would draw us in.
Monday, May 16, 2011
Wednesday, May 4, 2011
Hotel on the Corner of Bitter and Sweet
This is a book written by Jamie Ford that I recently finished reading. If you are looking for a review of the book, look elsewhere. What I want to talk about is the history covered in the book. Most of the action takes place in 1942. The rest of the action takes place in 1986 where we find out what happened to the main character. I will focus on the 1942 activities.
The main character is a Chinese boy who is 12 years old at the time. He falls in love with a Japanese girl who is a few months older. Most of the action takes place in Chinatown and Japantown in Seattle during the early part of World War II. A key point is the hostility between the Chinese and Japanese during this period. Towards the end of the book the girl is sent to an internment camp in Idaho. Before this happens one of the impediments to the romance is the boy's father. The father hates the Japanese. The father is not aberrant. This is an opinion that was broadly shared within the Chinese community. The book spends some time on why this was so but I want to get into this in more detail.
The cause is both the result of domestic politics within the U.S. and also of international politics. And funnily enough you can get another look at this issue in an entirely different book. The book is, of all things, "Charlie Chan" by Yunte Huang. The center of the Chan book is the fictional detective named in the title. Charlie Chan was immensely popular (6 books, a couple of dozen movies) between 1925 and the late '40s. The author takes pains to set the Chan phenomenon in the context of the Chinese, and to some extent the Japanese, experience in the U.S. There were no orientals in the U.S. in any numbers until California businessmen imported Chinese to work originally in the gold fields and later on the railroad. At some point the need for labor to work in these industries dried up. The same businessmen who had originally supported Chinese immigration reacted by stirring up "yellow peril" trouble and causing anti-Chinese immigration laws to be passed. This caused other businessmen to turn to the Japanese as a source of cheap exploitable labor that got around the Chinese exclusion laws. So almost from the beginning the Chinese and Japanese were put into competition with each other.
And this competitive situation was a microcosm of what had been happening in Asia for millennia. The Chinese have been the dominant culture in Asia for thousands of years. The Chinese influence on Japanese language, writing, and culture is so strong that it can't be ignored. So a major aspect of the Japanese experience has been the effort by the Japanese to differentiate themselves from the Chinese. So while Japanese writing is superficially similar to Chinese there are many differences in the details. The same is true of Japanese architecture. There are overlapping themes and motifs but at a detailed level there are many differences. And so it goes through all of Japanese culture. The Japanese swim in an ocean of Chinese culture but, where they can, they try to build in some distance. This long battle for a separate identity results in a certain amount of resentment by the Japanese toward the Chinese.
And the modern (last 500 years) history of the two countries has diverged quite a bit. Both countries spent a lot of time initially rejecting western influences. Over a long period of time various assaults by Europeans weakened the Chinese to the point that by about 1900 the country as a whole was a basket case. The Japanese also followed a course of rejecting western influence. But eventually they too were forced to abandon this course. The Japanese response was to do a complete reversal. They embraced western culture, particularly western business and military methods. This was so successful that in 1905 the Japanese scored a military victory against a traditional western power in the Russo-Japanese War. They did it by using traditional western technology operated in the usual western manner. In the 1920's the Japanese were a party to the 5-5-3 Naval treaty, a very big deal at the time. By this time they were seen as having a completely modern and western navy, the third largest in the world. So during the run-up to World War II you had China, effectively a third world basket case, set against Japan, now a major power with one of the largest navies in the world. This was a complete reversal of fortunes. Japan was now culturally, economically, and politically the premier Asian power. This could have engendered a great deal of resentment by Chinese with respect to Japanese.
But there was a much more powerful and more specific basis for bitter resentment of Japanese by Chinese. In the summer of 1937 Japan invaded China. China at this time was too weak to represent a political, economic, or military threat to Japan. It was a pure and simple power grab, a colonial annexation, if you will. And the war was particularly vicious. China was much larger in terms of both physical size and population. The only way Japan could pull it off, and do so cheaply, was to use its much superior military to intimidate the Chinese population into submission and acquiescence. An example of Japanese tactics was the so called "Rape of Nanking", which involved the wholesale slaughter of tens of thousands of Chinese civilians. For the intimidation to work, the rest of the Chinese population needed to be aware of what was going on. And what was going on inevitably made its way to the overseas Chinese population in places like Chinatown in Seattle.
So by 1942 Chinese communities in places like Seattle were very familiar with the tactics the Japanese were employing in China. And U.S. politics forced Chinatowns and Japantowns to be adjacent to each other in city after city. Seattle was no exception. So it is entirely understandable that in 1942 the average Chinese as exemplified by the main character's father in the "Hotel" book would fear, hate, and resent the Japanese living a couple of blocks away in Japantown.
With all the history that has happened since the 1930s and early 1940s the war between China and Japan has been largely forgotten outside of China and Japan. Things have evolved to the point where the Nazis pretty much fully occupy the role of WW II villain and the Japanese misbehavior, to use a perhaps too mild term, is far less prominent in our consciousness.
Now that I have explained why a Chinese man in 1942 would very reasonably not have wanted his son to have anything to do with a Japanese girl, let me widen my scope and take a more nuanced look at things from the Japanese perspective. (Spoiler alert: The book has a happy ending). As the Chan book admirably points out, by well before 1942 the U.S. perspective had broadened out. It wasn't just the Chinese who were deemed subhuman and undeserving of "full human" status, it was any member of "the yellow races", what we would now call Asians. In most situations, whites saw no reason to differentiate between Chinese and Japanese. And the very idea that there might be still other kinds of "yellow devils" like Koreans, Vietnamese, etc. had just not entered anyone's consciousness.
I have already alluded to an example of this sort of racism in action and directed specifically at the Japanese. The 5-5-3 Naval treaty was ostensibly about avoiding an arms race. The Battleship was the super weapon of its era. By the 1920's there had been several generations of Battleships and each generation was substantially more expensive than the previous one. For instance the Dreadnought, the original Battleship, was built by the British in 1906 and was considered obsolete by 1912. Battleship construction was consuming larger and larger chunks of military budgets. If this kept on too long, bankruptcy seemed the inevitable result. So the idea of the 5-5-3 treaty was to cap the rate of construction of Battleships. The British and Americans would be allowed to build equal amounts of tonnage (5 to 5) and everyone else would build less. So why couldn't the Japanese build the same amount of tonnage as the Brits and the Americans? Simple racism, and the fact that at the time the Brits and the Americans could force the Japanese to accept lesser status: three tons of new construction for every five the Brits and the Americans could build.
And, of course, the seminal event around which "Hotel" is built is the internment of people of Japanese descent in the early days of WW II. The measure was justified on military grounds - "they'd be spies and saboteurs". But that's nonsense. All you have to do is to look at what happened to the other two nationalities on the WW II "enemies" list: the Germans and the Italians. In all the years since the war, and even during the war, there was precious little evidence of spying and sabotage by Japanese-Americans. All the intelligence work done leading up to Pearl Harbor, for instance, was done by Japanese diplomatic people or by people brought in from Japan by the Japanese government specifically as spies. No Japanese-Americans living in Hawaii were involved in spying. And there is no evidence of spying by Japanese-Americans living on the West Coast either.
Contrast this with the German community. By the late '30s there was a large network of "bunds", clubs formed to support Hitler and to agitate for Nazi interests. Hollywood made numerous "B" movies about the FBI breaking up German spy rings in the run up to the war. There were well known and prominent Nazi sympathizers like Henry Ford and Charles Lindbergh. Yet there was no general roundup of people of German origin and no action against prominent Nazi sympathizers. Of course, all these Nazi sympathizers and people of German origin came out as "100% all American" after war was declared. But there is little or no evidence that Japanese-Americans were ever anything except "100% all American" at any time, and large numbers of them volunteered to sign various pledges immediately after hostilities broke out. And although the Italian pro-Mussolini forces were never as active as the German pro-Hitler people, the Italian model generally followed the German one.
Finally, I want to note a small and, as far as I know, now completely forgotten piece of Washington State history. I had a paper route while I was in High School. My father helped me out on Sunday when the papers were particularly overwhelming. I distinctly remember a story on the car radio during one of these Sunday excursions with my dad. It was about an Initiative on the ballot. If approved it would again make it legal for people of Japanese descent to own real property in Washington State. Sometime during WW II the state had passed a law making it illegal for people of Japanese origin to own property even if they were U.S. citizens. This heinous law effectively made it legal for white people to steal land, etc. from the poor Japanese people who had been bundled off to the internment camps. In the early '60s an effort was made to get this horrible law off the books. The State Legislature and State Courts had apparently been too gutless to do it themselves. So at some point an Initiative campaign had been launched. The campaign for the initiative was deliberately kept low key so as to avoid stirring up anti-Japanese racism. Fortunately, the campaign worked and the law was finally taken off the books.
My experience with this initiative (at the time I knew nothing of the background and was mystified about why the original law existed) and other experiences since have convinced me of something. All peoples, Europeans, Asians, Africans, Americans, whoever I have left out, are capable of wickedness at least some of the time. And all peoples are capable of goodness some of the time. I don't buy the argument that some person or group of people is evil (or good) because of who they are. I try to judge them by what they do. And I expect that I will judge anyone to have at times done evil and at other times to have done good. Google's slogan is "Don't be evil". I think most of the time they live up to that slogan. But not all of the time. The great villain of our time is Osama bin Laden. Certainly he has done a lot of evil. But I am sure he has done good some of the time. But don't get me wrong, I'm glad he is dead.
Saturday, March 19, 2011
Numbers in the News
The news business does not do well with numbers. But, some would say, the news is full of numbers. And technically that is true. Many news stories feature graphs and charts that are full of very precise, very accurate numbers. But the numbers in these graphs and charts aren't really important. Instead what is going on is a "cool graphic". The modern news business is almost entirely about pictures and a "cool graphic" is an effective kind of visual. But so is a picture of a scantily clad starlet. Now I like to gaze fondly at scantily clad starlets so I am all for this sort of thing. But I don't confuse pictures of scantily clad starlets with news. And I bet that if a news producer has the choice between a scantily clad starlet and a cool graphic to illustrate a story the starlet will win 100 times out of 100. So what's important about the cool graphic is not the numbers, it's the coolness.
And thus I introduce numbers. I used the number "100" above, twice. How important was the specific number I picked? Not very. What I needed was a number that was "big" but not incredibly big. And I am using the word "incredibly" in the sense where it measures how believable something is. In this example "big" was important, I was going for "impact". But credibility was also important. I wanted you to believe what I said. If I had used 1000 instead of 100 I would have gained bigness but lost believability. 100 seemed like the right balance: a number big enough to have impact but not so big as to lose believability. And this is a long-winded way of demonstrating that the psychological impact of a number is important.
Amping up the psychological impact is incredibly important in the news business. One simple strategy is to use a big number instead of a small number. This plays out by consulting an "expert" who has an ax to grind and who will provide you with an exaggerated estimate of how likely, big, or important something is. If expert #1, who actually is an expert, says "nothing to worry about" and expert #2, who is more interested in shilling for his cause than in enlightenment, says "be afraid, be very afraid", guess who gets lots of air time and who does not. And news producers go into orgasms if they can get two dueling "experts", one saying "it's very very very red" and the other saying "it's very very very blue". It doesn't matter that a real expert might say "it's mildly green".
This strategy works best in areas where the average audience member doesn't know much about the subject. So we have seen a lot of this surrounding the tragedy in Japan, particularly the nuclear problems. Radiation exposure is such a subject. Scientists have gotten very good at measuring radioactivity very accurately. Theoretically, people should know a lot about this subject. It has been a matter of intense public interest since at least 1945, when A-bombs were dropped on Japan. But most of the public discussion has been on a level with my red/blue example above. Over the years the pro-nuclear camp has been saying "no danger here" and the anti-nuclear camp has been saying "any tiny amount of radiation is extremely dangerous". The facts support the green position. We live in a sea of low level radiation. It is literally everywhere. So there are things to worry about but a lot of the "scare" coverage is exaggerated.
The "radiation" story out of Japan is one where there is at least some justification for a difference of opinion. And at least some segments of the media are trying to clarify the situation rather than obscure it. But it is an aspect of the story where these is some justification for the media's actions. Unfortunately, there are many other aspects of the Japan story that illustrate the media's complete inability to deal properly with numbers.
This comes out most strikingly with respect to casualty numbers. The media is very good at conveying the difference between zero and one. A story in which no one dies is covered very differently than a story in which one person dies. The first story becomes a "miracle rescue" story, like the miners in Chile. The second story becomes a "Law and Order" story; who died, who did it, etc. That's OK. But what about when one person dies in one case and two people die in the other? Given our zero/one example one would think that the coverage would be completely different. But the media coverage is only slightly different. Now it's an "individual" versus a "group" story. But the coverage of the two stories will not be very different.
Moving on, what about a two casualty story versus a ten casualty story? The difference should be large. In the latter case, eight extra people are dead. And remember, the death of the one person in the "one death" story was important enough to justify coverage. But the coverage is almost identical. Recently a bus crashed in New Jersey killing two. This happened within a few days of the Bronx bus crash that killed more than ten. There has been a little more coverage of the Bronx crash but the media approach to each crash has been more similar than different.
This inability to differentiate gets even more pronounced as the numbers get bigger. What if 100 people are killed? Is this different from only 10 people getting killed? No! The media will either choose to cover the story or it won't. If both stories are covered they will both be "a lot of people were killed" stories. The death toll in Japan has crossed 10,000 as I write this. It is expected to continue to rise. The final total will likely pass 20,000. But in the end about one person will have been killed in the Japan tragedy for every ten people who perished in Haiti under roughly similar circumstances. The media is completely incapable of differentiating in any meaningful sense between these two numbers. But the difference is hundreds of thousands of lives.
The media has fallen into the "up close and personal" trap. For disasters they show "devastation" video. With their close-in emphasis it all looks pretty much the same. They have not figured out how to convey the extent of the devastation. You get a number of short clips, usually about 10, each showing a piece of devastation, and the same clip loop is run over and over. So once there is enough devastation to fill the ten short shots, all disasters look the same. Every disaster is visually boiled down to a clip loop, and all clip loops end up looking pretty much alike.
The human toll is handled in a similar manner. We get interviews of survivors or people who knew a victim. Again, about 10 of these interviews is all the media can absorb. So whether a disaster generates 10 interviewees or millions of interviewees, the coverage is all the same. But a disaster which creates 10 interviewees is not the same as a disaster that generates millions. Yet it will be pretty much impossible to tell one disaster from the other based on the media coverage.
Several years ago an earthquake hit my city. Within a few hours the media had put together their clip loop of the event. That's when the calls and e-mails started coming in. Was I OK? Was my home, car, or place of business wiped out in the terrible devastation? How many friends had been killed or seriously injured? That sort of thing. In fact, no one I knew was killed, injured, or had suffered any property damage, because no one was killed and only a few people were injured. There was serious damage to a few specific areas but 99% of the city was completely undamaged. You couldn't tell any of this because the national media picked up the same highlight loop showing the few instances of damage and ran it over and over.
Shortly thereafter 9/11 happened. 9/11 was a much bigger event, involving thousands of deaths and much more property damage than my little earthquake. And it has received vastly more coverage, partly because it happened in a media mecca. But it is literally impossible to accurately gauge the relative size and scope of the two events based solely on the media coverage. Numbers might help. But, as I have shown, the media is not good with numbers.
And thus I introduce numbers. I use the number "100" above, twice. How important was the specific number I picked? Not very. What I needed was a number that was "big" but not incredibly big. And I am using the word "incredibly" in the sense where it measures how believable something is. In this example "big" was important, I was going for "impact". But credibility was also important. I wanted you to believe what I said. If I had used 1000 instead of 100 I would have gained bigness but lost believability. 100 seemed like the right balance between a number big enough to get impact but not so big to lose believability. And this is a long windy way of demonstrating that the psychological impact of a number is important.
Amping up the psychological impact is incredibly important in the news business. One simple strategy is to use a big number instead of a small number. This plays out by consulting an "expert" who has an ax to grid and who will provide you with an exaggerated estimate of how likely, big, or important something is. If expert #1, who actually is an expert, says "nothing to worry about" and expert #2, who is more interested in shilling for his cause than in enlightenment, says "be afraid, be very afraid", guess who gets lots of air time and who does not. And news producers go into orgasms if they can get two dueling "experts", one saying "it's very very very red" and the other saying "its very very very blue". It doesn't matter that a real expert might say "its mildly green".
This strategy works best in areas where the average audience member doesn't know much about the subject, and radiation exposure is such a subject. So we have seen a lot of this surrounding the tragedy in Japan, particularly the nuclear problems. Scientists have gotten very good at measuring radioactivity very accurately, and in theory people should know a lot about this subject. It has been a matter of intense public interest since at least 1945, when A-bombs were dropped on Japan. But most of the public discussion has been on a level with my red/blue example above. Over the years the pro-nuclear camp has been saying "no danger here" and the anti-nuclear camp has been saying "any tiny amount of radiation is extremely dangerous". The facts support the green position. We live in a sea of low level radiation; it is literally everywhere. So there are things to worry about, but a lot of the "scare" coverage is exaggerated.
The "radiation" story out of Japan is one where there is at least some justification for a difference of opinion, and at least some segments of the media are trying to clarify the situation rather than obscure it. Unfortunately, there are many other aspects of the Japan story that illustrate the media's complete inability to deal properly with numbers.
This comes out most strikingly with respect to casualty numbers. The media is very good at conveying the difference between zero and one. A story in which no one dies is covered very differently than a story in which one person dies. The first becomes a "miracle rescue" story, like the miners in Chile. The second becomes a "Law and Order" story: who died, who did it, etc. That's fine as far as it goes. But what about one death versus two? Given the zero/one example, one would think the coverage would be completely different. But it is only slightly different. It is now an "individual" story versus a "group" story, and the coverage of the two will not be very different.
Moving on, what about a two casualty story versus a ten casualty story? The difference should be large; in the latter case, eight extra people are dead. And remember, the death of one person in the "one death" story was important enough to justify coverage. But the coverage is almost identical. Recently a bus crashed in New Jersey, killing two. This happened within a few days of the Bronx bus crash that killed more than ten. There has been a little more coverage of the Bronx crash, but the media approach to each crash has been more similar than different.
This inability to differentiate gets even more pronounced as the numbers get bigger. What if 100 people are killed? Is this covered differently than 10 people getting killed? No! The media will either choose to cover the story or it won't. If both stories are covered they will both be "a lot of people were killed" stories. The death toll in Japan has crossed 10,000 as I write this. It is expected to continue to rise, and the final total will likely pass 20,000. But in the end, roughly one person will have been killed in the Japan tragedy for every ten people who perished in Haiti under roughly similar circumstances. The media is completely incapable of differentiating in any meaningful sense between these two numbers. But the difference is hundreds of thousands of lives.
The media has fallen into the "up close and personal" trap. For disasters they show "devastation" video. With the close-in emphasis it all looks pretty much the same; they have not figured out how to convey the extent of the devastation. Coverage gets boiled down to a "clip loop", a series of short clips, usually about 10, each showing a piece of devastation, run over and over. So once a disaster generates enough devastation to fill the ten short shots, it looks like every other disaster, because all clip loops end up looking pretty much the same.
The human toll is handled in a similar manner. We get interviews with survivors or people who knew a victim. Again, about 10 of these interviews are all the media can absorb. So whether a disaster generates 10 interviewees or millions of interviewees, the coverage is the same. But a disaster that creates 10 interviewees is not the same as a disaster that creates millions, even though it is pretty much impossible to tell one from the other based on the media coverage.
Several years ago an earthquake hit my city. Within a few hours the media had put together their clip loop of the event. That's when the calls and e-mails started coming in. Was I OK? Was my home, car, or place of business wiped out in the terrible devastation? How many friends had been killed or seriously injured? That sort of thing. In fact, no one I knew was killed, injured, or suffered any property damage, because no one was killed and only a few people were injured. There was serious damage to a few specific areas, but 99% of the city was completely undamaged. You couldn't tell any of this from the coverage, because the national media picked up the same highlight loop showing the few instances of damage and ran it over and over.
Shortly thereafter 9/11 happened. 9/11 was a much bigger event, involving thousands of deaths and much more property damage than my little earthquake, and it looked like a bigger event. It has also received vastly more coverage, partly because it happened in a media mecca. But it is impossible to accurately gauge the relative size and scope of the two events based solely on the media coverage. Numbers might help. But, as I have shown, the media is not good with numbers.
Thursday, March 10, 2011
Pensions
Pension plans, particularly the one in the state of Wisconsin, are in the news these days. I don't know the details of the Wisconsin pension plan and neither do most of the people arguing about it. But I do know that it is what's called a "defined benefit" plan. There has been a lot of talk over the years as to whether defined benefit plans are better or worse than the other general class of pension plans called "defined contribution" plans. As with a lot of things there are pluses and minuses associated with each type of plan. And that's what this piece is about. Not the pluses and minuses of the Wisconsin plan in particular but the pluses and minuses of the two classes of plans in general.
So what is a defined benefit plan? That's a plan where an employer contracts to provide a specific level of benefit at a specific time. I have been a participant in a defined benefit plan in the past, and it was typical of the breed. If I worked for the company for 40 years, it would provide a pension covering 60% of what I was being paid when I retired. My "guarantee" went up an average of 1 1/2% per year. So if I stayed with the company for a year my pension would be 1 1/2% of final salary. If I stayed 10 years I would get 15%, and so on. There were "vesting" rules, so I didn't get a thing unless I was with the company for at least 5 years. And the percentage wasn't a flat 1 1/2% per year. Some years it was 1%, some years 2%, but on average it went up 1 1/2%. Why the variation? The company was interested in retaining employees, so the schedule was back-loaded: lower percentages in the early years and higher percentages in the later years, giving employees a reason to stick around for the big accruals.
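For the programmers out there, here is a little Python sketch of how an accrual schedule like this works. The 5 year vesting cliff and the 1 1/2% per year average come from the plan I just described; the exact back-loaded split (1% per year for the first 20 years, 2% per year after that) is my own made-up illustration, not the real schedule.

    # Sketch of the accrual schedule described above. The 5-year vesting
    # cliff and the 1.5%-per-year average are from the text; the exact
    # back-loaded 1%/2% split is a made-up illustration.
    def accrued_pension_pct(years_of_service):
        """Percent of final salary guaranteed after years_of_service."""
        if years_of_service < 5:
            return 0.0  # not vested: no benefit at all
        early = min(years_of_service, 20) * 1.0     # 1% per year, years 1-20
        late = max(years_of_service - 20, 0) * 2.0  # 2% per year after that
        return early + late  # reaches the full 60% at 40 years

    for years in (4, 10, 20, 40):
        print(f"{years} years -> {accrued_pension_pct(years)}% of final salary")

Notice that under a back-loaded schedule like this, leaving at 20 years gets you only a third of the full benefit, not half. That is the retention effect working exactly as designed.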
This sounds like a pretty good deal and I thought so at the time. And most people like defined benefit plans like this one. So what's not to like? Defined benefit plans are a bad deal if they don't deliver on the promise. For instance, in my case the company was bought by another company after I had more than 15 years in. The new company honored the old plan but folded it into their similar plan. The problem was that the two plans had different patterns of percentages. The old plan had higher percentages in the later years and lower percentages in the earlier years. The new plan had the reverse. So when the percentages in my case were added up (lower percentages in the earlier years under the old plan and now lower percentages in the later years under the new one), they no longer came to the full 60%. Then I got laid off after just under 20 years. I did end up with something, but it was a low percentage of my salary as of the layoff, many years before retirement. At least I got something.
The cost to the employer of a defined benefit plan depends on a lot of things. How many people will go the full 40 years? How much will they be earning when they retire? Many other things. And there is something called "present value". My old company made contributions to the plan each year and those contributions were invested. So when I worked my 15th year for the company, they put in some amount of money that was supposed to cover what that year's slice of my pension would eventually cost. That money would have many years to earn interest. How much money did they need to put in that year? Well, you make a bunch of assumptions and then you perform what is called a "present value" calculation, and that's how much money the company puts in. Actuaries, usually employed by insurance companies, specialize in this kind of calculation. And once all the assumptions are made, there is a precise mathematical way to perform the present value computation. But it all depends on the assumptions in the end. And that's where the trouble comes from.
There is no way to get the assumptions right. One set of assumptions makes the present value calculation say to put a smaller amount of money in. A different set makes it say to put in a larger amount. Which set of assumptions is right? No one knows, so honest and ethical people can disagree. And then there's the real world. As time goes by, things can turn out more like the "small money" assumptions; then the pension plan is likely over funded. Or, of course, the real world can turn out more like the "big money" assumptions; then the pension plan is underfunded. For many years the stock market grew faster than the historical trend, which resulted in a lot of over funded pension plans. Then when the stock market crashed a couple of years ago, the value of many pension fund investments dropped a lot and many pension funds instantly became underfunded. Neither of these outcomes is the result of anyone doing anything wrong.
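To see just how much room the assumptions leave, here is a little Python sketch of a present value calculation. The benefit amount, the time horizon, and both assumed rates of return are made-up numbers for illustration, not figures from any real plan.

    # Sketch of a present value calculation. All numbers are made up.
    def present_value(future_cost, years, annual_return):
        """Money to set aside today so it grows to future_cost in `years` years."""
        return future_cost / (1 + annual_return) ** years

    future_cost = 100_000  # what this year's slice of the pension will eventually cost
    years = 25             # years until the benefit must be paid out

    # "Small money" assumptions: the fund earns 8% a year.
    print(round(present_value(future_cost, years, 0.08)))  # about 14,600
    # "Big money" assumptions: the fund earns only 4% a year.
    print(round(present_value(future_cost, years, 0.04)))  # about 37,500

Same promise, same math, and the pessimistic assumptions demand roughly two and a half times as much cash up front. That is the wiggle room all the trouble flows from.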
For many years most large companies had a pension plan, and it was almost always a defined benefit plan; defined contribution plans are a recent invention. The first thing to notice about defined benefit plans is that how much money a company needs to put into the pension fund is a matter of opinion. To a company, pension contributions are just another kind of expense, and if you lower expenses you increase profits. So companies always want to put in as little as they can. The first way a company can game the system is to use the "less cost" assumptions when calculating the pension contribution, even if they are not justified. That's not good. But it can get worse, way worse. Back when many companies had defined benefit pension plans, a common tactic for an unscrupulous "takeover artist" was to borrow a lot of money, take the company over, raid the pension fund to pay back all the money that had been borrowed, and finally pay themselves a large "fee". After that, they didn't even care if the company stayed in business. And many companies were literally driven out of business by these tactics.
And what happens to the pension fund if the company goes out of business? Well, lots of times the fund is raided before the company goes completely under. And even if that doesn't happen, there will be no more contributions from the company, so the fund is likely underfunded. It used to be that the retirees just got completely screwed in these situations. Now there is a federal agency called the Pension Benefit Guaranty Corporation and a law to go with it. It can contain the damage to some extent, but it does not pay the full contracted benefit. And it is perennially short of money. It is currently dealing with the GM and Chrysler pensioners, for instance.
So, if everything works out well, a defined benefit pension is a good deal for employees. But things frequently don't work out. Many of the things that can go wrong with a private corporation do not apply to governments like the Wisconsin state government. But some of them do. A good way to reduce current spending is to underfund the state pension, and state legislatures do this all the time. It is one of the standard "budget gimmicks" you hear about. The theory is that the state will make up the shortfall later when the economy is better. But later there is always something more fun to spend the money on than a pension contribution. As far as I can tell (the press coverage on this sort of thing is always poor), Wisconsin was fine before the market crashed and will be fine again if the market goes up enough. But in the same way financial considerations affect business leaders, political considerations affect politicians. This can result in bad behavior that jeopardizes retirees' ability to get what they were promised.
So now we know the bad news about defined benefit plans. So what's the bad news about "defined contribution" plans? Before getting into that, let me explain how a defined contribution plan works. The best place to start is with a 401k plan. In a 401k plan (applicable to for-profit corporations; there are similar plans with different designations for non-profits and governments), the employee makes a contribution to the plan, say 5% of salary. This money is taken off the top before taxes are calculated and put into a fund. Frequently there is an employer match. The most common match is 50 cents on the dollar for anything the employee puts in up to 6% of salary. So in our case the employee would put in 5% and the employer would contribute an additional 2 1/2%, so the total amount going into the fund would be 7 1/2%. None of this so far is a pension. But the employer can contribute in a similar manner to a fund on behalf of the employee, independent of how much the employee kicks in. This is a pension plan. It is called "defined contribution" because there is a formula that determines how much the employer is going to put in (typically a fixed percentage of the employee's salary), but the employer does not guarantee how much income this will translate into when the employee retires.
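The match arithmetic fits in a few lines of Python. The match terms are the common ones I just described; the salary is a made-up number.

    # Sketch of the common 401k match: 50 cents on the dollar, up to 6% of salary.
    def annual_401k_contributions(salary, employee_pct,
                                  match_rate=0.50, match_cap_pct=0.06):
        employee = salary * employee_pct
        employer = min(employee_pct, match_cap_pct) * salary * match_rate
        return employee, employer

    emp, er = annual_401k_contributions(salary=60_000, employee_pct=0.05)
    print(emp, er, emp + er)  # 3000.0 1500.0 4500.0, i.e. 5% + 2.5% = 7.5%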
Before going into the problems, let me go into the benefits of this type of plan. The first benefit is that the employer can't game the system by coming up with optimistic assumptions and underfunding the plan. There is a rule, and the employer is bound by law to put in the exact amount the rule specifies. There is no wiggle room. Second, the employer loses control of the money. The money goes to the plan administrator, typically someone like Fidelity or Vanguard, and is no longer under the employer's control. The employer can't raid the fund. Third, and this is not obvious, all of the "defined contribution" pension cost for a particular year is paid in the year the cost is incurred. With a "defined benefit" plan, extra money may need to be put in (or possibly can be taken out) because previous years now look underfunded (or over funded). This means that a company can accurately predict the cost of doing business in the current year. In summary, with a defined contribution plan the money is reliably there and corporations get a more predictable cost structure. So what's the down side?
The down side is that the employee does not know how much income he will get at retirement. If enough money is put into the defined contribution plan and the money is invested well, an employee can do as well as or better than with a defined benefit plan. But if the money is invested badly, there may be little or nothing left at retirement time. This is not as far-fetched as it sounds. Let's say you were an Enron employee and Enron had a defined contribution pension plan. (I don't know what kind of plan Enron actually had.) With many companies the 50% match on the 401k is in company stock. And many employees believe in the company they work for, so they invest their portion of the 401k money in company stock too. And some companies will contribute company stock in lieu of cash as their pension contribution. So it would be possible for all of an employee's pension money to be tied up in Enron stock. Enron stock is now worthless, so people who were 100% in Enron were completely wiped out. Enron was a high flier. But for most of its life Washington Mutual was a well run and staid bank, technically a mutual savings bank. For most of the life of the company, employees would have been justified in believing that WaMu stock was a conservative investment. Many WaMu employees had a large part of their retirement tied up in WaMu stock. That investment is now worth a few cents on the dollar and may eventually be worthless. So the down side of defined contribution plans is the investment risk.
In summary, defined benefit and defined contribution pension plans both have risks. With the defined benefit plan you are betting on the company being reasonably ethical and surviving for a long time. It should be noted that the only stock listed in the 1900 Dow Jones Industrial Average that was still there 100 years later was GE. You are also betting you can stay with the same company for your entire adult career. I worked for three companies as an adult, for 19, 4, and 15 years. And I worked hard to stay with a company once I got there. In no case did I quit, although I did retire from the last one. Most people are much less successful at longevity than I have been. My 4 year stint was a dead loss as far as a defined benefit plan is concerned. I do get a welcome but very modest pension from my 19 year stint. By the time I retired from my 15 year stint, the company had converted to a defined contribution plan. So it is a lot harder than it looks to cash in on a defined benefit plan.
I also was able to avoid the "put all your eggs in one stock" problem I described above, as part of a pretty conservative investment strategy that has stood me in good stead. But there's no doubt about it: it would have been easy to screw up some time over the years and lose all or a lot of my retirement money. So it is also harder than it looks to cash in on a defined contribution plan. Most people are more aware of the risks inherent in a defined contribution plan, so they characterize it as the more risky of the two. I hope I have convinced you that it is not necessarily more risky and is frequently less risky. And the fact that both options are risky is why I strongly advocate leaving Social Security pretty much as it is. Assuming it can survive the current political attack on it, it represents the only truly low risk retirement option available to people. Everyone needs a low risk component in their retirement plan.
Finally, what should be done about government pension plans like the one in Wisconsin? Probably the biggest reason private defined benefit pension plans are risky is bad behavior by company management. They either abuse the pension plan or they run the company badly. There is no reason to believe that state legislatures are likely to do a better job. The current activities of the legislatures in Wisconsin and several other states strongly support this contention. The pot of money invariably associated with a defined benefit plan just represents too tempting a target. So I think that governments should follow the lead of private industry, which has almost entirely shifted over to defined contribution pension systems. I think it is inevitable that in the coming years governments will switch over to defined contribution systems too. The switchover needs to be watched carefully. It can be done properly, so that the employee at least starts out with an equivalent benefit. But it can also turn into a license to steal if done badly.
But once the switch is made, opportunities for mischief are greatly reduced. It becomes a discussion between the employer and employee about total compensation. The employer is indifferent as to how much of the compensation is in salary and how much is in the form of a pension contribution; it only cares about the total amount. And there is no longer an opportunity to raid the pension fund or to underfund the plan in the current year. Pensions would be off the table completely in Wisconsin if the state had a defined contribution plan. And that would be better for state employees and for the rest of us.
Saturday, March 5, 2011
Robot Cars
This is the third and final installment of my "robots in transportation" series. The first one was http://sigma5.blogspot.com/2010/12/space-final-frontier.html. It argued in favor of shutting down the manned part of the space program and going with robot space probes. The second installment: http://sigma5.blogspot.com/2011/01/robot-jet-fighters.html discussed unmanned airplanes. Here I discuss robot cars. But before I get into replacing drivers with robots I am going to discuss some other aspects of cars of the future.
Does the car have a future? There are some who would argue that it does not. There is a small but vocal contingent of people in my town who hate cars. They walk, bicycle, or take public transportation, and they see cars as evil incarnate. In the most general sense they are arguing against personal transportation, unless it is people powered. So, taking things in reverse order, is people powered transportation practical? People have been walking since there were people, and animals have been walking for a lot longer, so in some sense walking is practical. But during the long period when walking was how most people got around, most people never ventured more than 25 miles from where they were born. I, for one, do not want to give up the option of venturing further abroad.
Walking also has another disadvantage: you can't carry much along with you. If you intend to walk more than about 10 miles in a day, I would estimate that it would be impossible for most people to carry more than about 100 lbs, and many people couldn't handle even that much. If you want to go further, you need to cut down on your load. If you want to carry more (and the limit would still be under 200 lbs), you would not be able to walk even 10 miles per day. Domestic animals have been around for about 10,000 years as a way to improve the distance/load calculus. People have also been inventing things like ships and wagons as another solution to the problem.
A more recent invention is the bicycle, another approach to beating the distance/load calculus. The annual "Seattle to Portland" bicycle ride in my neck of the woods demonstrates this. Most participants cover the roughly 200 mile distance in two days; many do it in one. So a bicycle allows you to travel four to eight times as far as pure foot power would permit. This is a definite improvement. But most people, given the option of trading a bicycle for an automobile, opt for the car. This is most obvious in China. We have all seen video of hordes of bicycles on the streets of Beijing from a few years ago. But the Chinese are deserting their bikes for cars in the millions. China is now the largest single car market in the world.
Public transportation, typically in the form of buses, but also in the form of light rail, is touted by many as the "correct" alternative to cars when feet or bicycles are not the answer. Why? Well, when you strip the argument down it is efficiency. Public transportation is more efficient and produces less pollution than cars. There is also the gridlock problem. Let's take each of these issues separately.
The theory is that public transportation is more efficient. But is it? If you take a bus and fill it full of people, it will be cheaper per passenger mile than the equivalent number of cars, each with only one person in it. That's the way the efficiency argument is usually presented. But are the buses really full? Currently the answer is pretty much yes. But this is because the number of buses is far less than the number that would be needed to meet the demand in a car free environment. Bus systems all lose money and are limited in size to what the taxpayer will support. Taxpayer support falls off rapidly as the load (the number of people on the typical bus run) decreases. So the current subsidy is only enough to provide for a few pretty full buses. This works because there are lots of cars around to take care of most of the transportation need.
There is also a hidden cost to buses and other mass transportation solutions: lost time. One of the real benefits of a car is that in it I can go wherever I want, whenever I want. I take a lot of short in-city trips. Frequently I have some flexibility as to when I go, so theoretically I could time the trip to fit the bus schedule. But much of the time this is not true. I got my hair cut today. Driving took me about 10 minutes each way. In a bus, if I timed it right, it might have taken me 20 minutes each way. So right away my travel time doubled. Next, it was an appointment, so I really needed to arrive at a specific time. It is possible to go early, but whatever time I spent waiting between when I arrived and when my appointment started would have been lost time. Even on the most traveled bus routes, an "every 20 minutes" schedule is about as good as it gets, so I would have lost another ten minutes synchronizing with the bus schedule. And all this is true for my trip home too. So in this semi-ideal situation, 20 minutes of travel time has ballooned up to 60 minutes. And this is for an in-city to in-city trip.
Buses (or light rail) don't go most suburban places, they do not go every 20 minutes, and they do not run all the time. Let's say we fixed that. Buses would now go everywhere and run all the time on an "every 20 minutes" schedule. What do things look like now? First, we need a lot of buses, between 10 and 100 times as many as we now have. We would also have to use some kind of "hub and spoke" system; it is impractical to have buses running from everywhere to everywhere. So for a lot of trips you would take a local to a hub, a trip to a second hub, and finally a local to your actual destination. That means a 10 minute delay at the first hub, a 10 minute delay at your second hub, and arriving at your destination 10 minutes early. We have added an hour to a typical longer round trip. And, in order to meet our "every 20 minutes" and "goes everywhere" requirements, we are going to be running most buses pretty empty a lot of the time and some buses completely empty some of the time. This scenario may seem unrealistic, but it is exactly what cars provide. I can get in my car whenever I want and go reasonably directly to wherever I want. My route and schedule are completely independent of anyone else's route and schedule. What is your time worth? There is an incredibly large time penalty to shifting most people from traveling by car to traveling by public transportation.
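Here is the time arithmetic from the last two paragraphs in a few lines of Python. All the times are the rough estimates I used above, not measurements.

    # Door-to-door time math from the text, in minutes. All rough estimates.
    drive_one_way = 10
    bus_ride_one_way = 20   # in-city bus trip, roughly double driving
    sync_penalty = 10       # average wait to line up with a 20-minute headway

    print(2 * drive_one_way)                      # car round trip: 20 minutes
    print(2 * (bus_ride_one_way + sync_penalty))  # bus round trip: 60 minutes

    # Hub-and-spoke case: 10 minutes lost at each of two hubs plus
    # arriving 10 minutes early, each way.
    print(2 * (10 + 10 + 10))                     # an extra hour per round trip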
And once you increase the density of public transportation enough to reduce the time cost to a reasonable amount, the efficiency advantage goes out the window. The efficiency of public transportation is none too good now; there is no public transportation system in existence that recovers all of its costs. If you increase the quality of service of a public transportation system to anything approaching what cars currently provide, it becomes fantastically expensive. Even in a place like New York City, with its very high density and a subway system that was built 100 years ago, public transportation is heavily subsidized and there is still lots of automobile traffic.
Next, let's consider pollution. Cars are a definite improvement over horses; they release far less pollution per mile. But is a car inherently a polluter? The answer is no! We know this because we can now buy electric cars. The single problem with electric cars is that the current battery technology sucks. The motors that turn the propellers of the aircraft carrier Enterprise are electric. Given this, it should be clear that there is absolutely no problem making electric motors that provide all the performance anyone could want. But current batteries can't store much power, so manufacturers put in wimpy motors to make the batteries last longer, and people conclude that electric cars are wimpy. Fix the battery problem and you fix the wimpy problem. But until the crappy battery problem is fixed, electric cars are not for everybody.
Given cheap gas and a lack of powerful cheap batteries we will continue to have gas powered cars that pollute. The obvious solution, if we want to reduce the pollution problem, is to make gas more expensive. I drove a big old car while I was in college. It ran on "super". One day I bought super for 29.9 cents/gallon. I said to myself "I will never buy super cheaper in my life". I was right. That was a lot of years ago but it demonstrates what has been happening to gas prices over the last 40 years. Even with the price increases gas is still cheap. But let's assume it gets expensive or we decide for other reasons that we need to make cars much more efficient. What will we do?
The current answer is a hybrid, a combination of gas and electric. There are a number of ways to build a hybrid, and it is not always clear which way a given hybrid works. In a purely gas car you have an engine, a transmission, and some mechanical connections like the differential, and all of this spins shafts that turn the wheels. This is a pretty inefficient process. There is another way to do it, the way aircraft carriers and diesel locomotives do it. There the primary engine (nuclear or diesel) is connected to a generator, and the electricity is fed to electric motors that spin the wheels or propellers. For powerful machines like locomotives and aircraft carriers, this is the most efficient arrangement, and it seems to me that it should be the most efficient way to do cars too. You would also put some batteries and a more complex "control" system in between the generator and the motors on the wheels. This approach has many advantages. You can put in more or fewer batteries. You can add the capability of charging the batteries from the electric grid; more batteries plus charging allows you to run the car on electricity more of the time. The engine can be made more efficient because it is no longer connected to the wheels and doesn't have to run at different speeds and loads. And you get rid of the transmission and the other mechanical equipment that is inefficient at transferring power along the line to the wheels. This general approach should result in the most efficient car. So why don't all cars do it this way?
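Before answering that, here is a back-of-the-envelope comparison in Python of why the series layout can win even though it adds extra conversion steps. Every efficiency number here is an assumed round figure for illustration, not a measured value for any real vehicle.

    # Back-of-the-envelope comparison of the two layouts. All of these
    # efficiency figures are assumed round numbers, not measurements.

    # Conventional layout: engine efficiency varies a lot with speed and
    # load; call it 20% on average, with 90% of that surviving the
    # transmission and differential.
    conventional = 0.20 * 0.90

    # Series layout: the engine always runs at its sweet spot (say 35%),
    # but the power then passes through a generator, power electronics,
    # and a motor, each with its own losses.
    series = 0.35 * 0.95 * 0.97 * 0.92

    print(f"conventional: {conventional:.0%}, series: {series:.0%}")  # ~18% vs ~30%

Under these assumptions, the extra conversion losses are more than paid for by letting the engine run at a single optimal speed and load. So on paper the series layout wins handily.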
It turns out that if you do the same thing a lot of times for a long time you get very efficient at it. Auto makers have been making a lot of traditional cars for a long time. They have gotten very efficient at it. Going to the design I recommend means learning how to do a lot of new things very efficiently. It may be cheaper to do the theoretically less efficient thing (a car with a lot of old technology) because you are so darn good at doing the old thing. Eventually some auto makers will figure out how to be good at making the new kinds of cars. They will force the rest to figure it out too. It might take a while but probably less than 10 years.
Moving from our current "gas" car to hybrid cars is hard but doable. Most of our infrastructure (manufacturing processes, roads, gas stations, etc.) will require little or no modification. Switching to all electric cars would require a much greater change. Batteries are the critical problem. I have been following battery technology for 30 years. The newest batteries are better than the old ones, but we have merely moved from appalling to awful. We need to move all the way to good. Then there is our electricity grid. It is not set up to handle the load that moving a high percentage of our transportation system over to electricity would require. Part of it is just "more": more generating capacity and more transmission lines. We know how to do those things. The part we don't know how to do is storage. Our electric grid operates in real time. It has a little inertia built into it, but mostly it generates and distributes what is needed right now. Any serious imbalance between supply and demand results in outages. And wind, solar, and some other sources are intermittent. It would be nice if we could store the excess from some periods to cover the shortages of other periods. Being able to store large amounts of power for a few days, or even a few hours, would make a tremendous difference. This is the wholesale version of the electric car battery problem. Again, I have been watching this area for over 30 years and not much progress has been made. Nuff said. Back to cars.
There is another "clean car" idea out there: fuel cells running on hydrogen. The idea is to use a fuel cell to turn hydrogen into electricity. NASA has been doing this since the '60s, so it's something we know how to do. The problem is not in the fuel cell, it is in the hydrogen, which has two problems: making it and storing it. If you are NASA sending a space probe to the back side of beyond, the many problems associated with dealing with hydrogen are worth the hassle. But this is not true here on Earth. There is no hydrogen lying around loose to collect; you have to make it. Hydrogen is a constituent of lots of things, including water. But hydrogen really likes to combine with stuff, so it does, and to make it into a fuel we have to uncombine it. That takes lots of energy. And since the whole point of hydrogen is to be a source of energy, having to consume a lot of energy just to create it is self-defeating. A simple way to make hydrogen is to use electricity to separate out the hydrogen in water. Other than the fantastic amount of energy this requires, it works pretty well. And that's the problem with all the alternatives: you end up having to use a lot of energy to make the hydrogen. This is not good. Then, when you have made the hydrogen, you have made something that loves to recombine with other stuff. The name for this "combine" process, in many cases, is explosion. So you need to be very careful how you handle things or you get explosions, or perhaps just a very large, very hot fire.
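Back to the energy cost of making it for a moment: a little arithmetic shows how big the bill is. The 39.4 kWh per kilogram figure is the theoretical minimum to split water; the 70% efficiency is a generic assumed figure for a real electrolyzer, not a number for any particular machine.

    # Rough electrolysis math. 39.4 kWh/kg is the theoretical minimum
    # (higher heating value) to split water into hydrogen and oxygen;
    # the 70% efficiency is an assumed figure for a real electrolyzer.
    THEORETICAL_KWH_PER_KG = 39.4
    assumed_efficiency = 0.70

    kwh_per_kg = THEORETICAL_KWH_PER_KG / assumed_efficiency
    print(f"electricity per kg of hydrogen: about {kwh_per_kg:.0f} kWh")  # ~56

For scale, a kilogram of hydrogen holds roughly as much usable energy as a gallon of gasoline, so that 56 kWh or so is the electric bill for one "gallon" worth of fuel.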
So hydrogen is dangerous and you have to be very careful with it. That sounds like the "storage" problem, but it's not. To store gas in a car you make a thing a couple of cubic feet in size out of sheet metal called a gas tank. It's not very big and it's not very heavy, even full of gas. But in this not very big, not very heavy thing you can put enough gas, with enough energy, to move a big SUV 300 miles. To store the same amount of energy as uncompressed hydrogen you would need a tank many times the size of the SUV. So you have to do something. One option is to compress it. But you have to compress it a lot, so you need a very strong tank, and you need to transfer the hydrogen from the gas station to the car at these very high pressures. This is dangerous and expensive. And the tank probably ends up very heavy in order to be strong enough.
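Here is the volume problem in numbers. The energy densities are approximate textbook figures, and the 16 gallon tank is an assumed size.

    # Rough storage volume math. Energy densities are approximate
    # textbook figures; the 16-gallon tank size is assumed.
    GASOLINE_MJ_PER_L = 34.0    # energy in a liter of gasoline
    H2_GAS_MJ_PER_L = 0.011     # hydrogen gas at room temperature and pressure
    H2_700BAR_MJ_PER_L = 5.0    # hydrogen compressed to about 700 atmospheres

    tank_liters = 16 * 3.785    # a 16-gallon gas tank is about 60 liters
    energy_mj = tank_liters * GASOLINE_MJ_PER_L  # about 2,000 MJ

    print(round(energy_mj / H2_GAS_MJ_PER_L))     # ~187,000 liters uncompressed
    print(round(energy_mj / H2_700BAR_MJ_PER_L))  # ~400 liters even at 700 bar

187,000 liters is a cube almost 6 meters on a side, which really is many times the size of the SUV. Even at 700 atmospheres you need roughly seven times the volume of the original gasoline tank.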
Another approach is a sponge. It turns out that there are certain materials that wick up hydrogen. They do it so well that you can get a lot of cubic feet of hydrogen into a few cubic feet of tank, and the pressure is not very high. It's some kind of chemistry magic, but it works: effectively you get a lot of compression without a lot of pressure. You need a sponge material that will store a lot of hydrogen in each cubic foot of sponge. There are some materials that do this, but not many. And it's tricky. You have to get the hydrogen to go into the sponge at a reasonable rate at roughly room temperature and pressure, which is tough. The hydrogen also needs to leak back out of the sponge without a lot of encouragement, so you can get it back to use in the fuel cell. And the material has to be cheap enough to be practical to put into millions of car fuel tanks. So far no one has come up with a magic sponge material that has all these characteristics. I don't see hydrogen fuel cell cars in any numbers any time soon.
So what are we going to see on the road in the next 20 to 40 years? I don't see anything replacing the car, so we will see lots of cars: a lot of hybrids, some electrics, and a lot of old style gas cars unless gas gets up to $40/gal. I also don't see cars looking a lot different than they do now. One reason to change the shape of cars is aerodynamics, and the first aerodynamic car was introduced in the '30s. But wind resistance does not make much difference in the 30-60 MPH speed range, and it makes even less difference under 30 MPH. So if you can make the power train more efficient or the car drastically lighter, you will get a lot more bang for the buck. I'm sure auto designers will find new and different ways to bend the sheet metal, but that will be due more to fashion trends than anything else. We will see more plastic, especially carbon fiber, but that won't make much change in the look and feel of cars.
But I do predict a major change in one area. It gets back to congestion and the title of this piece: robot cars. A robot car as anything other than Science Fiction is a pretty recent development. The idea of a practical robot car that you could imagine sharing a street with regular cars is only about 10 years old. But the field is now moving rapidly. The first development to demonstrate this was the DARPA Grand Challenge series of rallies. DARPA, a DOD agency, issued its first challenge in this area for an event that took place in 2004. Driverless cars were to navigate on their own over a 150 mile route on regular roads on a closed course (no other traffic). The best car went 7.3 miles before coming to a stop. Not a very impressive showing.
But oh what a difference a year makes. Round two took place in 2005 over a similar course. This time five of the 23 finalists completed the course, including a 30,000 lb military vehicle. The winning vehicle was put together by a team from Stanford University. Two years later, in 2007, round three was held. This time the course was only 60 miles long, but it ran over an "urban" route. Vehicles had to obey speed limits, stop for traffic signs, and avoid other "moving hazard" vehicles. Six teams finished.
So the DARPA challenges produced autonomous vehicles that could tell road from not-road, identify stop signs, avoid moving vehicles, and perform other basic driving tasks. This was a tremendous accomplishment, but could they drive on ordinary roads beside vehicles driven by ordinary people? Except for the last one, the DARPA challenges were a more sophisticated version of what people had been experimenting with for much longer. Several earlier demonstrations featured robot vehicles that could navigate in a closed "toy" system. The last DARPA challenge introduced a more "real world" environment.
Things have been moving forward rapidly since. Google has been experimenting with a driverless car. Most of the testing has been in closed "toy" environments, but not all: the Google car has driven on California freeways and has even navigated Lombard Street, the famous twisty road in San Francisco. The Google car has not been truly driverless; there has been someone aboard who can take over if necessary, but it has rarely been necessary. The Google car has even been involved in an accident: it was rear-ended while stopped at a stop sign. So I have every confidence that we will crack the "robot car" problem from a technical point of view in the next few years.
But the robot car question is also one of those "how do we get there from here" problems. If all cars were already robot cars the problem would be easy, but they are not. I think it is idiotic to expect dedicated robot-car roads separate from non-robot-car roads, so robot cars need to be able to work in a "real world" environment where there are lots of non-robot cars around. DARPA, Google, and whoever comes next are busy proving that robot cars can do this, so technology will not be the impediment. We can see the end state. But it is necessary to see the intermediate steps too.
I believe the foundation we can build the intermediate steps on is a Collision Avoidance System (CAS). Cars have had crude cruise control systems for many years. These are capable of maintaining a constant speed, but it is the driver's responsibility to do collision avoidance. And the early systems were so dumb that you couldn't even temporarily lower the target speed. But we are now seeing much more sophisticated systems coming on line in new cars, especially luxury cars. A simple version ties into the cruise control system: it detects that the vehicle in front is getting too close, alerts the driver, and disconnects the cruise control. Another improvement is a system that checks what's behind you when you are backing up and alerts you. Another will parallel park your car. I believe the current version of the parallel park system operates blindly, but I can see an upgrade that checks for obstacles and stops. Another possible component is a system that looks for vehicles in your blind spot.
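To make the "getting too close" feature concrete, here is a little Python sketch of the underlying logic: estimate the time to collision from the gap and the closing speed, and drop out of cruise control when it gets short. The 3 second threshold and the example sensor readings are assumed values, not anything from a real system.

    # Sketch of the "vehicle ahead is too close" logic. The 3-second
    # threshold and the example sensor readings are assumed values.
    def cruise_control_check(gap_m, my_speed_mps, lead_speed_mps,
                             warn_threshold_s=3.0):
        closing_speed = my_speed_mps - lead_speed_mps
        if closing_speed <= 0:
            return "OK"  # not gaining on the car ahead
        time_to_collision = gap_m / closing_speed
        if time_to_collision < warn_threshold_s:
            return "ALERT: warn driver and disengage cruise control"
        return "OK"

    # 20 m behind a car while closing at 8 m/s: 2.5 seconds to impact.
    print(cruise_control_check(gap_m=20, my_speed_mps=30, lead_speed_mps=22))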
These are the beginning steps. But sensors are getting cheaper and computing power is getting cheaper. Adding more sensors and tying them together to provide a smart cruise control, a "blind spot" (to the side and rear) detection system, and other features gives the auto manufacturers something to build on. They can market these not only as differentiators (my CAS has more features than your CAS) but as features that bring real value to the driver. If you can use your cruise control in heavier traffic and go faster with greater fuel economy, that's worth something. The features that warn you of vehicles in your blind spot, save you from backing into things, and do most of the work in parking your car would all be appealing to me. And the equipment that enables all this is the equipment that can be enhanced to provide the robot car capability.
There is one feature that I see that is important to moving things along in the proper direction. GM has an "EN-V" program. The cars themselves are cute and toys, in my opinion. But they do have one feature I see as a good idea. The cars talk to each other. This allows one EN-V to not run into another EN-V. But the system is proprietary. I think it would be a big help if the automakers got together and built a standard for cars talking to each other. The difference in road knowledge possible when a car is on its own versus even the situation where a only few of the cars are exchanging information is tremendous. Imagine a simple situation where one car is following another car. Assume the cars are exchanging information and the rear car is following the front car in "cruise control" mode. The rear car could easily maintain a constant distance because it knows the speed of the front car. Now let's say that he front car needed to brake severely to avoid an obstacle. It could pass the information back to the rear car so that it too could slow and avoid a collision. And in a more mundane case, say the front car was about to exit the freeway. It could signal the following car so it could break off station keeping. There would be fewer situations where the rear car could provide information useful to the front car but there would be some.
Now imagine a situation where most cars were information sharing and had sophisticated robot car capability. Here you could transition to convoying. This would let cars stay closer together and go faster safely. This increases the effective capacity of our current road system thus reducing congestion. And with many sensors in many vehicles the chances of a surprise that might lead to an accident become very small. From here it becomes possible to transition to a true robot car environment. The result would be a cleaner, safer, and more efficient situation than what we have now. Why more efficient? As with bicycles, slowing down and then speeding up uses a lot of energy. If you can save the energy you save the cost of generating the energy. So we get the benefits the car haters desire without getting rid of the convenience benefit car lovers love. A win all around.
Does the car have a future? There are some who would argue that it does not. There is a small but vocal contingent of people in my town who hate cars. They walk, bicycle, or take public transportation. They see cars as evil incarnate. In the most general sense they are arguing against personal transportation, unless it is people powered. So, taking things in reverse order, is people-powered transportation practical? People have been walking since there were people. And animals have been walking for a lot longer. So in some sense walking is practical. But during the long period when walking was how most people got around, most people never ventured more than 25 miles from where they were born. I, for one, do not want to give up the option of venturing further abroad.
Walking also has another disadvantage: you can't carry much along with you. If you intend to walk more than about 10 miles in a day, I would estimate that it would be impossible for most people to carry more than about 100 lbs. Many people couldn't handle even that amount. And if you want to go farther you need to cut down on your load. If you want to carry more (and the limit would still be under 200 lbs), you would not be able to walk even 10 miles per day. Domestic animals have been around for about 10,000 years as a way to improve the distance/load calculus. People have also been inventing things like ships and wagons as another solution to the same problem.
A more recent invention is the bicycle, another approach to beating the distance/load calculus. There is an annual "Seattle to Portland" bicycle ride in my neck of the woods that demonstrates this. Most participants travel the roughly 200 mile distance in two days. Many of them do it in one. So a bicycle allows you to travel four to eight times as far as pure foot power would permit. This is a definite improvement. But most people, given the option of trading a bicycle for an automobile, opt for the car. This is most obvious in China. We have all seen video of hordes of bicycles on the streets of Beijing from a few years ago. But the Chinese are deserting their bikes for cars by the millions. China is now the largest single car market in the world.
Public transportation, typically in the form of buses but also in the form of light rail, is touted by many as the "correct" alternative to cars when feet or bicycles are not the answer. Why? Well, when you strip the argument down, it comes to efficiency: public transportation is more efficient and produces less pollution than cars. There is also the gridlock problem. Let's take each of these issues separately.
The theory is that public transportation is more efficient. But is it? If you take a bus and fill it full of people it will be cheaper per passenger-mile than the equivalent number of cars, each with only one person in it. That's the way the efficiency argument is usually presented. But are the buses really full? Currently the answer is pretty much yes. But this is because the number of buses is far less than the number that would be needed to meet the demand in a car-free environment. Bus systems all lose money and are limited in size to what the taxpayer will support. Taxpayer support falls off rapidly as the load (the number of people on the typical bus run) decreases. So the current subsidy is only enough to provide for a few pretty full buses. This works because there are lots of cars around to take care of most of the transportation need.
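To put rough numbers on the load question, here is a back-of-the-envelope calculation. The mileage figures are my own round-number assumptions (a transit bus at about 4 MPG, a car at about 25 MPG), not transit agency data:

    # Fuel burned per passenger-mile, using assumed round-number MPG
    # figures for a transit bus and a single-occupant car.
    BUS_MPG = 4.0
    CAR_MPG = 25.0

    def gallons_per_passenger_mile(vehicle_mpg, passengers):
        return 1.0 / (vehicle_mpg * passengers)

    solo_car = gallons_per_passenger_mile(CAR_MPG, 1)
    for riders in (40, 10, 6, 3, 1):
        bus = gallons_per_passenger_mile(BUS_MPG, riders)
        print(f"{riders:2d} riders: bus {bus:.4f} gal/pass-mile,"
              f" solo car {solo_car:.4f} -> bus wins: {bus < solo_car}")
    # Break-even is around 25/4, about 6 riders. A full bus wins easily;
    # a nearly empty one loses to the single-occupant car.

The point is not the exact numbers; it is that the efficiency claim quietly assumes full buses.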
There is also a hidden cost to buses and other mass transportation solutions: lost time. One of the real benefits of a car is that in it I can go long distances whenever I want. I take a lot of short in-city trips. Frequently I have some flexibility as to when I go, so theoretically I could time the trip to fit the bus schedule. But much of the time this is not true. I got my hair cut today. It took me about 10 minutes each way. In a bus, if I timed it right, it might have taken me 20 minutes each way. So right away my travel time doubled. Next, it was for an appointment, so I really needed to travel to the appointment at a specific time. It is possible to go early, but whatever time I would have waited between when I arrived and when my appointment started would have been lost time. Even on the most traveled bus routes an "every 20 minutes" schedule is about as good as it gets. So I would have lost another ten minutes in synchronizing with the bus schedule. And all this is true for my trip home too. So in this semi-ideal situation 20 minutes of travel time has ballooned up to 60 minutes. And this is for an in-city to in-city trip.
Buses (or light rail) don't go most suburban places. And they do not go every 20 minutes and they do not run all the time. Let's say we fixed that: buses now go everywhere and they run all the time on an "every 20 minutes" schedule. What do things look like now? First, we need a lot of buses, between 10 and 100 times as many as we now have. We would also have to use some kind of "hub and spoke" system; it is impractical to have buses running from everywhere to everywhere. So for a lot of trips you would take a local to a hub, a trip to a second hub, and finally a local to your actual destination. That means a 10 minute delay at the first hub, a 10 minute delay at the second hub, and arriving at your destination 10 minutes early. We have added an hour to a typical longer round trip. And, in order to meet our "every 20 minutes" and our "goes everywhere" requirements, we are going to be running most buses pretty empty a lot of the time and some buses completely empty some of the time. The scenario I have outlined may seem unrealistic but it is exactly what cars provide. I can get in my car whenever I want and go reasonably directly to wherever I want. My route and schedule are completely independent of anyone else's. What is your time worth? There is an incredibly large time penalty to shifting most people from traveling by car to traveling by public transportation.
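Here is the same time arithmetic as a small sketch. The minute values are the rough assumptions from the two paragraphs above, not measured transit data:

    # Door-to-door round trip times, car versus bus, using the rough
    # minute figures assumed in the text.
    def car_round_trip(drive_min):
        return 2 * drive_min

    def bus_round_trip(ride_min, headway_min, legs=1):
        # Each boarding costs, on average, half a headway of waiting
        # to synchronize with the schedule.
        wait = headway_min / 2
        return 2 * legs * (ride_min + wait)

    # The haircut trip: 10 minutes each way by car, ~20 by bus,
    # 20 minute headways, one leg.
    print("car:", car_round_trip(10), "min")              # 20
    print("bus:", bus_round_trip(20, 20), "min")          # 60
    # A hub-and-spoke suburban trip: three legs of ~15 minutes each way.
    print("hub:", bus_round_trip(15, 20, legs=3), "min")  # 150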
And, once you increase the density of public transportation enough to reduce the time cost to a reasonable amount, the efficiency goes out the window. And the efficiency of public transportation is none too good now. There is no public transportation system in existence that recovers all of its costs. If you increase the quality of service of a public transportation system to anything approaching that currently provided by cars, it becomes fantastically expensive. Even in a place like New York City, with its subway system that was built 100 years ago and its very high density, public transportation is heavily subsidized and there is still lots of automobile traffic.
Next, let's consider pollution. Cars are a definite improvement over horses. They release far less pollution per mile. But is a car inherently a polluter? The answer is no! We know this because we can now buy electric cars. The single problem with electric cars is that the current battery technology sucks. The motors that turn the propellers of the aircraft carrier Enterprise are electric. Given this, it should be clear that there is absolutely no problem making electric motors that will provide all the performance anyone could want. But current batteries can't store much power. So manufacturers put in wimpy motors to make the batteries last longer, and so people think electric cars are wimpy. Fix the battery problem and you fix the wimpy problem. Until the crappy battery problem is fixed, electric cars are not for everybody.
Given cheap gas and a lack of powerful cheap batteries we will continue to have gas powered cars that pollute. The obvious solution, if we want to reduce the pollution problem, is to make gas more expensive. I drove a big old car while I was in college. It ran on "super". One day I bought super for 29.9 cents/gallon. I said to myself "I will never buy super cheaper in my life". I was right. That was a lot of years ago but it demonstrates what has been happening to gas prices over the last 40 years. Even with the price increases gas is still cheap. But let's assume it gets expensive or we decide for other reasons that we need to make cars much more efficient. What will we do?
The current answer is a hybrid, a combination of gas and electric. There are a number of ways to do a hybrid, and it is not obvious from the outside how most current hybrids work. In a purely gas car you have an engine, a transmission, and some mechanical connections like the differential, and all of this spins shafts that turn the wheels. This is a pretty inefficient process. There is another way to do it, the way aircraft carriers and diesel locomotives do it. There the primary motor (nuclear or diesel) is connected to a generator. The electricity is fed to electric motors that spin the wheels. For powerful machines like locomotives and aircraft carriers, this is the most efficient way to do it. It seems to me that this should be the most efficient way to do cars too. You would also put some batteries and a more complex "control" system in between the generator and the motors on the wheels. This approach has many advantages. You can put in more or fewer batteries. You can add the capability of charging the batteries from the electric grid. More batteries and charging allow you to run the car on electricity more of the time. The motor-generator approach also lets you make the engine more efficient, because it no longer has to run at different speeds and loads; it is no longer connected to the wheels. And you get rid of the transmission and other mechanical equipment that is inefficient at transferring power along the line to the wheels. This general approach should result in the most efficient car. So why don't all cars do it this way?
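To make the series layout concrete, here is a minimal sketch of the power flow. The engine output and battery size are numbers I made up for illustration. The engine never touches the wheels; it only tops up the battery, which is what lets it sit at its one most efficient operating point:

    # Toy series-hybrid power flow: generator charges the battery,
    # battery feeds the wheel motors. Numbers are illustrative only.
    ENGINE_KW = 25.0          # generator output when the engine runs
    BATTERY_KWH = 10.0        # usable battery capacity

    def step(demand_kw, charge_kwh, dt_h=1.0 / 3600):
        engine_on = charge_kwh < 0.8 * BATTERY_KWH   # simple charge rule
        supply = ENGINE_KW if engine_on else 0.0
        # The battery absorbs any surplus and covers any deficit.
        charge_kwh += (supply - demand_kw) * dt_h
        charge_kwh = min(max(charge_kwh, 0.0), BATTERY_KWH)
        return charge_kwh, engine_on

    charge = 5.0
    for demand in (10, 60, 60, 5, 0):   # cruise, hard acceleration, idle
        charge, on = step(demand, charge)
        print(f"demand {demand:2d} kW, engine {'on' if on else 'off'},"
              f" battery {charge:.4f} kWh")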
It turns out that if you do the same thing a lot of times for a long time you get very efficient at it. Auto makers have been making a lot of traditional cars for a long time. They have gotten very efficient at it. Going to the design I recommend means learning how to do a lot of new things very efficiently. It may be cheaper to do the theoretically less efficient thing (a car with a lot of old technology) because you are so darn good at doing the old thing. Eventually some auto makers will figure out how to be good at making the new kinds of cars. They will force the rest to figure it out too. It might take a while but probably less than 10 years.
Moving from our current "gas" car to hybrid cars is hard but doable. Most of our infrastructure (the manufacturing process, roads, gas stations, etc.) will require little or no modification. Switching to all electric cars would require a much greater change. Batteries are the critical problem. I have been following battery technology for 30 years. The newest batteries are better than the old ones but we have merely moved from appalling to awful. We need to move all the way to good. Then there is our electricity grid. It is not set up to handle the load that moving a high percentage of our transportation system over to electricity would require. Part of the fix is just "more": more generating capacity and more transmission lines. We know how to do those things. The part we don't know how to do is storage. Our electric grid operates in real time. It has a little inertia built into it but mostly it generates and distributes what is needed right now. Any serious imbalance between supply and demand results in outages. And wind, solar, and some other sources are intermittent. It would be nice if we could store the excess from some periods to cover the shortages of other periods. Being able to store large amounts of power for a few days, or even a few hours, would make a tremendous difference. This is the wholesale version of the electric car battery problem. Again, I have been watching this area for over 30 years and not much progress has been made. Nuff said. Back to cars.
There is another "clean car" idea out there: fuel cell/hydrogen. The idea is to use a fuel cell to turn Hydrogen into electricity. NASA has been doing this since the '60s so it's something we know how to do. The problem is not in the fuel cell; it is in the Hydrogen. If you are NASA sending a space probe to the back side of beyond, the many problems associated with dealing with Hydrogen are worth the hassle. But this is not true here on Earth. Hydrogen has two problems: making it and storing it. There is no Hydrogen loose in the environment to collect. You have to make it. Hydrogen is a constituent of lots of things, including water. But Hydrogen really likes to combine with stuff, so it does. To turn it into a fuel we have to uncombine it, and that takes lots of energy. The whole point of Hydrogen is to be a source of energy, so having to consume a lot of energy to create it defeats the purpose. A simple way to make Hydrogen is to use electricity to separate out the Hydrogen in water. Other than the fantastic amount of energy this requires, it works pretty well. And that's the problem with the many alternative methods too: you end up having to use a lot of energy to make the Hydrogen. This is not good. Then, once you have made the Hydrogen, you have made something that loves to recombine with other stuff. The name for this "combine" process, in many cases, is explosion. So you need to be very careful how you handle things or you get explosions, or perhaps just a very large, very hot fire.
So Hydrogen is dangerous to store and you have to be very careful. That sounds like the "storage" problem but it's not. To store gas in a car you make a thing out of sheet metal, a couple of cubic feet in size, called a gas tank. It's not very big and it's not very heavy, even full of gas. But in this not very big, not very heavy thing you can put enough gas, with enough energy, to move a big SUV 300 miles. To store the same amount of energy as Hydrogen under similar conditions you would need a tank many times the size of the SUV. So you have to do something. One option is to compress it. But you have to compress it a lot. So you need a very strong tank, and you need to transfer the Hydrogen from the gas station to the car at these very high pressures. This is dangerous and expensive. And the tank is now very heavy in order to be strong enough.
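The size problem is simple arithmetic. Using approximate textbook energy densities (roughly 34 MJ per liter for gasoline, about 0.011 MJ per liter for Hydrogen gas at room pressure, and around 5 MJ per liter compressed to ~700 bar), and ignoring the efficiency differences between engines and fuel cells:

    # Rough volume arithmetic behind "a tank many times the size of
    # the SUV". Energy densities are approximate textbook values.
    GASOLINE_MJ_PER_L = 34.0
    H2_AMBIENT_MJ_PER_L = 0.011     # Hydrogen gas at room pressure
    H2_700BAR_MJ_PER_L = 5.0        # compressed to ~700 bar

    tank_l = 75.0                   # a ~20 gallon SUV gas tank
    energy_mj = tank_l * GASOLINE_MJ_PER_L

    print("ambient H2:", round(energy_mj / H2_AMBIENT_MJ_PER_L / 1000), "m^3")
    # about 230 cubic meters, far larger than the SUV itself
    print("700 bar H2:", round(energy_mj / H2_700BAR_MJ_PER_L), "liters")
    # about 510 liters: a workable volume, but only with a heavy, very
    # strong pressure vessel and high-pressure refueling gear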
Another approach is a sponge. It turns out that there are certain materials that wick up Hydrogen. They do it so well that you can get a lot of cubic feet of Hydrogen into a few cubic feet of tank, and the pressure is not very high. It's some kind of chemistry magic, but it works: effectively you get a lot of compression without a lot of pressure. The catch is that you need a sponge material that stores a lot of Hydrogen in each cubic foot of sponge. There are some materials that do this, but not many. And it's tricky. You have to get the Hydrogen to go into the sponge material at a reasonable rate at roughly room temperature and pressure. This is tough. And the Hydrogen needs to leak back out of the sponge material without a lot of encouragement so that you can use it in the fuel cell. And it has to be cheap enough to be practical to put into millions of car fuel tanks. So far no one has come up with a magic sponge material that has all these characteristics. I don't see Hydrogen fuel cell cars in any numbers any time soon.
So what are we going to see on the road in the next 20-40 years? I don't see anything replacing the car. So we will see lots of cars. I see a lot of hybrids, some electrics, and a lot of old style gas cars unless gas gets up to $40/gal. I also don't see cars looking a lot different than they do now. One reason to change the shape of cars is aerodynamics. The first aerodynamic car was introduced in the '30s. But wind resistance does not make much difference in the 30-60 MPH speed range, and it makes even less difference under 30 MPH. So if you can make the power train more efficient or the car drastically lighter, you will get a lot more bang for the buck. I'm sure auto designers will find new and different ways to bend the sheet metal, but that will be due more to fashion trends than anything else. We will see more plastic and composites, especially carbon fiber, but that won't make much change in the look and feel of cars.
But I do predict a major change in one area. It gets back to congestion and the title of this piece: robot cars. A robot car as anything other than Science Fiction is a pretty recent development. The idea of a practical robot car that you could imagine sharing a street with regular cars is only about 10 years old. But the field is now moving rapidly. The first development to demonstrate this was the DARPA Grand Challenge series of rallies. DARPA, a DOD agency, issued its first challenge in this area for an event that took place in 2004. Driverless cars were to navigate on their own over a 150 mile route on regular roads on a closed course (no other traffic). The best car went 7.3 miles before coming to a stop. Not a very impressive showing.
But oh what a difference a year makes. Round two took place in 2005. The course was similar. But this time 6 out of 15 vehicles finished including one that was a 30,000 lb military vehicle. The winning vehicle was put together by a team from Stanford University. Two years later in 2007 round three was held. This time the course was only 60 miles long but it was over an "urban" route. Vehicles had to obey speed limits, stop for traffic signs, and avoid other "moving hazard" vehicles. Again, 6 teams finished.
So the DARPA challenges resulted in autonomous vehicles that could tell road from not-road, identify stop signs, avoid moving vehicles, and perform other basic driving tasks. This was a tremendous accomplishment, but could they drive on ordinary roads beside vehicles driven by ordinary people? Other than the last one, the DARPA challenges represented a more sophisticated version of what people had been experimenting with for much longer. Several demonstrations had been done earlier with robot vehicles that could navigate in a closed "toy" system. The last DARPA challenge introduced a more "real world" environment.
Things have been moving forward rapidly since. Google has been experimenting with a driverless car. Most of the testing has been in closed "toy" environments, but not all. The Google car has driven on California freeways. It has even navigated Lombard Street, the famously twisty road in San Francisco. The Google car has not been actually driverless; there has been someone aboard who can take over if necessary. But that has rarely been necessary. The Google car has even been involved in an accident: it was rear-ended while stopped at a stop sign. So I have every confidence that we will crack the "robot car" problem from a technical point of view in the next few years.
But the robot car question is also one of those "how do we get there from here" problems. It is easy to imagine a world in which all cars are robot cars, but that is not the world we live in now. I think it is idiotic to think that we will have dedicated robot car roads and separate non-robot car roads, so robot cars need to be able to work in a "real world" environment where there are lots of non-robot cars around. DARPA, Google, and whoever comes next are busy proving that it is possible to build robot cars that can do this. So technology will not be the impediment, and we can see the end state. But it is necessary to see the intermediate steps too.
I believe the foundation we can build the intermediate steps on is a Collision Avoidance System (CAS). Cars have had crude cruise control systems for many years. These are capable of maintaining a constant speed, but it was the driver's responsibility to do collision avoidance. And the systems were so dumb that you couldn't even lower the target speed. But we are now seeing much more sophisticated systems coming on line in new cars, especially luxury cars. A simple version ties into the cruise control system. It detects that the vehicle in front is getting too close, alerts the driver, and disconnects the cruise control. Another improvement is a system that checks what's behind you when you are backing up and alerts you. Another system will parallel park your car. I believe the current version of the parallel park system operates blindly, but I can see an upgrade that checks for obstacles and stops. Another possible component is a system that looks for vehicles in your blind spot.
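A sketch of the basic logic is below. This is my own illustration of the idea, not any automaker's algorithm, and the two-second gap is an assumed value:

    # Smart cruise control, minimal version: hold the set speed, but if
    # the car ahead closes inside a safe time gap, warn and disengage.
    SAFE_GAP_S = 2.0    # assumed minimum following time, in seconds

    def cruise_step(set_speed, own_speed, gap_m, lead_speed):
        """Return (target_speed, cruise_engaged, warning)."""
        time_gap = gap_m / own_speed if own_speed > 0 else float("inf")
        if time_gap < SAFE_GAP_S and lead_speed <= own_speed:
            return None, False, "TOO CLOSE - cruise disengaged"
        return set_speed, True, None

    # Closing on a slower car 30 m ahead at 30 m/s (about 67 MPH):
    print(cruise_step(set_speed=30.0, own_speed=30.0,
                      gap_m=30.0, lead_speed=25.0))
    # -> (None, False, 'TOO CLOSE - cruise disengaged')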
These are the beginning steps. But sensors are getting cheaper and computing power is getting cheaper. Adding more sensors and tying them together to give you a smart cruise control, a "blind spot" (to the side and rear) detection system, and other features gives the auto manufacturers something to build on. They can market these not only as differentiators (my CAS has more features than your CAS); at least some of the features will bring real value to the driver. If you can use your cruise control in heavier traffic and go faster with greater fuel economy, that's worth something. The features that warn you of vehicles in your blind spot, save you from backing into things, and do most of the work in parking your car would all be appealing to me. The equipment that enables all this is the equipment that can be enhanced to provide the robot car capability.
There is one feature that I see as important to moving things along in the proper direction. GM has an "EN-V" program. The cars themselves are cute but toys, in my opinion. But they do have one feature I see as a good idea: the cars talk to each other. This allows one EN-V to not run into another EN-V. But the system is proprietary. I think it would be a big help if the automakers got together and built a standard for cars talking to each other. The difference in road knowledge between a car that is on its own and a situation where even only a few of the cars are exchanging information is tremendous. Imagine a simple situation where one car is following another. Assume the cars are exchanging information and the rear car is following the front car in "cruise control" mode. The rear car could easily maintain a constant distance because it knows the speed of the front car. Now let's say the front car needed to brake severely to avoid an obstacle. It could pass that information back to the rear car so that it too could slow and avoid a collision. And in a more mundane case, say the front car was about to exit the freeway. It could signal the following car so it could break off station keeping. There would be fewer situations where the rear car could provide information useful to the front car, but there would be some.
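To show how little is actually needed, here is a toy version of that exchange. The message format is invented for illustration; it is not the EN-V protocol or any real vehicle-to-vehicle standard:

    # Toy car-to-car messages: the front car broadcasts its state, the
    # follower reacts immediately instead of waiting to sense a change.
    import json

    def broadcast(speed_mps, braking=False, exiting=False):
        return json.dumps({"speed": speed_mps, "braking": braking,
                           "exiting": exiting})

    def follower_action(own_speed, message):
        data = json.loads(message)
        if data["exiting"]:
            return "break off station keeping"
        if data["braking"] or data["speed"] < own_speed:
            return f"slow to {data['speed']} m/s"
        return "hold distance"

    print(follower_action(30.0, broadcast(30.0)))                # hold distance
    print(follower_action(30.0, broadcast(12.0, braking=True)))  # slow to 12.0 m/s
    print(follower_action(30.0, broadcast(30.0, exiting=True)))  # break off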
Now imagine a situation where most cars were information sharing and had sophisticated robot car capability. Here you could transition to convoying. This would let cars stay closer together and go faster safely. This increases the effective capacity of our current road system thus reducing congestion. And with many sensors in many vehicles the chances of a surprise that might lead to an accident become very small. From here it becomes possible to transition to a true robot car environment. The result would be a cleaner, safer, and more efficient situation than what we have now. Why more efficient? As with bicycles, slowing down and then speeding up uses a lot of energy. If you can save the energy you save the cost of generating the energy. So we get the benefits the car haters desire without getting rid of the convenience benefit car lovers love. A win all around.
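The capacity claim is easy to check with a little arithmetic. A lane moves roughly one vehicle per headway (the time gap between successive vehicles), so shrinking the gap multiplies throughput. The headway figures here are illustrative:

    # Lane capacity as a function of headway (time gap between cars).
    def lane_capacity(headway_s):
        return 3600 / headway_s     # vehicles per lane per hour

    print("human drivers, ~2.0 s gaps:", lane_capacity(2.0), "veh/hr")  # 1800.0
    print("convoying,     ~0.5 s gaps:", lane_capacity(0.5), "veh/hr")  # 7200.0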
Saturday, February 19, 2011
Artificial Intelligence
In 1950 Alan Turing, a noted mathematician and cryptographer, published a paper in which he described the "Turing Test". In its modern form, imagine texting with a stranger. You can send any kind of text message you want. The stranger can reply or not, as and how he chooses. After some time (days, perhaps a week) you are asked to answer a simple question: is the stranger a person or a machine? The Turing Test is designed to answer an important question: is it possible to create an intelligent machine? If a machine can pass the Turing Test by successfully impersonating a human, then it is possible to create an intelligent machine, according to Turing.
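As a sketch, the protocol itself is almost embarrassingly simple; all the difficulty hides inside the respondent. This toy harness, with a trivial stand-in for the stranger, just shows the shape of the test:

    # The Turing Test as a protocol sketch: exchange text, then answer
    # one question. The "stranger" here is a trivial stand-in.
    def stranger(message):
        # The stranger can reply or not, as and how it chooses.
        return "Why do you ask?" if message.endswith("?") else ""

    def run_test(judge_questions):
        for q in judge_questions:
            print("judge:   ", q)
            print("stranger:", stranger(q) or "(no reply)")
        # The entire test comes down to this one judgment:
        return input("Person or machine? ")

    # run_test(["Do you dream?", "What is 7 times 8?", "Describe rain."])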
Since then Computer Scientists and many others have been fascinated by the Turing Test. Individuals and groups have set up actual Turing Tests, but it turns out to be tougher than you would think to get right. What would happen, for instance, if the stranger was a human who tried to imitate a machine? Is that really fair? Also, certain limitations are usually necessary because machines that can look, sound, and act like humans exist only in the realm of fiction. Hence the text message scenario.
But the concept behind the Turing Test has continued to fascinate people because Turing had a profound idea: if a machine can do things that intelligent humans do, then it must be intelligent. This seems like a very intuitive and natural approach to figuring out what we mean by "intelligent". And we have just witnessed a very public event. The TV show "Jeopardy" hosted a three day, two game exhibition match between a computer (actually a network of IBM computers) and two champions (Ken Jennings, winner of 74 matches in a row, and Brad Rutter, the all-time money champ). We know a lot of people are fascinated because the match bumped Jeopardy's ratings up 30%, according to Nielsen.
Technically, it wasn't a Turing Test because we all knew that Watson (the name IBM gave to their computer system) was a machine. But the question lots of people, including yours truly, were asking themselves was whether Watson was able to play like a human. The answer we got, if we just go by results, was that Watson was better during these two games than his human competition. And, given the quality of the players he was up against, we can say that he was far better than the typical Jeopardy contestant and, therefore, far, far better than the rest of us.
As I have pointed out, staging a Turing Test, even a fake one like the Jeopardy exhibition, is a lot tougher than it looks, if one of your objectives is fairness. IBM staged a number of demonstration matches in order to convince the Jeopardy producers that the tournament would be a good idea. In these demonstration matches it was obvious that Watson could be led astray by constructing "questions" properly. (I know about the "response in the form of a question" gimmick. It works well on the show but I am not going to bother with it in this article.) On the other hand, electronics are far faster than people at reflex activities like pressing a button. So a lot of work was put into providing what both sides saw as a level playing field: "normal" Jeopardy questions on one side versus making Watson actually push a button on the other. And both sides wanted an entertaining result, so both sides wanted the human participants to have a chance and Watson not to look like a comedy punch line. They succeeded. It was fun to watch.
So how did it all come out? Well, Watson showed some weak spots but generally won handily. The final score was Watson - $77,147, Jennings - $24,000, and Rutter - $21,600. Watson was also way ahead at the end of the first match. And Watson was pretty good at figuring out whether he knew the answer or not. We got to see Watson's top three possible answers for most questions. The top answer was coded green for confident, red for not confident, and yellow for somewhere in between. Most of the time Watson's best answer was green, and when Watson coded his best answer red, another player usually rang in first. Watson also rang in first about two thirds of the time. But the tale is told by the answers Watson got wrong, especially the ones he got really wrong.
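The green/yellow/red display suggests a simple confidence gate. The cutoffs below are invented for illustration; IBM's actual thresholds were surely more subtle:

    # Confidence-gated ring-in, with made-up thresholds.
    GREEN, YELLOW = 0.80, 0.50

    def color(confidence):
        if confidence >= GREEN:
            return "green"
        return "yellow" if confidence >= YELLOW else "red"

    def buzz_decision(candidates):
        """candidates: (answer, confidence) pairs, best answer first."""
        answer, conf = candidates[0]
        return answer, color(conf), conf >= GREEN

    print(buzz_decision([("Slovenia", 0.91), ("Serbia", 0.55)]))
    # -> ('Slovenia', 'green', True): ring in
    print(buzz_decision([("Serbia", 0.62), ("Slovenia", 0.57)]))
    # -> ('Serbia', 'yellow', False): sit this one out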
We can decide Watson is a machine by the sheer speed and breadth of his performance. But if the IBM people had not thought he was quick and knew lots of stuff, the exhibition would never have taken place. So that's not enough. We all know that modern computers can organize a lot of data. But, while computers are very good at dealing with "structured" data, say where you have a table with rows and columns, computers are poor at dealing with unstructured data. Put simply, computers can't read.
Oh, they can scan text. They can identify all the letters and assemble the letters into words by taking advantage of spaces and other punctuation. But it is very hard for computers to take the next step and understand what the words mean. Most of the truly massive amount of data that was loaded into Watson was in the form of long sequences of text. All of Wikipedia was loaded into Watson. Wikipedia consists of over 2 million articles, and each article consists mostly of standard text, because that's what people are good at using. The Internet Movie Database was also loaded in, along with a truly astounding number of other references. A lot of the IMDB data is structured: you have the movie name at the top, then a section that lists each actor and role, one to a line, and so on. If you take a hard look at IMDB you will find that it is not quite that easy, but for IMDB it seems like you at least have a chance of sorting much of it out. For Wikipedia and most of what was loaded into Watson it is a lot harder.
I might find the sentence "Adam begat Cain". This tells a person that you have a parent-child relationship where Adam is the father and Cain is the son. And to completely nail it down, you have to know that Adam and Cain are both male names. But what about "A Boy Named Sue", the popular Johnny Cash song? While Sue is normally a female name, in this case Sue is a male. A friend had a dog named Sam, short for Samantha; Sam is usually the name of a male person. And I could come up with even tougher examples where it is hard for a machine to make sense of things. There is ambiguity. There are contradictions. People are pretty good at functioning in this kind of messy environment, but computers aren't.
So how did Watson do? Watson came up with the correct answer in a truly astounding number of areas. So whatever the IBM people did to collect and organize Watson's data, it worked pretty well. Most of the time Watson came up with a green answer that was correct. In one case Watson came up with a yellow rated answer of "Serbia" when the correct answer was "Slovenia". I wouldn't have known which answer was the correct one. So I score Watson high for getting close and knowing when he wasn't sure. I don't know what process the IBM people used to assemble the database. The advantage they had was that this could all be done "offline" before the exhibition started. And it might be that they cheated by using statistical techniques like "this word is frequently found near that word". But if they were able to do the linguistic analysis necessary to get from "Adam begat Cain" to "Adam is the father of Cain", "Cain is the son of Adam", etc., in other words to turn strings of text into usable information, that would be a truly useful feat.
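Here is a toy version of that linguistic step, one regular expression standing in for what must be an enormous battery of patterns and analyses in the real system:

    # Toy relation extraction: turn "X begat Y" text into usable facts.
    import re

    BEGAT = re.compile(r"(\w+) begat (\w+)")

    def extract_facts(sentence):
        facts = []
        for parent, child in BEGAT.findall(sentence):
            facts.append((parent, "father_of", child))
            facts.append((child, "son_of", parent))  # assumes male names!
        return facts

    print(extract_facts("Adam begat Cain"))
    # [('Adam', 'father_of', 'Cain'), ('Cain', 'son_of', 'Adam')]

And "A boy named Sue" is exactly the kind of input that breaks the male-name assumption baked into that rule, which is the whole point: every such rule has exceptions.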
Jeopardy also poses quite a challenge in the structure of its clues. They are not standard English. There are frequently puns and other tricks. These are hard for human contestants to deal with, especially under the time constraint. But they are much more difficult for a computer. And in this case you can't rely on statistical tricks. They won't yield enough information, because Jeopardy clues are very tightly packed and are frequently constructed so that the individual components are ambiguous and you have to combine all of them to narrow things down to one answer.
On several occasions Watson went astray by not figuring out an attribute that the correct answer needed to possess. For instance, in one case Watson gave a green rated answer of "Picasso", which was wrong. The clue was asking for a painting style, not the name of a painter; the correct answer was "Modern Art". This would have been completely obvious to a human. In another case Watson was unable to correctly process the category. It was a tricky one: keys found on a computer keyboard. For instance, one answer that Watson did not ring in on was "F1". But in another case Watson rang in and supplied a green rated answer of "Chemise". There is no "Chemise" key on a computer keyboard, but there is a "Shift" key; the clue had to do with clothing styles. I might or might not have come up with the correct answer, but I would definitely have known that "Chemise" was wrong. Had Watson gotten the category, the questions would have been a piece of cake for him. Watson also answered "Dorothy Parker", an author, when what was required was "Elements of Style", the title of a book. I believe this was on a Daily Double, and Watson did correctly rate his answer as red.
So tricky categories and clues were a disadvantage to Watson. But one area where he should have had a decided advantage was ringing in. One would expect that Watson would let red answers go but would always ring in first when he had a green answer. Yet in 11 cases Watson, with a green answer, was beaten by one of the human players. I don't know what the story was here. Ken Jennings did say somewhere that it is possible to do an "anticipatory" ring in: you try to figure out when Alex is going to finish the "question" and ring in an instant after he should finish. If you ring in early your button is locked out for a while, so there is a high penalty for ringing in early. Successful contestants try to figure out the answer while Alex is still talking. Watson got the questions in text form as soon as they were displayed on the board and employed the same strategy. So, for green answers, Watson's ring-in timing should always have been the same.
Without knowing a lot more of the details (see, I told you this is tricky) it is impossible to give Watson a grade on the Jeopardy version of the Turing Test. But, ignoring the obvious, like the presence of an avatar rather than a real person, and assuming the whole undertaking was "legit", for the most part Watson did very well. He was able to make proper sense of most of the clues most of the time. Besides the avatar there was another dead giveaway. In one case Watson gave the same answer as another contestant who had already gotten it wrong. Watson was deaf; they did not feed any of the audio to him. Again, ignoring the obvious (avatar, deafness), the fact that Watson made at least one bonehead mistake means that technically he failed. But it's still a stunning achievement.
So, beyond the Turing Test aspect, what does this all mean? I was in school in the early '70s, one of those times when Artificial Intelligence (AI) research was on the rise. A number of "proof of concept" projects had generated a lot of buzz inside the Computer Science community. The story was "just let us crank these up a bit and see what we can really do". But none of these projects was able to progress past "proof of concept" into something more general and more powerful. After watching for a couple of years I decided that real progress in AI was a long way away and that AI was really hard to do. Unfortunately, my observations turned out to be spot on. There hasn't been a lot of headline progress on AI since. But Watson proves that real progress has been made.
The issue I discussed above of turning text into data was completely beyond the state of the art in the '70s for anything but toy environments. If, as it seems, the Watson project has been able to process astounding amounts of raw text data and turn it into real information that can be used by computers, that is tremendous and real progress. There is another aspect of the Watson project that I want to discuss next. That's machine learning.
The original approach to AI was to put in a bunch of rules. Things like "normal temperature for a human is 97-99 degrees Fahrenheit", that sort of stuff. You put in a bunch of rules and the computer used them to answer your question. The theory was that if you had enough rules and they were good rules, you could get a good result. That approach eventually petered out. The modern approach to the same problem is to build some kind of general structure and analytic capability into the system. Then you give the system a bunch of examples, each labeled right or wrong, called a "training set". In our case they fed in a bunch of right and wrong "answers and questions". The idea is that the system does some kind of analysis of the training set and figures out its own rules. Computer Science people have been playing around with this approach for many years now. And it's the approach the IBM people used. Based on their results, I would have to say that the state of the art in machine learning is now pretty good. And that's good news too.
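The contrast is easy to show in miniature. Instead of hand-coding the 97-99 degree rule from above, we let the system derive a cutoff from a labeled training set (the temperatures and labels below are invented for illustration):

    # Learning a rule from a training set instead of hand-coding it.
    examples = [(97.0, "normal"), (98.2, "normal"), (98.9, "normal"),
                (99.8, "fever"), (101.5, "fever"), (103.0, "fever")]

    def learn_cutoff(training_set):
        normal = [t for t, label in training_set if label == "normal"]
        fever = [t for t, label in training_set if label == "fever"]
        # The learned "rule": split halfway between the class averages.
        return (sum(normal) / len(normal) + sum(fever) / len(fever)) / 2

    cutoff = learn_cutoff(examples)
    print("learned rule: fever if temp >", round(cutoff, 2))  # about 99.73
    for temp in (98.6, 100.2):
        print(temp, "->", "fever" if temp > cutoff else "normal")

Real machine learning systems are vastly more elaborate than this, but the shape is the same: structure plus training set in, rules out.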
So where do we go from here? Years ago a "physician's assistant" program was developed. Modern medicine is (and was) unbelievably complicated. The idea was to provide some computer assistance to help medical people diagnose tough cases. The program ultimately went nowhere. But this project seems like a perfect fit for Watson's capabilities. Pump in a lot of medical data, most of which is typically found in text form. This medical data will be chock full of ambiguities and contradictions, just as the Jeopardy database was. Watson has tremendous English language capabilities, and "medical language" should be no harder to master than "Jeopardy language". Finally, the "training set" approach to machine learning should work as well for medicine as it did for Jeopardy. So this sounds like a good application for Watson technology.
This has already occurred to IBM. They are starting work on a medical version of Watson. Another area they have identified is the law. Again you have vast amounts of text, in this case legalese, that is ambiguous and contradictory. But again the same types of abilities that made Jeopardy Watson successful look like a good fit. Certainly if one or both of these projects are successful we can expect IBM (and eventually others) to come up with other applications.
Finally, can we look forward to a rematch? I don't think so. I have seen some of the test runs IBM did before the Jeopardy producers decided Watson was ready for prime time. The goofs were much more frequent and much more embarrassing. Yet the progress from there to what we saw in prime time only a few months later was truly astounding. Given even a few more months, I am sure the Watson team could fix the few goofs we saw, and many more, to the point where human contestants would go from having little chance to having no chance at all. The only reason I can think of that I might have misjudged the situation would be if the books were cooked in some non-obvious way. Since I don't think the books were cooked, I think this Jeopardy challenge will be a one-time event.