Wednesday, September 21, 2016

50 Years of Science - Predictions

I have been publishing a series of "50 Years of Science" posts for some time now.  The most recent post (part 7) can be found here:  http://sigma5.blogspot.com/2016/08/50-years-of-sceince-part-7.html.  That post also contains information on how to find the rest of the posts in the series.

For this post I am going to take a slight detour.  The series is based on the work of Isaac Asimov and so is this one.  Here I want to focus on a piece he wrote in 1964 for the New York Times.  It can be found here:  http://www.nytimes.com/books/97/03/23/lifetimes/asi-v-fair.html.  I found out about the piece in an article Kim Stanley Robinson, another highly respected Science Fiction author, wrote for the September, 2016 issue of Scientific American called "The Great Unknown" (see page 80).  The subhead for the article is "Can we trust our own predictions?"  And it turns out that some of the material in this article is based on a piece he wrote that can be found here:  https://www.scifinow.co.uk/blog/kim-stanley-robinson-on-isaac-asimovs-1964-predictions/.  I recommend them all.

I have been consuming predictions (frequently) and making predictions (occasionally) for some time.  If you don't take it too seriously it's a fun exercise.  It turns out that everyone is really bad at it.  That's why predictions that cover any significant period of time should be evaluated primarily on their entertainment value.  Robinson, in his Scientific American piece, goes into some of the reasons everybody, including the experts, is so bad at it.  And, of course, Science Fiction writers, when not being graded on the curve, are almost as bad as everybody else.

In their particular case this can be excused by their need to be entertaining above all else.  Hence their heavy reliance on space babes.  Various alien creatures, then primarily bug-eyed monsters, now more benign creatures like Spock, also feature prominently.  But so far we haven't even found any single-celled alien creatures.  And that means we also haven't found anything larger.  And that, of course, means that space babes are currently strictly confined to fictional realms.

An argument can be made that this is to be expected, but still.  More importantly, we have now spent over fifty years looking for radio signals from far away places and we have struck out there too.  But enough of picking on the work product of people whose background and interests were primarily in the arts rather than the sciences.  Let's move on to an actual science guy, Mr. Asimov (a PhD in biochemistry, plus a scientific textbook to his credit), and see how he did.

Mr. Robinson gives him high marks.  But he is grading on the curve.  And Mr. Asimov's '64 piece was produced in conjunction with the 1964 World's Fair.  So he was definitely interested in being entertaining.  This resulted in Mr. Asimov predicting that various science fiction tropes that were then common would be reality fifty years later.  This led Mr. Asimov badly astray in a number of places.

He goes on about underground houses which use electronics to simulate the natural world.  We now have that capability, at least when it comes to the visual aspect, but we don't use it.  And some variation of smell-o-vision is resurrected periodically but never seems to catch on.  It turns out that there are people like Mr. Asimov who like an artificial environment and don't enjoy being out in the natural world where there are actual plants and animals about.  These people don't bother with a simulation.  And the far more populous group who actually likes a natural environment prefers the real thing to a simulation.

Mr. Asimov goes on about kitchen gadgets.  He got a lot of the specific details wrong but he nailed the broader trend.  He failed completely when he predicted that power cords would go away.  This prediction was based on an assumption that small energy sources, based on radioactivity, would be broadly available, providing a compact, high capacity power supply that would eliminate the need for a power cord.  Alas we are still dealing with batteries that don't work a lot better than those available in the '60s.  So we have power cords all over the place.  We also have a scourge he completely missed, data cords.

On an industrial scale he predicted the development of power plants based on nuclear fusion.  In the intervening fifty years tens of billions of dollars have been invested and we are still trying to get to the prototype stage.  It looks like many tens of billions of additional dollars and a decade or more of additional work will be required to get us to a working model.  And that's if everything goes well.  But that looks pretty unlikely.  He predicted solar power stations but he got the technology wrong.  His were based on mirrors in space, no doubt a result of his science fiction background.  Instead we have wind farms and solar panels.

He is perhaps at his most "woo woo" when it comes to transportation.  His predicted low flying cars (and boats) have not turned up as they are quite impractical.  Tires are still the technology of choice for ground transportation and boats still float in the water.  Another science fiction staple, moving sidewalks, has yet to be found practical outside a few tiny niche applications.  Pneumatic tubes for moving goods around, common in office buildings at the time, have faded away rather than seeing expanded use.

At the time copper signal cables under the oceans were the preferred method of sending electronic messages long distances.  Asimov predicted, as did many others, that geosynchronous satellites would replace them.  They didn't.  Such satellites introduce about a quarter of a second of delay.  People find this annoying when making phone calls.  They might have put up with it but something better came along (see below).  Asimov also forecast a substantial population of people living off the surface of the Earth, including substantial populations occupying space stations and lunar colonies.  Instead all we have are a few people occupying the International Space Station in what is mostly a public relations stunt.
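For the curious, the quarter second figure is just geometry.  A geosynchronous satellite sits roughly 36,000 kilometers up, so a signal has to climb all the way up and come back down at the speed of light before it even reaches the other party.  Here is a back-of-the-envelope check in Python, using round numbers of my own choosing rather than anything Asimov cited:

    # Back of the envelope check on the satellite delay (my numbers, not Asimov's).
    SPEED_OF_LIGHT_KM_S = 299_792   # speed of light, km per second
    GEO_ALTITUDE_KM = 35_786        # altitude of a geosynchronous orbit, km

    one_way_path_km = 2 * GEO_ALTITUDE_KM        # ground -> satellite -> ground
    one_way_delay_s = one_way_path_km / SPEED_OF_LIGHT_KM_S
    round_trip_delay_s = 2 * one_way_delay_s     # the pause you hear waiting for a reply

    print(f"one way delay:    {one_way_delay_s:.2f} seconds")     # about 0.24
    print(f"round trip delay: {round_trip_delay_s:.2f} seconds")  # about 0.48

So a question and its answer are separated by nearly half a second, which is exactly the kind of pause people find maddening on a phone call.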

He predicts a substantial evolution in the job market.  "Mankind will therefore have become largely a race of machine tenders."  This is a bit of an overreach but, as a trend, it is correct.  Robotics and automation are reworking the workplace, particularly in manufacturing, and a lot of what people now do can accurately be described as machine tending.  There is currently a fear that employment will never return to traditional levels.  Asimov saw this as a good thing.  If the "disease of boredom" becoming widespread is the worst problem mankind faces, then to his way of thinking things must be pretty good.

I have skipped over one prediction Asimov made and that is population trends.  He very accurately predicted world population as of 2014.  And he was very concerned about our ability to feed so many people.  This latter fear turned out to be baseless.  We do a much better job in 2014 of feeding everyone than we did in 1964 even though world population has grown substantially.

That's the background.  Now I want to add my thoughts to Robinson's on the business of prediction.  He gives Asimov full credit for his population prediction.  He attributes Asimov's success to the fact that population growth is what he calls a "historically dominant" trend.  These kinds of trends are ones where the driving factors are so powerful that only the most powerful unexpected event or trend can derail them.  At its most basic level a population trend is driven by only two factors.  How many children are women having?  How many of those children live long enough to have children of their own?  It is very easy to determine the answers to these questions to the accuracy necessary to make reliable predictions.
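To make the bookkeeping concrete, here is a toy sketch of those two factors in a few lines of Python.  The numbers (a million women, 2.1 versus 4.0 children per woman, a 95% survival rate) are invented purely for illustration; real demographic projections use full age-structured cohorts, but the arithmetic driving the trend is this simple:

    # Toy projection from the two factors named above.  All numbers are invented
    # for illustration; real demographic models use full age-structured cohorts.
    def project_generations(women, children_per_woman, survival_rate, generations):
        """Track the number of women of child-bearing age, generation by generation."""
        history = [women]
        for _ in range(generations):
            daughters = women * children_per_woman / 2   # roughly half of births are girls
            women = daughters * survival_rate            # fraction who reach child-bearing age
            history.append(round(women))
        return history

    # Replacement-level fertility versus a large-family norm (hypothetical values).
    print(project_generations(1_000_000, children_per_woman=2.1, survival_rate=0.95, generations=3))
    print(project_generations(1_000_000, children_per_woman=4.0, survival_rate=0.95, generations=3))

With the first set of numbers each generation is almost exactly the size of the one before it; with the second, each generation is nearly twice as large.  Nothing short of a dramatic change in one of the two inputs will move the result much.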

Demographers have been tracking these two numbers for many decades now.  Women in advanced nations have been having few children for several decades.  This has been going on long enough that we can predict that for the next few decades the population of these countries will, at best, grow slowly, and in many cases shrink.  In general more and more people are living long enough to reach child-bearing age.  That makes little difference in advanced countries because it is more than offset by how far the children-per-woman number has dropped.  But the population is still continuing to grow in the rest of the world because large families are still the norm there.  And longevity continues to increase.  Only large and easily identified changes in one or both of these factors can significantly change the trajectory.

Robinson identifies a second "historically dominant" trend, global warming.  It behaves much like population.  A few easily monitored processes are responsible for most of the trend.  These processes often operate on a time scale measured in decades.  So a lot of the change we can expect in the next few decades is already baked in.  And any changes big enough to make a noticeable difference in the trajectory will be large, prolonged, and easily spotted.  As with population trends, we will know well in advance if the projections need changing.

Now let's downsize a little.  I next want to introduce a trend that is only somewhat "historically dominant".  I want to talk about integrated circuits, commonly referred to as computer chips.  A critical patent in the field was issued in 1959.  The technology was just coming to the attention of a more general audience about the time Asimov was writing his article, so he can be forgiven for missing its significance.  The trend associated with computer chips was dubbed "Moore's Law" in 1965.  The rapid introduction of personal computers in the '80s quickly made the law well known.

Moore predicted that computer chips would continue to get smaller, faster, and cheaper at a rapid clip.  And computer chips in their more broadly useful guise of integrated circuits ended up being plugged into all manner of gadgets.  The fact that the capabilities of integrated circuits rapidly improved meant that the features and capabilities of these myriad gadgets could also rapidly improve.  So they did.  Moore's Law held for about thirty years.  It still holds to some extent today.  Integrated circuits are no longer getting smaller and faster at a rapid pace but they are still getting cheaper.  And industry keeps finding more and more ways to use them.

Robinson identifies several kinds of trend curves.  The one that fits computer chips is the "S" curve.  It took a while for industry to get good at making chips, so the early part of the curve was flat (low growth/improvement).  Then industry got good at making them and improving them, and the curve bent up sharply into a period of explosive growth.  Now basic physics has put limits on how much better computer chips can get, so the curve has flattened out.
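If you want to see what that "S" shape looks like in the abstract, a logistic function does the job.  The ceiling, midpoint, and steepness below are made-up illustrative values, not a fit to actual transistor counts, but the shape, flat, then explosive, then flat again, is the point:

    import math

    # An illustrative logistic ("S") curve: flat start, explosive middle, flat finish.
    # The ceiling, midpoint, and steepness are invented, not fitted to real chip data.
    def s_curve(year, ceiling=100.0, midpoint=1995, steepness=0.25):
        """Capability, as a percentage of its eventual ceiling, in a given year."""
        return ceiling / (1 + math.exp(-steepness * (year - midpoint)))

    for year in range(1965, 2026, 10):
        print(year, round(s_curve(year), 1))
    # The early decades barely move, the middle decades explode, the recent ones level off.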

The computer chip story is in the middle of the list of ways trends can play out.  At one end of the list we have breakthroughs.  Before the breakthrough you have nothing; afterwards things take off like a rocket.  In 1964 the laser had just been invented.  Lasers took off and are now used in many ways that were unimaginable at the time.  People were predicting this then, but it was only a guess, a guess that eventually panned out.  Another invention that became significant came along somewhat later: fiber optic cables.

By itself this invention was not that significant.  But the combination of lasers, fiber optics, and computer chips enabled our modern "internet" communications infrastructure.  Vast quantities of data can now be moved around the world nearly instantaneously.  This alternative to communications satellites is why they went out of favor.  It is almost impossible to predict this kind of synergistic behavior, where multiple developments combine to change the world.  Without all of them there wouldn't be much going on.

At the other end of our list is the foreseen breakthrough technology that never materializes.  The classic example of this is rockets.  Rockets today are pretty much as they were in 1964.  It is simply too expensive to put people in space, so we don't.  And so any prediction that requires easy access to space turns out to be wrong.

I don't know specifically what Asimov had in mind that would enable ground transportation to float above the ground and water transportation to float above the water.  But whatever it was, it never showed up.  So land vehicles still use tires and water vehicles still float.  Underground houses with fake windows failed to catch on, but not due to any feasibility problem.  They could have been built then and can be built now.  It's just that few people want to live in them.  They are an idea that never became popular.

Asimov did not know about the whole Moore's Law thing.  In spite of that he managed to make reasonably accurate predictions about computers.  He just took it on faith that computers would continue to get better and they did.  His take on general computer capability (e.g. language translation) turned out to be pretty accurate.  Computers are much better at some things than he probably expected and much worse at other things.

A lot of Asimov's fiction involved robots.  So it is perhaps no surprise that he did pretty well on them.  He foresaw household robots that would be large, clumsy, and slow-moving.  We have more computer power available to apply to the task but the task requires more computer power than was predicted.  The two misses balanced out.

And then there are the unforeseen breakthroughs.  The chemical structure of DNA had been determined about a decade earlier.  But DNA is not even mentioned in Asimov's piece.  The intervening half century has seen breakthrough after breakthrough.  The idea that you could quickly and cheaply sequence someone's DNA and use it to determine where their ancestors came from, or diagnose some diseases, or positively identify an individual from a trace so small it is almost undetectable to the naked eye, was so far out that not even science fiction writers imagined it.

DNA in particular and biotechnology in general are now up and coming fields.  Our knowledge in this area is exploding.  I confidently predict that a lot of progress will happen in the next fifty years.  But most people would probably put that prediction into the category of a "historically dominant" trend.  So let me go out on a limb and make a prediction about something that is much more iffy.

There has been a vast amount of discussion about "the singularity".  There seem to be two schools on the subject, with not much else getting any exposure.  It's either "it's going to happen, and any day now" or "anyone who thinks it is ever going to happen is nuts".  So what is "the singularity"?  It's the day when computers get smarter than humans and escape our control.  It is the computer equivalent of the Frankenstein scenario.

An early take on this focused on the phone system.  Certain components could be seen as the equivalent of brain synapses.  When the component count exceeded that of the human brain some magical transformation was supposed to take place and the phone system was supposed to come alive.

Another scenario envisioned some kind of Watson style machine, only more powerful.  At some point some threshold would be passed and it would become sentient and take control of its own fate.  It is easy to shoot down these kinds of scenarios, and Robinson does so in his Scientific American article.  The phone system is not wired either for intelligence or for independent action.  As Robinson observes, "if it can't happen, it won't happen".  His response to the Watson scenario is the observation that Watson can be unplugged.  But is there a way to thread the needle and end up with a third way?  I think there actually is.

The whole field of artificial intelligence is evolving quickly.  A recent book, "Weapons of Math Destruction" by Cathy O'Neil, has some interesting things to say about the field.  What she's talking about is generally referred to as "big data".  Its most obvious aspect is the unbelievable amount of data that is available to and accessible by computer systems.  People want to monetize it.  In plain English this means they want to make money off it.  The problem is that the amount of data is so vast that it is literally beyond the ability of people to make sense of it on their own.  The solution is "machine learning" or, as O'Neil calls it, "invisible algorithms".

Clever software roots around in the data looking for patterns.  There is so much data that it is a mistake to ask "is this specific thing in there somewhere?"  Instead "machine learning" and other computer science techniques are used.  (See O'Neil's book if you want more details.)  These techniques look for any pattern or regularity in the data.  How they do this is now so complex that no one knows exactly how they go about it, or often, in any detail, what they find.  You have to take what the system spits out on faith.  In the end the only question that is important to the people putting up the money is "if we do what the computer says, will we make money?"
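To give a flavor of what "looking for patterns nobody asked about" means in practice, here is a minimal sketch using off-the-shelf clustering.  The fake "customer records" and the neat two-group structure are entirely invented, and the systems O'Neil describes (credit, hiring, policing) are far messier, but the basic move, letting an algorithm find groupings no one specified, looks like this:

    # Minimal sketch of pattern-finding on made-up data.  The records and the
    # two-group structure are invented; this is not any system O'Neil describes.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Fake customer records: [visits per month, average purchase in dollars]
    casual  = rng.normal(loc=[2, 15],  scale=[1, 5],  size=(100, 2))
    regular = rng.normal(loc=[12, 60], scale=[2, 10], size=(100, 2))
    records = np.vstack([casual, regular])

    # Nobody tells the algorithm what the groups mean; it looks for structure on its own.
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)
    print(model.cluster_centers_)   # the two behavioral "patterns" it discovered

Even in this toy case the interesting question, what do the discovered groups actually mean, is left entirely to the humans looking at the output.  Scale that up to thousands of variables and the "invisible algorithm" problem becomes obvious.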

We saw this sort of thing run amok on Wall Street in the financial collapse.  "Trading algorithms" that no one understood made lots of money for big banks until they didn't.  When everything went sideways the Wall Street firms could literally not afford to pull the plug on these systems.  Instead they tinkered around the edges and promised "it's fixed and bad things will never happen again".  But the promise had no substance behind it.

Since then computer power and data storage have gotten cheaper.  And the algorithms have gotten more complex.  In other words, even less is known about what they are actually doing.  But this kind of thing has become even more widespread.  O'Neil's subtitle says that the situation has progressed to the point where it "Threatens Democracy".  This kind of real world experience with actual systems makes a good case that some computer systems are already more intelligent than human beings.  And the Wall Street example demonstrates that it may not be possible to turn off even an obviously misbehaving system.

Let's do another "what if" with something that is looming on the near term horizon.  We are within a few years of having self driving cars on the road.  Since we are doing a "what if", what if self driving car technology is quickly and widely adopted?  And what if it works so well that the traffic fatality rate plunges?  Say it quickly goes from the current 30,000 per year all the way down to 1,000.  It now becomes feasible to thoroughly analyze every fatal incident.  So let's take things one final step further and go full Frankenstein.

What if in 900 of those cases the self driving software is completely blameless?  Say the fatality is caused by a catastrophic mechanical failure or a lightning strike or whatever, anything that lets the self driving technology completely off the hook.  But what if in the remaining 100 fatalities it turns out the self driving technology has done something crazy, something that by no stretch of the imagination is what it should have done?  And what if the software people and the hardware people go over everything with a fine-tooth comb and can find nothing wrong?  They are completely baffled as to why the self driving technology apparently actively decided to kill someone.  Then what?  Well, the system could be turned off.  But that would raise the fatality rate from 1,000 back to 30,000.  So that can't be done.

My point is that it is actually pretty easy to create a credible scenario where something bad seems to be going on, people are not able to figure out what it is, and there are good reasons why the system can't be turned off or even reverted to an older version.

I don't know exactly what "sentient" means.  I don't think sentience will manifest itself in some kind of unambiguous "it's alive" moment.  And what exactly does it mean if you say that the computer has "taken control of its own destiny"?  Many machine learning systems already in use evolve their behavior as they process additional data.  Certainly computers are currently able to make decisions and take actions in the real world based on those decisions.  But are they self aware?  I don't know.  Does it make any difference?  I don't know.  The best I have is that at some point well after the fact we may look back and decide that at some previous point something important changed and as a result we are not in complete control any more.

Is that a prediction?  I guess it is.
