Thursday, August 16, 2018

A Compact History of Computers

For a long time I have thought of myself as a computer guy.  I took my first computer class in 1966 and immediately fell in love.  I then spent more than forty years closely involved with them, first at school and then at work.  I write blog posts about what interests me.  So you would think I would have written a lot about computers.  But I have actually written less than you might think.  And it turns out I have never directly addressed this subject.

Here's a complete list of my computer and computer-adjacent posts.  (Normally I would include a link for each but since there are so many I am just going to list the publication dates.  You can easily locate all of them using the handy "Blog Archive" at the right side of each post, which is organized by date.)  So here's the list:  12/21/2010 - Net Neutrality; 2/19/2011 - Artificial Intelligence; 7/30/2013 - Home Networking; 1/24/2014 - Windows 8.1 - try 1; 9/16/2015 - Hard Disks; a 7 part series running from 9/24/2015 to 9/30/2015 on the Internet and networking; and 5/19/2018 - Computer Chips 101.  And I have purposely left one out.  It is my first post on the subject, and the one most closely aligned with this one.  On 10/26/2010 I posted "Computers I have Known".  So that's the list.  Now to the subject at hand.

Most accounts of the history of computers credit a machine called ENIAC as the first computer.  There used to be some controversy about this but it has mostly died down.  And I think ENIAC is the correct choice.  (I'll tell you why shortly.)  But before I spend time on ENIAC let me devote a very few words to prehistory.

Perhaps the first digital computational device was the abacus and it did influence computer design.  Then a fellow named Charles Babbage designed two very interesting devices, the Difference Engine (1822) and the Analytical Engine (1837).  He never came close to getting either to work but the Analytical Engine included many of the features we now associate with computers.  But, since he was a failure, he and his work quickly fell into obscurity and had no impact on the modern history of computers.  He was eventually rediscovered after computers had been around a while and people went rooting around to see what they could find on the subject.

In the early twentieth century various mechanical calculating devices were developed and saw widespread use.  These gave some hint of what could be done but otherwise had no influence on later developments.  In the years immediately preceding the construction of ENIAC several interesting devices were built.  The Harvard Mark I is given pride of place by some.  The World War II code breaking effort at Bletchley Park in the United Kingdom spawned the creation of a number of "Colossus" machines.  But they were highly classified and so no one who worked on ENIAC or other early computers knew anything about their design or construction.  So where did ENIAC come from?

It arose out of World War II work, but not cryptography.  Artillery field pieces came in many designs.  In order for shells to land where you wanted them to, they had to be aimed correctly.  To do this a "firing table" had to be developed for each make and model.  If you want this type of shell to land this many yards away then you need to set the "elevation" of the gun to this many degrees.  Once you had fired the gun with a few settings, mathematics could be used to "interpolate" the intermediate values.  But with the many makes and models of guns that saw use, and with the other variables involved, a lot of mathematical computations were necessary.  The US Army literally couldn't find enough "computers", people (usually women) who could and did perform the necessary mathematical computations, to keep up with the work.
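
If you like to see ideas in code, here is a toy sketch of the kind of interpolation involved.  All the numbers are made up for illustration; real firing tables accounted for many more variables than this.

```python
# Hypothetical firing-table interpolation: given a few measured
# (elevation, range) pairs for one gun and shell type, estimate the
# elevation needed to hit an intermediate range.  The numbers here
# are invented for illustration, not real ballistics data.

measured = [(10.0, 3000), (20.0, 5200), (30.0, 6800), (40.0, 7900)]  # (degrees, yards)

def elevation_for_range(target_yards):
    """Linearly interpolate between the two bracketing measurements."""
    for (e1, r1), (e2, r2) in zip(measured, measured[1:]):
        if r1 <= target_yards <= r2:
            fraction = (target_yards - r1) / (r2 - r1)
            return e1 + fraction * (e2 - e1)
    raise ValueError("target range outside measured data")

print(elevation_for_range(6000))  # 25.0 degrees with these made-up numbers
```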

Two researchers at the University of Pennsylvania's Electrical Engineering school named Eckert and Mauchly set out to solve this problem by building a machine to crank these calculations out quickly and accurately.  They lost the race in that ENIAC was not ready soon enough before the end of the War to do what it was designed to do:  crank out firing tables for the Army.  But in the end that didn't matter.  People found lots of other uses for it.  One of the first tasks it completed was a set of computations used in the design of the first Atomic Bombs.

ENIAC was constructed as a set of semi-independent functional units.  There were units for mathematical functions like addition, multiplication, division, and square root.  There were "accumulator" units that could remember a number for a short period of time.  There were units that could be used to feed lists of numbers into the machine or to print results out.  And so on.  And the machine was not programmed in the modern sense.  To perform a calculation you literally wired the output of one unit into the input of another.  Only simple computations, those necessary for the calculation of firing tables, were even possible.

So the first step was to structure the problem so that it was within the capability of the machine.  Then a plan for appropriately wiring the functional units together in order to perform the necessary computations was developed.  Then the functional units were wired together using hundreds, perhaps thousands, of "patch" cables, all according to the specific plan for the current computation.  Then the whole thing was fired up.

It might take a couple of days to design the calculation, a day to wire up the ENIAC, and several hours to repetitively perform the same calculation over and over, feeding a few different numbers into each cycle, so that each cycle calculated, for instance, all the numbers to complete one line of the firing table for a specific gun.  ENIAC was able to perform computations at a much faster rate than human computers (i.e. people) could.  It was amazingly fast at the time but glacially slow compared to modern machines.  But it was a start.

And if this doesn't sound like what you think of when you imagine a computer, you are right.  ENIAC was missing several different kinds of functional units we now expect to find in even the simplest modern computer.  But it rightly deserves its place as "the first computer" because the designs for all the modern devices we now call computers descended directly from ENIAC.

ENIAC was missing three kinds of functional units now deemed essential.  The first one is the simplest, the "bus".  "Bus" is an electrical engineering term that far predates ENIAC.  The idea is that you have a bar, wire, or set of wires that connects multiple units together.  All the units share the same bus.  And a bus design allows you to use the bus to connect any functional unit to any other functional unit.  With ENIAC a serial design was used instead.  The approximately forty functional units were laid out side by side (the size of the room dictated that they were actually laid out in the form of a large "U") and only functional units that were close to each other could be connected together.

Later computers had a bus (and often several busses) incorporated into their designs.  This allowed much more flexibility in which functional units could be connected together.  There is a disadvantage to this design idea.  If two functional units are using the bus all other functional units must be disconnected from it.  At any single point in time all but two units are completely cut off from communication.

With the ENIAC design many pairs of functional units could be connected together at the same time.  They always stayed in communication.  But it turned out the flexibility and simplicity of the bus was more advantageous than disadvantageous.  (And designs incorporating multiple buses allow multiple parallel connections, at least in some cases.)  Switching to a bus design from the serial design was an easy change to pull off.
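
Here is a toy sketch of the bus trade-off just described.  Any unit can talk to any other unit, but only one pair can use the bus at a time.  The unit names are, of course, invented.

```python
# A toy model of the bus trade-off: any unit can be connected to any
# other unit, but the bus carries only one transfer at a time, so all
# other units must wait.  Unit names are invented for illustration.

class Bus:
    def __init__(self):
        self.busy = False

    def transfer(self, sender, receiver, value):
        if self.busy:
            raise RuntimeError("bus in use; all other units must wait")
        self.busy = True
        print(f"{sender} -> {receiver}: {value}")
        self.busy = False  # transfer complete; bus free again

bus = Bus()
bus.transfer("adder", "accumulator-3", 42)    # any pair can connect...
bus.transfer("accumulator-3", "printer", 42)  # ...but only one pair at a time
```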

The second type of functional unit ENIAC lacked was memory.  ENIAC did incorporate a small number of "accumulators" but these could only be used to store the intermediate results of a longer, more complex computation.  They couldn't be used for anything else and they were very expensive to build.  Computer designers recognized that memory, lots of memory, was a good idea.  But it took them a long time to find designs that worked.  At first, various "one off" approaches were tried.  Then the "mercury delay line" was invented.

A speaker pushed pulses of sound into one end of a tube filled with mercury.  A microphone at the other end picked up each pulse after it had traveled the length of the tube.  And, since under these circumstances the speed of sound is constant, it took a predictable amount of time for a specific pulse to travel from speaker to microphone.  The more pulses you wanted to store at the same time the slower things went.  You had to wait for all the other pulses to cycle through before you could pick off the pulse you wanted.  If this design sounds like it reeks of desperation, that's because it did.  But it was the memory technology used by Univac (see below) computers.
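
A toy model makes the drawback concrete.  To read a word you have to wait for it to come around, so the bigger the line, the longer the average wait.  The sizes here are invented for illustration.

```python
# A toy recirculating delay-line memory.  Pulses march through the tube
# in a fixed cycle; to read one word you wait until it emerges at the
# microphone end.  Sizes and timings are invented for illustration.

from collections import deque

class DelayLine:
    def __init__(self, words):
        self.line = deque(words)   # word at index 0 is about to emerge

    def read(self, position):
        """Recirculate until the wanted word emerges; count the wait."""
        waited = 0
        for _ in range(position):
            self.line.rotate(-1)   # pulse re-enters at the far end
            waited += 1
        return self.line[0], waited

memory = DelayLine(range(100))
value, waited = memory.read(73)
print(value, waited)  # the farther down the line, the longer the wait
```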

After a lot of work Mercury Delay Lines were supplanted by "Ferrite Core" memories.  Little magnetic donuts with wires strung through their centers formed the basis of these devices.  By cleverly strobing high power signals through the correct wires a single specific bit could be "set" or "reset".  By cleverly strobing low power signals a single specific bit could be "read".  This technology was faster and it was "random access".  Any individual bit could be read or written at any time.  But it was slow and expensive compared to the modern solution.  The memory problem was only solved when integrated circuit based memory modules were developed.  They allowed large (gigabyte) fast (gigahertz) cheap (less than $100) memories.
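
For the curious, here is a toy model of the "coincident current" trick that made core memory random access:  each of two wires carries half the current needed to flip a core, so only the core where the two energized wires cross changes state.  The details are heavily simplified.

```python
# A toy model of coincident-current core addressing.  Each X and Y wire
# carries half the current needed to flip a core, so only the core at
# the crossing of the two energized wires is set or reset.  Heavily
# simplified for illustration.

class CorePlane:
    def __init__(self, rows, cols):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, x, y, value):
        # Half current on wire x plus half current on wire y: only the
        # core at (x, y) sees full current and changes state.
        self.bits[x][y] = value

    def read(self, x, y):
        # Real cores were read destructively (reset and sense the flip),
        # then rewritten; this sketch skips that step.
        return self.bits[x][y]

plane = CorePlane(8, 8)
plane.write(3, 5, 1)
print(plane.read(3, 5))  # -> 1, random access to any bit
```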

But computers with an amount of memory that was small (by current standards) but large (by ENIAC standards) were developed within a few years.  That left the logic unit, sometimes called the sequencer.  In ENIAC, functional units were physically connected together using patch cables.  This was a slow and error prone process.  If the design was changed to incorporate a bus, and if each input interface and output interface of each functional unit was connected to the bus, then anything could be connected to anything.  But, as I indicated above, only two at a time.

The logic unit sequentially decided to connect this pair to the bus then that pair to the bus, and so on.  This permitted complete flexibility (within the limits of the hardware) in terms of how the functional units were connected together.  Initially this resulted in a slower machine.  But the increased flexibility got rid of all the rewiring time and greatly reduced the planning time.  And it permitted faster simpler designs to be used for the functional units.  In the end this simpler design resulted in faster machines.

And, as the amount of memory available grew, it was quickly determined that the wiring instructions could be stored in memory as a "program".  This required a more complex sequencer, as it had to be able to decode each instruction.  But it again sped up the process of going from problem to results.  It took only a few years for all these pieces to be designed, built, and put to good use.  And the reason for this is one of the prime motivators for this post.
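
Here is a minimal sketch of the resulting "stored program" arrangement:  the wiring instructions live in memory, and the sequencer fetches and decodes them one at a time.  The tiny instruction set shown is invented for illustration.

```python
# A minimal fetch-decode-execute loop: the "wiring instructions" live in
# memory as a program, and the sequencer decodes them one at a time.
# The four-operation instruction set here is invented for illustration.

memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
          10: 6, 11: 7, 12: 0}
accumulator = 0
pc = 0  # program counter

while True:
    op, addr = memory[pc]          # fetch and decode the next instruction
    pc += 1
    if op == "LOAD":
        accumulator = memory[addr]
    elif op == "ADD":
        accumulator += memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator
    elif op == "HALT":
        break

print(memory[12])  # -> 13; changing the program requires no rewiring
```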

Once the ENIAC was built a lot of the details of its design became widely known almost immediately.  This let people focus on making one aspect of the design better.  They could just plug in the ENIAC design for the rest of their machine.  ENIAC was a revolution.  These other machines were an evolution.  And evolution can move very quickly.

The same thing happened when the Wright Brothers flew the first complete airplane in 1903.  As an example, there was a guy named Curtiss who was a whiz with engines.  The engine in the Wright plane wasn't that good.  But Curtiss could basically take the Wright design and plug his much better engine into it.  So he did.  This resulted in a lot of bad blood and lawsuits but, for the purposes of this discussion, that's beside the point.

The airplane evolved very quickly once a basic design was out there as a foundation to build on.  World War I saw the airplane evolve at warp speed.  Better engines, better wings, better propellers, better everything, were quickly found and incorporated.  The airplane of 1919 bore only a faint resemblance to the airplane of 1914.  And this was possible because different people could come up with different ideas for improving one or another facet of the overall design and then plug them into an existing design.

The same thing happened with computers.  Pretty much every part of ENIAC needed radical improvement.  But, as with airplanes, an improvement in one area could be plugged into an already existing design.  By 1951 everything was in place.  And that allowed the introduction of the first mass production computer, the Univac I.  Before Univac each computer was hand built from a unique design.  But several substantially identical Univac I machines were built.

At this point "peripheral devices" started to proliferate.  The Univac relied primarily on spools of magnetic tape mounted on tape drives.  The drive could, under programmatic control, move quickly to a particular place and read or write a relatively large amount of data relatively quickly.  Over time other types of devices were added to the peripheral pool.  And for comparison, the Univac featured 1000 "words" of memory, each big enough to hold a 12 digit number.  And, as with all subsequent designs, both programs and data were stored side by side in this memory.

Univacs were quite expensive and fewer than 50 were ever built.  But they demonstrated the existence of a market.  Other companies quickly jumped in.  The most successful was IBM.  IBM pioneered a number of technical innovations.  They were among the first to hook a "disk drive" to a computer, for instance.  But IBM was the first company to successfully crack the marketing problem.  They were the best at selling computers.

It may seem obvious in retrospect but computers of this era were very expensive.  Soon a lot of companies came to believe that if they didn't get one they would be left in the dust by their competitors.  But the two industries where computers could obviously do the most good were banking and insurance.

Both needed to perform vast numbers of boring and repetitive computations.  And that was just what best fit the capabilities of early computers.  Not to put too fine a point on it, but neither banks nor insurance companies employ large numbers of rocket scientists or other "tech savvy" people.  The people who actually ran these companies, senior executives and members of the board of directors, were scared stiff of computers.

IBM set out to bypass all the people in these companies who would actually be responsible for the installation and operation of computers and instead went directly to these senior executives.  They didn't bother to tout the specifications or capabilities of IBM products.  They knew these people were not capable of understanding them, nor did they much care.  What concerned them was "betting the company".

They were afraid that they would end up spending a ton of money on a computer.  Then something terrible would happen involving that computer and the company would go down in flames, all because of something that was beyond the understanding of senior management.  What IBM told these senior executives was "if you buy IBM we will take care of you.  If something goes wrong we will swoop in and fix whatever it is.  Oh, it might cost more money than you had planned on spending, but we promise you that if you go with IBM you will not be putting your company's very existence (and by implication the livelihood of these senior executives) in jeopardy".

And it worked.  In case after case the lower level people would, for instance, say "we recommend GE" or "we recommend RCA".  At the time both GE and RCA were as large as or larger than IBM.  And both had well established reputations for their technological prowess.  But none of the other companies (and there were several besides GE and RCA) aimed their sales pitches so squarely at senior management.  And in case after case the word came down from on high to "buy IBM anyhow".

And companies did, and by the late '60s 80 cents of every computer dollar was going to IBM.  It wasn't that their hardware was better.  It was better in some ways and worse in other ways than the equipment offered by other companies.  But it was good enough.  A saying from the period had it that "no one ever got fired for recommending IBM".  That was true.  And the converse was also true.  People sometimes got fired or got their careers sidetracked for recommending a brand other than IBM.

It took a long time for the computer industry to recover from the total dominance that IBM held for more than a decade.  But there was one technical innovation that was rolled out by IBM and others at this time that is important to talk about.  That's microcode.

The logic unit/sequencer was by far the most complex and difficult part of a computer to design and build.  It had to take the bit pattern that represented an instruction, break it down into steps of "connect these components to the bus, now connect those components to the bus, now connect these other components to the bus", etc.  It turned out that there were a lot of considerations that went into selecting the correct sequence.  And that made this particular component extremely hard to design and build.  Then somebody (actually several somebodies) had an idea.

What had made the original ENIAC so hard to deal with?  The fact that it had to have customized hand wiring done to it each time a new problem was put to it.  Well, the spaghetti that the sequencer had to deal with seemed similarly complicated.  And if you got something wrong the computer would spit out the wrong answer.  In the ENIAC case you just stopped it, fixed the wiring, and ran the calculation over again.  But once the computer was built there was no even remotely easy way to fix problems with the sequencer.

So several people at about the same time said "why don't we create a computer to run the computer?"  It would run a single special program called "microcode".  If there was a problem you could change the microcode to fix it.  And that meant that the sequencer hardware became much simpler.  A lot of the complexity could be exported to the design and coding of the "microcode" program.  And the microcode for a new computer could be emulated on an old computer.  So it could be extensively tested before anything was committed to hardware.
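
A sketch may make the idea clearer.  Each machine instruction expands into a stored sequence of "connect this to the bus" micro-steps; fixing a bug means editing the table, not rewiring the sequencer.  The micro-steps shown are invented for illustration.

```python
# A sketch of the microcode idea: each machine instruction is expanded
# into a stored sequence of "connect this pair to the bus" micro-steps.
# Fixing a bug means editing this table, not redesigning the sequencer
# hardware.  The micro-steps shown are invented for illustration.

MICROCODE = {
    "ADD": [("memory", "bus"), ("bus", "adder"),
            ("accumulator", "adder"), ("adder", "accumulator")],
    "STORE": [("accumulator", "bus"), ("bus", "memory")],
}

def execute(instruction):
    for source, destination in MICROCODE[instruction]:
        print(f"connect {source} -> {destination}")

execute("ADD")  # the hardware just steps through the table
```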

This too sounded like it would slow things down immensely.  But it did not.  The simpler sequencer hardware could be optimized to run much faster than the old more complicated design.  And other tricks were found to make the whole thing go fast, in just the same way that replacing the patch cable wiring of ENIAC with a bus and memory resident programs eventually resulted in an increase in computer speed.  By the end of the '60s pretty much all computers used microcode.

Later, ways were found to house the microcode in hardware that allowed it to be updated on the fly.  This meant that microcode fixes could be rolled out well after the hardware was originally manufactured.  Some computer designs have evolved to the point where there are two levels of microcode.  There is a bottom layer, call it pico-code, that allows the hardware to run multiple versions of microcode that, in turn, implement what appears to be the computer.  This three level architecture is the exception rather than the rule.

The next thing I want to talk about is Microsoft.  Bill Gates was pretty much the first person to figure out that the money was in the software, not the hardware.  When IBM rolled out its "System 360" family of computers in the mid '60s it literally gave away the software.  Their thinking was that the value was in the hardware.  And most computer companies followed IBM's lead.  Hardware was the high value profit-maker and software was a loss leader afterthought that you threw in because it was necessary.  Gates was the first person to focus on the word "necessary".

Microsoft was a software company from the get go.  Their first product was a BASIC interpreter for the first generation of personal computers.  At the time you were expected to buy a kit of parts and assemble it yourself.  But almost immediately it became obvious that people were willing to pay extra for a pre-assembled computer that they could just fire up and use.  Either way, however, they still needed Microsoft's BASIC.

Microsoft branched out to other software products, most famously MS-DOS and later Windows.  And they do sell a small line of hardware:  keyboards, mice, the odd tablet, etc.  But, unlike Apple, Microsoft has not seriously (some would say successfully) gotten into the hardware business in a big way.  Once PCs took off in a big way Microsoft took off in a big way.  And many other companies have successfully followed this model.  Even Apple has outsourced the manufacture of its extensive line of hardware.  They still do, however, do their own hardware design work.  And they never outsourced their software work.

And even "it's the hardware, stupid" types like IBM have been forced to follow suit.  They were initially forced by anti-trust litigation to start selling their "System 360" software.  From this modest start they have continued to evolve away from hardware to the point where they are now almost entirely a services company.  Over the years they have sold off or shut down most but not quite all of their once very extensive hardware business.  So they do still sell some hardware but it now represents a very small part of their total revenue.

I now want to turn to a product that has been pretty much forgotten about.  A man named Philippe Kahn started a company called Borland at about the time the first IBM PC was released in 1981.  In 1984 he released a product called Turbo Pascal.  You could buy the basic version for $49.95 or the deluxe version for $99.95.  It was organized around Pascal, a once popular computer language that has pretty much fallen out of favor.  I am not going to get into the differences between Pascal and the well known "C" programming language.  One is better than the other in this or that area but, overall, they are actually pretty similar.  So what did you get for your $49.95 (the version most people bought)?

You got an "integrated development" package.  You could use it to write or modify a Pascal program.  You could then literally push a button and your program would be compiled (turned from a vaguely English-like thing that people could deal with into programming instructions that the computer could deal with).  And the Pascal compiler was lightning fast, even on the PCs of this era.  (The process typically took only a few seconds.)  Then (assuming the compiler had come across no obvious errors in your program) you could push another button and run your program.

If errors were found by the compiler you were automatically popped back into the "edit" environment.  You could make changes and then immediately recompile your program.  And the package offered similar options for fixing your program after it had compiled cleanly.  If your program seemed to be misbehaving you could run it in a special "debug" mode.  This allowed you to work your way through the execution of your program step by step, a line at a time.  You could even examine the current value of variables you had defined for your program to work with.

Once you had seen enough you could pop back to "edit" mode, make modifications, and go through the whole compile/execute/debug cycle over and over, as many times as needed to get your program working the way you wanted it to.  Then you could sell your program.  And you could sell just the executable version, which did not disclose the original Pascal "source code" of your program.

With Turbo Pascal and a PC you could go from edit to compile to debug and back to edit within minutes.  This had a profound impact on computer software.  ENIAC required smart, skilled, highly trained people to operate it.  Univac made things easier but it was still very hard.  The IBM 360 made things still easier but the cost and skill level was still very high.  And a single edit/compile/execute/debug cycle could often take all day on any of these machines.

Then there was the snobbery.  The bank I worked for in the late '60s required all of their computer programmers to have a 4 year college degree.  It was presumed that only very smart (i.e. college educated) people were up to the task.  But with Turbo Pascal a whole crop of housewives, clerks, blue collar workers, and kids were able to master the package and create interesting, useful, and most importantly, valuable computer programs.

It completely democratized the whole software development process.  It turns out that the only attributes a person needed to become successful in the computer business were a knack for computers, a little training (the documentation that came with the Turbo Pascal package consisted primarily of a single not very big book), and access to now quite inexpensive and ubiquitous home computer equipment.  Not everybody is cut out to be a computer expert but a surprising number of people can master the subject.

And that's about where I would like to leave it off.  Pretty much everything that has happened since is the extension of a trend or movement started during the time period I have covered.  Computers have now gotten faster, more powerful, lighter, more compact, and more portable.  But that's just more of the same.

The hardware has gone from vacuum tubes (essentially light bulbs with extra wires in them) to discrete transistors to integrated circuits (the ubiquitous "chip") but integrated circuits were in wide use before 1980.  Even the Internet is an extension of and an enhancement to the ARPANET, a project that was begun in the late '60s.  And it turns out that people had been connecting computers to networks since well before ARPANET.

I would like to leave you with one last item, well, more of a musing.  Since the early days computer components have been divided between hardware and software.  The idea is that the actual parts used to assemble a computer are hard or, more importantly, hard to change.  Computer programs, on the other hand, are soft.  They are malleable and easy to change.  But it turns out that actually the opposite is true.  Hardware is easy to change and software is hard to change.

IBM pioneered the idea of an "architecture" in the early '60s when they designed the System 360 family of computers.  Before this every time you upgraded to a new computer you had to redo all the software.  It was presumed that this would not be a difficult or expensive process.  But over time it turned out to become more and more difficult and more and more expensive.

With that in mind IBM designed a family of machines that would all be capable of running the same programs.  They specified an "architecture" that all the machines would adhere to.  The usual reason people replaced computers was because, in the words of an old TV show, they needed "more power".  With the System 360 you could just replace your smaller, less powerful (and less expensive) computer with a bigger one that had more power.  IBM guaranteed you didn't have to change a thing.  All the old software would run just fine on the new hardware.  It would just run faster.

IBM spent a tremendous amount of effort on making sure that the "360 architecture" was implemented uniformly and consistently across all machines in the family.  One of their best people, a guy named Iverson, applied the computer language he had just invented (APL, if you care) to creating models of key components of the architecture that were accurate down to the bit level.  And it worked.

A few years later IBM came out with an upgrade called the "System 370" that was carefully designed to be "backward compatible" with the 360 architecture.  The new line offered additional features but things were carefully arranged so that the old programs would work just fine on the new machines.  So companies were able to easily upgrade to the new machines that, of course, featured more power, without a hitch.

And this became the model for the industry.  The hardware descendants of the System 360 no longer exist.  But software written to the 360 architecture standard (and often quite a long time ago) is still running.  I know because as I am going about my daily business I see all kinds of companies running what I can positively identify as 360 architecture software.  This is made possible by microcode.

Microcode makes it possible for hardware to behave in completely unnatural ways.  The hardware that now runs these 360 architecture programs is the many times removed descendant of something called a System 38.  The original System 38 bore no resemblance to the grandchildren of the System 360 machines that were in existence at the same time.  But that no longer matters.

In fact, hardware has come a long way since the '60s.  But thanks to microcode the newest hardware can be made to faithfully implement the 360 architecture rules so that the old programs still run and still behave just as their programmers intended them to.  And this is in spite of the fact that the hardware that is doing this descended from hardware that was completely incompatible with System 360 family hardware.

Intel developed the first versions of the X-86 architecture in about 1980.  The modern computer chips Intel now sells bear almost no physical resemblance to those chips of yesteryear.  Yet X-86 software still runs on them and runs correctly.  Apple started out using a particular computer chip made by Motorola.  They later changed to a newer, more powerful, and quite different Motorola chip.  Yet they managed to keep the old Apple software running.  Then they made an even more drastic change.  They changed from the Motorola chip to an Intel X-86 family chip.  But they still managed to keep that old software running and running correctly.
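
The trick in all these cases amounts to a translation layer.  Here is a toy sketch of the idea:  a program written for an "old" instruction set is interpreted, instruction by instruction, on hardware that natively does something completely different.  Both instruction sets shown are invented for illustration; real emulation and microcode layers are vastly more sophisticated.

```python
# A sketch of the emulation trick that keeps old software running on
# incompatible new hardware: a translation layer maps each "old"
# instruction onto whatever the new machine actually does.  Both
# instruction sets here are invented for illustration.

def emulate(old_program, state):
    """Interpret a program written for the 'old' machine."""
    for op, arg in old_program:
        if op == "OLD_LOAD":
            state["acc"] = arg          # new hardware, same visible effect
        elif op == "OLD_ADD":
            state["acc"] += arg
        elif op == "OLD_PRINT":
            print(state["acc"])
    return state

emulate([("OLD_LOAD", 6), ("OLD_ADD", 7), ("OLD_PRINT", None)], {"acc": 0})
```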

It turns out that any individual program, i.e. "piece of software", is fairly easy to change.  But families and suites of software quickly arose.  Many examples of this could already be found when IBM was sitting down to design the System 360 computers.  And these families and suites turned out to be very hard to change.  They behaved in ways more akin to those people associated with hardware.  On the other hand, people got very good at making one kind of hardware "emulate", i.e. behave exactly the same as, another quite different kind of hardware.  So hardware started behaving in ways more akin to those people associated with software.

This "hardware is soft and software is hard" thing has been going on now for at least a half a century.  But people got used to the old terminology so we still use it.  Is black now white?  Is up now down?  Apparently not.  But apparently Orange is the new Black.

Saturday, August 11, 2018

Global Warming

I haven't addressed this subject in any detail before because for a long time I believed I didn't have anything original to say.  But then I had an epiphany a few days ago.  Most of what is said about the subject boils down to one of two things.  The deniers say "it's all a hoax so you can just ignore the whole thing".  The people who believe it's real say "we have to put things back to exactly the way they were before while there's still time".  Both sides are wrong but the deniers are far more wrong than the believers.

What the believers get right is the fact that it's real.  What they get wrong, at least in what is widely covered in the popular press, is the "okay, so now what?" part.  I am first going to spend a very little time rebutting the deniers.  Then I am going to spend the rest of this post on the "now what" part.

There are dozens of well supported lines of evidence that demonstrate that the deniers are wrong.  I am going to confine myself to only one of them and it is a line of evidence that can be summarized in two words:  "Northwest Passage".

Interest in the Northwest Passage is economically driven.  The most efficient way to move goods, particularly heavy goods, is by boat.  It requires very little energy to move a large ship through water at moderate speed, even if the vessel contains a large volume or weight of goods.  So where possible shippers transport goods in freighters, ships specially designed to carry goods from here to there.

Nevertheless, there are costs, both in time and money, associated with shipping goods, say, from China to the East Coast of the US.  If the trip can be shortened either in terms of time or distance, money, a lot of money, can be saved.  It would be really nice (as in "save a lot of time and money") if goods could be gotten from one side of the Americas to the other by a much shorter route than has been available.

This was true two hundred years ago when sailing ships had to go south around Africa or South America to get from the Pacific to the Atlantic, or vice versa.  A little over a hundred years ago the Panama Canal was built to provide just such a short cut.  It was fantastically difficult and expensive to build and moderately expensive to operate but it was a bargain anyhow.

But the Panama Canal could only accommodate ships up to a certain size ("Panamax") and it was a hassle to transit the Canal even if your ship fit.  Nevertheless the Canal was a roaring success from the day it opened.  It was such a success that Panama has recently completed a large engineering project to upgrade the Canal so that larger ships, so called "Super Panamax" ships, can now use it.

But what if there was a passage around the north end of the Americas?  People recognized that this would be a very valuable addition to the options for getting from one side of the Americas to the other.  So they started looking for a "Northwest Passage" around the north end of the Americas several hundred years ago.  Many expeditions set out to find such a route and many lives were lost in the search.  As recently as 1969 Humble Oil (a predecessor of Exxon) sent a specially modified oil tanker, the SS Manhattan, to try to force such a passage.  It got through only with heavy icebreaker support, and the route was judged impractical.

And then something magic happened.  The "Northwest Passage" magically opened up one summer a few years ago.  After some fits and starts (it didn't open up every year for a while) it is now a standard feature of Arctic summers.  Enough of the pack ice that surrounds the North Pole melts every summer that an ice free passage opens up around the north end of North America.  It has now been several years since we have had a summer in which it didn't open up.  So what changed?

Global Warming changed.  If a Northwest Passage opened up at any time in the last 250 years someone would have noticed.  No one noticed until a few years ago because it did not happen before a few years ago.  But now the far north is a lot warmer than it used to be.  It is enough warmer that the ice that is only a little south of the North Pole melts around the edges enough to produce open water every summer.  (FYI, the North Pole sits in the middle of something called the Arctic Ocean.  Underneath all that ice is ocean water.)

Anyhow, enough ice melts that there is an ice free path all the way around the north end of North America every summer.  It now shows up regularly enough that shipping companies can depend on its appearance and route freighters through the Northwest Passage for a while every summer.  The situation is now so predictable that some adventurous recreational boaters transit it for fun.

And only a large and sustained warming of the entire north could cause the Northwest Passage to open up at all.  And all the weather data supports the idea that the entire north is now much warmer than it used to be.  In theory it could be just the north.  But it isn't.  So the "Northern Warming" is just part of a much larger "Global Warming" phenomenon.

And with that I am going to leave that subject and move on to the subject of "so what".  Given a chance, scientists would primarily focus on this question.  They would ask the twin questions of "how big a deal is it?" and "what can be done about it?"  But the well organized "deniers" operation has been so successful at sowing confusion and distrust that most scientists are forced to spend most of their public facing time trying to break through the denial.  This leaves them with little time for these other questions.  But they have spent some time on both of them.

I am also going to give the "how big a deal is it" question short shrift.  It's a really big deal.  As I write this giant wildfires are burning in California and elsewhere.  And large populations are being subjected to record heat which is causing a lot of misery and some deaths.

Scientists have been making and tracking predictions of such things as average temperature rise and average ocean rise for a couple of decades.  It has turned out that they have generally underestimated the amount of change.  The actual numbers have come in mostly toward the "high" (bad) end of their forecasts.  This is because scientists have bent over backward to not exaggerate the amount of change they expect.  Even so they tend to get beaten up for forecasting any change at all.

The whole impetus behind the manufactured "controversy" about Global Warming is driven by the question I am now going to focus on:  now what?  If we accept the reality of Global Warming we are forced to also accept that bad things are going to happen.  And the obvious way to avoid bad things happening is to change things so that Global Warming doesn't happen.  And those changes don't affect everybody equally.  People don't like change.  Large businesses that depend on things being a certain way, and most large businesses find that this applies to them, don't want to change because it looks like the required change would be detrimental to their interests.

There are a lot of smart people working for the fossil fuel industry, especially the oil industry.  It took these people no time at all to figure out that Global Warming was bad for business as usual for these industries.  And there was no simple, reliable, obvious way to cash in on the changes they foresaw so it was all bad news.

So it should come as no surprise that funding for and support of the "denialist industrial complex" has come primarily from Exxon and other large oil companies.  They quickly figured out that if they could delay things, even for a while, a lot of money could be made.  This is a model pioneered by the tobacco industry.  They put off effective regulation of cigarettes for decades and made tons of money as a result.

But big oil is not alone.  Global Warming will eventually force change onto all of us.  And from our current vantage points it looks like the change will be more for the bad than for the good.  So why shouldn't people follow in the footsteps of big oil, and before them big tobacco?  A lot of people have consciously or unconsciously decided "what's good for big oil is good for me" and joined the denialist camp.

And for a long time it was hard to come up with a counter argument.  When Al Gore put out his movie "An Inconvenient Truth" in 2006 he only devoted a few minutes at the end to what could be done to improve things.  The rest of the movie was well constructed and very convincing.  But I found this end section not very convincing.  My reaction boiled down to "oh, shit - we're in for it now".

Fortunately, things now look a lot better.  Things that looked like Science Fiction then (cheap wind power, cheap solar power) are Business Reality now.  Solar and wind are now your cheapest alternatives if you need to bring new electrical generating capacity online.  So there is now some light at the end of the tunnel but it is pretty dim and looks to be a long ways away.

In addressing the impact of Global Warming let me step back for a minute.  I recently read a book about Chief Seattle, the man the city I live in is named after.  It went into some detail about how local Indians lived before the white man arrived.  They were (to somewhat oversimplify) hunter gatherers.  Over the course of a year they would move several times following the resources.  When the berries were ripe they lived here.  When the salmon were running they lived there.  That sort of thing.  Now imagine that the environment changed.  If the carrying capacity of the overall resource base stayed the same they would just have shifted their annual migratory pattern a little and life would have gone on pretty much the same.

But as the intensity with which mankind exploits resources has increased we have lost the kind of flexibility hunter gatherers had.  We now expect things to be a certain way with a high degree of precision.  Take the Colorado River as an example.  It flows through a large area that is naturally very dry.  But our modern way of living demands a lot of water.  And for a lot of communities the Colorado River was the place to go to satisfy that need.  It quickly became obvious that if everybody drew all the water they wanted the Colorado, a rather large river, would go dry.

So the Army Corps of Engineers did a study and determined the average annual flow of the Colorado.  This number was the foundation of a large number of agreements as to who would get how much of the Colorado's flow.  I am going to skip over a lot of hanky-panky and focus on just one observation.  The study was flawed in a critical way.  The Corps study just happened to encompass a time period in which the flow was unusually high.  So the number that is the foundation of all those agreements, the average annual flow of the Colorado, is too high.
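
The arithmetic of the problem is simple enough to show with made-up numbers.  Average a river over an unusually wet stretch of years and you get a number the river cannot reliably deliver.

```python
# A tiny illustration of the sampling problem described above: averaging
# over an unusually wet stretch of years overstates the long-run flow.
# All numbers are invented for illustration, not real Colorado data.

long_record = [12, 14, 13, 11, 15, 18, 19, 17, 18, 12, 11, 13]  # annual flows
wet_stretch = long_record[5:9]                                   # the study window

print(sum(wet_stretch) / len(wet_stretch))   # 18.0 -- the number the agreements used
print(sum(long_record) / len(long_record))   # ~14.4 -- what the river actually delivers
```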

But literally millions of people and many businesses large and small now depend critically on getting the full amount of water the agreement promises.  They are now locked in to that critical number that is impossible to achieve.  In our modern world there is no simple and low impact change like redoing an annual migratory pattern.  If nothing else, most people don't routinely pick up and move several times per year.

We see this rigidity show up all over the place.  The North Slope is a place in the far north of Alaska.  A bunch of the oil equipment necessary to operate the Alaska oil pipeline is located there.  The equipment depends on the permafrost in the area staying frozen all year round.  It did a couple of decades ago when all this was set up.  But it now melts every summer due to the fact that the north is now a lot warmer than it used to be.  That turns out to be a giant PITA.

In my neck of the woods we depend on glaciers.  Glaciers store up a lot of water that comes down in the mountains in the form of winter snow.  Then they release it all summer long as the warm weather causes the snow to slowly melt.  But the glaciers are shrinking so we get more flooding in the spring (less snow stays in the mountains where we want it) and water shortages in the summer (less stream flow from smaller glaciers).

These are just two easily explained examples I can come up with off the top of my head.  But in ways known and unknown we as a society depend on things staying precisely as we expect them.  But Global Warming is changing things.  And it's not just Global Warming.  People forget that we as a society are changing lots of things all the time.  (And the deniers are happy to confuse things by mixing it all together.)

There is a sad story about a mother Killer Whale playing itself out as I write this.  Her newborn calf died, likely of malnutrition.  She has been carrying it around with her for a couple of weeks now and this behavior will likely result in her death.  What's going on?  Well, we have overfished for decades and we have thrown dams across rivers that salmon spawn in, also for decades.  The result is small salmon runs and that results in starving Killer Whales.

Neither the dams nor the overfishing have anything to do with Global Warming.  Nor does draining swamps.  But we have also been doing lots of that for a long time.  Back in the day if a stream or river rose much above its normal level it would overflow into swampland and the severity of the flooding was limited.  But we have drained the swamps and put up levees.  So now when flow increases it has nowhere to go.  So we get floods of a size and frequency that never occurred before.  Again, we have done two things.  We have diminished the ability of nature to moderate extreme events.  And we have diminished our ability to ride through these kinds of events without damage.

So this means we're all going to die, right?  No!  An "existential" crisis is one that might wipe us all out.  Global Warming will not wipe us all out.  It is not a "meteor crashing into the Yucatan Peninsula and wiping out all of the dinosaurs" kind of thing.  A better description of its impact is "inconvenient".  Things are going to change whether we like it or not.  And we won't like it.  But it won't wipe us out or even come close.  That's the good news.  So what's the bad news?

Well, as one might expect there's lots of bad news.  And the first piece of bad news is it's inevitable.  We have long since passed the point where it is possible to avoid Global Warming entirely.  But wait, there's more.  It hasn't even gotten as bad as it is going to get.  Imagine you are stopped at a light in your car and the light turns green.  You mash on the gas pedal and the car takes off.  (I know you might not mash on the gas pedal but go along with me to the extent of pretending that you do.)  The car quickly accelerates and is soon going at a pretty good speed.

Now let's say you take your foot off the gas.  What happens?  The car keeps going.  It may start slowing down but if you don't put your foot on the brake it is going to continue on for a good distance.  This is inertia in action.  And it turns out the environment has a lot of inertia built into it.  So even if we stop doing what we are doing to produce Global Warming there are certain inertias that are going to keep on for quite a while.  And they will continue to make things worse.

But that's not what we have done and are now doing.  We have not taken our foot off the Global Warming gas pedal.  At best what we have done is backed off a little so the pedal is no longer all the way down to the floor.  It's just most of the way down to the floor.  As a result Global Warming is still building up speed.  It's just building up speed a little more slowly than it would if we still had the pedal mashed to the floor; if we hadn't installed all that solar and wind generating capacity, for instance.  We have also done some other things like switching to cars that pollute less.  But all this together has just backed things off a little.  And it certainly is a long way from anything that looks like applying the brakes.

The second piece of bad news is that Global Warming does not affect everything equally.  When scientists started trying to predict the effects of Global Warming they had only simple tools.  So they did what they could do.  They developed a number of "global average" numbers.  The global average temperature will go up this much.  The global average sea level rise will be this much.  That sort of thing.  But it turns out no place is average.

The first big example of this is the Arctic, the location of the Northwest Passage and all that other stuff I was talking about.  Scientists noticed that it was warming faster than the rest of the world.  They have now spent a lot of time studying this.  I am not going to go into the details but it turns out that the Arctic is more sensitive than other areas.

Scientists now think they have a lot of this additional sensitivity figured out but the bottom line is that every place is special.  So it is going to be more sensitive to this and less sensitive to that.  Scientists have now turned to trying to generate local forecasts that are tailored to the specifics of local areas.

Let me give you just one example.  The ocean is warming.  That, and some other things together produce sea level rise.  But the ocean is like a giant bathtub.  The water level rise will be the same everywhere, right?  Wrong.  It turns out that the ocean is a lot more complicated than a giant bathtub.  As a result (the details are too complicated to get into so I'm going to spare you from them) some places will see more sea level rise and other places will see less.

Let's be honest.  We mostly care about our local area.  So these customized local forecasts, which are only now starting to be rolled out, are of great interest to the people who live in a specific area.  But they are also critically important to local governments and the like.  If they know they don't have to worry so much about this but they do need to seriously worry about that then they can make wiser decisions.

So what is going to happen?  The most obvious answer is that it is going to get warmer.  The temperature will go up more in some places and less in others but it is going to go up.  That's pretty obvious.  There has also been a lot of talk about sea level rise.  Again, this will be more of a problem in some places and less in others.  And, as Superstorm Sandy amply demonstrated, a little sea level rise can have an impact all out of proportion to the average number of inches it rises.

So affected areas are going to have to change.  People have spent big bucks on beachfront property because they want very much to be there.  They are not going to be forced out easily.  But proofing their properties against changing conditions or repairing the damage that has and will be done will be fantastically expensive.

And this is typical.  All the trade offs will be bad.  The choice will boil down to which variety of horrible tasting medicine you are going to end up swallowing.  The good news is that there is a medicine that will work.  It just tastes horrible.  It tastes so bad that no one will voluntarily take it.  But at some point we will all be forced to involuntarily swallow one or more horrible medicines.

Scientists and others have been saying for a long time now that "this medicine might seem like it tastes horrible but if you don't take it now you will be forced to later take medicine that tastes even more horrible".  So far this argument, true though it is, has not gained traction.  So what do these even more horrible medicines look like?

I've already mentioned one of them.  As the sea level rises land that is now much sought after will become unusable.  Mostly we have used flood insurance to cushion the cost of rebuilding and Army Corps of Engineers projects to try to protect properties from damage in the future.  But properties keep getting wiped out necessitating a round of flood insurance payments (your tax dollars and mine at work), often for the same piece of property.  And Corps projects (more of your tax dollars and mine at work) fail to provide adequate protection.  At some point taxpayers are going to revolt and the money spigot will get turned off.  At that point lots of people will be forced to abandon their property and their losses will not be made up by government or insurance payments.

We have seen the slow squeeze going on with the Colorado River for some time now.  Phoenix is deep in the heart of libertarian "keep your government regulations away from me" country.  Yet Phoenix has draconian water use regulations.  The fact that Colorado River flows have been inadequate to support water deliveries in the volumes that people would prefer for a fairly long period of time has forced this behavior.  And Phoenix residents have decided that somehow these regulations are a feature not a bug in their overall libertarian beliefs.  This sort of thing (draconian regulations) is going to become much more widespread in the future.

Phoenix now routinely sees periods of triple digit temperatures every summer.  They have responded by going "all air conditioning all the time".  That's an accommodation but it means that there are large parts of the year when it is not practical to be out in the open for a good part of the day and perhaps much of the evening.  So far people are putting up with this.  But at some point if things continue on their current trajectory people will decide to relocate to more hospitable climes.

Global Warming and other man made activities are killing off lots of plants and animals.  There has been a lot of recent coverage about what increased water temperatures are doing to corals (bad, very bad), for instance.  This trend will inevitably continue.  And it has generated numerous "save the [fill in the blank]" responses.  But the response has been mostly ineffective.  Some individual species that have attracted a lot of attention and effort have been brought back from the brink.  But we are losing boring species at a fast and increasing rate.  I see no practical way to reverse this.  And it raises a very fundamental question:  what's the value of species diversity?

There is no real answer to this question.  In general people are sure reducing species diversity is bad but have nothing specific they can point to as a justification.  There is a generic "we don't know what we may need in the future" argument.  But again there is nothing specific people can point to.  The practical justification for a small number of species (e.g. Bald Eagles) is that people like them and want to preserve them "just because".  And in a limited number of cases that works.  But for the most part species just go extinct.

So what's the "get out of jail free" card for this problem?  It turns out there is one.  And there is an acronym for it.  The acronym is GMO, which stands for Genetically Modified Organisms.  Humans have gotten very good at being able to tweak plants and animals.  And it's early days.  We are going to continue to get better and better at this.  So if we need some new critter or we need to tweak some characteristic of a current critter we have, or will soon have, the technology to do that.

There is, however, a well organized anti-GMO community.  I think the right gets way more things wrong than the left does but this is one case where the left is more wrong than the right.  There are anti-GMO factions embedded within both the right and the left.  But they are currently much more able to successfully advance their agenda with the left than with the right.  One of the things we are going to need to swallow, whether we want to or not, as Global Warming intrudes further into our lives, is GMO, a lot of GMO.  That is going to make many people very unhappy.

And I have mentioned ice melting and forest fires.  So Global Warming means more droughts, right?  Well, yes in some places (remember the part about effects being unevenly distributed) but mostly it will mean the opposite.  If you warm water and the air above it you get more evaporation.  And "evaporation" is just a fancy term for putting more water vapor into the air.  And that additional water vapor will eventually return to earth in the form of additional rain and snow.
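
There is even a simple rule of thumb behind this, courtesy of the Clausius-Clapeyron relation:  the amount of water vapor air can hold grows by roughly 7% for every degree Celsius of warming.  A quick back-of-the-envelope calculation shows how fast that compounds.

```python
# A back-of-the-envelope version of why warmer means wetter: saturation
# water-vapor pressure grows roughly 7% per degree Celsius (the
# Clausius-Clapeyron rule of thumb), so warmer air over warmer water
# holds and moves noticeably more moisture.

def vapor_capacity_increase(degrees_warmer, rate=0.07):
    return (1 + rate) ** degrees_warmer - 1

for warming in (1, 2, 3):
    print(f"{warming} C warmer -> about "
          f"{vapor_capacity_increase(warming):.0%} more water vapor")
```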

The world will, on average, become a wetter place.  And, of course, the change will be distributed unevenly.  So some places will get a lot more rain and/or snow.  Some will get a little more or the same or a little less.  And some places will get a lot less.  The short version of all this is that scientists predict more extreme weather events.

But people have been coping with bad weather for a long time.  So it will take some adjustment but, from a "big picture" perspective, most of the time it will be more of an inconvenience than a catastrophe.  So what else?

There is the direct effect of things generally being warmer.  Again, the increase in warmth will be greater in some areas than in others.  Phoenix used to be a wonderful place to live, weather-wise.  It had moderate winters, very pleasant springs and autumns and, unfortunately, hot summers.  But the summers used to be merely annoyingly hot.  Now summers often feature several periods when the weather is unbearably hot.

But is it?  Unbearably hot, that is.  The answer is yes if you are acclimatized to the temperate zones.  But humans in parts of Africa have become adapted to living in the outdoors in temperatures that are just as hot as a Phoenix hot spell.   They tend to be tall and thin.  They have very dark, almost black, skin.  And they have short curly hair.  Finally, they wear almost nothing.  They are completely functional in conditions that would render me hors de combat.

What that means is that if we applied GMO techniques to people we could, in one generation, turn them into people who can thrive in an extremely hot climate (or, if we prefer, a climate that is merely somewhat warmer).  Do most people find this an adaptation they are comfortable with?  Hell, no!  But would it work?  Yes, actually, it would.

But if we look at the parts of Africa where people who are adapted to a very hot climate live, they are places that are not very productive from an agricultural perspective.  And that's one of the biggest concerns Global Warming believers have, food adequacy.  If the world average agricultural productivity were reduced to what's found in these areas something like 6 billion people would die of starvation.  That's really bad even if you are one of the billion or so survivors.

If we wait long enough nature will adapt.  But we are already seeing problems all over the place in the short run.  The ranges of many of our high volume foodstuffs like wheat, corn, and rice are shrinking.  And that means concerns about there being enough food to go around are rising.

Again that whole "fine tuning" thing kicks in.  Humans have developed strains of wheat, corn, and rice that produce bounteous harvests if grown under the conditions they are adapted to.  But these strains are inflexible.  If you change conditions a little the size of the harvest drops a lot.  This inflexibility is deliberate.  By eliminating the parts of the plant that would allow it to thrive under a broad range of conditions breeders were able to substantially increase harvests for the narrow set of conditions the plant actually encounters.

But now Global Warming is quickly changing conditions.  More and more often these fine tuned strains find themselves being grown in conditions they are poorly adapted to handle.  So productivity drops.  Given time, evolution will fix this by reintroducing characteristics that allow the plant to thrive in new conditions and eliminating characteristics the plant no longer needs because conditions have changed.  But this takes a long time when measured in terms of human lifetimes.  If we wait for nature a lot of us will starve to death.  But we don't have to.

We can use GMO techniques to quickly change the characteristics of the plants.  People can force the evolution of plants to move fast enough to keep up with Global Warming.  Nature cannot.  Left alone nature will eventually get there.  But a lot of critters will starve to death while this is happening.  And if we don't intervene a lot of those critters that are starving to death will be people.

Another thing that people can do is to vastly expand irrigation.  And by this I mean moving water from where it is (or will be as Climate Change changes rainfall patterns) to where we need it to be.  A big reason those really hot parts of Africa have such low agricultural productivity is they currently have little or no water.  If you import lots of water and GMO plants so that they like it hot then productivity in those places could eventually match or exceed current "world average" results.

But wait, there's more.  More in the sense that more needs to be done.  Consider our transportation system.  It currently consumes about a third of the fossil fuels we produce.  On paper we know how to cut this by something like 80%, a real gain in the effort to reduce one of the principal current drivers of Global Warming.  We go to electric trains, cars, and trucks.  That's fine as far as it goes.  But we currently don't have enough electricity to do this.

Again there is a solution.  We build a lot of new solar and wind generating capacity.  This could actually be done.  Scientists have done the math and there is enough potential solar and wind power to do the job.  It would mean putting up a lot more wind farms.  And it would mean roughly paving over entire states with solar panels but it could be done.  Getting that done requires a lot of money and a willingness to do it but the technical capability already exists.

But even if we did that, our electrical grid doesn't have the capacity to handle the loads.  So we would need to dump a ton of money into upgrades to our electrical grid.  And then there is the storage problem.  Wind and solar are intermittent sources.  Sometimes they make a lot of electricity, midday in the case of solar.  Sometimes they make little or none, midnight in the case of solar.  Wind has the same kinds of problems.  Sometimes it blows hard.  Sometimes it blows not at all.

Some of this can be fixed by a much more robust electrical grid.  It can be used to shuttle electric power from here where there is currently a surplus to there where there is currently a shortage.  That's a help but it is not enough.  There will be no solar generation anywhere in the US for many hours each night.  The wind blows harder in some seasons and softer in others.  The obvious solution is to store up the surpluses in times of excess then feed them back into the grid during times of shortage.

The more storage capacity you have, the longer the periods of surplus or shortage that can be handled.  And statistics are on our side.  A small increase in storage capacity results in a considerable increase in the variations that can be handled.  But we really don't have a good technology for storing electric power.

Mostly the talk today is of batteries.  But batteries are extremely expensive and don't store all that much energy.  It turns out that an old, low tech, solution called pumped storage can store a lot more.  You can pump water up from a lower reservoir to a higher reservoir using surplus power.  Then later you can drain it through the appropriate machinery to turn it back into power when you need it.  These systems are more efficient than you would think because engineers have been working with them for a long time and know how to get the most out of them.
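
For the curious, the arithmetic behind pumped storage fits in a few lines of Python.  The reservoir volume, height difference, and efficiency below are made-up illustrative numbers, not figures for any real facility; the point is just that the recoverable energy is potential energy (mass times gravity times height) minus losses.

    # Back-of-the-envelope pumped storage estimate.  All the facility
    # numbers below are illustrative assumptions, not real-world figures.
    G = 9.81  # gravitational acceleration, m/s^2

    def pumped_storage_mwh(volume_m3, head_m, round_trip_efficiency=0.75):
        """Energy recoverable by draining a full upper reservoir, in MWh."""
        mass_kg = volume_m3 * 1000.0          # water is ~1000 kg per cubic meter
        energy_joules = mass_kg * G * head_m  # potential energy = m * g * h
        return energy_joules * round_trip_efficiency / 3.6e9  # joules -> MWh

    # A hypothetical reservoir of 10 million cubic meters sitting 300 meters
    # above its partner holds roughly 6,000 MWh -- hours of output for a
    # good sized power plant.
    print(round(pumped_storage_mwh(10e6, 300)))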

For whatever reason, there are relatively few pumped storage facilities anywhere in the world.  All it takes is money to build more but at the moment no one is interested.  Another possibility is to link power grids around the world.  The sun is shining somewhere on earth at all times.  But this too is an idea no one is talking about.

It is unclear to me how this problem will be solved.  But it should be clear that there are solutions.  If someone comes up with a solution I haven't listed, that's fine.  But even if they don't, the point is that there are solutions that could be made to work.

And that's my point.  Global Warming is real.  It is already with us.  It is going to get worse.  Right now we are making it get worse faster.  We are a long way from the point where we stop making it worse faster, let alone actually making things better.  But even if we stopped making things worse today the built in inertia of the affected systems means things will continue to get worse for some time.  The sooner we accelerate the processes that move us in the direction of making things better, the better.

But no matter what happens humanity will survive.  It is just a matter of how drastic the adaptations we will eventually have to put in place are.  And it is important to realize that the world gross domestic product (the value of all the goods and services produced world wide) is more than seventy-five trillion dollars.  Annual GDP routinely goes up or down by more than a percent.  And since that's normal it results in little or no disruption.  So if we grab 1% of World GDP and spend it on Global Warming reduction or mitigation (unfortunately more likely the latter) it will allow us to spend roughly three quarters of a trillion dollars per year.  And the world economy will sail on pretty much without noticing.

You can do a lot with three quarters of a trillion dollars per year.  And spin-offs like new technology or efficiency improvements might even return as much as was spent, or more, back to the economy as a whole.  The space race in its early years cost the US government about three billion dollars per year.  But the technology spinoffs probably added more than three billion dollars per year to US GDP.  So overall the US was better off.  That didn't mean that the costs and benefits were spread evenly.  They weren't.  And that always makes this kind of argument a hard sell.

But, as the old commercial has it, "you pays me now or you pays me later".  Right now most people in the US are living with the false hope that when the bill eventually comes due they either won't be around or some miracle will have happened that will make the bill go away.  I will probably be dead when the worst of the Global Warming effects kick in.  But I still would rather not leave that kind of legacy to future generations.  It will not kill them but that's small comfort.

Friday, July 27, 2018

50 Years of Science - Part 10

It's been a while since I wrote a post in this series.  And these days it's more like "58 Years of Science" but I am going to continue to stick with the original theme anyhow.  This is the tenth post in the series.  You can find an index to all the posts in the series at http://sigma5.blogspot.com/2017/04/50-years-of-science-links.html.  I update that post every time I add a new entry to the series.

I take Isaac Asimov's book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science as it was when he wrote the book (1959 - 60).  In these posts I am reviewing what he reported and what's changed since.  For this post I am starting with the chapter Asimov titled "The Origin of Air".  I will then move on to the next section called "The Elements" and discuss the chapter he called "The Periodic Table".

There is nothing static about the composition of air if we look across the entire history of the Earth.  There are processes that add to it and processes that subtract from it.  Asimov starts his discussion with the latter.  What's on top of the air?  Nothing!  So why hasn't it all rushed away?  Asimov's answer is "escape velocity" and it's the correct one.  Particles need a certain amount of speed to escape the pull of Earth's gravity.  It turns out that only a tiny fraction of air molecules have the required speed (6.98 miles per second, if you ignore air friction, etc.).

Asimov launches into a very sophisticated discussion of all this.  And he makes a critical observation.  It is far easier for light molecules to escape than heavy ones.  Oxygen and Nitrogen, the primary constituents of air, are heavy.  Hydrogen (you can make it by splitting water molecules) is light.  So the tiny amount of air that leaks away every year is made up mostly of Hydrogen.  And that means there is essentially no Hydrogen left in Air.  (Since a water molecule includes a heavy Oxygen atom little water vapor leaks away so the Hydrogen that is bound up in the water in the air is still with us.)
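
If you want to check the numbers yourself, here is a quick Python sketch using standard textbook constants.  It computes Earth's escape velocity and then the average (root-mean-square) thermal speed of a few molecules at room temperature.  Notice that even Hydrogen's average speed falls well short of escape velocity; it is the rare fast molecule in the tail of the speed distribution that escapes, and light Hydrogen has a far fatter tail up there than heavy Nitrogen or Oxygen.

    # Checking the escape velocity figure, and the light-vs-heavy molecule
    # point, from standard physical constants.
    import math

    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24  # mass of Earth, kg
    R_EARTH = 6.371e6   # mean radius of Earth, m
    K_B = 1.381e-23     # Boltzmann constant, J/K
    AMU = 1.6605e-27    # one atomic mass unit, kg

    v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
    print(f"escape velocity: {v_escape / 1609.34:.2f} miles/s")  # ~6.95

    # Root-mean-square thermal speed, sqrt(3kT/m), at 300 K.
    for name, mass_amu in [("H2", 2.016), ("N2", 28.014), ("O2", 31.998)]:
        v_rms = math.sqrt(3 * K_B * 300 / (mass_amu * AMU))
        print(f"{name}: {v_rms:.0f} m/s")  # H2 is nearly 4x faster than N2 or O2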

If we contrast Earth to Jupiter and Saturn we see a big difference.  Both of these far more massive planets have much stronger gravitational fields.  As a result they have very high escape velocities and thus have been able to hang on to their Hydrogen (and Helium, the second lightest element).  Hydrogen is the most common element in the universe.  (Helium is the second most common.)  So it makes sense that the atmospheres of Jupiter and Saturn have lots of both.  And the early Earth likely did too.  But Earth's much weaker gravitational field has let it all leak away over Earth's lifetime.

Asimov correctly observes that hotter molecules move faster than colder ones.  So a hot atmosphere should leak away much faster than a cool one.  This feeds into a discussion of how the solar system was created.  A theory of the time posited that some catastrophe like two stars passing close by might be how it happened.  Asimov shoots this down by observing that the Earth has an atmosphere and proceeding from there.  Good enough so far.  But then he launches into the then prevailing theory of the origin of the solar system.

The Sun throws off a lot of heat.  This makes it hard, the theory goes, for Hydrogen and Helium to condense into a planet that is close to the Sun.  So "gas giant" planets would form in the outer solar system and rocky planets composed of "refractory materials" (stuff with a high melting point) would form close to the Sun.  This handily explains why Mercury, Venus, Earth, and Mars (all refractory planets) formed close to the Sun while Jupiter, Saturn, Uranus, and Neptune (all gas giants) formed further out.  (Pluto is an exception that we will just ignore.)

This all worked just fine until the Kepler satellite (launched well after Asimov's book was written) and other exo-planet finders came along and found lots of gas giants in orbits that were extremely close to their respective suns.  In many cases the orbits of these gas giants are closer to their respective suns than Mercury is to our sun.  All these close in gas giants orbiting other stars means that the standard model for the formation of solar systems is wrong.  New models have recently come to the fore but it's early days so things will likely change as more is learned and more study, modeling, and theorizing is done.

The planetary formation model of the day, on the other hand, is still holding together.  It seemed pretty solid to scientists of the time.  Contemporary scientists view it as somewhat more wobbly because of the way a planetary formation model interacts with a solar system formation model.  In any case, here it is, compliments of Mr. Asimov.

Material would clump together eventually growing large enough for gravity to kick in.  At that point two things would start to happen.  The attraction of material to the clump would accelerate so the planet would start growing quickly once it reached a critical mass.  The other thing that would happen is that heavy things would start sinking toward the center and light things would float to the top.  So Iron, for instance, would collect at the core while gasses would rise to the surface.  Interestingly enough, the material that makes up the "crust", the Earth's surface that we can see and touch, is, for the most part, made up of light materials.  So the distribution of heavy material toward the center and light material toward the surface that we see with Earth aligns with this idea.

As noted above the lightest gases would escape.  We still have a considerable amount of Hydrogen around because it is locked up in water molecules and other chemicals.  And this sort of thing complicates the situation.  Asimov estimates that only one part in seventy million of the Earth's original reservoir of Neon is left because Neon doesn't combine to make molecules.  Oxygen likes to combine into molecules and is relatively heavy so one in six Oxygen atoms is still around.  Nitrogen falls somewhere in the middle so one part in 700,000 of Nitrogen remains.  The larger point is that the original composition of Earth and its current composition are quite a bit different.  Scientists figure their job is not done until they can account for all of this.  And one particular puzzle is water.

How much water has accumulated on Earth over its lifetime and how did the current amount come to be?  These are still very active subjects of investigation.  Asimov briefly mentions two then popular theories.  In the first, water was squeezed out of rocks early in Earth's life.  It then turned into atmospheric vapor, since things were hot at the time.  As things cooled the vapor condensed and formed oceans, much as they are now, early in the life of the Earth.  Another theory goes for gradualism.  The water was squeezed out of rocks slowly over time.  (BTW, modern rocks actually contain a lot of water.)  So according to this theory the oceans grew to their current size slowly over a long period of time.

There is a third potential source of water that Asimov doesn't mention.  Stuff continuously rains down onto Earth from space and it contains a decent amount of water.  This process was unknown in Asimov's time because the instruments necessary to study this sort of thing didn't exist back then.  Was this process the source of a lot of the water we now see?  We don't know.  The basic "water" problem is still far from being solved.  We now have a lot of data on the level of oceans going back at least hundreds of thousands of years.  We know their total volume changed little over that period.  So if some process is gradually adding or subtracting water to the oceans it is a very slow process.

We do know that the Earth's atmosphere had a quite different composition when the planet first formed.  As I have noted elsewhere (see http://sigma5.blogspot.com/2018/07/deep-genetics.html, for instance) the atmosphere of the early Earth contained lots of Carbon Dioxide.  (Venus contains lots of Carbon Dioxide to this day.  The result is a surface temperature of over 800 degrees Fahrenheit.)  A study of the geologic record indicates that vast amounts (billions of tons) of Sulfur precipitated out of the atmosphere at one point.  Prior to this it is likely that a significant component of air was Sulfuric Acid.

Later a vast amount of Iron (again billions of tons) precipitated out.  The effect of that much Iron being in or adjacent to the air and the oceans (rather than being locked up in rocks as it now is) is not as obvious as that of Sulfur but it is important.  Scientists have figured out various tricks for determining the amount of Oxygen in the air.  Early in the life of the Earth the amount was effectively none.  Now it makes up about 20% of what's in Air.  (Almost all of the rest is Nitrogen.)  Scientists now have a better idea of the history of the composition of air but I don't know any more than I have noted above.

One final note before moving on:  Asimov speculated that the air pressure on Mars was about 10% that of Earth.  We now know it is far lower.  Even so, Mars has weather in the form of dust storms, mini-tornadoes, and other phenomena.  Now on to "The Periodic Table".

The idea that there are four elements:  Earth, Air, Fire, and Water, goes back to the ancient Greeks and may go back even farther.  To this the Greeks added a fifth element.  The heavens were composed of Ether (often spelled Aether with the "a" and the "e" smashed together).  Asimov correctly characterizes the Greek approach as "theoretical and speculative".  As such, they felt no need to subject their theories to experimental verification.  The first group to actually subject this kind of thinking to experimental verification was, of all people, the medieval alchemists.

They started adding elements to the list.  Mercury was responsible for metallic properties.  Sulfur imparted flammability.  Salt imparted resistance to heat, according to one of the best of the medieval alchemists, Paracelsus.  With this theoretical framework it made perfect sense to believe that a "philosopher's stone" existed that would turn "base metal" (lead) into precious metal (gold).  Success would produce vast wealth so quackery eventually became rampant.  Kings could be induced to provide the medieval equivalent of "research grants" in the reasonable expectation of a substantial return.  This quackery eventually destroyed the reputation of alchemists and alchemy but there were many honest and intelligent practitioners.  One of them was Sir Isaac Newton.

Eventually the ethical alchemists started calling what they were doing chemistry and, in an effort to distance themselves from unethical alchemists, disavowed any attempt to find the philosopher's stone.  Robert Boyle wrote "The Sceptical Chymist" as part of this distancing process.  He is now considered a serious scientist and is credited with discovering a modern scientific tenet, "Boyle's Law".  It states that if the temperature of a fixed amount of gas is held constant then an increase in pressure will result in a decrease in volume and vice versa.
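
In modern notation Boyle's Law just says that pressure times volume stays constant for a fixed amount of gas at a fixed temperature.  Here is a trivial Python illustration, with made-up numbers:

    # Boyle's Law: at constant temperature, pressure * volume is constant.
    def boyle_new_volume(p1, v1, p2):
        """Given an initial state (p1, v1), return the volume at pressure p2."""
        return p1 * v1 / p2  # follows from p1 * v1 == p2 * v2

    # Doubling the pressure on 10 liters of gas held at 1 atmosphere:
    print(boyle_new_volume(1.0, 10.0, 2.0))  # 5.0 liters -- the volume halves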

He also proposed a very modern definition of the word "element".  It is "a basic substance which can be combined with other elements to form 'compounds' and which, conversely cannot be broken down to any simpler substance".  We can now be more precise due to our subatomic view of the proceedings.  But as a practical matter the definition still works well.

This definition now seems obvious.  The problem then was that most compounds were not very pure and these impurities confused things massively.  But over time scientists got better at creating pure samples and getting predictable, repeatable, results.  That slowly led to progress in classifying elements and compounds.  Cavendish demonstrated that water was a compound consisting of Hydrogen and Oxygen.  Lavoisier showed that air consisted of Oxygen and Nitrogen (the rest of air's constituents are present in such low concentrations that they could then effectively be ignored).  The list of actual elements grew slowly as "compounds" like Tin were added to the list of elements and "elements" like Salt were added to the list of compounds.

And technical advances were necessary.  Electrolysis was necessary to break down compounds like lime and magnesia (each turned out to be Oxygen plus a new element, Calcium and Magnesium respectively).  On the other hand Chlorine was initially thought to be a compound composed of Hydrochloric Acid (assumed incorrectly to be an element) and Oxygen.  And an old concept that dated back to the ancient Greeks soon became relevant.  All matter is composed of small indivisible particles called "atoms", or so the concept held.  The concept dates back to Democritus but was resurrected in modern form by Dalton.

He observed that the rules for combining many gasses could be explained if it was assumed that certain gasses were elements and other gasses consisted of compound particles, each made up of a specific number of atomic particles of this elemental gas, a specific number of atomic particles of that elemental gas, and so on.  This "atomic" idea was soon expanded to cover all elements, not just gasses.  He also concluded that one of the most important properties of an atomic particle of a specific element was its weight, what we now call its "atomic weight".

This led to some extremely clever techniques being developed for determining the relative weights of various elements.  An atom of Oxygen weighs almost exactly 16 times as much as an atom of Hydrogen, for instance.  It was far beyond the capability of scientists of the time to determine the absolute weight of a single atom.  The obvious solution was to use these ratios.  The only thing necessary was to pick the base number.  After several tries it was decided to arbitrarily decree that Oxygen weighed 16 and go from there.

At the time nothing was known about isotopes.  An element can exist in several forms.  They all have the same chemical properties but different weights.  Many elements consist largely of a single isotope so, for that element, there isn't a problem.  But Oxygen isn't one of them.  There is lots of what we now call O-16.  But there is also a goodly amount of O-18.  So a mix of isotopes of Oxygen didn't work well as a standard.  In 1961, too late for Asimov's book, the standard was changed so that the atomic weight of the C-12 isotope of Carbon was set to exactly 12.  The "Dalton ratios" were then applied to come up with a revised atomic weight for each isotope of each element.
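
A little Python shows why natural Oxygen made a shaky standard.  Using commonly tabulated modern values for the isotope masses and abundances, the weighted average comes out slightly under 16 on the Carbon-12 scale, and it depends on exactly which mix of isotopes your sample happens to contain.

    # Natural Oxygen is a blend of isotopes.  Masses (in unified atomic
    # mass units) and natural abundances below are standard tabulated values.
    oxygen_isotopes = [
        ("O-16", 15.9949, 0.99757),
        ("O-17", 16.9991, 0.00038),
        ("O-18", 17.9992, 0.00205),
    ]

    average = sum(mass * abundance for _, mass, abundance in oxygen_isotopes)
    print(f"average atomic weight of natural Oxygen: {average:.4f}")  # ~15.999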

The first "picture" of a single atom was taken in 1955 by Mueller.  That was a big deal at the time.  But we can now take movies of groups of single atoms on select surfaces.  We can even move individual atoms around.  Something called an "atomic force microscope" can measure forces between single atoms or small collections of atoms.  What we can now do in this area would look like magic to scientists in the '50s.  But back to our story.

The list of elements kept growing and growing.  The urge grew to put this list into some kind of order so that it would be more manageable and useful.  The first version of the list just ordered them by atomic weight.  But in 1862 Cannizzaro arranged them into rows and columns such that elements with similar chemical properties fell into the same column.  Renia did the same thing independently but the idea did not catch on until Mendeleev (and others) came up with an improved version of the same idea.

What Mendeleev in particular did was to assign more importance to preserving the regularities of his table and less to putting them in order solely by weight.  He emphasized the periodicities in his "periodic table".  This led him to fix various problems he saw when he strictly adhered to weight order.  If the chemical properties did not align properly when certain elements were placed where their weight indicated they should go he switched things around.

In some cases the generally accepted atomic weight was wrong.  In others an element actually weighs more than the element that follows it because of the particular mix of isotopes it contains.  We now use "atomic number", the number of protons an element has, instead of atomic weight to organize the periodic table.  Each isotope of an element has a certain number of protons and a certain number of neutrons.  Each of these particles weighs approximately one atomic unit.  So O-16 has 8 protons and 8 neutrons and an atomic weight of 16.  O-18 has the same 8 protons but 10 neutrons for an atomic weight of approximately 18 (the small discrepancy is due to nuclear binding energy).  Chemical properties are determined by the number of protons and unaffected by the number of neutrons.  And none of this was known at the time (roughly 1870).
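
Here is Mendeleev's problem in miniature, in a few lines of Python using modern values for one famous case.  Tellurium weighs more than Iodine even though its chemistry puts it one slot earlier.  Sorting by atomic weight gets the order wrong; sorting by atomic number gets it right.

    # The classic Tellurium/Iodine inversion.  Values are modern figures:
    # (symbol, atomic number, atomic weight).
    elements = [
        ("Sb", 51, 121.76),
        ("Te", 52, 127.60),  # heavier than Iodine because of its isotope mix
        ("I",  53, 126.90),
        ("Xe", 54, 131.29),
    ]

    by_weight = sorted(elements, key=lambda e: e[2])
    by_number = sorted(elements, key=lambda e: e[1])
    print([e[0] for e in by_weight])  # ['Sb', 'I', 'Te', 'Xe'] -- Te and I swapped
    print([e[0] for e in by_number])  # ['Sb', 'Te', 'I', 'Xe'] -- chemically correct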

What made scientists take Mendeleev's work seriously were the holes.  He left three holes in his table because he could find no element that fit.  Shifting things around to close the gaps messed up the orderly progression of chemical properties.  So he left those slots empty and boldly predicted that elements would eventually be found to fill each empty slot.  And they were.  Gallium was discovered in 1875.  Scandium was discovered in 1879.  And Germanium was discovered in 1886.

In 1911 Barkla discovered that each element has a unique X-ray signature.  Laue found that crystals could "diffract" (bend) X-rays.  That showed X-rays were waves, and waves have a wavelength.  X-ray studies of elements led to a technique for determining an element's atomic number.  A gap in the sequence of atomic numbers indicated that an element was missing.  A number of gaps were found this way.  Some were filled quickly.  Some took considerably longer.

For a long time it was assumed that the list stopped at 92 (Uranium).  Since then various "artificial" (so called because they were first created using various scientific techniques and were incorrectly thought to not exist in nature) elements have been created.  At the time of the book the list had been extended to 102.  Since then it has been extended to 118.

Since Asimov wrote this book both theory and practice in this area have advanced by leaps and bounds.  Many scientific discoveries follow from better tools.  And the tools have gotten much better.  The energies available to scientists are now much higher.  The distances and times that can be studied have gotten spectacularly smaller.  The work-horse tool of the day was the synchrotron.  This was a device that applied strong magnetic fields to cause charged particles to spin in circles inside the device.  We now have the LHC.  It is a circular tube 27 kilometers around that uses fantastically powerful magnets.  Back in the day, the largest synchrotron fit into a single room.

Computer power and speed have also increased vastly.  So computations that would have taken centuries on computers of that period can now be done in minutes.  And tedious processes have been automated.  If you shoot a charged particle through a special tank it will leave a trail of small droplets that can be photographed.  These photographs can be analyzed by having graduate students look at them and take measurements.  That was then.  Now solid state devices are available that can much more accurately determine the path of a charged particle and instantly analyze it.  This means that billions of paths can be examined where before it was hard to examine thousands of much lower resolution pictures.

These advances and some theoretical advances have allowed us to have a much more nuanced picture of elements, chemical reactions, and the subatomic world.  But I am going to defer discussion of that until I reach the appropriate sections of Asimov's book.

Wednesday, July 18, 2018

Deep Genetics

First, a note about the title.  Many years ago Bill Gates mentioned somewhere that he had learned a lot by studying something called "deep history".  It turns out that the name comes from an idea popularized by Daniel Lord Smail, a Harvard History professor.  I took a video version of a course on the subject that used to be offered through a company called "Great Courses".  Sad to say, it looks like they no longer offer it.  But the course gave me a chance to see what Gates was talking about and I was grateful for the experience.  So what kind of history is "deep" history?

Let's start out with what "history" is.  (And, yes, there is a precise definition of the word.)  History is what people have written about what happened, about people and events.  The key idea is that history is written.  If it's not in a book it's not history.  That means "history" covers at most 3,000 years and often far less.  We have histories written by ancient Romans, Greeks, Egyptians, Chinese, etc.  Writing has to have been invented and someone has to have written specifically about events, past and present.

Writing that goes beyond business and tax records hasn't been around for very long.  It is necessary that people engage in this broader type of writing (and that their work be preserved) for a particular historic record to exist.  And these early historians tended to make things up when they were writing about events that had not happened in the recent past.  So we get, for instance, an entirely mythological description of how and by whom the city of Rome was founded.  But even allowing for this, early histories typically did not venture far back into the past.

What precedes history is archeology. You dig things up and try to figure out what happened and who was involved.  For a long time what archeologists mostly dug up was bones.  The rest was mostly stuff people made like pots and jewelry.  Archeology allows us to study much more of the past.  But archeology as most people envision it (i.e. Indiana Jones digging up stuff) only takes us back so far.  And that "so far" is to something called the "Cambrian Explosion".

Consider most people's idea of an archeological "dig".  A relatively small piece of land is carefully excavated and every interesting object is located, examined, catalogued, and later studied.  How are interesting objects located?  People see them poking out of the ground.  And the important thing about this is that people see them.  The object is large enough to be visible with the naked eye.  It turns out that all life forms that are big enough to be seen with the naked eye date from the Cambrian Explosion or later.  But the Cambrian Explosion happened, in round numbers, 500 million years ago.  That sounds like a long time but the earth is 4.54 billion years old.  (This number is known to be correct within an accuracy of plus or minus 1%.)  So what happened in the intervening 4 billion years?

Interesting question.  And it's one that science has only a general idea of the answer to.  So let's ask a related question.  How long has life existed on earth?  Here too science has only a general idea.  The best guess (and at this point, it's a guess) is that life has existed on earth for between 3.77 and 4.28 billion years.  So why is it a guess?  And why is the range so large?  The problem is a simple one.  We are trying to positively identify the remains of tiny single celled creatures.  They are quite fragile, have no bones, and are tiny.  This means they leave no trace of themselves behind except in extremely rare and unusual situations.  Finding a needle in a haystack (magnet, anyone?) is a piece of cake in comparison.

As a result the first really solid evidence for life involved finding the mineralized (i.e. fossilized) remains of a single celled microorganism in a kind of rock called "Apex chert" in a formation in Australia.  It has been dated to 3.465 billion years ago.  But the microorganism in question looks pretty complicated.  So scientists speculate that its predecessors stretch back a short (i.e. to 3.77 billion years ago) or a long (i.e. to 4.28 billion years ago) time.  In any case, it appears that it took less than a billion years for life to originate on earth.  And it appears that not much happened between then and 3 billion years later when the Cambrian Explosion happened.

Before going further let me take a small side trip.  What's with this whole Cambrian Explosion thing, and specifically, what's with the whole "explosion" business?  Glad you asked.  As far as anyone can tell all life was composed of single celled creatures until about 800 million years ago.  Then something happened and multi-celled creatures appeared seemingly instantly.  They quickly diverged into many different forms.  But these early creatures were still too small to be seen by the naked eye.  And, more importantly, they didn't have bones or shells.  That made them hard to find, especially if you weren't looking very carefully for them.

Then in another "blink of an eye" moment shells and bones and other stuff that gets preserved in sediment started showing up.  And the creatures with the bones, shells, etc. were big enough to be seen by the naked eye.  This point where geologists and archeologists went from "just rock" to "rock with all kinds of wee beasties in it" seemed to scientists of the time to be like an explosion.  They went from no life to all kinds of life (and lots of it looking truly weird) in an instant.  Life exploded into existence.

The Cambrian explosion is usually dated to 541 million years ago.  I used 500 million above because it is a round number that is close enough for my purposes.  (The same thing is true for my 800 million number.)  It didn't take long for broad speculation to emerge that there was something before the Cambrian explosion.  And careful study found the multi-celled predecessors of the critters that "seemingly sprang from nowhere".  And thus the explosion was explained.  Lots of life had been around for a while.  It just turned out that in many cases it all of a sudden evolved into the kinds of creatures that can be identified by careful "naked eye" examination.  So it was pretty easy to trace all the variety of life that first appeared during the explosion to earlier life and to trace this earlier life back to single celled creatures.

But that still left the question of what happened between roughly 3.5 billion years ago and roughly 0.5 billion years ago?  And that parallels the approach that the "deep history" lectures took.  Instead of confining themselves to history, the last 3,000 years, the lectures went back to the origin of life on earth.  And, in fact, they went all the way back to the origin of the known universe 13.8 billion years ago.  I don't want to go that far back.  So I am going to focus on the period between 3.5 (roughly) billion years ago and 0.5 (roughly) billion years ago.  And I am going to focus on the evolution of life during this period.

Science knows very little about this time period.  The problem is that what survives of life from this period doesn't tell us much.  Each creature is microscopic and nothing of the actual creature survives.  What does survive can be described as a shadow.  All the chemicals that originally made up the creature are replaced by minerals.  The minerals often start out as liquids but solidify over time leaving the traces we can now dig up and examine.  It would be nice to have DNA from these creatures.  But DNA is a very fragile chemical when subjected to vast amounts of time and to geological processes.

Scientists have become adept at recovering DNA from bones and teeth that are thousands of years old.  But that gets us back only about as far as history gets us.  Various creatures get stuck in amber.  Some DNA from creatures that died tens of thousands of years ago can sometimes be recovered because the amber provides a very stable and protective environment.  That gets us farther back, to the most recent part of the archeological period.  The conceit behind the "Jurassic Park" movie was that DNA from tens of millions of years ago could be recovered and used to build dinosaurs.  If this were possible it would get us much deeper into the archeological period.

It seems unlikely this will ever be possible.  And remember, we are now talking about hundreds of millions of years.  We want to get to billions of years back.  It is unlikely in the extreme that DNA from this far back will ever be available.  And the same is true for the other chemicals that make up cells.  It is unlikely that we will ever be able to study anything except the shadow of these creatures.

So scientists have only two sources of information.  First, they get a limited amount of shape information from the shadow in the rock that is all that is left.  The other information is environmental.  The early atmosphere of the earth was quite different than it is now.  At one point vast quantities (billions of tons) of Sulfur precipitated out and formed vast deposits in the earth.  The same thing later happened with Iron.  Most modern life forms can not survive in environments where there is too much Iron or Sulfur.

And we now separate life into plants and animals.  Plants take up Carbon Dioxide and excrete Oxygen.  Animals take up Oxygen and excrete Carbon Dioxide.  Animals could not come into existence until Oxygen came to make up a significant portion of the atmosphere.  And many animals, humans included, do poorly if the atmosphere contains even 1% Carbon Dioxide.  So almost all of the Carbon Dioxide was scrubbed from the atmosphere at some point.  We know this because we know the early atmosphere of the earth contained a lot of Carbon Dioxide.

Life optimizes itself to survive in a particular environment.  So early life was well set up to handle the environment of that period (lots of Carbon Dioxide and other stuff no longer present, no Oxygen and lots of other stuff now present).  It also had to be able to thrive in a general environment containing lots of Sulfur and Iron, just to mention two obvious differences.  How did life of the time do that?  We just don't know.

But those are specific adaptations and I am interested in more general ones.  There seems to be no reason that whatever allowed early single cell organisms to survive lots of Carbon Dioxide, Sulfur, Iron, etc. and little or no Oxygen should have kept them from becoming multicellular (and large).  But they didn't become multicellular.  Why?

When one is discussing evolution the phrase "survival of the fittest" is often bandied around.  It is generally assumed to be vaguely negative.  Only mean beasties with no morals and big teeth need apply.  But that is not at all the situation.  There is no time in the entire period where life existed on the earth when single celled creatures have not been the dominant life form in terms of numbers (or even in terms of aggregate mass).  Lions seem fitter to survive.  But there are only a few Lions.  There used to be more but not so many more that you could add their total weight together and get a total weight that was greater than that of all the green algae in the ocean.  If you go by numbers instead of mass the margin favoring green algae is far greater.  Either way, the algae win.

An organism in order to be "fit to survive" need only be as good as or better than its immediate competition.  So a particular strain of green algae need only be worried about being as fit as or fitter than other strains of green algae.  It also needs to be able to thrive well enough so that after all the other critters that eat it have dined there are lots of green algae left.  And my larger point is that Lions and green algae are not in competition with each other.  The green algae is much better at being a green algae than a Lion would be.  The Lion is much better at being a Lion than the green algae would be.  They are pretty much oblivious to each other.

But taking the long view, somehow a green algae-like single celled creature evolved into a multi-cellular Lion.  But that's just the context.  The question I want to focus on is why some single cell creatures evolved into multicell creatures (and eventually into Lions) when for billions of years they hadn't.  I have a guess.

For a long time there was a theory that there was a long term evolutionary trend toward complexity.  People went further.  They said "complex is better" and by "better" they meant fitter.  But a careful study found lots of examples of more complex giving way to less complex and the less complex creatures surviving and thriving.  To provide but one example, some fish survive in caves where they are cut off from all light.  Over generations they lost the ability to see.  The complex eye (and the visual system that backs it) deteriorates and eventually disappears.  The reason is simple.  Eyes (and eyesight) exact a cost.  If eyes and eyesight don't provide a benefit it makes fish with eyes less fit and they get beat out by more fit fish without eyes.

I want to turn this thinking inside out.  But I am going to do it in steps.  In general eyes are very helpful.  That's why almost all animals have them.  And animals have evolved to the point where they have all the machinery necessary to make eyes.  In most situations eyes add complexity but they allow an animal to respond to its environment in more numerous and more complex ways than an animal without eyes can.  So the presence of the eye (and its supporting vision system) makes the animal more fit.  But even here things are more complicated.

Some animals (eagles) have very good eyes while others (people) have eyes that are not so good.  I want to emphasize the relativity of the concept of "survival of the fittest".  Complexity in general and good eyesight in particular do not always contribute positively to fitness.  An animal with poorer eyesight can be more fit than another animal with good eyesight.  Perhaps the "less fit" animal has other attributes that are more important and more than make up for the poor eyesight.  You have now had the longer and fuller version of why there is no universal evolutionary drive toward complexity.

Now let me back up and take the long view.  And let me drill down to cells.  A chance confluence of circumstances caused the first living cell to come into existence.  Current scientific thinking says all life descended from a single cell.  But that is based on the fact that all modern cells share a lot of the same cellular machinery.  It's a good scientific theory because it is the simplest.  But that just means it can be tossed out the window and replaced by a different theory if evidence is found that discredits the current simple theory and supports a different more complex theory.

And science knows very little about this first cell.  But another assumption is that it was simple.  All the things that would have had to come together to make an extremely simple cell require an alignment of incredibly unlikely events.  But a lot of things happen when you take into account the entire earth and hundreds of millions of years.  That just means it requires a very large playing field (the whole earth) and a whole lot of patience (nearly a billion years) to create even a very simple cell.

But a more complex cell is, well, complex.  So it seems like asking too much to assume the first cells were complex at all.  If the necessary conditions for creating a complex cell happen once in a blue moon then it would be reasonable to believe that the circumstances necessary to create a very simple cell would happen on a daily basis.  It seems more likely, therefore, that more complex cells eventually emerged out of a large population of very simple cells.

Over time and space there would occasionally emerge the circumstances necessary to turn a relatively simple cell into a more complex one.  The more complex cell would evolve out of an environment containing many simpler cells much more quickly and easily than all on its own in a place where there were no simple cells available to jump start the process. This is a very reasonable argument but scientists just don't know enough to do anything but speculate as to whether it is correct or not.

And it is important to keep something in mind.  This whole fitness business is relative.  A very simple cell is probably not very fit compared to modern cells.  But what's its competition?  Nothing.  It doesn't have to be very "fit" because there is nothing out there trying to get it.  Sure, chance, or a change in circumstances (storm, volcanic eruption, etc.), could get it.  But that's just bad luck.  The concept of lunch, as in something seeing it as lunch, implies the kind of active competition for resources that was completely lacking back then.

Now let me return to my "more complex organisms can have a more complex response to their environment than less complex ones" argument.  If an organism is capable of a more complex response to its environment than another organism then the potential exists to do better.  Consider an organism that is only capable of a simple response to a predator, say wiggling.  Now consider a slightly more complex organism that can both wiggle and choose the direction it wiggles in.  Both may or may not survive a given attack but it seems likely that the organism that can choose its direction can choose to wiggle away from its predator and that should give it a better chance at surviving.

But if neither organism is complex enough to be able to wiggle then it doesn't matter if there is a direction that would work better than others.  You can't do what you can't do.  Simple cells can't do much.  It is unlikely, for instance, that they can wiggle.  But wiggling has a cost.  It uses up energy that could be used for another purpose.  So complexity in the form of being able to wiggle has a cost.  The trick is to gain more in benefit than you lose in cost.  And that means just a random increase in complexity is often going to cost more than it benefits.  And that means adding complexity is not as straightforward as it initially seems.

But if you get it right it is helpful.  And we now don't see simple single celled creatures in our modern world.  The simplest single cell creatures we now see are actually very complex.  As such all modern cells are capable of displaying very complex responses to their environment.  And what that means is that if you introduced a very simple cell into our modern environment it would probably not take long for some modern cell to notice that cell and turn it into lunch.  And the simple cell would be incapable of mounting an effective defense.  So all the simplest cells are long gone from the modern environment.  And that means we have no place to look to see what they would have looked like and how they would have worked.

And you now have all the pieces of my theory.  I theorize that the interactions between cells in a multi-celled creature need to be quite complicated in order for the new configuration to be beneficial (more fit).  For the reasons elucidated above the evolution of complex cells was a very slow process.  It may have depended on changes like removing most of the Sulfur, Iron, and Carbon Dioxide from the environment and adding Oxygen and other components, or not.  But I speculate that quite a bit of quite complex cellular machinery must be on hand to make a two-or-more cell organism more fit than one celled organisms.

How complex?  I am not an expert.  In fact, I try to avoid the "wet" sciences.  But I have picked up a lot of information along the way.  So let's take a tour of a modern cell.  We start with the outer membrane.  Its job is to keep the outside stuff outside and the inside stuff inside.  But this turns out to be a quite complex process.  Things need to move in the correct direction on a continuous basis.  But only the right things.  So the outer membrane contains things called "ion channels".  Ions are things like Potassium and Sodium.  The inside of the cell requires that the proper amounts of each be maintained.  So each channel must monitor external and internal conditions and cause the right amount of the right kind of ions to flow in the right direction.  And this is the simplest thing that's going on.

In general the outer membrane must pull raw materials in and push garbage out.  But wait, there's more.  There is the whole "signaling" process.  Take a simple example.  The outside of the outer membrane contains things called "receptors".  These receptors continuously sample the environment outside the cell looking for specific chemicals.  When the right chemical comes along it gets "bound" to the receptor.  When this happens something changes within the cell.  This is the process used by Morphine.  Certain brain cells have receptors that Morphine binds to.  That binding kicks off a change within the cell that results in pain signals from the rest of the body getting shut down.

There are many cells in our bodies.  Different cells have different suites of special purpose receptors.  Most cells are indifferent to most chemical signals.  But this business of some cells in the body manufacturing some chemical that travels to the site of a different cell then attaches to a receptor is a common method of intercellular communication.  It depends on specific cells being able to manufacture and release specific chemicals which can then travel around in the body and eventually find the correct receptor on the correct cell.

Scientists literally don't know how many of these unique communications paths there are.  Suffice it to say that the number is more than 10,000.  And these chemicals don't always come from a cell located elsewhere in the body.  As Morphine (and caffeine and LSD and the AIDS virus and untold other examples) demonstrates, the chemical can come from pretty much anywhere.  That's enough about the outer membrane.  It is complicated but not by any stretch of the imagination the most complicated part of cells.

If there is an outer membrane it stands to reason that there is another membrane.  And the obvious (and correct) name for this is the inner membrane.  So what's inside it?  That's where the cell's DNA and associated machinery resides.  DNA has the form of a double helix.  For our purposes that means it has two backbones.  Is there a difference between the two?  No!  The machinery of the cell can't tell one from the other.  And one of the things DNA is responsible for is holding the recipe for proteins.

Cells need lots of proteins and they need very specific proteins.  DNA contains the recipe for making each and every one of them.  It requires a quite complex set of mechanisms to turn the DNA information into the appropriate protein.  There is a piece of chemical machinery that scans down the DNA backbone.  Which backbone?  It doesn't matter as both backbones are scanned.  Anyhow, at some point a sequence of DNA bases is located that says "the recipe for a protein starts here".  The DNA is scanned a little further to see if it is the recipe for the right protein.  If not then the backbone scan continues.

But if it is the correct protein the chemical machine creates the starting end of the protein.  Then it reads the sequence of amino acids that make up the protein off of the DNA (see http://sigma5.blogspot.com/2016/04/dna-101.html for more on this).  It then fishes a free floating fragment of the correct amino acid out of the soup floating around inside the cell and attaches it.  It then moves on attaching amino acid after amino acid.  Eventually it hits the "this is the end" signal on the DNA backbone, completes the protein, and quits.  Many of the various chemical machines are a specialized version of RNA.  RNA is a cousin to DNA.  The biggest difference is that RNA has only one backbone while DNA has two.
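
To give you the flavor of the process, here is a drastically simplified Python sketch.  Real cells transcribe DNA into RNA first and the real genetic code has 64 codon entries; this toy version collapses everything into a single scan-and-build loop over a handful of codons, so treat it as a cartoon rather than biology.

    # Cartoon version of protein synthesis: find the "start here" marker
    # (the codon ATG), then translate three-base codons into amino acids
    # until a stop codon appears.  Only a few of the 64 codons are listed.
    CODON_TABLE = {
        "ATG": "Met",  # the start codon; also codes for Methionine
        "GCT": "Ala", "AAA": "Lys", "TGG": "Trp",
        "TAA": None, "TAG": None, "TGA": None,  # the three stop codons
    }

    def translate(dna):
        """Return the amino acid names encoded after the first start codon."""
        start = dna.find("ATG")
        if start == -1:
            return []  # no "recipe starts here" marker found
        protein = []
        for i in range(start, len(dna) - 2, 3):  # step along codon by codon
            amino = CODON_TABLE.get(dna[i:i + 3])
            if amino is None:  # a stop codon (or one we didn't bother to list)
                break
            protein.append(amino)
        return protein

    print(translate("CCATGGCTAAATGGTAGCC"))  # ['Met', 'Ala', 'Lys', 'Trp']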

But this is only one of the tasks the various components of the nucleus are capable of.  Another key process is cell division.  New cells are created by having one cell divide into two.  Part of this process involves copying all the DNA in the nucleus to make a second identical copy.  This is a very complex process.  And these two processes do not exhaust the capabilities of the chemical machinery resident in the nucleus.  But they are all I am going to talk about.

And then there is the middle, the part between the outer membrane and the inner membrane.  This is where the work of the cell is done.  And there are thousands of different kinds of cells in many creatures.  So there is "kidney" machinery in kidney cells, "brain" machinery in brain cells, "skin" machinery in skin cells, and so on.  And remember all these cells contain exactly the same DNA.  And they are all the many times great grandchildren of the single cell we all grow out of.

Think back to the machinery in the nucleus.  As cells divide often one will take on one role and the other will take on a different role.  How does this come to be?  Well, cells have a large amount of very complex regulatory machinery.  Some of it takes its orders from parts of the DNA that don't code for proteins.  Some of it uses different mechanisms.  But it's all there in each cell along with the other stuff I have talked about.

I hope I have given you a flavor of just how complex the cells that make up modern plants and animals are.  And by some process or another a liver cell has to figure out it is a liver cell then behave like a liver cell.  A brain cell must figure out it is a brain cell and then behave accordingly.  And so on.  This specialization was not necessary when critters only had a single cell.  Roughly speaking all the cells of a specific type of critter were the same.  But differentiation, the process of splitting one cell into two cells that behave differently, but differently in a very specific way, is a capability that multi-cell animals must have.

That's a lot of complexity.  And complex cells with complex behavior require a lot of cellular machinery.  Given all this and given the fact that complexity is not always a good thing it is not surprising to me that it took three billion years for cells to acquire all the complexity necessary to produce multi-cell critters that were fitter than single cell critters.  It would be nice if scientists knew how all this came to be but they don't.  All they know is that the pieces were finally in place roughly 800 million years ago.

From there it did not take long (tens to hundreds of millions of years) for widely different types of extremely successful multi-cellular organisms to come into existence.  Since they were so successful they evolved into many, often bizarre, critters.  But it didn't take long before there were lots of multi-cellular creatures around.  And at that point the level of fitness necessary to thrive and survive went up a lot.  And that resulted in most of these early different kinds of multi-cellular creatures being wiped out.

But evolution continued and things like bones and shells were developed.  These enhancements were again wildly successful so many wildly different variants quickly appeared.  And that quickly caused the level of fitness necessary to survive and thrive to again increase.  And so many of those early designs were quickly wiped out leaving creatures that for the most part we would recognize today.

But the Cambrian explosion was not the end of it.  Evolution continues to this day.  Darwin took note of animal breeders.  Dogs in particular now come in a wide variety of shapes, sizes, dispositions, attributes, etc.  This is because of the forced evolution dogs are subjected to by people.  There are entire breeds of dogs that were not in existence a few hundred years ago.  Darwin laid all this out more than 150 years ago but people, for reasons that are beyond me, don't find it convincing so how about this?

Scientists can now demonstrate evolution in the test tube in the lab over time periods of a single human lifetime.  There are very short lived single celled creatures that are easy to culture in the lab.  Zillions of them fit in a standard "chem lab" flask where they happily live out their lives.  They are born, live, breed, and die within a day.  That means you can rack up a thousand generations in three years or ten thousand in thirty.

Scientists have started out with a colony consisting of a large number (remember, they are small) of genetically identical single celled creatures of this kind in a flask in a lab.  They have in some cases just let them do their thing for generation after generation to see what would happen.  In other cases they changed their diets, or the amount of light they were subjected to, or the acidity of the solution they were cultured in, or other environmental variables.  In every case they wanted to see what would happen after a few or a few hundred or a few thousand generations.

These creatures are so simple that they can be frozen then brought back to life years later without suffering any harm.  So scientists regularly split one colony into two, each in its own flask.  Or they split a colony into two and freeze one half.  This lets scientists do things like repeating the same experiment multiple times or doing a family of experiments where a single parameter is changed by various amounts.  Later the critters can be genetically sequenced to see what has happened.  This has allowed all the basic tenets of evolution to be proved out in detail and in controlled lab experiments that can be reproduced by others.
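
You can even get the flavor of these experiments with a toy simulation.  The Python sketch below is a bare-bones version of what population geneticists call a Wright-Fisher model, not a model of any particular lab organism: a fixed-size population, a handful of mutants with a modest reproductive advantage, and generation after generation of random sampling.  Run it a few times and you will see the mutation either take over or vanish, usually within a few hundred generations, which at one generation per day is exactly the kind of timescale a flask experiment can cover.

    # Toy flask experiment: two genetic variants in a fixed-size population,
    # with the mutant reproducing slightly better than the original strain.
    import random

    def generations_until_resolved(pop_size=10_000, mutants=10, advantage=0.05):
        """Run until the mutant variant takes over the flask or dies out."""
        gen = 0
        while 0 < mutants < pop_size:
            # Each new generation is sampled from the old one, with the
            # mutant weighted up by its fitness advantage.
            weighted = mutants * (1 + advantage)
            p = weighted / (weighted + (pop_size - mutants))
            mutants = sum(1 for _ in range(pop_size) if random.random() < p)
            gen += 1
        return gen, mutants == pop_size

    gens, took_over = generations_until_resolved()
    print(f"mutation {'took over' if took_over else 'died out'} after {gens} generations")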

My ideas about what happened during that three billion year period are pure speculation.  But the basic tenets of evolution are not.