Saturday, November 21, 2020

60 Years of Science - Part 22

This post is the next in a series that dates back several years.  In fact, it's been going on for long enough that several posts ago I decided to upgrade from "50 Years of Science" to "60 Years of Science".  And, if we group them together, this is the twenty-second main entry in the series.  You can go to https://sigma5.blogspot.com/2017/04/50-years-of-science-links.html for a post that contains links to all the entries in the series.  I will update that post to include a link to this entry as soon as I have posted it.

I take Isaac Asimov's book The Intelligent Man's Guide to the Physical Sciences as my baseline for the state of science when he wrote the book (1959 - 60).  In this post I will review two sections, "Nuclear Power" and "Radioactivity".  Both are from the chapter "The Reactor".  This is the last chapter in the book.  So, the end of this series is nigh.

The book was written in the middle of the Cold War.  Back then MAD (Mutual Assured Destruction), the ability of either the U.S. or Russia to start a nuclear conflagration that would literally bomb both countries "back to the stone age", was at the forefront of people's minds.  But nothing happened.  The Cold War ended peacefully with the breakup of the Soviet Empire.

And various crises have since come and gone.  And wars have come and gone or, in some cases, lingered for what seems like forever.  And countries as stable as the United Kingdom and as fringe as North Korea have gotten "the bomb".  In all this time no one has exploded a nuclear weapon in anger.  So, most people now spend little time thinking about them.

Things were different back then.  Nuclear weapons, and the possibility of nuclear war, were a pressing concern.  This scared the shit out of people, and legitimately so.  As a result there was a real yearning for an alternative, an "atoms for peace" program of one sort or another.

But Asimov starts a little earlier.  He notes that there was a legitimate race to be the first to develop an Atom Bomb.  The Nazis did have a legitimate program that Hitler hoped would produce a bomb he could use.  And no one doubted that he would use it if he had it.

It turns out that we now know that they never even got close.  But that became clear only after the War was over.  In the meantime, this legitimate concern was part of the justification for moving forward rapidly with a U.S. program, a program that eventually succeeded.  (Many other countries helped.  Principal among them was the U.K.  But the U.S. provided all of the money and most of the resources.)

Against the background that, then and now, the U.S. is the only country to ever explode a nuclear weapon in anger, there was a yearning to balance the bad with the good.  And the most obvious good was to harness the "power of the atom", in this case nuclear fission, to produce power.  This power was first used to propel ships.  But it could also be used to produce electric power.

The reactor work grew out of the broader "atoms for peace" effort.  A companion program, "Project Plowshare", named for the biblical quotation about "beating swords into plowshares", explored using the bombs themselves for peaceful purposes.  The most prominent idea was earthmoving.  The obvious candidate was a canal from the Atlantic to the Pacific that would be dug by exploding a series of Atomic Bombs underground.

Plans were also drawn up to blast out an artificial harbor in Alaska, although that project was cancelled before any bomb was set off.  Another possibility was to use bombs to stimulate oil and gas wells.  The reason that you haven't heard of these and other ideas is that they turned out to be far more trouble than they were worth.  They were all abandoned.  Some persisted after Asimov's book was published.  But not for long.  The "peaceful atom" idea that turned out to have legs was the nuclear reactor.

Demonstrator nuclear reactors of various kinds started popping up within a few years after the end of World War II.  But, as I have noted elsewhere, it costs a lot of money to come up with a design.  It costs even more money to turn the design into a working device.  That made commercial interests reluctant.  The U.S. Navy, on the other hand, was not reluctant.  As a result, the first nuclear reactor put to practical use was put to use powering a Navy submarine.

The thinking was that submarines are vulnerable on the surface but safer underwater.  And a power plant that required an ample supply of oxygen, as any kind of petroleum based engine does, demands considerable surface time.  Nuclear power requires no oxygen.   And, once a nuclear power plant was developed, a recent conventional submarine design was quickly reworked to make use of it.  The result was the "Nautilus", named for the submarine in Verne's 20,000 Leagues under the Sea.

It was so successful that almost all U.S. Navy subs built since have been nuclear powered.  They can easily stay underwater for six months straight.  The biggest ships in the U.S. Navy's inventory were also soon adapted to nuclear power.  Every large Aircraft Carrier built since the mid-'70s has been nuclear powered.  These ships have a large fuel budget.  But it is for the planes they carry, not the ship itself.

Efforts to use nuclear power in other ship types have failed.  A nuclear powered cargo ship was built.  It was a technical success but a practical failure.  Everything worked just as it was supposed to.  But it was barred from most seaports for political reasons.  These same political reasons are why no other ship type has been attempted.

Most of this happened after Asimov's book was finished.  He spends some time on the Nautilus and mentions several other nuclear powered vessels.  For instance, the keel for the "Enterprise", the first nuclear powered Aircraft Carrier, had been laid down in time for that information to make it into the book.  But she had not yet entered service.

As CVN-65, she entered active service in 1961.  After over fifty years of active service, she was decommissioned in 2017.  Construction of a replacement of the same name, CVN-80, is scheduled to begin in 2022.  CVN-80 is scheduled to enter service in 2027 or 2028.

The first civilian nuclear power plant was built by the Russians in 1954.  The U.K. followed in 1956.  The U.S. joined the club in 1958.  At the time coal fired power plants were cheaper to build and cheaper to operate.  It was hoped that as nuclear power plant construction and operation moved down the learning curve, they would eventually become the cheapest option.

We now know that was never going to happen.  Outside of the Soviet sphere of influence, most designs differed little from each other.  They also differed little from the design used to power the Nautilus.  At the time Asimov wrote his book it was believed that Uranium was hard to find.  It turned out that there was a learning curve when it came to finding Uranium.

Uranium is now known to be plentiful.  It is also known to follow the same rule that applies to pretty much any commodity that is mined.  The higher the price, the more ore deposits there are that can be mined economically.  We are not going to run out of Uranium to mine any time soon.

The construction of many plants of similar design should have driven construction costs down.  But it didn't.  The anti-nuclear people got more and more effective.  They forced regulators to pile on more and more requirements that were supposed to improve safety.  They didn't.  What they did do was to keep pushing construction costs higher and higher.

Three Mile Island, followed by Chernobyl, followed by Fukushima, have only increased the pressure.  I have plowed this territory extensively elsewhere so I am not going to go over it again.  Suffice it to say that nuclear power is not now, and does not look likely to become in the near future, a substantial contributor to new electric power generation.

Asimov includes a schematic diagram of a "gas cooled" nuclear power plant.  It describes a design that is more sophisticated than the one used in most nuclear power plants operating today.  Instead of being "gas cooled", they are "water cooled".  But, other than the details of the cooling method, so little has changed since that it accurately portrays how most nuclear power plants work to this day.

A Uranium shortage was then a serious concern.  Asimov responded to this concern by noting that "breeder reactors", reactors that can convert the common U-238 isotope of Uranium into Plutonium, effectively multiply the amount of nuclear fuel available by many times.  Only minor design changes need to be made to turn a Uranium fueled design into a Plutonium fueled design.  He also discusses Thorium as a third alternative.

None of this went anywhere after the book was published.  The primary reason was the discovery that there actually was a lot of Uranium around.  Safety and proliferation issues doomed Plutonium.  It turns out to be relatively easy to harvest reactor grade Plutonium and turn it into a bomb.  The risk associated with Plutonium, and other concerns I am going to skip over, means that breeder reactors have seen little use outside of military programs designed to create fuel for bombs.

I am not familiar with the reasons Thorium never took off.  I suspect that it too was doomed by cheap and widely available Uranium.  But I don't actually know for sure.  On to "Radioactivity".

Asimov characterizes radioactivity as a new threat.  He justifies this on the basis that naturally occurring radiation is usually of a pretty low intensity.  High intensity radioactivity he associates with new man made activities like Atom Bombs.  He is correct in the sense that "the bomb" made people acutely aware of radioactivity.

Scientists had known about it for about fifty years by then.  But outside of certain scientific circles it was pretty much unknown.  To his credit he does discuss early radiation-induced deaths and illnesses.  Two early victims were Marie Curie and her daughter.  For a while X-Rays were considered completely benign.  But that slowly changed.  Now, of course, safety protocols are routinely followed in places like dentist offices.

Dots of a mixture of Radium and phosphors that would light up were applied to watch dials to make watches easier to read in the dark.  The work was done by women using small brushes.  They would often lick the brushes as they worked.  This resulted in horrible cancers of the face and mouth, and sometimes death.  This practice was outlawed but I don't know whether this happened before or after Asimov's book came out.

Asimov speculated on whether enough radiation would be unleashed to cause widespread harm.  We now know that the answer is no.  But even very small amounts of radiation can be easily measured.  This has allowed scientists to perform some very unusual "tracking" experiments.

Oceanographers have been able to accurately measure the amount of radiocarbon in ocean water.  It spiked during the short period when extensive above ground bomb testing was occurring.  The sharp edge between radiocarbon enhanced water and water containing normal amounts allows them to calculate just how "old" the water was.  That is, how long it's been since the water was at the ocean's surface.

Another interesting development was the discovery of natural nuclear reactors.  Chain reactions depend on the concentration of Uranium being unusually high.  But there are natural events that concentrate Uranium.  And in some cases, these have resulted in chain reactions taking place.  We know this because this situation leaves distinctive isotope profiles behind.

Concentrations never reached the levels necessary to cause a nuclear explosion.  But it never occurred to anyone to think that even a low level chain reaction was possible.  That is, until someone accidentally stumbled across the first one.  Since then, many more have been found.

Asimov quickly moves on to a discussion of the mechanics of radioactive decay.  These are subjects I have already covered elsewhere.  He just hits the highlights.  A lot was known at the time and far more is known now.  But the additions are details.  The main picture is clear and hasn't changed in the sixty years since the book was written.

He discusses the concept of a "decay chain".  This isotope decays into that isotope, which then decays into some other isotope.  He also notes that an isotope may decay in several ways.  But in all cases the probabilities are fixed.

He moves on to "half life", a subject which I have already discussed extensively.  From there, he goes on to note that some kinds of radiation are deadlier than other kinds.  The converse of this, which he doesn't discuss, is that it is easier to create an effective shield against some kinds of radiation than it is to create one against other kinds.
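The half-life rule is simple enough to compute directly.  Here is a minimal sketch in Python (the 5,730-year half-life of carbon-14 is the standard published figure):

```python
def remaining_fraction(elapsed_years: float, half_life_years: float) -> float:
    """Fraction of a radioactive sample left after a given time."""
    return 0.5 ** (elapsed_years / half_life_years)

# Carbon-14 has a half-life of about 5,730 years.
c14_half_life = 5730.0

# After one half-life, half remains; after two, a quarter.
print(remaining_fraction(5730, c14_half_life))   # 0.5
print(remaining_fraction(11460, c14_half_life))  # 0.25
```

The same one-liner works for any isotope; only the half-life number changes.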

He then segues from the fact that everything is radioactive to the subject of "background radiation".  This is another topic I have already treated.  He notes but doesn't go into detail on the idea that background radiation can contribute to evolution.

DNA had just been discovered.  We now know that radiation can damage DNA.  This can result in mutations.  A mutation can be either beneficial or detrimental.  Over time, the beneficial mutations cause species to evolve.  But there are cellular mechanisms for repairing DNA damage, regardless of the cause.  And there are many other ways to cause damage.

Other big causes of mutations are transcription errors, reading errors, and the like.  DNA gets duplicated.  The duplication process is not 100% accurate.  Various processes "read" DNA.  As an example, the cell manufactures thousands of different proteins.

The blueprint describing the specifics of each of the many different proteins a cell manufactures is found in the DNA.  A process that is different from, but related to, the duplication process is used to read the DNA.  But the information found that way is used much differently.

Instead of being used to duplicate the DNA itself, a translation process is used to drive a protein assembly process.  DNA provides the details that determine the order and type of the subunits that snap together to make each specific protein.  An error in this process causes the wrong protein to be made.

It is no wonder that things sometimes go wrong with these cellular processes.  The wonder is just how infrequently they do.  It is thought that cancer is caused by key cellular mechanisms going consistently wrong.  Scientists are attacking cancer by figuring out how to get these broken processes back on track.

Various efforts are now under way to reclassify cancers.  The current methods of classifying cancers depend on the symptoms or what organ is affected.  The new method depends on classifying what cellular mechanism goes wrong and how it goes wrong.  This may lead to a single cure that is successful against many cancers, not just one or a few.

This deeper understanding of DNA, the way radiation damages DNA, and all that follows has taken place since Asimov wrote his book.  So, let's get back to it.

He moves on to the "nuclear waste disposal" problem.  This is also something I have discussed extensively elsewhere.  Before moving on I will note that he assumes that the nuclear power industry will grow rapidly.  He also assumes it will eventually become quite large.  That would have resulted in a large amount of nuclear waste.  But the industry did not ever grow very large.  So, the waste disposal problem is actually quite modest.

And, since he overestimates the size of the problem, he ends up taking off on what now look like tangents.  One of them involves building devices that produce small amounts of power for long periods of time.  They work just fine.  But they have not gone into general use due to the public's fear of radiation.  They have only found one use.

We routinely send space missions to the outer solar system.  These missions need power.  The standard solution is solar panels.  Various Mars rovers, the International Space Station, and all manner of other space gadgets, use solar panels for power very successfully.

But the farther from the sun, the less bright sunlight is.  And that means you need giant arrays of solar panels to produce the necessary power.  Cue the RTG, the Radioisotope Thermoelectric Generator.  It is based on the SNAP device Asimov discusses.

Modern RTGs use Plutonium for fuel.  They are radioactive enough to be dangerous.  So they are often put on the end of a boom that distances them from the bulk of the spacecraft.  RTGs power both Voyager spacecraft, now the two most distant man made objects.  One powers the spacecraft that did the flyby of Pluto.  (BTW, that spacecraft is still working fine.)  Their other successes are too numerous to list.  But this application is the only one where "Isotope Power" is used routinely.
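An RTG's output falls off as its Plutonium-238 fuel decays (Pu-238 has a half-life of about 87.7 years).  A back-of-the-envelope sketch, with the starting wattage being an assumed round number rather than any particular spacecraft's specification:

```python
def rtg_power(initial_watts: float, years: float, half_life: float = 87.7) -> float:
    """Power remaining after `years` of decay, ignoring thermocouple wear."""
    return initial_watts * 0.5 ** (years / half_life)

# A hypothetical RTG starting at 470 W, checked after 43 years of flight.
print(round(rtg_power(470.0, 43.0), 1))
```

This gradual decline is why decades-old probes have to shut down instruments one by one as their power budget shrinks.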

Asimov discusses various other attempts at the peaceful use of radioactive materials.  There has been some successful use of radioactive materials in medicine.  That success continues but it is modest.  The other things he discusses never ended up going anywhere.  The public fear of radioactivity eventually blocked any chance of success they might have otherwise had.

He then returns to how to dispose of radioactive material.  It would be nice if the topic had advanced productively since Asimov's day.  But it hasn't.  The same old options are still on the table.  The same arguments are still advanced against each option.  The fact that radioactivity poses no unusual danger, and the fact that we are talking about a very small volume of material, are both still being ignored.

He then moves on to radioactive fallout and the fact that very tiny amounts could be detected, even back then.  He concludes from this that "it is virtually impossible for any nation to explode a nuclear bomb in the atmosphere without detection".  That truth eventually became self evident.  It led to the "Nuclear Test Ban" treaty, which outlawed above ground testing.

At the time that left a loophole.  Countries could explode bombs in caverns below the ground.  But seismology has grown in sophistication by leaps and bounds since Asimov's time.  It was then possible to detect the underground detonation of a medium or large sized nuclear weapon.  But what about a small one?

In Asimov's time it was thought that such a detonation stood a good chance of going undetected.  But, as I said, seismology has since gotten much better.  It eventually became apparent that even the detonation of a small nuclear weapon would be detected.  There was some nonsense thrown up postulating that there were circumstances under which a detonation could still go undetected.

But the arguments were nonsense and this eventually became apparent.  There is now a treaty banning underground nuclear explosions.  But the U.S. and a number of other countries have signed it without ratifying it.  Most conspicuous among the holdouts is North Korea, which never signed at all.  But that hasn't stopped all of its underground nuclear tests from being detected.

No one has succeeded in concealing an underground nuclear test and no one will.  But that doesn't mean that a country won't develop a nuclear weapon and test it.  North Korea did just that.  It just means that, if they do so but try to keep it a secret, everybody will still find out what they did.

Asimov then launches into a long discussion of the isotope Strontium-90.  It is highly radioactive.  It is particularly dangerous because it is readily absorbed into the bones of growing children.  In this situation, it doesn't take a lot to constitute a dangerous amount.

Another highly radioactive isotope is Iodine-131.  It is particularly dangerous because it is taken up and concentrated by the thyroid gland in the neck.  Again, as a result it doesn't take a lot to constitute a dangerous amount.  Asimov does not discuss Iodine-131.

You will typically see a lot of press coverage of Strontium-90 and Iodine-131 whenever there is an event that releases a lot of radioactive material.  These two materials were discussed extensively in conjunction with the Fukushima nuclear disaster, for instance.  Now you know why they rightly attract so much press attention.

And on that cheery note, . . .

Thursday, November 12, 2020

Cars - The State of Play in 2020

 I like to periodically return to subjects to see how things have evolved since I last wrote about them.  The most obvious example is my long running series, "60 Years of Science".  But it is far from the only example.  This post brings together updates that are joined by the fact that they all have to do with cars.  Let me start with self-driving cars.

I last opined on this subject three years ago.  Here's the link:   https://sigma5.blogspot.com/2018/01/robot-cars-now-new-and-improved.html.  For a long time the conventional wisdom was that Autonomous Vehicles, or AVs for short, would arrive in 2020.  Well, have they arrived?  Nope! The "wisdom" referenced in the post consisted of, in part, an article in Science, the premier scientific journal published in the U.S.  (It is #2 in the world behind the British journal, Nature).

A December, 2017 article in Science opined that AVs would appear "somewhere over the rainbow".  Elsewhere in the same article the author described widespread use of AVs as "still decades away".  Ouch!  Then, as now, I find that prediction too pessimistic.  So, where do we stand now that we are at the end of 2020?

Well, then and now there are several companies working on the subject.  Waymo, the Google subsidiary, is generally assumed to be in the best shape.  But, like everybody else, it is still running "demonstration" and "pilot" projects.  One problem is that a car that was part of one of these demonstration/pilot projects managed to kill a lady in Arizona.  It wasn't part of a project run by Waymo but the death cast a pall over the whole industry.

Several people have also died while driving Tesla cars in "Autopilot" mode.  A Tesla car operating in Autopilot mode is not a full up AV.  And the drivers were allowing Autopilot to drive their cars under conditions where they were supposed to be closely monitoring it.  Instead, they adopted a "hands off" attitude, literally.  But these are technicalities that don't influence the thinking of the general public.  The public is extremely concerned about the safety of AVs.

And that has caused everybody to go slow, everybody, that is, except Elon Musk, the CEO of Tesla.  He says that the new iteration of Autopilot will be capable of autonomous operation.  Details are skimpy so nobody knows quite what he means.  The general consensus is that the Tesla Autopilot feature lacks many of the capabilities necessary for true autonomous operation.  So, most people are in "let's see what he actually delivers" mode.

And, there is a great deal of confusion.  The SAE (Society of Automotive Engineers) has defined various "levels" from fully manual to fully autonomous.  Their "level 5" is fully autonomous.  And everybody agrees with the criteria that they have laid out.  That's not the source of the confusion.

The source of confusion is that people expect a level 5 vehicle to be able to operate completely autonomously in all conditions.  They expect it to operate in the daylight and at night.  They expect it to operate in good weather conditions and bad.  They expect it to work on city streets, on freeways, on country roads, and even off road.  An AV capable of that level of autonomy truly is decades away.

But the ability to safely and consistently handle all those conditions is not necessary in order for large numbers of AVs to be on the road and operating successfully.  There is general agreement that off road is the hardest to manage.  So, it will be the last to appear.  On the other hand, there is some disagreement as to whether city driving or freeway driving is the easiest to manage.

One's first impression is that freeway driving is the easiest.  And that's true in most conditions.  But there is a famous example of an AV test car being unable to exit a Phoenix freeway.  The other drivers on the road ganged up on the test car.  They repeatedly blocked it from finding a slot it could use to move into the exit lane.

Trouble exiting, or trouble changing lanes when you need to, is a common occurrence.  If there's a lot of traffic it is hard to find an opening without some cooperation from other drivers.  The solution to a lack of cooperation is to "barge" and force an opening.  But that can be dangerous.

With a little jockeying, and possibly some hurt feelings, it can almost always be pulled off.  But it may involve a game of "chicken" and that's not "safe and sane" driving.  It often involves judging the psychology of the other drivers.  You need to pick on someone who will back off rather than remaining assertive.

It is possible to "program" that kind of behavior (all but the psychological part) into the AV system.  But companies don't want to do that.  Besides being risky, it is bad for public relations.  Self driving cars adhere to speed limits and follow all the other rules of the road.  That makes them far more timid than the average driver.  But it is also the posture that is most reassuring to the public.

I don't know if the AV companies have solved the "Exiting" problem.  The incident I heard about happened a couple of years ago so they have had time to work on it.  The solution may have been as simple as going with unmarked vehicles.

But many autonomous designs feature various pieces of distinctive hardware poking out of the roof of the car.  Such a design obviates the need for the cars to be marked in order to be easily identifiable.  A car with a "taxi" bubble on the roof looks like a taxi regardless of whether it says "TAXI" on the side or not.

So, urban areas may turn out to be easier.  I'm sure the AV companies have gathered reams of data on this subject.  I expect them to first introduce AVs into whatever environment they think will be the easiest to get the cars to work in.

There have been some "fully autonomous" licenses issued by states, etc.  This allows companies to put AVs on the road in some places without having to have a test driver onboard.  But nothing of this sort has been rolled out on a scale large enough to attract the attention of the press yet.  (And it may be that COVID has been a major reason for the delay.)

I truly think that we will see some AVs in operation in limited areas by 2022.  The obvious choice is Uber/Lyft.  These companies know exactly where passengers are departing from and exactly where they want to go.  They can also give customers the option to opt in or opt out of a trip in an AV.  That allows them to keep things tightly controlled.

Uber and Lyft are very interested in AVs, as they very much want to eliminate the cost of the driver.  And they have the technology to handle whatever constraints AV operations throw at them.  If the AVs can only cover a limited geographic area, the dispatch system can respect that.  It can handle time-of-day or weather constraints.  It can handle passenger consent issues.  And it can handle changes in any or all of these constraints.

My thinking about AVs and Uber/Lyft mirrors almost everybody else's.  But so far, neither Uber nor Lyft has started so much as a pilot project to dispatch AVs for use by the general public in a limited area and under limited conditions.  I'm sticking to my "by 2022" prediction, but I have no special insight into this.

And that's how I expect the AV market to evolve.  It will start out only being used in small, limited ways.  If things go well then AV use will expand into more and different environments.  Eventually, the coverage will be broad enough to encompass a large number of trips but not all trips.  That's good enough.  I do think that it will be a long time before we see off-road AVs.  But that's okay.

Tesla's experience with Autopilot tells us a lot about how fast the psychology of the public can change.  A small set of drivers pushed it far beyond its actual limits.  That got several of them killed.  In spite of this, Autopilot is very popular.  It gets a lot of use, mostly in a responsible way.

But some people still push its use beyond what is safe, in spite of the well documented risks.  With familiarity comes comfort.  And it doesn't take very long.  People quickly came to trust Autopilot, in many cases more than was wise.  Right now, almost no one has any actual experience riding in an AV.  But, once they do, one trip will be enough for most people to become comfortable with the experience.

Next, I want to introduce a closely related subject.  Cruise Control got a lot more capable during the Obama Administration.  In older cars I have owned, all Cruise Control was capable of was maintaining a constant speed as the car went up hill and down dale.  The last car I bought was capable of a lot more.  And the Tesla Autopilot I discussed above is capable of still more.  All this is part of the path to a true AV.

But, as the capability of Cruise Control systems increased by leaps and bounds, lots of people figured out that it would be a good idea for these automated systems to communicate with each other.  Maybe the car in front could see an obstacle your car couldn't see.  And it is useful to know that a close-by car is going to change lanes or speed up or slow down or whatever.

The natural progression is for each car company to independently develop its own system and then expect the other car companies to come to it.  (I saw this play out dozens of times in the computer business.)  Except that companies adopting standards developed by a competitor was never going to happen.  (It almost never did in the computer business.)

The solution is to have a "neutral" standard that everybody would use.  That way all suitably equipped cars could talk to other suitably equipped cars regardless of the make.  The specification is called "V2V" (vehicle to vehicle) communication.
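To give a feel for what such a standard specifies, here is a purely illustrative sketch of a V2V-style message in Python.  The field names and the JSON encoding are made up for the example; the real specification (SAE J2735's "Basic Safety Message") defines its own fields, units, and a compact binary encoding:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SafetyMessage:
    # Illustrative fields only; real V2V messages define their own
    # names, units, and wire format.
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float     # meters per second
    heading_deg: float   # 0 = north, clockwise
    braking: bool

    def encode(self) -> str:
        """Serialize to JSON so any make of car could parse it."""
        return json.dumps(asdict(self))

# A nearby car broadcasts that it is braking hard.
msg = SafetyMessage("demo-123", 47.61, -122.33, 24.6, 90.0, True)
print(msg.encode())
```

The whole point of a neutral standard is that the receiving car doesn't need to know or care who built the sender; it just parses the agreed-upon fields.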

The Obama Administration quickly put together an umbrella group to help this along.  The Government wouldn't set the standard.  They would just facilitate the industry coming together to set the standard.  That way it would be an "industry" standard not a "government" standard.  Better for everybody that way.

All this was moving along nicely when we had an Administration change.  As an Obama initiative, the Trump people wanted nothing to do with it.  Also, in spite of the fact that the government wasn't setting the standard, it smelled like "government regulation" and they were averse to that sort of thing.  So they shut it down.

Caught in the same net was the V2X initiative.  The idea was to extend the V2V specification so that it didn't just cover vehicle to vehicle communication.  It would instead be "vehicle to everything".  A smart stoplight would be able to tell oncoming vehicles when the signal was going to change.  Or it could inform the vehicle that a pedestrian had requested a "walk" cycle.

Anything that might improve things could be included.  Weather alerts could be sent out.  Construction areas could signal cars what their extent was.  Warnings about ice on the road could be communicated.  Use your imagination.  This program was also effectively shut down.

We could be a lot further along at this point if standards had been adopted.  I'm sure the initiative will eventually come into being.  But the auto industry manufactures about fifteen million vehicles per year.  None of those vehicles are V2V or V2X capable at this point.  So the day when most vehicles include V2V and V2X capabilities will take just that much longer to arrive.  And the benefits will, therefore, take just that much longer to arrive.  So sad.  So unnecessary.

Okay.  On to my next topic, electric cars.  It turns out that I have never dedicated a blog post to this subject.  I'm pretty sure I have peripherally mentioned the subject.  But, in perusing the titles of all of the posts I have made, I don't see anything that is likely to have addressed the subject in any depth.  So, here goes.

In theory, electric cars are a great idea.  Electric motors (modern electric vehicles actually use AC motors fed by an inverter, but the distinction doesn't matter here) are capable of providing 100% torque at 0 RPM.  To translate that into English, torque is a measure of how hard the motor is pushing in an attempt to get the car to go faster.  0 RPM is the situation where you are stopped and you want to start going.  In short, electric motors are great if you love jackrabbit starts.

And being able to accelerate quickly (i.e. a jackrabbit start) is what people (mostly guys) use to decide whether a car is "powerful" or not.  Elon Musk made sure that Tesla cars accelerate quickly.  It was a great decision from a marketing point of view.

And there are lots of very powerful electric motors out there.  Diesel train locomotives aren't really "Diesel".  They are actually "Diesel-Electric".  The thing that is turning the wheels is actually an electric motor.  Giant cruise ships, some of the largest and heaviest moving objects in the world, also use electric motors to turn their propellers.  So, if it is possible to build a powerful electric car, what's the problem?

The problem is the battery.  Engineers put wimpy electric motors into cars because a powerful electric motor can quickly drain the battery.  There is a direct tradeoff between a vehicle that feels powerful and responsive and a vehicle that goes a long way between recharges.  (You can try to get the driver to back off but drivers never do.)

If batteries worked well then there would be no problem using powerful electric motors in electric vehicles.  Car makers put powerful gas motors and big gas tanks into cars all the time.  That gives "gas" cars lots of power and lots of range.  Unfortunately, this approach is not possible when it comes to electric cars using current battery technology.

And it's not like we haven't known how to make electric cars until recently.  The "Baker Electric" was a popular car in the early 1900s.  But its top speed was about 12 miles per hour and it didn't go far between charges.  Why?  Because it used "lead acid" batteries.  This is the same type of battery found in "gas" and diesel cars today.  But it is both large and heavy relative to how much energy it can store.

Modern hybrid and all-electric cars use "lithium" batteries.  They are similar to the battery in your mobile electronic device.  They are a big improvement over a lead acid battery.  But they still suck.  Lithium batteries are also very expensive.  The gas tank in a car costs a few hundred dollars at most.  The battery pack in a hybrid or all-electric car costs many thousands of dollars.

The battery pack is much larger than a gas tank.  It is also much heavier than even a full gas tank.  And it can't propel a vehicle nearly as far as a tank full of gas can.  The manufacturers of hybrid and all-electric vehicles are forced to make trade-offs.  And none of their options are good.

A smaller battery pack is cheaper, takes up less space, and weighs less.  The lower weight improves the electric vehicle version of fuel economy.  Performance is not determined by the size of the battery pack.  Instead, the most important factor is the size of the electric motors.  This all sounds good so what's the problem?

The problem is range.  In the same way a small gas tank restricts range, a small battery pack restricts range.  Powerful electric motors also restrict range.  They can drain the battery pack more quickly than small motors can.  Of course, small motors translate to wimpy performance.
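The tradeoff is just arithmetic: range is the usable energy in the pack divided by the energy consumed per mile.  Here is a back-of-the-envelope sketch in Python (all the numbers are illustrative assumptions, not specs for any real vehicle):

```python
# Back-of-the-envelope EV range estimate.
# All figures are illustrative assumptions, not real vehicle specs.

def range_miles(pack_kwh, consumption_kwh_per_mile):
    """Usable pack energy divided by energy used per mile."""
    return pack_kwh / consumption_kwh_per_mile

# A 75 kWh pack driven gently (0.25 kWh per mile):
gentle = range_miles(75, 0.25)      # 300 miles

# The same pack driven hard -- powerful motors used aggressively
# can roughly double per-mile consumption (0.50 kWh per mile):
hard = range_miles(75, 0.50)        # 150 miles

# A cheaper, smaller 40 kWh pack driven hard:
small_hard = range_miles(40, 0.50)  # 80 miles
```

Doubling per-mile consumption halves the range, which is exactly the "powerful motors drain the pack" problem described above.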

So vehicle makers go with the smallest battery pack (and often the smallest motors) that they think they can get away with.  Tesla cars come with a relatively large battery pack.  But that makes them expensive.  And, since Musk emphasized performance, if you drive a Tesla in "high performance" mode, it doesn't go very far before it needs a recharge.

You can get a lot of mileage between recharges with a Tesla.  But you have to put the vehicle into "economy" mode.  Then the vehicle delivers anemic performance.  And a Tesla is an expensive vehicle.  If you want to keep the cost down you have to go with a small battery pack.  That guarantees anemic performance.

And that has tilted the market towards hybrids.  These have a small gas engine and a very small battery pack.  The engine is used to recharge the battery pack on the fly.  If both the engine and battery pack are delivering power to the wheels, a hybrid can deliver so-so power.  On the other hand, if the battery pack has run down and only the power from the engine is available, then the car can barely get out of its own way.

But the combination delivers a lot of range.  Buyers have shown a marked preference for the extended range hybrids deliver over reasonably priced all-electric vehicles.  This trade off is forced by the current state of the art in battery technology.

Lithium batteries are far superior to lead-acid batteries.  But what is really needed is a battery that is as superior to a lithium battery as the lithium battery is superior to a lead-acid battery.  Unfortunately, scientists currently have no clue as to how to create such a battery.

What I find surprising is that there is a role that all-electric vehicles can fill right now.  That role is with respect to delivery vehicles and the like.  As noted above, all-electric vehicles have a limited range.  But it is more than sufficient to satisfy the needs of these vehicles.  So no technological or other improvement is needed.  Why we don't see all-electric delivery vehicles all over the place is a mystery to me.

Drivers are concerned, perhaps excessively so, with the battery running down at an inconvenient time.  But delivery vehicles are used for short trips.  They don't rack up that many miles in a single day.  If the place where they are parked at night is equipped with fast chargers (chargers that run at 220 volts rather than the 110 volts that most household plug sockets deliver) then they can be recharged overnight.

As long as they have enough range to handle the number of miles these vehicles put in each day, an electric vehicle should work fine.  Fleet operators know all the statistics concerning how many miles per day their vehicles rack up.  And they rack up a lot of miles in a year so fuel costs are very important.  To me, it sounds like a perfect fit.
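The overnight-charging arithmetic is easy to check.  Here is a rough sketch (the mileage, consumption, and charger figures are hypothetical, not real fleet data):

```python
# Can a delivery van recharge overnight at the depot?
# All numbers are hypothetical illustrations, not real fleet data.

def hours_to_recharge(miles_driven, kwh_per_mile, charger_kw):
    """Energy used during the day divided by charger power."""
    return (miles_driven * kwh_per_mile) / charger_kw

# Say a van covers 80 miles a day at 0.5 kWh per mile (vans are
# big and boxy), and the depot charger delivers about 9.6 kW:
hours = hours_to_recharge(80, 0.5, 9.6)
# 40 kWh / 9.6 kW is a bit over 4 hours -- easily done overnight.
```

Even doubling the daily mileage in this sketch keeps the recharge comfortably inside a normal overnight parking window.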

And there is an Uber/Lyft equivalent operating in this space. It's Amazon.  Amazon has a large delivery fleet.  It is tasked with getting packages "the last mile" from the fulfilment center to the customer's door.  Amazon is said to be working busily on an all-electric delivery vehicle.  But so far, it's all talk and no actual vehicles on the road.

I find that quite surprising.  Tesla is working on an "18-wheeler" long haul truck.  I find it difficult to believe that is practical given the current state of the art when it comes to batteries.  But it looks completely practical to do a delivery van.  Or, for that matter, any commercial vehicle that "comes home" every night, and that racks up a modest number of miles per day.

There are currently gas stations everywhere.  It takes five to ten minutes to "gas up".  More and more charging stations are being installed.  But all-electric vehicles don't recharge in five or ten minutes.  You are lucky if it can be pulled off in five hours.  That means that a successful all-electric vehicle needs to be based somewhere and not wander too far from base.

New houses now often feature a 220 volt circuit in the garage.  This is easy to do.  The circuitry is the same as that used to support electric stoves, dryers, and water heaters.  A house may come with the natural gas versions of these appliances.  But any commercial electrician knows how to string the necessary wiring.  It is easy to include in a new house.  It is harder, but usually not that much harder, to retrofit such a circuit into an existing house.

Almost all all-electric vehicles are now sold to consumers who can afford to own multiple vehicles.  They can use their all-electric vehicle for their short trips, and most trips are short trips.  When they occasionally need to go a long way they can use one of their other vehicles.

That is not, and will never be, a large part of the consumer vehicle market.  All-electric vehicles need to sell in large numbers if the cost is to be driven down.  Short haul delivery vehicles should enlarge the market substantially.  And that's good.

There are some other market segments that all-electric vehicles should eventually be successful in.  But their success will be limited until the main problem with electric vehicles is solved, the creation of a much better battery.

Finally, I want to talk about supercars.  These have been around for something like twenty years.  They certainly didn't exist when I was a kid.  The situation in the '60s was typical of what came before and what continued for some time after.

When I was a kid the fancy car was the Cadillac.  Sure, a few Hollywood Moguls and the like drove (or were driven in) a Rolls Royce.  But that was more of a fantasy than a reality.  And here's the thing.  A Cadillac didn't cost that much more than a regular car.

My dad bought a Plymouth in the '60s.  In the Chrysler lineup, the Dodge was a step up from the Plymouth, and the Chrysler was a step up from the Dodge.  The Chrysler was supposed to be comparable to a Cadillac.  But in people's minds there was only one Cadillac.

The Cadillac was the "top of the line" in its time.  But Cadillacs didn't cost all that much more than the Chevrolet, the "economy" entry in the General Motors product line at the time.

My father's Plymouth cost a little under three thousand dollars.  A Chevrolet of the period cost about two thousand dollars.  But the Cadillac only ran four to five thousand dollars.  The "top of the line" car of the time cost between two and three times what an "economy" car cost.

If we update the numbers to what they now are, an "economy" car runs between twenty and thirty thousand dollars.  Taking the high number, the old multiple of two to three means a "top of the line" car should now cost between sixty and ninety thousand dollars.  And you can drive a brand new Cadillac off the lot for sixty to ninety thousand dollars.  So that part tracks.
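The updating is nothing more than multiplication.  A quick sketch of the arithmetic (the dollar figures are the rough ones from the text, not market data):

```python
# Updating the '60s "top of the line" price ratio to today's numbers.
# Dollar figures are the rough ones from the text, not market data.
economy = 30_000     # high end of today's "economy" car range
low = economy * 2    # low end of the old two-to-three multiple
high = economy * 3   # high end of the multiple
# So a "top of the line" car should run $60,000 to $90,000.
```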

But ninety thousand dollars doesn't even get you in the door when it comes to super-premium cars now.  Those run hundreds of thousands of dollars.  And that doesn't even get you close to the top of the market.  Ultra-premium cars now run between a million and ten million dollars.

And it's now been a long time since Rolls Royce was as expensive as it got.  You can drive the Rolls Royce of your choice off the lot for only a few hundred thousand dollars.  It's still a super-premium car but it's not even close to being an ultra-premium car.

What happened, of course, is that the rich got a whole lot richer and these people needed something to spend their money on.  There are now lots of people who have fleets consisting of dozens of ultra-premium cars.

I don't know what they do with them.  Traffic speeds have stayed the same since the '60s.  You can easily rack up a very expensive speeding ticket in an economy car.  So, there's literally no place to take advantage of what justifies the price of these vehicles.

And they are not "lap of luxury" vehicles.  The "go to" luxury limousine is a stretched version of the Chevrolet Suburban (or another large SUV like the Ford Expedition).  These vehicles feature roomy interiors with lots of headroom.  The interiors can be customized to make them quite luxurious.  But, no matter how tricked out the interior is, you are still only talking hundreds of thousands of dollars.

All the ultra-premium vehicles, the ones that cost a million or more, are set up as sports or performance vehicles.  The interiors may be well appointed.  But they are cramped.  Few feature more than minimal storage space.  But they all look like they go super-fast.  And most of them actually do.

In the late '60s the goal of "hot car" guys was to have a car that could go two hundred miles per hour.  Few of the vehicles of the period, even custom ones like the Shelby Cobra, could touch two hundred even briefly.  And none of the "street legal" ones could sustain two hundred for any period of time.  That has changed.

The "production" versions of many of these cars are street legal and they can easily maintain a sustained speed of two hundred miles per hour.  And when lots of cars from lots of manufacturers can all do the same thing, it's time to up the ante.  This car can do two twenty.  No, that car can do two thirty.  And so on.  The goal recently became three hundred miles per hour.

And Bugatti was the first car maker to pull it off.  They were able to get a "prototype" version of one of their production cars to clock in at just over 300 MPH while conforming to all the conditions necessary to be officially credited with the record.

The car was slightly modified "for safety reasons".  I don't know if Bugatti will even sell a car set up the same way the "300 MPH" car was.  The actual production (i.e. non-prototype) version of the car comes equipped with a "limiter" that makes the car top out at 261 MPH.  And plan on laying out $3 million or more for the "limiter" version.  Who knows what the non-limiter version would cost.

Bugatti said 300 MPH was fast enough.  I don't understand that.  Because 500 KPH is only 311 MPH.  Surely, they could have gotten the car to go a measly 11 MPH faster.  And 500 KPH is a much more satisfying number.  This is especially true since most of the world runs on KPH rather than on MPH.  If it was me, I would have gone for 500 KPH.
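For anyone who wants to check the conversion, here it is in Python (using the exact definition of the mile, 1.609344 kilometers):

```python
# KPH <-> MPH conversion behind the 500 KPH target.
KM_PER_MILE = 1.609344  # exact definition of the statute mile

def kph_to_mph(kph):
    return kph / KM_PER_MILE

def mph_to_kph(mph):
    return mph * KM_PER_MILE

# 500 KPH is only about 311 MPH:
target_mph = kph_to_mph(500)    # ~310.7

# And the 300 MPH mark is only about 483 KPH:
bugatti_kph = mph_to_kph(300)   # ~482.8
```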

And, apparently, I'm not the only one who thinks that way.  An outfit I had never heard of before, even though it is located in Washington, the state I live in, decided to go for it.  The company name is SSC, short for Shelby SuperCars.  (And this Shelby is apparently no relation to Carroll Shelby of Shelby Cobra fame.)  They make a street legal car called the Tuatara that they thought would go 500 KPH.

They were right.  They too made a run under the appropriate conditions.  Their official speed was 508.73 KPH (316 MPH).  They claimed that the car was unmodified.  It did use "race" fuel, a type of fuel that is commonly found at drag strips and race tracks.  But that was it.

And there have since been claims of "discrepancies".  This has caused the company to promise to redo the run under the most stringent of conditions.  The entire production run is already sold out.  But, if you could order one, your very own Tuatara would only set you back a measly $1.6 million.  A bargain, wouldn't you say?