Thursday, February 20, 2020

A Proper Argument

Very recently it was vigorously brought to my attention just how far out of the mainstream I am.  I have views on how to properly conduct an argument that are at variance with those of a lot of people.  That is perhaps not surprising.  But I find that I hold views that are at variance with pretty much everybody's.  Someone whose thinking I thought was close to mine turned out to think in ways wildly different from mine.  That was both disappointing and deeply distressing.

I have put more than fifty years of thought and effort into my thinking on the subject of how to determine what's true and what isn't.  I have tried very hard to figure out what works and what doesn't in the context of this pursuit.  I think everybody should value the truth and am disappointed when I come across people who don't.  But it turns out my focus was too narrow.

A lot of my blog posts over the years have been attempts to correct the record.  If there is something floating out there that I think is wrong and others are doing a good job of getting the correct information out there, I leave it to them.  I try to stick with situations where there is either insufficient effort being made to correct the record or where everybody seems to be missing something important.

And I don't claim any monopoly on the truth.  I screw up all the time.  But I figure that if I have gotten something wrong then the only way someone can set me straight is for me to say what I think.  That way people know where I am off base and, therefore, where I need correction.  I take it as a plus when somebody sets me straight about something I have gotten wrong.

This seems to me to be a reasonable way to approach life.  And I know all about the white lie designed to soothe someone's feelings.  I know that a well placed white lie can smooth out many a social situation.  I am just bad at it.  I know this often holds me back in social situations.  I would dearly love to be much better at it.  I have just never been able to find a way that I can consistently pull off.

But there are social situations and there are social situations.  I try to not be abrasive in purely social situations.  But what about a discourse on the issues of the day?  Is disagreement permitted in these situations?  I would have said the answer to this question would be "yes".  Apparently, I am wrong.

There is a lot of discussion of "echo chambers" and "people talking past one another".  This is universally decried as being a bad thing.  I agree.  But what's the remedy?  Before going into that, at the risk of coming off pedantic, let me restate the problem.  The problem is that disagreement is not allowed.  Beyond that, no one directly engages with the other side's arguments.

The "fix" now becomes obvious.  People should stop engaging in the problematic behavior.  People should be allowed to disagree not only with what the other side says but with what their side says.  Further, both sides should understand and engage with the other side's argument.  And all disagreements should be with the argument, not the person making the argument..

I don't think there is much disagreement with anything I said in the previous paragraph.  (I will go into why there is not across the board agreement below.)  I have now outlined exactly how I proceed.  And I am in trouble for doing so.  Before continuing I am going to make a digression.

Lots and lots of people have outlined roughly the same "fix" as the one I outlined above.  But far too often they add something.  And this is most common when politics is being discussed.  They say some variation on "both sides do it".  This is misleading.

It is true that to some extent both sides do this.  But one side does it a lot more than the other side.  This "both sides do it" argument provides cover for the side that is doing it the most.  They can say "we are only doing it because they are doing it".  I don't think that's true, but as a tactic for getting off the hook, it works great.

Now I could be wrong.  When engaging with this "both sides do it" claim I say "here's why I believe they do it a lot more than we do it".  All you need to do to destroy my argument is to provide evidence that my claim is false.  But nobody ever does that.  Instead they get mad at me.  In other words, they treat me as being on the other side, and then they fail to engage with my argument.  I have a blind spot.  I am always surprised when this happens.

I think having the argument is critically important.  So there need to be "rules of engagement" for how to conduct a proper argument.  The rules I try to follow are:
  • Understand your argument and the evidence that goes along with it.
  • Understand the other person's argument and the evidence that goes along with it.
  • Engage with the evidence, the data, and the analysis.  Do it for both sides' arguments.
  • Do not confuse the argument with the person who is making it.
Stated this way, I think most people would agree.  But I find that often people don't behave that way.  I find the last item critically important.  I never confuse the argument with the person making it.  But this concept is honored in the breach far more often than I thought it was.  I wasn't expecting that.

I very carefully separate the argument from the person making it.  Just because I disagree with an argument someone has made, I don't think they are a bad person or stupid or ignorant.  I just think that in this specific case they have gotten it wrong.

On the other hand, maybe I have gotten it wrong.  If you point out the error of my ways then I am better off for it and that's how I see it.  I am well aware that not everybody operates the way I do.  But I still think it's the best way to operate and I am disappointed when someone who I thought operated that way doesn't.

And I know a big source of my divergence from the norm.  I spent a lot of time interacting with computers.  To state the obvious, computers are not people.  I find that I get along much better with computers than I do with people.  Computers play by rules I am comfortable with.  People often don't.

Computers are good at giving you instant and unambiguous feedback.  I will write and run a computer program.  It will either behave the way I want it to or it won't.  Here's the thing.  Computers don't hold grudges.  If I run a program and it messes up badly the computer, in effect, says "here's the story".  I look at it and try to figure out what I did wrong.  Then I fix it and try again.  The computer doesn't remember what happened last time.  It just notes what happens this time.

I have gone through this "try - fix" cycle so many times I long since lost count.  In each case I soldiered along until the program did what I wanted it to.  And the computer is a neutral arbiter.  It just follows the instructions my program contains and lets me know what happens.  It does not denigrate my looks or ancestry.  It just does what I tell it to.  If I told it right then the right thing happens.  If I told it wrong then the wrong thing happens.  But the computer doesn't even venture an opinion with respect to the right or wrong of what it was told to do.

I flourished in that atmosphere.  I never took it personally when the computer told me I got it wrong.  I just dug in and tried to do better next time.  I was also okay with not receiving praise from the computer when the program worked.  In short, there was an implicit "nothing personal" about how the computer behaved.

So, what's all this have to do with a proper argument?  Just this.  Computers taught me to get comfortable with criticism of my argument/program and to not take it personally.  In the real world, there is the argument and the person that is making the argument.  They are two different things.  Even though most people don't have the computer background I have I thought thoughtful people knew that.  Apparently I got that wrong.  Silly me.

I have no problem separating the person from the argument they are making.  Maybe it's my computer experience.  Maybe I am just wired that way.  But it just seems so obvious to me that I don't constantly call attention to it.  I think objecting to an argument is NOT objecting to the person making the argument.  But apparently way more people than I thought always see objecting to an argument they are making as some kind of personal attack on them.

It would be nice if this didn't matter but it does.  The Greeks made a distinction between "logic", an effort to determine what is right and true, and "rhetoric", the best tactics to use if you want to win an argument.  Their study of rhetoric focused on what was effective.  But along the way they identified both fair and foul ways to be effective.

If "winning is the only thing" then, by all means, use whatever works.  (These are the people who would not go along with the list of principles I outlined above.)  But we should all be able to identify when someone is using one of those foul means to advance their position.

One of the most common foul means is called the "ad hominem" argument.  "X" and "Y" have a difference of opinion.  "X" says "I'm right because of blah, blah, blah".  "Y" says "X is a bad person so you don't have to pay any attention to what he said".  If a person quickly resorts to ad hominem arguments I assume they are in the wrong unless I see substantial evidence to the contrary.

But, since nothing is ever as simple as I would prefer, sometimes an ad hominem argument is justified.  If a person says "I'm right because I'm an expert and I have studied the situation carefully" but an opponent presents evidence that the person is not an expert and has not studied the situation carefully, then it is appropriate to take the characteristics of the person making the argument into account.  This all assumes, of course, that the opposition provided credible evidence to back their claim up.

Ad hominem attacks are deployed in order to avoid engaging with the meat of a person's argument.  Unsubstantiated or easily disproven ad hominem attacks are the worst.  They should be routinely denounced.  But this almost never happens.  Instead, we are subjected to ad hominem attacks all over the place.  I try my best to make things better, not worse.

The problem is that in the present environment, bad behavior works.  The most generic version of this sort of thing is called "going negative".  When someone runs for public office they should advocate for their positions and qualifications.  If they instead say "my opponent is a bad person", that's going negative.  And this kind of attack is often extended to "my opponent and all of his supporters are bad people".

A couple of generations ago "going negative" was widely derided.  But it worked and it kept working.  It turns out that voters are happy to support candidates who go negative.  When it became apparent that going negative was effective everybody started doing it.  I never liked going negative but that is an argument I lost a long time ago.  How long ago?

An early proponent of going negative was Richard Nixon.  He used it successfully to get himself elected to the US House of Representatives.  He later used it to a lesser extent to win a Senate seat and then a slot as Vice President on a winning ticket.  In 1960 he decided to run for President.

He also decided to run a positive campaign.  He was obviously more qualified than his opponent, a relatively inexperienced Senator named Kennedy.  So why not win fair and square?  He lost.  If you look at the debates they engaged in, you will find that their positions were little different.  And conventional wisdom had it that Nixon won the debates if you talked to people who heard them on the radio.  But TV viewers gave the nod to Kennedy.  He looked handsome and confident.  Nixon looked swarthy and untrustworthy.

People didn't decide based on the quality of the candidate.  They decided based on likability and personality.  Nixon also didn't go negative when he ran against a far less well qualified candidate to become Governor of California in '62.  He lost again.  In '68 he went back to his "tricky dick" tactics and won the Presidency.  He won big in his reelection campaign in '72 by using even less savory tactics.

It is hard to fault Nixon for reverting to type.  Playing fair was not a successful strategy for him.  It was the voters who decided what worked and what didn't.  He just decided to go with what worked.  For a while the thought was that Nixon was an outlier.  But then more and more candidates went negative and won as a result.  Voters decided that going negative was okay.  If they had decided otherwise we would now be in a far different place.

I have known for a long time that going negative works when it comes to elections.  But that hasn't meant that I liked it.  And it has not worked when it comes to my vote.  But I do confess to being typical in that I make my decisions based on many factors.  I don't just go with the candidate that is the most honest or the most competent.  I do, however, accord those factors a lot of weight.  But elections are not the only thing we argue about.

This is not my first run at this subject.  Back in 2014 I wrote a blog post called "Faith Based Conflict Resolution".  Of all my posts, it is the one I am most proud of.  Here's a link to it:  http://sigma5.blogspot.com/2014/12/faith-based-conflict-resolution.html.  Looking back at it I find that I was too optimistic.  I just assumed the whole business about separating out the argument from the person making the argument was commonly accepted and just focused on the mechanics of the argument.  Before moving on, here's the meat of the argument:
Ultimately the only tactic that is effective in this environment [a "faith based" environment] is the power tactic.  And do we really want to decide all conflicts by a test of power?
A little later I partially answered that question.  I pointed out that my preferred approach, the scientific one, frequently leads to embarrassment.  Then I said:
Well, there's the whole "inconvenient" thing.  In the world of science it is frequently true that everybody is wrong.  An outcome where everybody is wrong is the only one that is worse on our egos than an outcome where we are wrong.
 I knew that this approach would not appeal to everyone.  After all, some people are more interested in being on the winning side than they are on getting the facts or the tactics right.  But I truly believed that there were lots of people who shared my "facts first" attitude.

But the whole "how should conflicts be resolved" issue presupposes that that it is possible to go about the business of disagreeing without it instantly and inevitably turning personal.  Lots of people are comfortable engaging in ad hominem attacks.  Turning all disagreements into something personal is something they are comfortable with.  Apparently more people are comfortable with ad hominem attacks than I thought.  That's bad.  I still think it is important to be able to disagree without it getting disagreeable.

So is all lost?  Actually, no.  I take hope from the most unlikely of sources, sports.

People take their sports and their favorite teams very seriously.  And you don't have to look far to find examples of fans getting totally out of control.  But mostly the opposite is true.  Sports bars are everywhere.  And they serve alcohol, which usually makes things worse rather than better.  But things getting out of hand is actually the exception rather than the rule.

On any day in any city you can find lots of sports bars full of rowdy fans.  And many of these bars are populated by heterogeneous groups.  One group consists of fans for one team or athlete.  Another group consists of fans of another team or athlete.  And they are often very vocal when it comes to their opinion.  And large quantities of alcoholic beverages are consumed.  But at the end of the day almost all of these rowdy fans go home peacefully and quietly.

This actually happened to me.  Many years ago my then girlfriend and I visited the "Cheers" bar in Boston.  Locals take the Sox very seriously and there was a game on between the Sox and the Seattle team when we arrived.  When patrons found out that we hailed from the land of the enemy they derided our team and exalted theirs.  But then the Sox lost quite unexpectedly.  Things could have gone south at that point but they didn't.  Instead, all sides were good sports about it.

So what's going on?  I'm not much of a sports fan.  But I do routinely skim the sports section of the paper.  You know what it's full of?  Facts and data.  Sure, there are opinion pieces.  But page after page is full of box scores, statistical breakdowns, and all kinds of detail about teams and players.  And ask the typical fan in the typical sports bar.  They can reel off statistics and figures until your eyes cross.

Sports fans are deeply knowledgeable about their passion.  Couple that with an unambiguous result.  This team or player won or lost.  The score was whatever.  Modern day sports coverage is deeply analytical.  And that means that sports fans are intolerant of BS.  Even the opinion columns have to back up their opinions with facts and data.  Fans get into arguments with other fans all the time.  But "'cause I say so" just doesn't cut it.

And, while a lot of trash talk goes back and forth, no one gets upset by it.  At the end of the day it's mostly "no hard feelings" and "see you at the next game" rather than "I now hate you from the bottom of my heart".  Sports fans, even drunk ones, have mastered the art of separating the argument from the person making the argument.  That makes them role models of a kind we badly need.

And I think the fact that sports and sports coverage are now so data driven is a major contributing factor.  Michael Lewis wrote a book called "Moneyball" way back in 2003.  The book discussed something called "sabermetrics", an effort to replace emotion with data when it came to evaluating baseball players.

Baseball fans will no longer tolerate a team that doesn't adopt a sabermetric approach.  And many other sports have since adopted similar approaches.  Fans now demand no less.  A team that now tries to take a "seat of the pants" approach can count on such a decision being greeted with scorn and derision from their fans and from the press.  So sports and sports fans have adopted a scientific approach to their fandom.

Sports is definitely the better for it.  And sports betting is about to become ubiquitous.  It will soon be easy for a fan to lose a lot of money by betting from the heart rather than from the head.  And this will provide additional inducement for fans to behave responsibly.

Sports is supposed to be less important than politics.  But more people invest more time and effort in sports than they do on politics.  Unfortunately, it shows.  Politics would be better off if it adopted the kind of data driven approach that is now common in sports.  Where's the call for a "sabermetrics of politics"?

And people who are not that into sports need to behave more like sports fans do.  Remember!  You heard it here first.

Friday, February 14, 2020

60 Years of Science - Part 16

This post is the next in a series that dates back several years.  In fact, it's been going on so long that I finally decided to bite the bullet and update the title from "50 Years of Science" to "60 Years of Science".  Same series, just an updated title.  And, ignoring the title change, this is the 16th entry in the series.  You can go to http://sigma5.blogspot.com/2017/04/50-years-of-science-links.html for a post that contains links to all the posts in the series.  I will update that post to include a link to this entry as soon as I have posted it.

I take Isaac Asimov's book "The Intelligent Man's Guide to the Physical Sciences" as my baseline for the state of science when he wrote the book (1959 - 60).  With the new year it is now fully sixty years since the book came out.  In these posts I am reviewing what he reported then noting what has changed since.  For this post I am moving on to a chapter he called "The Waves".  I will be reviewing two sections:  "Light", and "Relativity".

Light is fundamental.  As Asimov notes, among the first words in the bible are "let there be light".  But for a long time the nature of light was a complete mystery.  Two early ideas were that it was emitted by objects and that it was emitted by the eye.

CGI, Computer Generated Images, now a staple of the movie making business, was not a thing in Asimov's time.  A single CGI shop now has more computer power than existed in the entire world until some time in the '80s.  But one of the techniques employed by CGI is called "ray tracing".  And one way to do ray tracing is to start with the eye of the viewer and trace light paths back to the virtual objects in the CGI image.  So the latter idea is not as nutty as it now sounds.
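
To make the ray tracing idea concrete, here is a minimal sketch in Python.  It traces one ray per character from the viewer's eye out into a scene containing a single sphere, which is the simplest possible case.  The scene, the image size, and the "shading" are all invented for illustration; this is not how any particular CGI package does it.

    # A minimal sketch of eye-based ray tracing: rays start at the viewer's eye
    # and are traced out into the scene.  The single sphere, the image size, and
    # the shading rule are illustrative choices only.
    import math

    def hit_sphere(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest
        # positive t; return None if the ray misses the sphere entirely.
        oc = [o - c for o, c in zip(origin, center)]
        a = sum(d * d for d in direction)
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * a * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / (2 * a)
        return t if t > 0 else None

    width, height = 40, 20
    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0

    for row in range(height):
        line = ""
        for col in range(width):
            # One ray per "pixel", aimed from the eye through the image plane.
            x = (col / width) * 2 - 1
            y = 1 - (row / height) * 2
            direction = (x, y, -1.0)
            line += "#" if hit_sphere(eye, direction, sphere_center, sphere_radius) else "."
        print(line)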

Little was known about light.  It traveled in straight lines, hence ray tracing.  When it was reflected, say off of a mirror, the angle of the reflected light was equal but opposite to the angle of the incident light.  The transition between materials, say from air to water, caused light to bend or "refract".  That was pretty much it until Newton came along.
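
Those two rules can be written down in a few lines.  The sketch below, in Python, uses Snell's law for the refraction part (the quantitative rule, which I haven't named above), standard approximate indices of refraction for air and water, and a 30 degree incoming angle picked purely as an example.

    # Reflection leaves the angle unchanged (on the other side of the normal);
    # refraction bends the ray at a boundary according to Snell's law,
    # n1*sin(a1) = n2*sin(a2).  Indices and angle below are illustrative.
    import math

    def reflected_angle(incident_deg):
        # Angle of reflection equals angle of incidence, measured from the normal.
        return incident_deg

    def refracted_angle(incident_deg, n1=1.00, n2=1.33):
        # Snell's law, e.g. light passing from air (n=1.00) into water (n=1.33).
        sin_refracted = n1 * math.sin(math.radians(incident_deg)) / n2
        return math.degrees(math.asin(sin_refracted))

    print(reflected_angle(30.0))            # 30 degrees back out
    print(round(refracted_angle(30.0), 1))  # about 22.1 degrees, bent toward the normal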

Newton published the results of his experiments on light in a book called Optics.  Unlike Principia, Optics is easily understood by regular people.  The experiments he performed and analyzed are clearly described and elegantly analyzed.  This is the complete opposite of the situation that I found when I dived into Principia.  In Optics, it is easy to follow along with him and nothing he has to say is hard to understand.

Newton investigated light's characteristics by completely covering the windows in a room.  Then he poked a small hole in the covering, thus letting a narrow beam of sunlight enter his now darkened room.  He then placed objects, primarily lenses and prisms, into the beam to see what happened.  Using this simple and easy to understand (and reproduce) approach, he was able to determine many of the properties of light.

Both the lenses and the prisms bent light.  And, in the prism's case, it broke light up into a spectrum of colors.  Water droplets in the air do the same thing.  The result is a rainbow.  Lenses curve light so that it either converges to a point or diverges to a band much wider than the original sunbeam.

Newton proved that sunlight is actually composed of a mix of a whole lot of different colors.  He was even able to break light apart into its component colors and then put the colors back together again.  He did this by first guiding the sunbeam into a prism, which broke the light into colors.  He then guided the output of the first prism into a second prism that had been turned the other way.  This reassembled the colors back into white light.  He also observed that the degree to which a lens bent light depended on the color of the light.

All these observations and many others (I am just skimming the surface of what he so clearly lays out in Optics) make light sound like it is made up of waves.  Nevertheless, Newton concluded that it was actually composed of tiny particles he called "corpuscles" that traveled at very high speed.  (He decided that refraction was caused by a speed change in light corpuscles as they transitioned from one medium, say air, to another, say water.)  This "corpuscular" idea set off a battle over whether light was made up of particles or waves.  That battle took hundreds of years to resolve.  Moving on, . . .

Huygens was an early proponent of the "wave theory".  Waves have a "wavelength", the distance from one peak to the next.  If various colors of light have different wavelengths then many of the attributes of light can be explained.  Refraction, the bending of light, and the color dependence of refraction (light of different wavelengths is bent more or less, depending on its wavelength) could be explained this way.  But particles don't have a wavelength, or so everybody thought.

But the wave theory of light had problems, which I am not going to go into.  The wave people could knock holes in the particle people's analysis.  And the particle people could knock holes in the wave people's analysis.  Both sides believed that the holes in their theory could somehow be patched up but the holes in the other side's theory were fatal.  So the battle continued until new ideas were introduced.

One experiment that tilted thinking toward the wave theory was the "double slit" experiment pioneered by Young.  Light is passed through two narrow slits.  After that it strikes a screen, forming a pattern.  It is easy to do the analogous experiment with waves in a water tank, or with guns and a target.  One shows the pattern expected if light is waves.  The other shows the pattern expected if light is particles.

The "two slit" pattern shouted "waves".  The experiment was easy to do.  So lots of people tried various adjustments.  The variations allowed the computation of the wavelength of various colors of light.  The numbers turned out to be extremely small.

Fresnel was the first to show that if an object was about the same size as the wavelength of light (bacteria turn out to be too big) then a "diffraction" pattern results.  (His ideas also resulted in the creation of "Fresnel lenses".)  So the particle theory of light is dead, right?  Not so fast.  But first a digression (by scientists, not me).

Now that we know the wavelength of light it should be possible to determine the speed.  Galileo was the first to try.  Flashing lamps from the tops of hills, even hills that were miles apart, didn't work.  What did work was carefully studying when various moons of Jupiter got eclipsed.

Newton had provided a way to very precisely calculate orbits so the expected eclipse times could be very accurately calculated.  Careful observation by Roemer, looking for moons eclipsing earlier or later than Newton said they should, provided a number, 192,000 miles per second, that is not far off the true number.

Now that they knew what they were up against others were able to bring things down to earth.  If you shine a light between the teeth of a disk that is spinning very fast you can detect extremely small time differences.  Fizeau did just that in 1849.
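
The arithmetic behind the spinning disk is simple enough to sketch.  The figures below are roughly Fizeau's 1849 values (an 8.6 kilometer path, a 720 tooth wheel, blocking at about 12.6 rotations per second); treat them as illustrative rather than exact.

    # Fizeau's toothed-wheel arithmetic.  Light goes out through a gap, travels
    # to a distant mirror and back, and is blocked if the wheel has rotated by
    # half a tooth-plus-gap period during the round trip.  Setting those two
    # times equal gives roughly c = 4 * distance * teeth * rotation_rate.
    distance = 8633.0        # meters, hilltop-to-mirror distance (about 8.6 km)
    teeth = 720              # number of teeth on the wheel
    rotation_rate = 12.6     # rotations per second at which the return beam was blocked

    speed_of_light = 4 * distance * teeth * rotation_rate
    print(speed_of_light)    # about 3.13e8 meters per second, a bit high but close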

Foucault introduced some clever modifications that allowed him to come up with a speed of 187,000 miles per second.  His technique was precise enough that he was able to get different results if light traveled through different materials (water versus air, for instance).

Michelson added more improvements and measured the speed of light in a vacuum as 186,282 miles per second.  In Asimov's time "atomic clocks" and "masers" (the predecessor to lasers) were available.  The timing accuracy they provided permitted light to be used to measure distances.  In Asimov's time this trick could only be used to measure astronomical distances, millions and billions of miles.

Today we can use it to measure "down to earth" distances.  The speed of light is roughly one foot per nanosecond (billionth of a second).  It is now easy to count nanoseconds and, depending on how much money you have and how much effort you want to put into it, much shorter time durations.  So measuring distances of a few feet using light delay is now easy.  That's how GPS works.  And smartphones can easily do GPS.
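
Here is the arithmetic, with invented delays just to show the scale of things.

    # Distance-by-light-delay, the idea behind GPS.  The delays are invented.
    SPEED_OF_LIGHT = 299_792_458.0   # meters per second, exact by definition

    def distance_from_delay(delay_seconds):
        # Distance light covers in the measured one-way delay.
        return SPEED_OF_LIGHT * delay_seconds

    print(distance_from_delay(1e-9))      # about 0.30 meters: roughly one foot per nanosecond
    print(distance_from_delay(0.067))     # about 20,000 km: a GPS satellite's rough distance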

If light is a wave the question becomes what's waving?  Sound waves cause air to move.  What's moving in the case of light?  How about something called the "luminiferous aether"?  (This was often short-handed to "ether".)  Let's say some kind of "ether" permeates everything and light works by vibrating it?  This sounded reasonable so scientists went looking for it.  The stuff turned out to be quite elusive.

But its fundamental property was that light propagated through it.  So it should be possible to detect it by carefully measuring the speed of light in multiple directions.  (You can calculate what direction and speed air is moving in by very accurately measuring the speed of sound in multiple directions.)

The thinking of the time was that the ether was fixed in space and the earth moved through it.  The speed of the earth was tiny when compared to the speed of light.  But Michelson had refined his procedures to the point where it should be detectable.  And everybody knew that the earth moved.

He teamed up with Morley and started making measurements using something called an "interferometer".  The problem is that no matter how hard they looked the speed of light turned out to be the same no matter what direction you measured it in.  If the earth was moving through the ether this was impossible.  Oops.

Newton had developed the idea of a "preferred frame of reference" in Principia.  The idea was that in some sense the universe did not move.  He showed how to translate measurements in one frame of reference to another frame of reference in simple situations.  But he always assumed that there was such a thing as a fixed frame of reference that wasn't moving.  It was very hard to square the Michelson/Morley results with the idea of a fixed, preferred frame of reference.

The ether was supposed to provide the proof that such a frame of reference existed.  But the experiment that was supposed to once and for all demonstrate the existence of the luminiferous aether failed completely.  In Asimov's time the same experiment could be performed with a much higher degree of accuracy.  The results were the same.  We can now do it far more accurately than was possible in Asimov's time.  It still fails.  And that failure led to the subject of Asimov's next section, "Relativity".

The first step in moving away from what we now call "Classical" or "Newtonian" mechanics was taken in 1893 by Fitzgerald.  He posited that space "contracted" in the direction of motion.  This process became known as "Fitzgerald contraction".  Mathematically, the idea was a great success.  It used a simple mechanism to exactly match experimental results.  Since the fundamental stuff of the universe was affected it meant there was no experiment that would detect it.  That was unsettling.

A side effect of Fitzgerald's work was that, if what he was saying was true, then the speed of light in a vacuum was a universal speed limit.  Nothing could go faster.  That was perhaps even more unsettling.  And Lorentz extended Fitzgerald's work by saying the mass of a particle traveling at near the speed of light would increase.  In fact, it would go to infinity if it actually reached the speed of light.

This provided a mechanism for enforcing the speed limit.  F=MA, Newton's old formula, was how you "accelerated" particles.  If the Mass went to infinity then the amount of Force necessary to provide that last scintilla of Acceleration would also go to infinity.  Since infinite Force was not available, acceleration all the way to the speed of light was impossible.
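
The quantity doing all the work here is the Lorentz factor, one over the square root of one minus (v/c) squared.  Here is a tiny sketch showing how it blows up as the speed approaches the speed of light; the sample speeds are arbitrary.

    # The Lorentz factor: gamma = 1 / sqrt(1 - (v/c)^2).  The relativistic mass
    # is gamma times the rest mass, and gamma heads to infinity as v approaches c,
    # which is what enforces the speed limit.
    import math

    C = 299_792_458.0   # speed of light in meters per second

    def lorentz_factor(speed):
        return 1.0 / math.sqrt(1.0 - (speed / C) ** 2)

    for fraction in (0.1, 0.9, 0.99, 0.9999):
        gamma = lorentz_factor(fraction * C)
        print(f"v = {fraction:.4f} c  ->  gamma = {gamma:.1f}")
    # gamma is about 1.005 at 10% of c, 2.3 at 90%, 7.1 at 99%, and 70.7 at 99.99%.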

All this seemed totally nuts at the time.  (It still does.)  But results like these made it harder and harder to argue that what were now called the "Lorentz-Fitzgerald equations" were not only nuts but wrong.  And then there was the annoying fact that all the sensible ideas had been conclusively proved wrong by this point.  It was a good thing that experimental results came to the rescue just when they were needed most.

As noted above, there was no experiment that could detect the Fitzgerald contraction.  However, there were experiments that could be done to detect the absence or presence of the Lorentz effect.  Electrons could be accelerated to very high speed.  And the mass of a fast moving Electron could be measured.  Kaufmann did the experiment in 1900.  The Lorentz effect was real.

The "real world" that Newton had explored looked sensible.  It looked "natural".  This world that scientists were now uncovering looked truly weird and very unnatural.  If what was "natural" was that which conformed to the experience and intuition of people going about their every day lives, then scientists' understanding of how the "natural world" worked was diverging more and more as the twentieth century unfolded.

If the results of the Michelson/Morley experiment had been all that scientists were coping with, that would have been one thing.  Unfortunately for fans of the old understanding of "natural", there was more, much more.  Another problem cropped up almost immediately in what seemed to be an entirely unrelated place.

We are all familiar with the fact that when you heat something up it often glows.  And you can roughly estimate its temperature if you know what the material is and what color the glow is.  For good but obscure reasons to be explained below, scientists call this the "black body problem".

The color/temperature problem can be divided into two parts:  the type of material and the color. We can assign a magic number to the type of material.  If we back this number out of the calculation then the rest of the problem always looks exactly the same.

So scientists picked a mythical "black body" as their name for the "always the same" part.  They then developed tables of magic numbers for specific materials.  They could then back this number out and consult their "black body" calculations for the rest.  That greatly simplified the search for a theory to explain the behavior of their mythical black body.

Black body theory came together quickly after that.  If heat was vibration then they had a formula for translating that vibration into color.  Temperature X should produce color Y.   And it worked, mostly.  But the actual situation was more complex.  Materials did not all vibrate at the same frequency.  Instead there was a frequency distribution.  That resulted in a color distribution.  But there was a reference temperature and a reference color so everything could be tied together.

And the main part worked.  Experiment tied a reference temperature to a reference color just like it was supposed to.  The problem was with the distribution.  It wasn't right.  The details are complex so I am going to skip them.  Instead I am going to cut to the chase.  A man named Max Planck came up with an idea that fixed the distribution problem.  It's just that his solution was one of those "worse than the problem" solutions.

He decided that the energy involved was "quantized".  It was natural to think that things were vibrating at every frequency, more at this frequency and less at that frequency.  But there would be some vibration at every frequency, even if it was not much.  Planck said "no".  Only certain frequencies were permitted.  If you did the calculations based on this idea then everything came out exactly right.
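
Here is a rough sketch of what "came out exactly right" means.  It compares the classical (Rayleigh-Jeans) black body formula with Planck's formula at a few frequencies; the temperature and the frequencies are arbitrary examples, and the constants are the standard ones.

    # The classical (Rayleigh-Jeans) formula and Planck's formula agree at low
    # frequencies, but the classical one grows without limit at high frequencies
    # while Planck's falls back toward zero, matching what is actually observed.
    import math

    H = 6.626e-34    # Planck's constant, joule-seconds
    K = 1.381e-23    # Boltzmann's constant, joules per kelvin
    C = 2.998e8      # speed of light, meters per second

    def classical(nu, temp):
        return 2 * nu**2 * K * temp / C**2

    def planck(nu, temp):
        return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * temp))

    temp = 5000.0    # kelvin, roughly the surface temperature of a cool star
    for nu in (1e12, 1e14, 1e15, 3e15):
        print(f"{nu:.0e} Hz: classical {classical(nu, temp):.3e}, Planck {planck(nu, temp):.3e}")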

The problem was that scientists could think of no reason why only certain frequencies were permitted while others were forbidden.  This was another step away from natural and toward weird.  Trust me, if scientists could have thought up something that worked and was natural, Planck's ideas would have immediately been discarded.  But they couldn't.

Planck's idea was extremely simple.  He said there was a fundamental unit of energy he called a "quantum".  Everything had to be done in quanta, exact multiples of that fundamental unit.  It turns out that Planck's quantum is extremely small.  So color or temperature can take on a lot of values.  As a result things look like a smooth or continuous variation is present.

It's only if you look hard that you see that things are actually not smooth.  And the fact that the quantization of black body radiation only shows up when you look very closely is why it was not noticed at first.

But that didn't make quantization any less necessary in order for the math to work.  And if it had only been this one small corner of physics that got the quantum treatment then we wouldn't be talking about it.

But this quantum business turned out to be ubiquitous in the world of the small, the world of atoms and subatomic particles.  (That's why the field is now called "quantum mechanics".)  It's now almost impossible to get away from it.  And that means that every part of the world of the small is weird -- really, massively, seriously, weird.

Planck's quantum theory was announced in 1900.  At first it didn't make waves.  Nobody liked it.  Everybody wanted it to go away.  But after Einstein published several papers in 1905 it was too late.  Einstein attacked a couple of seemingly different problems.

One is called the "photoelectric effect".  If you shine a light on the surface of a metal you expect it to kick things like electrons and photons of light loose.  That happened but it didn't happen the way people thought it should.  It was another distribution problem.

The details aren't that hard to understand but it would take too long.  So, I am again going to cut to the chase.  Einstein, in one of his 1905 papers, applied "quantum theory" to the problem and out popped a solution that exactly matched the experimental results.  (He later got a Nobel prize for this paper and not Relativity.)  All of a sudden, this "quantum" business was a lot harder to ignore.
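
The relation his paper boils down to is simple enough to sketch.  Each photon carries energy h times frequency, and an electron only comes loose if that single-photon energy beats the metal's "work function"; the work function below (roughly sodium's) is just an example value.

    # Einstein's photoelectric relation: maximum electron energy = h*f - work function.
    # Brighter light ejects more electrons but does not make any one of them more
    # energetic; below the threshold frequency, no electrons come out at all.
    H = 6.626e-34        # Planck's constant, joule-seconds
    EV = 1.602e-19       # joules per electron-volt
    WORK_FUNCTION = 2.3  # electron-volts, roughly sodium (illustrative value)

    def max_electron_energy_ev(frequency_hz):
        photon_energy_ev = H * frequency_hz / EV
        kinetic = photon_energy_ev - WORK_FUNCTION
        return max(kinetic, 0.0)   # below threshold, no electrons at all

    print(max_electron_energy_ev(4.0e14))   # red light: 0.0, below the threshold
    print(max_electron_energy_ev(1.0e15))   # ultraviolet: about 1.8 eV per electron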

Speaking of Relativity, in another paper published in 1905, he introduced what we now call "Einstein's theory of Special Relativity".  In it he introduced the concept of the "Photon".  A Photon sounds like a particle and under some circumstances Photons act like particles.

But a Photon also has a wavelength so Photons act like waves in other circumstances.  In reality, a Photon is neither pure particle nor pure wave.  It has some attributes of either and some attributes of neither.  It's just its own thing.  And, by the way, photons are quantized.

This reformulation of how light worked into this entirely new thing, the photon, allowed Einstein to provide a single coherent explanation for all that was then known about light.  Since everybody -- well, all the scientists working in the area -- had been tearing their hair out because everywhere they looked, they saw problems, it was hard to ignore what Einstein had come up with.

As part of Special Relativity Einstein turned something inside out in an unheard of way.  As I noted above, if you do the Lorentz-Fitzgerald thing you come up with a reason why things can't go faster than the speed of light in a vacuum.  But this seemed true "purely as a practical matter".

Einstein turned this inside out.  He said it was a fundamental characteristic of the universe that nothing could move faster than the speed of light in a vacuum.  From that principle he showed how you could derive the Lorentz-Fitzgerald equations.  They followed from the absolute speed of light limit.  It was not the other way around.

This inversion might have seemed unimportant.  But Einstein used his view of how things worked to show how a bunch of other things followed from it.  One of these things was the effect on time.  Until Einstein everybody assumed that there was something called "absolute time".

Time worked the same everywhere, right?  It might be hard to synchronize clocks in two places but that was just a practical matter.  If you got it right you would see that all clocks in all places could be used to calculate the time and the time would be the same everywhere.

Einstein said the fact that time didn't always flow at the same speed meant that a properly functioning clock didn't always run at the same speed.  He expanded the Lorentz-Fitzgerald equations to include time as well as space and mass.  He then showed how to translate from one frame of reference to another.  There was no such thing as an "absolute frame of reference".

Inherent in the idea of an absolute frame of reference was the idea of absolute time.  But if time could be sped up or slowed down then there was no such thing as absolute time.  And that meant that there was no such thing as an absolute frame of reference.  All frames of reference were always relative.

Einstein's Special Relativity equations showed how to translate between any two frames of reference as long as neither of them was accelerating. In Asimov's time there was still carping in the scientific community about this whole business of time speeding up and slowing down.  It just seemed so unnatural.  There was some evidence that the speed up - slow down was true at that time, but only some.

Now we can measure time much more accurately.  This pertains to both long and short periods of time.  As a result we can easily measure time with enough accuracy to confirm that it behaves exactly as Einstein predicted.  The most obvious example is GPS.

GPS satellites include code to adjust for the fact that they move around the earth at a relatively high speed.  Ignoring this "relativistic effect" would quickly cause the GPS system to get the time wrong.  And that would produce easily detectable location errors.
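
A back-of-the-envelope version of those corrections is easy to do.  The orbital radius and constants below are standard approximate values; the speed effect makes the satellite clock run slow by about 7 microseconds a day, the altitude effect (a General Relativity effect, which I get to below) makes it run fast by about 46, and the familiar net figure is roughly 38 microseconds a day.

    # Rough relativistic corrections for a GPS satellite clock.
    import math

    C = 2.998e8          # speed of light, m/s
    GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6    # Earth's radius, m
    R_GPS = 2.656e7      # GPS orbital radius, m (about 20,200 km altitude)
    DAY = 86400.0        # seconds

    v = math.sqrt(GM / R_GPS)                      # orbital speed, about 3.9 km/s
    velocity_shift = -(v**2 / (2 * C**2)) * DAY    # clock runs slow: about -7 microseconds/day
    gravity_shift = (GM / C**2) * (1/R_EARTH - 1/R_GPS) * DAY   # runs fast: about +46 microseconds/day

    net = velocity_shift + gravity_shift
    print(net * 1e6)            # roughly +38 microseconds per day
    print(net * C / 1000)       # left uncorrected, roughly 11 km of ranging error per day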

Moving from the practical to the esoteric, scientists now have access to clocks that are so accurate that raising one of them a single additional foot above the ground is enough to make a measurable change in how fast time flows.  Proof of the veracity of Special Relativity is now unavoidable.

Ten years later in 1915 Einstein came up with General Relativity.  All you need to do Special Relativity is High School Algebra.  That is well within the capabilities of many millions of Americans.  The mathematics of General Relativity are beyond the abilities of all but the most capable mathematicians.  I freely admit it is beyond me.  So we are not going to go there.  But some of the key ideas of General Relativity are easily understood.  They are just super weird.

Remember when Einstein did that inversion and said the constancy of the speed of light was not the effect but the cause?  Well, he did the same thing with Gravity.  Newton said that absent some kind of kick (rocket motor) or drag (friction) things went on at a constant speed in a straight line.  Einstein said that was completely true.  So why do planets like Earth circle the Sun rather than going in a straight line?  Because space is curved in such a way that a "straight line" causes the Earth to orbit the Sun.

This is again one of those things where looking at things this way gets you to the right answer.  But it sounds like a trick or shortcut, rather than how the world really works.  But over time evidence has built up that this actually is the way the world really works.

Special Relativity showed how to translate from one frame of reference to another as long as acceleration was not at play.  General Relativity shows how to translate from one frame of reference to another when acceleration is at play.  Not surprisingly, the math gets Hella complicated.

And scientists would have run from General Relativity except that Einstein was able to make predictions.  (He had this fantastic track record but still, the theory was beyond weird and the math was obscenely difficult.)  I am only going to cover two of those proofs.

Newton had shown how to calculate the orbits of planets.  But predictions based on Newton's equations yielded the wrong answer when it came to Mercury.  The difference between prediction and reality was small.  But astronomers had been tracking it for decades as it got larger and larger.  Einstein was able to apply his equations to get the answer that exactly matched observation.
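
For the record, the number involved is about 43 extra arcseconds of perihelion shift per century.  Here is the standard first-order General Relativity formula for it, evaluated with standard orbital values for Mercury; this shortcut is a sketch, not Einstein's full calculation.

    # Extra perihelion advance per orbit: delta = 6*pi*G*M_sun / (a * (1 - e^2) * c^2).
    # Converting to arcseconds and adding up a century's worth of orbits gives the
    # famous "43 arcseconds per century" that Newtonian gravity can't explain.
    import math

    GM_SUN = 1.327e20        # m^3/s^2
    C = 2.998e8              # m/s
    A = 5.79e10              # Mercury's semi-major axis, m
    E = 0.2056               # Mercury's orbital eccentricity
    ORBITS_PER_CENTURY = 36525.0 / 87.97   # Mercury's orbital period is about 88 days

    advance_per_orbit = 6 * math.pi * GM_SUN / (A * (1 - E**2) * C**2)   # radians
    arcsec_per_century = math.degrees(advance_per_orbit) * 3600 * ORBITS_PER_CENTURY
    print(arcsec_per_century)   # about 43 arcseconds per century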

The problem with Mercury's orbit was a well known one.  Maybe he cooked the books knowing the answer that needed to pop out at the end.  But what if he made a prediction about something that no one had imagined was even possible?  He predicted that in a certain situation something would be a certain amount.  If asked, anyone else would have predicted that nothing would happen.  The answer would, in effect, be zero.

Einstein predicted that if a photon from a star passed very close to the Sun on its path to Earth the path would bend by a certain specific amount.  This would cause the star to appear to be out of place for a short period of time.  A star was found and a handy eclipse allowed the confirming observation to be made.
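
The "certain specific amount" works out to about 1.75 arcseconds for a ray grazing the edge of the Sun.  Here is the standard formula, evaluated with standard solar values.

    # Deflection of light grazing the Sun: deflection = 4*G*M / (c^2 * R),
    # with R the Sun's radius.
    import math

    GM_SUN = 1.327e20     # m^3/s^2
    C = 2.998e8           # m/s
    R_SUN = 6.96e8        # meters, the Sun's radius

    deflection_radians = 4 * GM_SUN / (C**2 * R_SUN)
    deflection_arcsec = math.degrees(deflection_radians) * 3600
    print(deflection_arcsec)   # about 1.75 arcseconds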

Asimov doesn't even mention Black Holes.  In his time they were considered a quite speculative possible consequence of General Relativity.  But at that time there was no solid evidence that they actually existed.  Gravity Waves were another possible consequence of General Relativity.  But there was no solid evidence for their existence back then either.

A few years later a celestial body called Cygnus X-1 was investigated.  Many astronomers concluded that it contained a black hole at its center.  But for a long time this conclusion was controversial.  But we keep getting better and better at hunting for and finding Black Holes.

We now believe that many, perhaps all, large galaxies contain a supermassive Black Hole at their center.  Our Milky Way contains one that is several million times the mass of our Sun.  Andromeda, a neighboring galaxy, is thought to have one that is several billion times the mass of our Sun.

And we have recently been able to detect gravity waves.  The first detection involved the merger of two large Black Holes into one.  Since then dozens of Gravity Wave events have been detected.  But there is an even more interesting post-Asimov development in the General Relativity area.

Einstein applied General Relativity to the fate of the universe.  In his time the universe was assumed by most cosmologists (the people who studied this question) to be in a "steady state".  But evidence piled up that it was evolving from a Big Bang (now estimated to have been about 13.8 billion years ago) through several stages to its present state.

Einstein couldn't get a steady state to come out of his equations until he added a "Cosmological Constant".  He later thought this was an idiotic idea.  It soon became clear that the universe was expanding.  (That was one reason Einstein thought that the Cosmological Constant was a bad idea.)

But, if the Cosmological Constant is set "just right", the expansion of the universe will stop, but only after an infinite amount of time.  Another value causes the universe to expand indefinitely.  Still another causes it to expand for a while, then collapse back to a "Big Crunch".

For a long time it looked like the universe was expanding at that "just right" speed that would cause it to expand forever.  Now it looks like it expanded slowly for a while but is now expanding faster and faster.  All this can be modeled by fiddling with Einstein's much maligned Cosmological "Constant", which may not even be constant.

Needless to say, scientists now take Relativity, both Special and General, as givens and try to expand on them in various ways.