
Thursday, October 21, 2021

Windows 11

 The last time I posted on updating to a new version of Windows was way back in 2014.  Here's the link:  Sigma 5: Windows 8.1 - try 1.  Microsoft has a mixed record when it comes to Windows versions.  Many of them are successful, but some of them are failures.

Windows 7 was a big success.  Then Microsoft made a giant leap with Windows 8.  It flopped.  They tried to recover with Windows 8.1.  It too flopped.  As I noted in my long ago post, I ended up giving up on Windows 8.1.  I test drove it briefly, then I went back to Windows 7.  (I never even tried Windows 8.)

Windows 8 was supposed to be a "swings both ways" release.  It was supposed to work well on desktops and laptops, machines that come with a keyboard and a "pointing device" (mouse or touchpad).  Windows 7 worked very well on those kinds of machines.  Windows 8/8.1 was supposed to work equally well on them.

But it was also supposed to work well on tablets, touch screen devices that lack a keyboard and use the touch screen as a substitute for the pointing device.  Windows 7 worked less well on those kinds of devices.  The idea with 8/8.1 was to provide a common software environment that software developers could write to, one that would smoothly span both environments.

Windows 8/8.1 was a technical success but it was a commercial failure.  Developers could now write to a common standard.  But, since customers stayed away in droves, why bother?  Microsoft eventually recovered with Windows 10.  It dropped the annoying features of 8/8.1.  It added some improvements, but, for the most part it was perceived as a refresh of Windows 7 that contained little new.

That was an accurate perception.  Windows 10 was a popular success.  I upgraded my machines to it, and so did lots of other people.  That success left Microsoft wondering how to move to a version of Windows that would succeed where 8/8.1 had failed.

The new version would have to work well both in a keyboard/mouse environment and in a touch-screen environment.  But, this time, it also had to be popular with customers.  Microsoft spent several years trying to come up with such a solution.

But before moving on to the solution that Microsoft has finally come up with, it is worthwhile looking at a very substantial change that Microsoft made with Windows 10.  It wasn't a technical change.  It was a change in the way they did business.

Previously, if you wanted to move to the new version of Windows, you had to pay an upgrade fee.  For instance, if you were running Windows 7 and wanted to move to 8/8.1 you had to buy an "upgrade" license.  The upgrade version, which was much cheaper than the regular version, included code that validated that you had an authentic Windows 7 license before permitting the installation to proceed.

This business of offering a full price "regular" version and a heavily discounted "upgrade" version of each new release had been the way Microsoft did business all the way back to the days of the upgrade from DOS 1.0 to DOS 1.1.  But with Windows 10, Microsoft decreed that if you had a license for Windows 7, 8, or 8.1 you could upgrade for free.

That's not what Microsoft initially said.  They initially said, "if you upgrade within the first six months - it's free.  But if you wait, it will cost you."  But Microsoft never enforced the "it will cost you" part.  Even if you performed the upgrade long after that original offer had expired, you were allowed to upgrade for free anyhow.  Why would Microsoft do that?  It turns out that there are sound business reasons for what they did.

Most people get their Windows license by buying a new computer.  New PCs always come with one or another version of Windows pre-installed.  You used to need to enter an "Activation Code" the first time you used the PC, but not any more. It is now preloaded into BIOS at the factory.  Now, the installation process checks for it, finds it, validates it, and that's that.

You used to need to enter a special "upgrade" Activation Code as part of the process of upgrading to a new version of Windows.  But no more.  The Activation Code for the older version of Windows works for the newer version of Windows.  That makes it easier (and cheaper) to upgrade Windows to a newer version.  But most people don't bother.  And that leads to a lot of people running older versions of Windows.  And that has financial implications for Microsoft.

This new way of doing business means that Microsoft loses some revenue, the money that upgrade licenses used to bring in.  But when people don't upgrade, Microsoft ends up with lots of customers out there who expect it to support two, three, or even more versions of Windows at once.  That entails substantial cost, likely considerably more than the revenue brought in by selling upgrade licenses.

But wait, there's more.  Viruses and malware started out as a modest problem.  But they have grown and grown and grown.  And Windows developed a reputation for being easy to hack.  That was bad for Windows' (and Microsoft's) reputation.

And that hit to Microsoft's reputation had a detrimental effect on Microsoft's earnings over the long term.  So Microsoft has put more and more effort into making Windows harder to hack as the years have passed.  Windows is now far harder to hack than it used to be.  But, in many cases the "fix" to Windows involved a substantial rewrite.  Microsoft had plenty of money, so cost wasn't the problem.  But the necessary changes had a ripple effect.

No problem.  Release a "new and improved" version of Windows.  But what if lots of people stick with the old, flawed version?  Eventually, the business case for giving the "upgrade" version of Windows away for free became compelling.  Microsoft put the original version (1507) of Windows 10 out in 2015.  Since then, if you had a PC that ran Windows 7, or anything newer, upgrades to Windows have been free.

That has not caused everyone to upgrade to the new version.  But it has caused a lot more people to do so than otherwise would have.  And Microsoft invested a lot of effort in making sure that old hardware could run the latest version of Windows 10.  I recently did some work on a PC that was built in about 2007 and was originally loaded with Windows Vista.  (That's the version that came after "XP" and before "7".)  Windows 10 runs like a top on that machine.

And we can see this playing out all over the place.  Hackers need a way in to, for instance, install Ransomware.  Time after time their way in has involved exploiting a well known weakness in an older version of Windows.  They wouldn't have been able to get in if the computer was running Windows 10, but it wasn't.

Many organizations (schools, hospitals) don't have the staff necessary to keep on top of upgrades.  Profitable corporations, especially ones that use computers to control machinery, have the money and staff necessary to keep their PCs up to date.

But they use some of their PCs to run vendor supplied software that is used to control the vendor's hardware.  And if the vendor doesn't update its software to work on the newest version of Windows, something that happens far too often (and here I can speak from personal experience), then the corporation is forced to run an old version of Windows on some of its PCs.

This is a widespread problem that no one talks about.  The computers on the Deepwater Horizon, the oil drilling platform that blew up and sank in the Gulf of Mexico more than a decade ago, were running a very old version of Windows out of necessity.  That disaster spilled millions of gallons of nasty crude oil into the Gulf.

Hacking was not involved in that disaster.  But computer problems did contribute to the disaster in a big way.  The vendors who provided most of the machinery used on the platform didn't update their drivers.  And that meant that key computers were running an old version of Windows that crashed regularly.  When things started to go wrong the key computer was in the middle of crashing.

Ransomware attacks, not exploding oil drilling platforms, have been much in the news recently.  But often the root cause is the same.  Hackers need a way to get inside and install their malware.  And old versions of Windows have often been their way in.

Once they are in, they can steal data and encrypt files, even if the data and files are located on servers running the latest version of Windows.  Stolen data and encrypted files are the foundation of a successful Ransomware attack.

But back to the subject at hand.  This problem of forcing corporations to sometimes run older versions of Windows on some of their computers has put Microsoft in a bind.  The "the upgrade is free" change in business practice has at least let Microsoft say "we provided you with a free upgrade to a version that didn't have the vulnerability".

Unfortunately, although the statement is true, it only goes so far.  Microsoft does not want to badmouth its customer base any more than necessary.  So, Microsoft ends up sometimes having to step in and help companies that get hit.

But their costs (and reputational hit) are still lower than they otherwise would be.  Since the upgrade is free, many businesses have upgraded many computers that would otherwise still be running old versions of Windows.  And that brings us to Windows 11.

Upgrading to Windows 11 is free to anyone who is currently running Windows 10.  Eventually, all Windows 10 customers will be offered the free upgrade through Microsoft's "Windows Update" feature.  If you don't see that option, and you don't want to wait, then click on this link:  Download Windows 11 (microsoft.com).  There, you will be given several options.

The simplest is to click the "Download Now" option in the "Windows 11 Installation Assistant" section.  This will download a small "helper" program.  If you run that program and select the correct options, it will download Windows 11 from the Internet and use it to upgrade your computer; no intermediate steps are required.  Warning:  You will need Administrator privileges to do this.  (BTW, for thirty days Windows 11 will include a "Revert" option that will let you revert your machine back to Windows 10.)

However, there is a catch.  A wide variety of hardware could be upgraded to Windows 10.  On paper, it looked like the same would be true for Windows 11.  The "specs" that were widely bruited about before Windows 11 was released were modest.  A Windows 11 capable machine must have  4 GB of RAM and about 70 GB of free disk space on the "C:" drive.  A CPU speed requirement was also listed, but it was so modest that any kind of PC would qualify.
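Here is a minimal sketch, in Python and Windows-only, of checking the two easy numbers from the paragraph above: installed RAM and free space on the "C:" drive.  It uses the standard library plus the Win32 GlobalMemoryStatusEx call, covers only those two items, and is an illustration rather than an official compatibility check (the stricter CPU and TPM requirements discussed below need other tools).

    import ctypes
    import shutil

    class MEMORYSTATUSEX(ctypes.Structure):
        # Field layout required by the Win32 GlobalMemoryStatusEx call.
        _fields_ = [
            ("dwLength", ctypes.c_ulong),
            ("dwMemoryLoad", ctypes.c_ulong),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]

    def total_ram_gb():
        # Ask Windows for total physical memory, in GB.
        status = MEMORYSTATUSEX()
        status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
        ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
        return status.ullTotalPhys / 1024**3

    def free_disk_gb(drive="C:\\"):
        # Free space on the system drive, via the standard library.
        return shutil.disk_usage(drive).free / 1024**3

    if __name__ == "__main__":
        print("RAM:       %.1f GB (4 GB is the published minimum)" % total_ram_gb())
        print("Free on C: %.1f GB" % free_disk_gb())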

But when Windows 11 became available on October 5, 2021, it turned out that there were additional, far more stringent requirements.  One was easy enough to meet.  From the start, hard disks could be divided into "partitions".  Each partition functioned like an independent disk drive.  This capability required a "Partition Table" to tell the software where things started and ended.  The Partition Table had to be put somewhere.

The early version put it in a place called the MBR - Master Boot Record.  The good news / bad news is that the MBR could hold software.  For instance, this was a handy place to put "Device Driver" code that might be necessary to handle the particular make and model of the Hard Disk on your computer.

But hackers quickly figured out that the MBR was also a great place to put malware.  Placing malware in the MBR let it load and put protections in place before the operating system (Windows) got loaded.  For technical reasons I am not going to go into, that made the malware both harder to detect and harder to dislodge.

For that and other reasons, a new method was created called GPT - GUID Partition Table.  It has better security and some other advantages.  Windows 11 requires that a GPT be used instead of an MBR.  If the Hard Disk on your computer currently uses an MBR then this sounds like a big problem, but it isn't.

First, for many years now the BIOS on PCs has supported both MBR and GPT (or just GPT).  Second, there are utilities that will convert an MBR Hard Disk into a GPT Hard Disk.  So, if your PC has an MBR Hard Disk, all you have to do is run the utility and convert your Hard Disk from MBR to GPT.  Your PC has to be pretty old for it to not support GPT.
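If you are curious which scheme your disks currently use, the read-only sketch below (Python again) just shells out to PowerShell's Get-Disk cmdlet and prints the partition style of each disk.  The mbr2gpt.exe utility mentioned in the comment is Microsoft's conversion tool; its /validate option checks whether the system disk can be converted without actually changing anything.  Treat this as an illustration, and back up before converting anything for real.

    import subprocess

    def partition_styles():
        # List each disk's number, name, and partition style (MBR or GPT)
        # by asking PowerShell's Get-Disk cmdlet.
        cmd = [
            "powershell", "-NoProfile", "-Command",
            "Get-Disk | Select-Object Number, FriendlyName, PartitionStyle"
            " | Format-Table -AutoSize",
        ]
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    if __name__ == "__main__":
        print(partition_styles())
        # To check (not perform) conversion of the system disk, run this
        # from an elevated prompt:  mbr2gpt /validate /allowFullOS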

A much bigger problem is that, for reasons I can't figure out (but I have a suspicion), Windows 11 requires that your PC have a relatively new CPU.  If you have an Intel CPU it must be "Coffee Lake" or newer.  Intel started shipping Coffee Lake CPUs in late 2017.

So, if your "Intel Inside" PC was built on or after 2018, you should be okay.  If your PC was built on or before 2016, you are out of luck.  If your PC was built in 2017 your chances are not good.  (There are similar requirements for AMD and other brands of CPU, but I didn't dig into them.)

I think the third new requirement drives the processor requirement.  Your PC must support TPM 2.0.  That support can come from a dedicated crypto chip on the motherboard or from equivalent firmware built into a modern CPU.  Either way, it provides onboard crypto and some other security related features.  With Windows 10, TPM 2.0 was optional.  With Windows 11, it is required.

I suspect that any PC that has a new enough processor also supports TPM 2.0.  And it does it in such a way as to prevent hackers from interfering with its proper functioning.
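Checking for TPM 2.0 is also easy to do from the command line.  The sketch below (Python) simply runs two tools that ship with Windows 10 and later, tpmtool and PowerShell's Get-Tpm cmdlet, and prints whatever they report.  Get-Tpm needs an elevated (Administrator) prompt, the exact output varies by build, and this is a quick illustration rather than a parser.

    import subprocess

    def run(cmd):
        # Run a command and return whatever text it produced.
        result = subprocess.run(cmd, capture_output=True, text=True)
        return result.stdout + result.stderr

    if __name__ == "__main__":
        # Both tools ship with Windows 10 and later; Get-Tpm wants elevation.
        print(run(["tpmtool", "getdeviceinformation"]))
        print(run(["powershell", "-NoProfile", "-Command", "Get-Tpm"]))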

Microsoft runs a WinHEC (Windows Hardware Engineering Conference) every year.  That's where hardware issues are hashed out.  The year's results are boiled down and incorporated into a document.  Each annual document provides "new and improved" guidance to the hardware community regarding Windows and hardware requirements.

Microsoft can then shorthand the hardware requirements of a particular version of Windows as "WinHEC version nnnn".  One of these versions laid out how TPM 2.0 was to be implemented.  Windows 11 requires conformance to a much newer version of the WinHEC document than Windows 10 did.

Back in the day, Microsoft used to provide a "Compatibility" utility every time it released a new version of Windows.  You ran the utility and it told you whether you had any hardware issues associated with running the new version of Windows on your hardware setup.  Sadly, they stopped doing that several years ago.  With Windows 11, it's back.  The Windows 11 compatibility utility is called "PC Health Check".

To find out if your PC has any hardware issues that will prevent it from running Windows 11 go here:  Upgrade to the New Windows 11 OS | Microsoft.  Then scroll all the way down to a banner that says "Check for Compatibility".  Then click on the text that says "DOWNLOAD PC HEALTH CHECK APP".

Once the download completes, open the file and install the application.  Warning:  You will need Administrator privileges to do this.  (If you have trouble finding the file, it should be in your "Downloads" directory.)  It will tell you if you're good to go or not.

If you decide to upgrade to Windows 11, what can you expect?  I reviewed the "what's new" articles in the technical press prior to it becoming available and I was underwhelmed.  Microsoft is characterizing it as a major upgrade.  That's why they changed the name from "Windows 10" to "Windows 11".  But, as far as I can tell, that's an exaggeration.  But then I might not be in a position to judge.

I run desktop PCs.  If you run a laptop with a built-in keyboard and touch pad then your experience should be similar to mine.  The changes are minor.  But remember that the whole point of Windows 8/8.1 was to provide an operating system that worked well for people like myself, but also for people with touch screen machines.

Windows 11 takes another shot at doing just that.  Since all my PCs have keyboards, and I like it that way, I have no experience with the touchscreen environment.  So, maybe the touch screen crowd are seeing big differences that I am not aware of.

In any case, I was a bit leery of Windows 11 going in.  But my opinion has completely turned around.  Prior to experiencing Windows 11 for myself I grouped the changes into two groups.  Group one consisted of all the changes that I didn't care about (i.e. touch screen changes).

Group two consisted of all the changes I cared about that looked like they replaced something I liked with something I would likely not like.  But I'm a techie.  I have a responsibility to try new things out.  So, with some trepidation, I did.

And I find that I quite like Windows 11.  It does look beautiful.  And I find that they did what I expected.  They changed some things from something I liked to something I didn't.  But I also found something they changed where the new version looks like an improvement to me.  So, what did they change?  It turns out, not as much as I thought.

One thing that often bedevils upgraders is drivers.  As far as I can tell, Microsoft did not change the driver model.  That means that Windows 10 drivers also work on Windows 11.  So, if you have a Windows 10 PC where the drivers for all your hardware work fine, then the same will be true after you upgrade to Windows 11.  Since driver issues are the source of most upgrade issues, for you, the update process should be a smooth one.

BTW, for planning purposes you should know that it took about 45 minutes to upgrade my machine.  But it is a high-end PC with an SSD disk drive.  If you are lacking one or the other (or both) of these then the upgrade may take considerably longer on your PC.  And all my non-Windows software came over without a hiccup.  And all my data was still there, just where I expected it to be.  So, what did change?

The most obvious change is to the Task Bar.  The stuff that was on the left end is now in the center.  I don't know why they did it, but everybody's guess is that they wanted to move closer to how Apple does it.  I wish they hadn't done it, but it is not a big deal.

The change they made to how the right end of the Task bar works is more problematic.  Apparently Microsoft doesn't know what to call that part of the Taskbar, so they refer to this area as the "Taskbar corner".  Lame.  This is where, among other things, some Icons that belong to running applications and services reside.  They are still in roughly the same place.

But there used to be a setting that caused them all to always display.  Without the "all" setting being ON some of them may get hidden under a "^" Icon.  The "all" setting is gone.  And I miss it.  Can I live with the new rules?  Yes!  But I liked the old rules better.  Anyhow, this is a bigger deal but not that big of a deal.

Then there is "Settings".  This has gotten completely reengineered.  Settings seems to be something that Microsoft can't stop themselves from changing every chance they get.  Back in the day there was the "Control Panel" (and something else before that).  The default was groups.  You could then drill down within a group and get to whatever you wanted to fiddle with.  But they gave you a way to "show everything at the same time".  I took advantage of that option.

With Windows 10 they hid (it's still around if you know where to look, even in Windows 11) the Control Panel and replaced it with the "Settings" gear.  Was Settings a big improvement?  No!  It was just a different way of doing the same thing.  It was functionally equivalent to the old Control Panel in the Groups configuration.  Well, with Windows 11 they have now redone Settings to make it more like the old Control Panel with the "show everything" option turned on.

They have made a lot of changes here and there with specific settings.  But the general idea is the same.  It's another of those "change for change's sake" things.  But I have been through many generations of "change for change's sake" in this area.  So, I have become adaptable.  It took me about fifteen minutes to get used to the new version.  Is it an improvement?  I wouldn't go so far as to say that.

You soon encounter the big change.  What comes up when you click the "Start" button has been completely redone.  With Windows 10 you had columns.  The left-most column contained critical controls like "Power" and "Settings".  Next over, you had a column with an alphabetical list of all the normal applications.  Finally, to the right of these you had the "Wing".  This layout has been changed completely.

The division into vertical columns has been replaced by a division into horizontal sections.  The top section contains the "Search" box that used to be located next to the "Start" button on the Task Bar.  I never used it much, so I don't much care where it is.  But I do appreciate getting the real estate this used to take up on the Task Bar back.

The next section down is a box into which all the stuff that used to be on the Wing has been moved.  The Icons on the Wing came in various sizes, which could be changed.  Wing Icons could also be animated, if the application chose to do so.  For instance, a weather report would continuously update within the "Weather" Icon.  I never liked the Wing.  I always got rid of it by deleting all the Icons on it.

Now all the Icons are small, standard sized squares that are not animated.  I now find this to be a handy place to put links to Applications I frequently use.  (You can still put shortcuts on the Task bar, and I do.)  But, for instance, this is where I now go to find the Settings Icon.  And there is now an "All Apps" link in this area that lets you get to the full alphabetical list of applications.  That works for me.

Below that is another section.  It lists all of the files that you have accessed recently.  I might come to like this.  So far, I just ignore it.  And at the bottom is a section containing a "User" ("User" is replaced by the username you are logged into Windows with) Icon and a "Power" Icon.  So, the three Icons, (Power, Settings, and User) that I used from that left-most column are still easily accessible.  I am very happy with what they have done to the Start Button.

That's it for noticeable differences.  I think that a not-so-noticeable difference will loom large over time.  TPM 2.0 looks like the foundation of a big improvement in security.  With Windows 10, application developers had to allow for the possibility that it was there and also that it wasn't.  With Windows 11, they can count on it always being there.

And, in this context, Microsoft is a developer.  They develop applications like Office.  Over time they can change Office so that, if it detects that it is running on Windows 11, then it does security a different and more effective way.

Other developers can do the same.  And, in this context Windows itself is an application.  Microsoft can rework more and more of Windows to depend on the presence of TPM 2.0.  This release of Windows 11 was delivered on deadline, so they likely made the change to depend on TPM 2.0 in only a few critical places.

But over time Microsoft can update more and more components of Windows to use security based on TPM 2.0.  These updates can be rolled out in an incremental fashion using the Windows Update process.  In many cases, the change may not even be apparent to users.  But let's hope that these changes make life more and more difficult for hackers.

I think that over time incremental updates to Windows 11 will make it much more secure than even Windows 10 was.  If this is not what Microsoft was thinking, then it was stupid for them to obsolete so many computers by making TPM 2.0 required rather than optional.  But, if that's the plan, then the new hardware requirement is one I applaud.

Finally, a note on versioning.  Microsoft has been using a standard way of naming versions of Windows 10 for a couple of years now.  The first two characters are a number.  They represent the last two digits of the year in which the version is released.  The third character is an "H".  It stands for "Half", as in the half of the year the version was released in.  The final character is another digit.   "H1" stands for "first half of the year".  "H2" stands for second half of the year.

The version name of the last Windows 10 release was 21H1 because it was released in the first half of 2021.  The version number of this first release of Windows 11 is 21H2, the version name that would have been assigned to the next update of Windows 10.  Windows 11 version 21H2 represents an incremental improvement to Windows 10 version 21H1, rather than a radical departure.

But that has been the announced plan for Windows for years.  Instead of a big disruptive upgrade every few years, Windows would evolve by taking small to medium steps twice per year.  No one step would be a dramatic change from the previous one.  The name change might suggest otherwise, but Microsoft is actually sticking to the incremental evolution plan.  I, for one, am grateful.
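Finally, as a practical aside: if you want to see which of these version names your own machine is running, the small Python sketch below reads it straight out of the registry using only the standard library.  Recent builds store the H-style name in DisplayVersion; older Windows 10 builds used ReleaseId instead, so the code falls back to it.  (The familiar winver command shows the same information in a dialog box.)

    import winreg

    def windows_version():
        # Read the marketing version name and build number from the registry.
        key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            def read(name):
                try:
                    return winreg.QueryValueEx(key, name)[0]
                except FileNotFoundError:
                    return None
            return {
                "product": read("ProductName"),
                "version": read("DisplayVersion") or read("ReleaseId"),
                "build": read("CurrentBuild"),
            }

    if __name__ == "__main__":
        print(windows_version())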

Friday, July 2, 2021

A Brief History of the Motion Picture

 This is something I like to do.  I am going to take you on a trip through the history of something.  But all I am going to do is talk about the evolution of the technology that underpins it.  Its positive or negative contributions to society; who does it well and who does it badly; what are good and bad examples of its use; all those questions I leave to someone else.

My subject, of course, is the moving picture.  And even if we include the entire history of still photography the history we will be talking about only goes back about 200 years.  And the technology that has enabled pictures to move has an even shorter history.  For most of this history, the technology involved has been at the bleeding edge of the technology available at the time.    In order to establish some context I am going to start with a necessary precursor technology, photography.

The earliest paintings are tens of thousands of years old.  However, the ability to use technology instead of artistry to freeze and preserve an image only dates back to the early 1800s.  The key idea that started the ball rolling was one from chemistry.  Someone noticed that sunlight alone could change one chemical into another.  It soon became apparent that chemical compounds that contained silver were the best for pulling off this trick.  From that key insight, chemistry based photography emerged.

In the early days it quickly went through several iterations.  But by the middle 1800s one method had come to dominate still photography.  A thin layer of transparent goo was applied evenly to a piece of glass.  This was done in a "dark room".  The prepared glass plate was then inserted into a "magazine" that protected it from stray light.  The "film magazine" could then be stored, transported, and inserted into a "camera".

The meaning of the word "Computer" changed over time.  Originally, it meant a person who performed repetitive arithmetical and mathematical calculations.  In the mid-1900s its meaning changed to instead mean a machine that performed repetitive arithmetical and mathematical calculations.  The word "camera" underwent a similar transformation.

It started out referring to a simple device for focusing an image onto a surface.  By the mid-1800s it began being used exclusively to refer to a device used in photography.  A photographic camera consisted of an enclosed volume that was protected from stray light.  Its back was designed to accommodate the film magazine and its film.

At the front, and opposite the magazine area, was where a lens and a "shutter" were located.  The shutter normally remained closed but could be opened for short periods of time.  This would allow light to pass through the lens and land on the film at the back.

Cameras, film magazines, and the rest were in common use by the start of the Civil War in 1861.  The camera assembly was used to "expose" the film to an appropriate scene.  The film magazine was then used to transport the film back to the darkroom.  There it was "processed" so as to produce the final result.

Exposed film doesn't look obviously different from unexposed film.  Several processing steps are required to produce a picture of the original scene.  In the darkroom the goo side of the film is first exposed to a chemical bath that "develops" the film.  This causes the parts of it that had been hit with light in the camera to turn dark while the other areas remain transparent.  The developed goo is next exposed to a chemical bath containing a "fixer".  This step "fixes" the film so that subsequent exposure to light will not change it.

The result of these processing steps is film with an image of the original scene showing.  But it is a "negative" image.  The dark parts in the original scene are light and the light parts dark.  The image is also a "black and white" image.  It only contains shades of grey, no color.  And while this negative image is apparent and useful in some circumstances, it doesn't look like the original scene.

Fortunately, the fix is simple, put the film through additional processing steps.  Take a photograph of the negative, develop it, and fix it.  The result is a negative of a negative, or a "positive".  Black and white images can be very beautiful and emotionally evocative.  It took more than fifty years for photographers to be able to pull off color photography.

But what we have at this point is "still" photography.  Nothing is moving.  But the first "movie" soon appeared.  The initial method was developed in order to settle a bet.  When a horse is galloping is there any point when all four feet are off the ground?  A group of rich people decided that they were willing to pay good money to find out.

The man they hired tried out a lot of different things.  He quickly concluded that a galloping horse does spend some of its time with all four feet off the ground.  But how could he convincingly prove that?  The obvious answer was photography.  But he found that, while still pictures settled the question, they did not do it in a convincing manner.  More was needed.

So he set up a rig where a galloping horse would trip a bunch of strings.  Each string would be attached to its own camera.  As the horse galloped along it hit each string in sequence causing a photograph to be taken at that point.  One of those photographs showed the horse with all its feet off the ground.  But, as previously noted, simply viewing that photograph was not sufficiently convincing.

He then came up with a way of displaying his still pictures that was convincing.  He set up a device that would flash each still photograph in sequence.  And each photograph would only be illuminated for a fraction of a second.  He set his device up to continuously cycle through the entire set of photographs over and over.

If he operated his device at the right speed the horse appeared to be moving.  More than that, it appeared to be moving at the speed of a normal galloping horse.  By cycling through his roughly dozen photographs over and over he could get the horse to gallop as long as he wanted.  Then he could slow things down and "freeze frame" on the one picture that showed the horse with all four feet off the ground.  That made for a convincing demonstration.

This is considered to be the world's first moving picture.  But, from a practical point of view, it's a gimmick.  Still, something very important was learned.  If you flash a series of pictures on a screen at the right rate, then the eye working in concert with the brain will stitch everything together.  The brain can't tell the difference between a continually moving scene and a series of similar still pictures flashed one after another.

From here it was just a matter of putting all the right pieces together.  The first piece was "celluloid" film.  Cellulose is a natural component of plants.  If you start with the right kind of cellulose and process it with the right chemicals you get a thin sheet of transparent material.  It was possible to manufacture very long ribbons of celluloid film.

The same goo that had been applied to glass plates can be applied to celluloid.  The result is a long ribbon of celluloid film onto which images can be placed.  It is necessary to "advance" the film between exposures so that each separate photograph of the scene ends up on a separate adjacent part of the long ribbon of film.

And celluloid is somewhat flexible.  It could be wound up on a "reel", a spool of film.  It could also be fed through gears and such so that it could be "run" through a "movie camera" or a "film projector".  And it was much cheaper than glass.  It soon became the preferred material to make photographic film out of.  One problem solved.

The next problem was to come up with a mechanism that would quickly and precisely advance the film.  Edison, among others, solved this problem.  The key idea was one that had been around for a while.

If you fasten a rod to the edge of a wheel it will move up and down as the wheel rotates.  More complexity must be added because you want the film to advance in only one direction.  And you want it to advance quickly then freeze, over and over again.  But those were details that Edison and others figured out how to master.

So, by the late 1800s Edison and others were using moving picture cameras loaded with thin ribbons of celluloid film to take the necessary series of consecutive still pictures.  A matching projector would then do the same thing the horse device did, throw enlarged images of each picture on the film onto a "screen" (any suitable flat surface), one after the other.  The projector needed to be capable of projecting consecutive pictures onto the screen at a lifelike rate.  In the silent era that rate was typically around 16 to 18 frames per second.

And with that the "silent movie" business came into existence.  ("Moving picture" got quickly shortened to "movie".)  At first, a movie of anything was novelty enough to draw crowds to "movie houses", later "movie theaters", and still later just "theaters".  But people's tastes evolved rapidly.

Movies capable of telling stories soon appeared and quickly displaced the older films as the novelty of seeing something, anything, moving on a screen wore off.  "Title cards" were scattered throughout the film.  They provided fragments of dialog or short explanations.  Accompanying music, anything from someone playing a piano to a full orchestra, was also soon added.

The result was quite satisfactory but fell far short of realism.  The easiest thing to fix was the lack of sound.  Edison, of course, is most famous for inventing the light bulb.  It consists of a hot "filament" of material in an enclosed glass shell.  All the air must be evacuated from the shell for the lightbulb to work.  That's because the filament must be heated to a high enough temperature to make it glow.  If there is any air near the hot filament it quickly melts or catches fire.

Edison's key achievement was the invention of a high efficiency vacuum pump.  With a better vacuum pump the filament could be heated to the temperature necessary to make it glow without it melting or burning up.  His original filament material was a thin thread of burnt carbon.  Others quickly abandoned it for Tungsten, but no one would have succeeded without the high quality vacuum Edison's pump was capable of.

Edison was an inveterate tinkerer.  Once he got the lightbulb working he continued tinkering with it.  Electricity was used to heat the filament.  It turns out that electrons were boiling off of the filament.  Edison added a "plate" off to the side of the filament and was able to use it to gather some of these electrons.  Moving electrons are what makes electricity electricity.  And this invention, a light bulb with a plate off to the side was the foundation of the electronics industry.

Others took Edison's experiment a step further.  They added more stuff into the light bulb.  If a metal mesh "grid" was inserted between the filament and the plate, then if the grid was sufficiently charged with an electrical voltage it could completely cut off the electron flow.  If it had no charge then the electrons would pass through it freely.  If it was charged with a suitable lower voltage, then the flow of electrons would be reduced but not completely cut off.

Edison's "light bulb + plate" device  was called a diode because it had two ("di" = 2) components.  This new device was called a triode because it had three ("tri" = 3) components.  Charging the grid appropriately could stop and start an electric flow.  Intermediate amounts of charge cold allow more or less flow to happen.  Not much electric power needed to be applied to the grid to do this.  This is a long way of indicating that a triode could be used to "amplify" (make louder) an electric signal.

More and more complex devices were built with diodes, triodes, and newer "tubes", light bulbs with more and more components stuffed into them.  Soon, "electronics" could be made to do truly amazing things.  For instance, the signal from a "microphone", invented by Bell, the telephone guy, could be sent through electronics to loudspeakers (invented by lots of people) to create a "public address" system.  Now an almost unlimited number of people could simultaneously hear a speech or a theatrical performance.

Another device Edison invented was the "phonograph".  His original version was purely mechanical.  The energy in the sounds of a person speaking caused a wavy line to be etched in wax.  Later, a needle traveling along that same wavy wax line could be connected to a horn.  This arrangement would allow the original sounds to be reproduced at another time and place.

This was amazing but ultimately unsatisfactory for a number of practical reasons.   The first thing to be replaced was the wax.  Vinyl was sturdier.  Edison used a cylinder.  That got replaced by a platter.  Finally, the mechanical components got replaced by electronics.

Now a clearer and more complex sound like a full orchestra or a Broadway show could be played and replayed at a later time and in a later place.  Also, the "record" could be duplicated.  Different people could now listen to the same record at different times.   But people could also listen to different copies of the same recording.  A mass audience could now be reached.  By the late 1920s all this was in place so that it could be used to add sound to movies.

And, at first, that was what was done.  A phonograph record containing the sound part to the movie was distributed along with the film.  If the film and the record were carefully synchronized, and if a public address system was added to the mix, then the sound movie became possible.  The first successful example of pulling all this off was The Jazz Singer.

It was terrifically hard to pull off everything that was necessary to create the record: making the necessary recordings, combining them appropriately, and producing the finished disc.  It also turned out to be hard to keep the film and the record in sync while the movie was playing.

As a result, The Jazz Singer is more accurately described as a silent movie with occasional sound interludes than it is as a true sound movie.  Much of the movie was just a traditional silent movie.  But every once in a while, the star would burst into song.  For those parts the audience heard not local musicians but Al Jolson, the star of the movie.  So, while it wasn't a very good movie, it was a terrific proof of concept.

This process used by The Jazz Singer and other early "talkies" was called "Vitaphone".  The "phone" part harkened directly back to the phonograph part of the process.  But something better was needed.  And it was needed quickly.  The success of The Jazz Singer had caused audiences to immediately start clamoring for more of the same.

Fortunately, the electronics industry soon came riding to the rescue.  Another electronic component that had been invented by this time was the "photocell".  A photocell would measure light intensity and produce a proportional electric signal.  Adding a photocell aimed at part of the film could turn how light or dark that part of film was into something that could be amplified and fed to speakers.

That solved the "theater" end of the process.  What about the other end?  Here the key component had already been invented.  A microphone could turn sound into a proportional electrical signal.  It was easy to turn this electrical signal into an equivalent pattern of light and dark on a part of the film.  Of course, electronic amplifiers (already invented) had to be added into the process at the appropriate points.

In the transition from silent to sound two changes were made to how film was put to use.  First, the film speed was standardized and increased.  Silent film had typically been shot at around 16 to 18 frames per second; sound film runs at a constant 24 frames per second.  Second, a small portion of the film got reserved for the "sound track".

By having the projector shine a bright light through a narrow slot in front of the sound track part of the film, and by then amplifying the result and feeding it to speakers in the movie theater, a talkie would get its "sound track" from the film itself.  A separate record was no longer necessary.

There was one little problem left.  The film must go through part of the projector in a herky-jerky fashion.  We move a picture into position, stop the film, open the shutter, leave it open for a while, close it, then quickly move on to doing the same thing for the next picture in line.  The sound track, however, requires that the film move past the pickup slot at a constant speed.  The solution turned out to be simple.

An extra "loop" of film is put in the gap between the part of the projector that unspools film off of the feed reel. and the shutter/lens area.  Another extra "loop" of film is put between the shutter/lens area and the part of the projector that feeds the film to the take-up reel.  The sound pickup slot is located just after this second feed point.  At that point the film is moving at a smooth, even speed.

This "extra loops" design has the advantage that the piece of film that has to move fast then stop is short.  This makes it easier for that mechanism to operate at the necessary speed.  All that is necessary is to place the sound that goes with an image a few inches ahead of it on the film.

On the other end of the process, the sound is handled completely separately from the pictures.  The camera does not record sound; that is captured by separate sound equipment.  That's why Hollywood has used something called a "slate" for years.  It has a flat area on it where the name of the film, the "scene" number and the "take" number are marked.  Waving the slate in front of the camera before the actual scene is filmed makes it easy for the "editor" to know where a piece of film is supposed to go in the finished picture.

But with the advent of sound an extra piece called the "clapper" was added.  The last thing the person waving the slate does before he pulls it out of frame is to "clap" the clapper.  The moving clapper piece is easy to see in the film.  The intentionally loud "clap" noise made by the clapper is easy to hear in the sound recording.  This makes it easy to "sync" sounds to the pictures they go with.

During the phonograph era of sound movies all too often there was a delay between when a person's lips moved and when the audience heard the words they were saying.  This was caused by the record getting out of sync with the film.  Moving the sound from the record to a sound track on the film combined with the clapper system eliminated this problem.  It's too bad this problem didn't stay fixed.  I will be revisiting the "sync problem" below.

By about 1930 almost all of the movies coming out of Hollywood included a sound track.  And it turns out that some "color" movies came out in the period before Hollywood made the transition to sound.  There were only a few of them because the technique used was fantastically difficult and expensive to pull off.

Film itself doesn't care what color the images it carries are.  You shine a bright light through the film and whatever isn't blocked out ends up on the screen.  If the light passes through film that has some color in it then that color will appear on the screen.  If there is no color in the film then what appears on the screen will all be in shades of black and white.

To make these early color movies Artists hand painted the color onto the film print.  That meant that every frame of the film had to be colored by hand.  And each print had to separately go through this difficult and time consuming process.  It was done but not often.  More practical alternatives were eventually developed.

The first relatively practical color process was called "three strip technicolor".  In the camera a device split the picture into three identical copies.  Each copy went a different path.  One path ended on film that had goo on it that was only sensitive to red.  Another path ended on film featuring green goo.  Still another path ended on film featuring blue goo.

The reverse was done on the projection end.  The process was complicated and hard to pull off.  It was eventually replaced by a process that needed only a single piece of film.  The film had multiple layers of goo on it.  There was a red layer, a green layer, and a blue layer.

The process of shooting the film, processing the film, and making prints of the film was difficult and expensive.  But nothing special was needed on the theater end.  They just ran the fancy film through their same old projector and a color picture appeared on the screen.

While all this was going on a separate effort was being made to replace all this film business with an all electronic system.  The decade of the '30s was consumed with making this all-electronic process work.  By the end of the decade limited success had been achieved.

Theoretically, the technology was already in place.  The photocell could act as a camera.  And a light bulb being fed a variable amount of voltage could stand in for the projector.  But neither were really practical.  You see, you'd need about 300,000 of each, one for each pixel.

The word "pixel" is now in common usage.  "Pixel" is shorthand for picture element.  If you divide a picture into rows and columns then, if you have enough of them, you can create a nice sharp picture by treating each separate point independently.  The first PC I owned had a monitor that had 480 rows, each consisting of 640 dots.  That means that the screen consisted of 307,200 pixels.

So with only 307,200 photocells and only 307,200 light bulbs a picture with a resolution similar to that of an early TV set could be duplicated.  And, of course, this would have to be done something like 24 to 30 times per second.  But that's not practical.  Something capable of standing in for those 307,200 photocells and those 307,200 lightbulbs would have to be found.  It turned out that the lightbulb problem was the easier of the two to solve.
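To spell out the arithmetic in the two paragraphs above, here it is as a few lines of Python.  The 30-per-second figure is the U.S. TV refresh rate mentioned a little further on; it is used here only to show the scale of the problem the engineers faced.

    # 640 columns by 480 rows, refreshed about 30 times per second (U.S. rate).
    columns, rows = 640, 480
    refresh_per_second = 30

    pixels = columns * rows
    updates_per_second = pixels * refresh_per_second

    print("{:,} pixels per frame".format(pixels))                      # 307,200
    print("{:,} pixel updates per second".format(updates_per_second))  # 9,216,000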

Start with a large "vacuum tube" (generic term for a lightbulb with lots of special electronic stuff jammed inside of it) with a flat front.  Coat the inside of the flat front with a phosphor, something that fluoresces when struck by a beam of electrons.  Add the components necessary for producing and steering an "electron beam" into the other end of the same vacuum tube.

Creating an electron beam turns out to be pretty easy.  Remember that the filament in a light bulb boils off electrons.  A custom filament can boil off a lot of electrons.  Electrons are electrically charged so they can be steered with magnets.

Connect the electron beam generating and beam steering components inside the vacuum tube to suitable electronics outside the vacuum tube but inside the TV set.  When fed suitable signals, they will steer the electron beam so that it can be made to repeatedly sweep across the screen in a series of lines.  The lines are made to sweep down the screen.  The intensity of the electron beam will also need to be precisely controlled.  And the whole process will have to be repeated many times per second.

The intensity of the electron beam is changed continuously in just the right way to "paint" an image on the flat part of the vacuum tube thirty times per second (in the U.S.).  This specialized vacuum tube came to be called a TV Picture Tube.  Add in some more electronic components, components to select only one TV "channel", pull the "video" and "audio" sub-signals out of the composite signal, etc., and you have a TV set circa 1955.

The other end is a variation on the same theme.  Again a vacuum tube with a flat front is used.  This time a special coating is used that is light sensitive.  As the electron beam sweeps across it, the coating is "read" to determine how much light has struck it recently.  More light results in more electrons residing at a specific spot.  These electrons are carefully bled off.  More light on a particular spot causes more electrons to bleed off when that spot is swept.

Making all this work was very hard.  But it was all working in time to be demonstrated at the 1939 New York World's fair.  The advent of World War II put a halt to rolling it all out for consumer use.  Efforts resumed immediately after the end of the War in 1945.

Initially, none of this worked very well.  But as time went by every component was improved.  The first TV standard to be set was the British one.  They based it on what was feasible in the late 1930s.  So British TV pictures consisted of only 405 lines.  Pretty grainy.  The U.S. came next.  The U.S. standard, set in 1941, called for 525 lines.  France came later and was able to set an 819 line standard, while most of the rest of Europe eventually settled on 625 lines.  So French TV pictures were much sharper than U.S. pictures.  And U.S. pictures were significantly sharper than British pictures.

But what about color?  The first attempt was based on the "three strip" idea that was originally used to make color movies.  It was developed by CBS.  They essentially threw the old black & white standard in the trash.  That allowed them to use the same idea of splitting the picture into three copies.  The red signal was extracted from the first copy, the green from the second, and the blue from the third.  On the other end the TV set would process each signal separately before finally combining them back together.

This system would have worked just fine if it had been adopted.  But it would have meant eventually replacing everything at both ends of the process.  And TV stations would have to broadcast separate black and white and color signals on separate frequencies until the old "black and white" TV set were a rarity.  Who knows?  Maybe we would have been better off if we had taken that route.  But we didn't.

But NBC was owned by RCA and RCA was the dominant player in the making and selling of TV sets, cameras, and the rest of the equipment needed to do TV.  If it could be done, they wanted to come up with a "compatible" way to do color.  They came up with a way to do it.

First, they found a way to sandwich additional information into the signal TV stations were broadcasting.  Critically, black and white TV sets would be blind to this additional information.  So, when a TV station started sending out this new signal, it looked just like the old signal to black and white TV sets.  They would keep working just as they always had.

But new Color TVs would be able to see and use this additional information.  The additional information consisted of two new sub-channels.  A complicated subtraction scheme is used that took the black and white signal as a starting point.  Color TVs were capable of performing the gymnastics necessary to put a color picture on the screen.

This probably made color TV sets more complicated than they would otherwise have needed to be had the CBS standard been used.  But by the mid '60s color TVs at a low enough price point for many consumers to manage became available.  And the "compatible" scheme allowed lots of people to stick with their old Black and White TVs well into the '70s.

At this time (mid '60s) RCA made NBC broadcast all of its prime time shows "in living color".  The other networks were forced to follow in short order.  The early sets delivered washed out color.  But it was COLOR so people put up with it.  By the mid '70s sets that delivered decent color were ubiquitous and cheap.  Unfortunately for RCA and the rest of the U.S. consumer electronics industry, many of these sets came from other countries.  Japan was in the forefront of this invasion.

Japan started out making "me too" products that duplicated the capabilities of products from U.S. manufacturers like RCA.  But they soon started moving ahead by innovating.  Japan, for instance, pioneered the consumer VCR market.  Betamax and VHS were incompatible VCR standards.  Both came out of Japan.  Betamax was generally regarded as superior but it was also more expensive.  VHS came to dominate the consumer market while Betamax came to dominate the professional market.

By this time the computer revolution was well underway and there was a push to go digital.  But the first successful digital product came out of left field.  Pinball machines had been popular tavern entertainment dating back at least to the '30s.  For a long time they were essentially electro-mechanical devices.  They were devoid of electronics.

But computers had made the transition from vacuum tube based technology to "solid state" (originally transistors, later integrated circuits) starting in about 1960.  By 1970 solid state electronics were cheap and widely available.  A company called Atari decided to do electronic pinball machines.

When making a big change it is smart to start with something simple, then work your way up from there.  So an engineer named Allan Alcorn was tasked to come up with a simple pinball-like device, but built using electronics.  He came up with Pong.  It consisted of a $75 black and white TV connected to a couple of thousand dollars worth of electronics.  Importantly, it had a coin slot, just like a pinball machine.

The Atari brass immediately recognized a hit.  They quickly rolled it out and revolutionized what we now call arcade games.  Arcade games started out in taverns.  You would put one or two quarters in and play.  The tavern arcade game business was small beer compared to what came after.  But grabbing a big chunk of that market was enough to make Atari into an overnight success.

And the technology quickly improved.  Higher resolution games were soon rolled out.  More complex games were soon rolled out.  Color and more elaborate sounds were soon added.  Soon the initial versions of games like Donkey Kong, Mario Brothers, Pac Man, and the like became available and quickly became hits.

The "quarters in slots in taverns" model soon expanded to include "quarters in slots in arcades", as arcades were open to minors.  But the big switch was still ahead.  The price of producing these game machines kept falling.  Eventually home game consoles costing less than $100 became available.  You hooked them up to your TV, bought some "game cartridges" and you were off to the races.  The per-machine profit was tiny compared to the per-machine profit of an arcade console.  But the massive volume more than made up the difference.

All this produced a great deal of interest in hooking electronics, especially digital electronics, up to analog TV sets.  This produced the "video card", a piece of specialized electronics that could bridge the differences between analog TV signals on the one side and digital computer/game electronics on the other.

In parallel with this was an interest in CGI, Computer Generated Images.  This interest was initially confined to Computer Science labs.  The amount of raw computer power necessary to do even a single quality CGI image was astounding.  And out of this interest by Computer Scientists came the founding in 1981 of a company called Silicon Graphics.  One of its founders was Jim Clark, a Stanford University Computer Science prof.

SGI started out narrowly focused on using custom hardware to do CGI.  But it ended up being successful enough to put out an entire line of computers.  They could be applied to any "computer" problem, but they tended to be particularly good at problems involving the rendering of images.  I mention SGI only to indicate how much interest computer types had in this sort of thing.

Meanwhile, things were happening that did not have any apparent connection to computers.  In 1982 Sony and Philips rolled out the Audio CD, also known as the Digital Audio Compact Disc, or the CD.  This was a digital format for music.  And it was intended for the consumer market.  Initially, it did not seem to have any applicability to computers or computing.  That would subsequently change.

The CD was not the first attempt to go digital in a consumer product.  It was preceded by the Laserdisc, which came out in 1978.  Both consisted of record-like platters.  Both used lasers to read dots scribed into a shiny surface and protected by a clear plastic coating.  The Laserdisc used a 12" platter, roughly the size of an "LP" record.  The CD used a 4 3/4" platter, somewhat smaller than a "45" single, which is 7" in diameter.

In each case the laser read the dots, which were interpreted as bits of information.  The bits were turned into a stereo audio signal (CD) or a TV signal complete with sound (Laserdisc).  The CD was a smash success right from the start.  The Laserdisc, not so much.

I have speculated elsewhere as to why the Laserdisc never really caught on, but I am going to skip over that.  I'll just say that I owned a Laserdisc player and was very happy with it.  Both of these devices processed data in digital form, but eventually converted it into an analog signal.  When first released, no one envisioned retaining the digital characteristic of the information or connecting either to a computer.  The CD format eventually saw extensive use in the computer regime.  The Laserdisc never did.

So, what's important for our story is that digital was "in the air".  Hollywood was also interested.  Special effects were very expensive to pull off.  The classic Star Trek TV show made extensive use of the film based special effects techniques available when it was shot in the late '60s.  But the cost of the effects was so high that NBC cancelled the show.  It was a moderate ratings success.  But the ratings were not high enough to justify the size of the special effects budget.

When George Lucas released Star Wars in 1977 little had changed.  He had to make do with film based special effects.  There are glaring shortcomings caused by the limitations of these techniques that are visible at several points in the film.  But you tend to not notice them because the film is exciting and they tend to fly by quickly.

But if you watch the original version carefully, and you are on the lookout, they stick out like sore thumbs.  He went back and fixed all of them in later reissues.  So, if you can't find one of the original consumer releases of the film, you will have no idea what I am talking about.

He made enough money on Star Wars to start doing something about it.  He founded ILM - Industrial Light and Magic, with the intent of making major improvements in the cost, difficulty, and quality of special effects.  ILM made major advances on many fronts.  One of them was CGI.

Five years later, in 1982, a CGI heavy movie called Tron came out.  It was the state of the art in CGI when it was released.  Out of necessity, the movie adopted a "one step up from wire frame" look in most of its many CGI rendered scenes.  The movie explained away this look by making its very primitivity a part of the plot.

Tron represented a big improvement over what had been possible even a few years before.  Still, in spite of the very unrealistic rendering style, those effects took a $20 million supercomputer the better part of a year to "render".  At the time, realistic looking CGI effects were not practical for scenes that lasted longer than a few seconds.

CGI algorithms would need to improve.  The amount of computing power available would also have to increase by a lot.  But technology marches on and both things eventually happened.  One thing that made this possible was "pipeline processing".  The Tron special effects were done by a single computer.  Sure, it was a supercomputer that cost $20 million.  But it was still only one computer.

Computer Scientists, and eventually everybody involved, figured out how to "pipe" the output of one computer to become the input into another computer.  This allowed the complete CGI rendering of a frame to be broken down into multiple "passes".  Each pass did something different.  Multiple computers could be working on different passes for different frames at the same time.

If things could be broken down into enough steps, each one of which was fairly simple to do, then supercomputers could be abandoned in favor of regular computers.  All you had to do was hook a bunch of regular computers together, something people knew how to do.  The price of regular computers was plunging while their power was increasing.  You could buy a lot of regular computers for $20 million, for instance.  The effect was to speed the rate at which CGI improved tremendously.
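Here is a minimal sketch of the idea in Python (purely illustrative; the pass names and the "work" they do are invented).  For simplicity each worker runs all of the passes for its frame back-to-back, so this is really more of a render farm than a true pipeline, but the two key ideas are the same: the job is broken into passes, and many ordinary machines chew on different frames at the same time.

```python
from multiprocessing import Pool

# Hypothetical rendering passes.  Each one takes a "frame" (here just a
# dict) and adds its contribution.  Real passes would do geometry,
# lighting, shading, compositing, and so on.
def geometry_pass(frame):
    frame["geometry"] = f"meshes for frame {frame['number']}"
    return frame

def shading_pass(frame):
    frame["shading"] = "surface colors computed"
    return frame

def compositing_pass(frame):
    frame["image"] = "final pixels"
    return frame

PASSES = [geometry_pass, shading_pass, compositing_pass]

def render_frame(number):
    # The output of each pass becomes the input to the next, just like
    # piping one computer's output into another computer.
    frame = {"number": number}
    for render_pass in PASSES:
        frame = render_pass(frame)
    return frame

if __name__ == "__main__":
    # A pool of ordinary machines (here, ordinary processes) works on
    # different frames at the same time.
    with Pool(processes=4) as pool:
        finished = pool.map(render_frame, range(8))
    print(len(finished), "frames rendered")
```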

A particularly good demonstration of how fast CGI improved was a TV show called Babylon 5.  It ran for five seasons that aired from 1993 to 1998.  The show used a lot of CGI.  And it had to be made on a TV budget, not a movie budget.  Nevertheless, the results were remarkable.

The season 1 CGI looks like arcade game quality.  That's about what you would expect from a TV sized CGI budget.  The images are just not very sharp.  But year by year the CGI got better and better.  By the time the last season was shot the CGI looked every bit as crisp and clear as the live action material.  The quality of CGI you could buy for a fixed amount of money had improved spectacularly in that short period.

So, that's what was happening on the movie/TV front.  But remember SGI and the whole Computer thing?   As noted above, the first home computer I owned used a "monitor" whose screen resolution was only a little better than a black and white TV.  Specifically, it had a black and white (actually a green and white, but still monochromatic) screen.  The resolution was 640x480x2.  That means 640 pixels per line, 480 lines, and 2 bits of intensity information.

PCs of a few years later had resolutions of 800x600x8.  That's 800 pixels per line, 600 lines, and 8 bits of color information per pixel.  A clever palette scheme allowed those 8 bits to support a considerable amount of color.  For reference, a modern PC has a resolution of 1920x1280x24.  That's 1920 pixels per line, 1280 lines, and 24 bits of color per pixel.  Typically, 8 of those bits set the red level to one of 256 values.  The same 8 bit scheme is also used for green and for blue.  That's comparable in picture quality to a good "HD" TV.
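To make that arithmetic concrete, here is a short Python sketch (the figures come straight from the paragraph above) that computes how much raw memory one screen full of pixels takes at each of those formats, and shows how a 24-bit pixel packs together 8 bits each of red, green, and blue.

```python
# Raw framebuffer size for the display formats mentioned above:
# (pixels per line, lines, bits per pixel)
formats = {
    "early monochrome": (640, 480, 2),
    "early color PC":   (800, 600, 8),
    "modern PC":        (1920, 1280, 24),
}

for name, (width, height, depth) in formats.items():
    bits = width * height * depth
    print(f"{name:17s}: {bits // 8:>9,d} bytes per screen")

# A 24-bit pixel is just three 8-bit values packed together, each one
# of 256 possible levels of red, green, or blue.
red, green, blue = 255, 128, 0          # a shade of orange
pixel = (red << 16) | (green << 8) | blue
print(f"packed pixel value: 0x{pixel:06X}")
```

But back to our timeline.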

The video capabilities of PCs increased rapidly as the '80s advanced.  Their capabilities soon easily surpassed the picture quality of a standard TV.  And SGI and others were rapidly advancing the state of the art when it came to CGI.  The later installments of the Star Wars films started using more and more CGI.  Custom "Avid" computers became available.  They were built from the ground up to do CGI.  

Meanwhile, custom add-in "graphics" cards started to appear in high end PCs.  By this time games had leapt from custom consoles to the mainstream PC market.  And gamers were willing to spend money to get an edge.  As one graphics card maker has it, "frames win games".  If your graphics card can churn out more sharp, clear frames per second, then you will gain an advantage in a "shoot 'em up" type game.

These graphics cards soon went the SGI route.  They used custom "graphics processor" chips that were optimized for doing CGI.  And, as is typical of solid state electronic devices, they started out expensive.  Top of the line graphics cards are still quite expensive.  But they deliver spectacular performance improvements.  On the other hand, a decent graphics card can now be had for $50.

And, in another call back to SGI, which is now out of business, some supercomputers are now being built using graphics processor chips instead of standard "general purpose" processor chips.  Supercomputers built around graphics chips are not as fast as supercomputers made using general purpose chips.  But they are still damn fast, and they are significantly cheaper.

All these lines of development converged to produce the DVR.  TiVo brought out one of the first successful DVRs built for the consumer market in 1999.  It was capable of processing a TV signal as input.  It even had a "channel selector" like a regular TV.  It was also capable of outputting a standard TV signal.  What was in the middle?  A standard PC disk drive.  The TiVo translated everything to and from strings of bytes, which could be stored on disk.

The TiVo was a big improvement over a VCR.  A "guide" listing every showing of every episode of every show got updated daily.  This was possible because it had a standard PC style processor chip built into it.  All this made possible commands like "Record Jeopardy!".

It could also record one thing while you watched something else.  And you could watch shows you recorded in a different order than you had recorded them in.  And you could stop the show then restart it later without missing anything if the phone rang or someone came to the door.  And you could fast forward through the commercials.

Subsequent models permitted multiple shows to be recorded at once, even though they were being broadcast on separate channels.  Other features were added.  But the point is that, with the advent of the TiVo DVR, anything that could be done with analog TV equipment could now be done with hybrid analog/digital computer based equipment.

Leave that aside for the moment so that we can return to movies.  Recall that in 1977 an effects heavy movie like the original Star Wars was made without recourse to CGI.  But thanks to ILM and others, advances were starting to be made.  By 1982 a movie like Tron could be made.  What came later?  I am going to use the work of James Cameron as a roadmap.

Cameron was a brilliant artist who also understood technology thoroughly.  As a result, The Abyss, a movie released in 1989, seven years after Tron, showcases a spectacular CGI feat.  It included a short scene featuring a large worm-shaped alien.  The alien appeared to be a tube made entirely of clear water.

You could see through it to a considerable extent.  And bright things that were near it could be seen partially reflected in its surface.  And did I mention that the alien moved in an entirely realistic manner?  The alien was completely believable at all times.  The sophistication necessary to achieve this was beyond anything ever seen before.

The requirement for both translucency and reflectivity required much more computation per frame.  That's why he had to keep the scene short.  If he hadn't, the time necessary to make all those computations would have been measured in years.  As it was, it took months and a blockbuster sized movie budget to pull it off.

Two years later he was able to up the ante considerably.  Terminator II (1991) made extensive use of what appeared to be a completely different CGI effect.  When damaged, which turned out to be a lot of the time, the bad guy had a highly reflective silver skin.  In his silver skin form he was expected to run, fight, and do other active things.  And he had to move like a normal human while doing them.

The necessary computer techniques, however, were actually quite similar to those used for his earlier water alien effect.  Fortunately, by the time Cameron made Terminator II, he was able to create a CGI character who could rack up a considerable amount of screen time.  And he could do it while staying within the normal budget for a blockbuster, and while hewing to a production schedule typical for a movie of that type.  

The CGI infrastructure had gotten that much better in the interim.  And it continued to get better.  He wanted to make a movie about the sinking of the Titanic.  Previous movies about the Titanic (or any other situation where a real ship couldn't be used) had always used a model ship in a pool.  Cameron decided to use a CGI version of the ship for all the "model ship in a pool" shots.  Nowhere in Titanic (1997) are there any shots of a model ship in a pool.

It turned out to be extremely hard to make the CGI version of the ship look realistic enough.  The production ran wildly over budget.  The production schedule slipped repeatedly.  It seemed for a while like the movie would never get finished.   But, in the end it didn't matter.  Titanic was eventually finished and released.  It was wildly popular, so popular that it pulled in unbelievable amounts of money at the box office.

That experience ended up giving Cameron essentially carte blanche.  He used that carte blanche to create Avatar in 2009.  Again, making the movie cost fantastic amounts of money, most of which went to creating the CGI effects.  It was released in 3D and IMAX.  Realistic visuals that stood up under those conditions were seemingly impossible to pull off.  But he did it.  And the movie was even more successful than Titanic.  It too earned more than enough money to pay back all of its fantastically high production cost.

But Titanic and Avatar were in a class by themselves due to their cost.  What about a movie with a large but not unlimited budget?  What did CGI make it possible to do in that kind of movie?  Two movies that came out within a year of each other answered the question.

The movies were What Dreams May Come (1998) and The Matrix (1999).  Both had large but not Cameron-esque budgets.  Regardless, both made heavy use of CGI.  But the two movies used CGI in very different ways.  Creative and unorthodox in each case, but very different.  Both movies affected their audiences strongly, but also in very different ways.

I saw both of them when they first came out.  After seeing them the conclusion I drew was that, if someone could dream something up, and then find the money (enough to fund an expensive but not super-expensive movie), then CGI was now capable of putting that something into a movie, pretty much no matter what it was.

And CGI has continued to get better, especially when it comes to cost.  Now movies and TV shows that make extensive use of CGI are a dime a dozen.  In fact, it is now cheaper to shoot a movie or TV show digitally than it is to use film.  This is true even if it has little or no need for CGI.

It is shot using high resolution digital cameras.  Editing and other post processing steps are done using digital tools.  It is then distributed digitally and shown in theaters on digital projectors or at home on digital TV sets (or computers or tablets or phones).  By going digital end-to-end the project is cheaper than it would have been had it been done using film.

Does that mean that there is nowhere else for the digital revolution to go?  Almost.  I can think of one peculiar situation that has arisen as CGI and digital have continued to get cheaper and cheaper, and better and better.

It had to do with the making of the movie Interstellar in 2014.  You see, by that point Hollywood special effects houses had easy access to more computing power than did a well connected and well respected theoretical physicist, somebody like Kip Thorne.

Thorne was so well thought of in both scientific and political circles that he had almost singlehandedly talked Congress into funding the LIGO project, the project that discovered gravitational waves.  LIGO burned through over a billion dollars before it detected its first gravitational wave signal.  Congress went along with multiple funding requests spanning more than a decade based on their faith in Thorne.

Thorne's specialty was Black Holes.  But no one knew what a Black Hole really looked like.  The amount of computations necessary to realistically model one was a giant number.  The cost of that much computation was beyond the amount of grant money Thorne could get at one time.  And nobody else had any better luck getting approval to spend that much money, at least not to model a Black Hole.

But his work as a consultant on Interstellar granted him entrée to Hollywood special effects houses (and a blockbuster movie sized budget to spend with them).  The effects houses were able to run the necessary computations and to use CGI to turn the results into video.

Sure, the ostensible reason for running the calculations was for the movie.  And the videos that were created were used in the movie, so everything was on the up and up.  But the same calculations (and video clips) could and did serve the secondary purpose of providing answers to some heretofore unanswerable serious scientific questions.  The work was serious enough that Thorne had no trouble getting it published in a prestigious scientific journal.

So we have now seen how movie production and TV production went digital.  That only leaves broadcast television.  The change was kicked off by consumer interest in large format TV sets.  Practicalities limit the size of a picture tube to around 30".  Even at that size the tube is hard to produce, and keeping that large a vacuum under control requires strong, thick glass walls, which makes the set very heavy.  The solution was a change in technology.

Texas Instruments pioneered a technology that made "projection TV" possible.  It soon reached the consumer market.  Front projection units worked not unlike a movie projector.  They threw an image onto a screen.  Front projection TVs just substituted a large piece of electronics for the movie projector.

Rear projection units fit the projector and the screen into a single box by using a clever mirror arrangement.  Rear projection systems could feature a screen size of up to about 60".  Front projection systems could make use of a substantially larger screen.

The color LCD - Liquid Crystal Display - screen came along at about the same time.  Color LCD TVs became available in the late '80s.  Initially, they were based on the LCD technology used in laptop computers, so the screens were small.  But, as time went by, affordable screens grew and grew in size.

The important thing for our story, however, is that both technologies made it hard to ignore the fact that a TV image wasn't very sharp and clear.  And the NTSC standard that controlled broadcast TV made it impossible to improve the situation.

It was time to move on to a new standard that improved upon NTSC.  The obvious direction to move in was toward the PC.  With no NTSC standard inhibiting them the image quality of PCs had been getting better and better right along.  And the PC business provided a technology base that could be built upon.  The first serious move was made by the Japanese.

In 1994 they rolled out a "digital high definition" system that was designed as the successor to NTSC and other TV standards in use around the world at that time.  This scared the shit out of American consumer electronics companies.

By this time their market share had shrunk and they were no longer seen as leading edge players.  They mounted a full court press in D.C.  As a result, the Japanese system was blocked for a time so that a U.S. alternative could be developed.  This new U.S. standard was supposed to give the U.S. consumer electronics companies a fresh chance to get back in the game.

U.S. electronics companies succeeded in developing such a standard.  It was the one that was eventually adopted the world over.  But they failed to improve their standing in the marketplace.  The Japanese (and other foreign players) had no trouble churning out TVs and other consumer electronics that conformed to the new standard.  The market share of U.S. consumer electronics companies never recovered.

That standard was, of course, SD/HD.  Actually, it wasn't a single standard.  It was a suite of standards.  SD - Standard Definition - was a digital standard that produced roughly the same image quality as the old U.S. NTSC standard.  HD - High Definition - produced a substantially improved image.  Instead of the roughly 640x480 picture of NTSC and SD, the HD standard called for 1920x1080.

And even this "two standards" view was an oversimplification.  HD was not a single standard.  It was a family of related sub-standards.  There was a low "720p" 1280x720 sub-standard, a medium "1080i" 1920x1080 (but not really - see below) sub-standard, and a high "1080p" 1920x1080 sub-standard.

The 1080i sub-standard used a trick that NTSC had pioneered.  (Not surprisingly, the TV people demanded that it be included.)  Even lines were sent during one refresh and odd lines were sent on the next refresh.  That means that only 1920x540 worth of picture needed to be sent for each screen refresh.  NTSC had actually sent only about 263 lines per screen refresh.  It used the same even lines then odd lines trick to deliver 525 lines by combining successive screens.

The 1080p "progressive" sub-standard progressively delivered all of the lines with each screen refresh.  That's how computers had been doing things for a long time by this point.  And this "multiple sub-standard within the full standard' idea turned out to be important.  It allowed new sub-standards to be added later.  Since then a "4K" (3840x2160 - 4 times the data but it would have been more accurate to call it "2K") and an "8K" (7680x4320) sub-standard have been added.

The original Japanese specification would have required the bandwidth dedicated to each TV channel to be doubled.  But the U.S. standard included digital compression. Compression allowed the new signal to fit into the same sized channel as the old NTSC standard had used.  

There is a lot of redundant information in a typical TV picture.  Blobs of the picture are all the same color.  Subsequent images are little changed from the previous one.  The compression algorithm takes advantage of this redundancy to throw most of the bits away without losing anything the viewer would notice.  The computing power necessary to decompress the signal and reproduce the original HD picture was cheap enough to be incorporated into a new TV without adding significantly to its price.
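The real broadcast codecs (MPEG-2 and its successors) are far more sophisticated than anything I could show here, but the two kinds of redundancy just described are easy to illustrate in Python.  Run-length coding collapses the "blobs of the same color" within a frame, and frame differencing keeps only what changed since the previous frame.  This is strictly a toy illustration of the principle, not the actual algorithm.

```python
from itertools import groupby

def run_length_encode(line):
    # Within one frame: runs of identical pixels collapse to (value, count).
    return [(value, len(list(run))) for value, run in groupby(line)]

def frame_difference(previous, current):
    # Between frames: keep only the pixels that actually changed.
    return {i: pixel for i, (old, pixel) in enumerate(zip(previous, current))
            if pixel != old}

scan_line = ["blue"] * 20 + ["white"] * 5 + ["blue"] * 15
print(run_length_encode(scan_line))      # a 40-pixel line becomes 3 runs

frame1 = ["blue"] * 40
frame2 = ["blue"] * 38 + ["red"] * 2     # only the last two pixels changed
print(frame_difference(frame1, frame2))  # {38: 'red', 39: 'red'}
```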

The first commercial broadcast in the U.S. that used the new 1080i HD specification took place in 1996.  Full-power U.S. TV stations stopped broadcasting the old NTSC signal in 2009.  Converter boxes were available that turned the new digital signal into something an old NTSC set could display.  But few people bothered.  It was easier to just replace their old NTSC capable TV with a new cheap HD capable TV.

The widespread and rapid acceptance of HD resulted in an unexpected convergence.  A connector cable specification called HDMI came into wide use in the 2003-2005 time frame.  It was ideal for use with HD TV sets.  And the 1080p HD standard turned out to work well for computer monitors.

As a result, HDMI cables have become the cable of choice for both computer and TV applications.  HDMI cables rated to handle TV signals at "4K" resolution, or even "8K" resolution, are widely available.  They are well suited for use with even the most ultra-high resolution computer monitor.

It took a while, but we are all digital now.  Unfortunately this brought an old problem back.  In the digital world we now live in, the picture and the sound are back to taking different paths.  If everybody along the way is careful then everything is fine.  But all too often the sound and the picture get out of sync.

It most often happens on a live show where one or more people are talking from home.  Zoom, or whatever they use, lets the sound get out of sync with the picture.  If the segment is prerecorded this problem can be "fixed in post".  That can't be done if it is a live feed.  And, even if it can be fixed in post, all too often nobody bothers to do so.

I find it quite annoying.  But lots of people don't even seem to notice.  Sigh!

Wednesday, February 3, 2021

Fixing the Vaccine Rollout

 In this go-go era of Twitter and 24 hour cable news channels, things that happened a few days ago are old news and things that happened a few months ago are ancient history.  So the healthcare.gov fiasco from 2013 counts as prehistory.  BTW, the word "history" has a precise definition.  It consists of the body of events that happened at a time and in a place where someone wrote down an account of them.  Everything else is prehistory.  In spite of the fact that it happened so long ago that it is effectively prehistoric, that particular fiasco bears on the current subject.

And, since I am talking about a prehistoric event, let me review the details.  President Obama spent most of his first two years in office passing healthcare reform.  The final law that was enacted is informally called Obamacare.  The official title is the Affordable Care Act, or ACA, for short.  Components of the ACA rolled out in phases.  One of those phases included a web site  that anyone could use to find an "individual" health care plan.  It didn't matter which state you lived in, healthcare.gov was supposed to steer you to a plan that was available in your area.

The web site went live on October 1, 2013 and promptly crashed.  And crashed.  And crashed.  Soon, many people who should have known what they were talking about, started saying, "it's broken and can't be fixed."  President Obama didn't panic.  Instead he brought in a group of very experienced executives from the tech industry to put it back on track.

They succeeded.  And it only took them 60 days.  I wrote a blog post on how it all went down.  You can find it here:  Sigma 5: Fixing healthcare.gov.  It's a good read.  And my thesis for this post is that there are a lot of parallels between that situation and the one currently surrounding the rollout of the COVID-19 vaccine.  Let's start with a quick review of what went down back then.

The people who were brought in had a tremendous amount of experience managing complex IT projects.  They looked the situation over and decided that the fundamental architecture was fine.  That was good news because architecture issues are difficult and time consuming to fix.  What they did find were a lot of easier to fix problems.  Unfortunately, it would be necessary to fix pretty much all of them before the site would work.

That's because there were a lot of components involved.  Many of them were broken.  Many components also did not play nice with other components.  And a big problem was that the system had to interface with 50 different state systems.  Each had its individual quirks and peculiarities.  But the new team didn't panic.  Instead, they did what good project managers always do.  They created a "punch list".

The idea comes from the construction industry.  You take a tour of the project and look for everything that needs attention.  Each item is a "punch" on the list.  As each item is put right it is "punched" out of the list.  Ideally, you eventually end up with a punch list containing no items.

So the team built a punch list.  Then they prioritized it.  Then they sent out the top priority items to the various contractors working on the project with instructions to fix them.  Then they kept track of the results.  Once these top priority items were fixed they looked at the list and picked out a new set of top priorities and sent it out.  It really was as simple as that.
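The mechanics are simple enough to sketch in a few lines of Python (the items and priorities below are invented, not taken from the actual project): build the list, sort it by priority, send out the top few items, punch them off once they are verified fixed, and repeat.

```python
# A hypothetical punch list: (priority, item) -- lower number = more urgent.
punch_list = [
    (2, "state X interface rejects valid ZIP codes"),
    (1, "login service times out under load"),
    (3, "plan comparison page renders blank in some browsers"),
    (1, "identity check fails for hyphenated names"),
]

def next_batch(items, batch_size=2):
    # Pick the highest priority open items to send to the contractors.
    return sorted(items)[:batch_size]

while punch_list:
    batch = next_batch(punch_list)
    print("sending out:", [item for _, item in batch])
    # ...contractors fix them; once verified, punch them off the list.
    for fixed in batch:
        punch_list.remove(fixed)
```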

There were several things that helped.  These people knew what they were doing, so they built a good punch list.  The contractors, who it turned out were actually doing good work, knew that these people would not accept second rate work, so they set to and started fixing problems.  And the managers were careful to keep their priority list as stable as possible.

You always need to be prepared to change things up as the situation evolves.  But I have spent a lot of time in IT.  And I have frequently found myself in situations where the priority list gets completely rewritten every few days.  It takes time and focus to fix a problem.  You don't get much productive work done by switching from project to project to project all the time without staying on one project long enough to finish it.

The management team also did a lot of communication.  It was important that all the players knew what was going on.  These players included the White House, the various contractors, and each state.  It was particularly important to work individually with each state.

Each state's systems and ways of doing business had their own idiosyncrasies.  But a solution that worked both for the overall system and for each state had to be implemented for the overall project to be a success.

That required a lot of communication and a considerable amount of flexibility.  But the states soon found that they had a partner that was willing to listen to them and to work with them, so it all got ironed out.

For a couple of weeks nothing appeared to be happening.  The site still kept crashing.  Pretty much none of it seemed to be working.  But that was because a lot of things had to be fixed before any change would be apparent to outsiders.

In reality things were being fixed on a daily basis.  But until lots of components were working, and working together, all that was happening was that the point of failure was just being moved around.  But then enough things got fixed that some parts started working.  Then more things got fixed and more parts started working.  And, in a surprisingly short amount of time, it was all working.

The bottom line was that the Obama people really had done a pretty good job.  They just weren't skilled enough or experienced enough to pull a project of that complexity and difficulty off on the required timeline.  With the knowledge and steadying hand provided by the outside experts things came together quickly.  And the good work the Obama people had done in laying a sound foundation made that possible.

Health care is complicated.  Health insurance is complicated.  Tracking a single item, or in this case, a few similar items, is a piece of cake in comparison.  So the fundamental problem presented by the vaccine rollout is much simpler.  But structurally, it has similarities.  This Federal system has to glue everything together.  And it has to deal with the idiosyncrasies of 50 different states.

There is one key difference.  The Obama people believed in doing a good job.  And they felt that what they were doing was an appropriate role for the Federal Government to fulfill.  The Trump people, on the other hand, really didn't believe in government.  So, they doubted that what they were supposed to do was even an appropriate function for the Federal Government to perform.

Assuming the job needed to be done at all, they were of the opinion that somebody else should do it.  They really didn't care whether it was the states or private businesses, just so long as it was not the Trump Administration.  But it was important to maintain appearances in order to fend off criticism.  So, they put together a system that was designed more to fend off criticism than to work well.

As a result, when the Biden people came aboard they found little to work with.  Their standards were completely different.  They expected the system to actually be capable of doing the job, not just pretending to do it.

And a big part of that was providing a system that State Governors, both Democratic and Republican, could make work in their various states.  While the Trump people were in charge Governors found that they did not have a reliable partner at the Federal level to work with.

To pick one well publicized example, a key question was how much vaccine each State would get and when it would arrive.  According to lots of public proclamations by various Trump officials the answers were "a lot" and "right away".  But when State officials queried their Federal counterparts they quickly learned that neither was true.

First, the figures put out publicly describing how many doses each State would get were far higher than the actual amount that was later officially promised and still later delivered to each state.  Second, they only learned how much vaccine they would be receiving late in the week before the vaccine would be arriving.

So, states were expected to get by with less.  And they couldn't plan ahead because they didn't know how much vaccine they would be receiving, two, three, or four weeks out.  That made it very hard for them to plan for the efficient distribution and administration of the vaccine they did receive.  It also led to hoarding.  If you don't know how much you are getting, then it seems like a good idea to hold back a lot of what you already have, "just in case".

But it turned out that the problems didn't end there.  Getting doses out of freezers and into arms turned out to be much harder than most predicted.  And it was not just the super-cold freezers that were required.  A key group that everybody prioritized were elderly people living in congregate care facilities.  

These people have a lot of physical and mental issues.  Many of them are bedridden.  Many of them get confused or upset easily.  You have to go to where they are and you have to provide a lot of extra TLC.  The result was that, for this group, each injection took about twice as long as forecast.

Plans for tight grouping and tiering also quickly broke down.  The "use it or lose it" characteristic (doses must be used within 6 hours of being "reconstituted") meant that careful plans must be made or many doses would be wasted.  Who was supposed to do this careful planning?  Overloaded and over-stressed State Health Departments and pharmacy chains like CVS and Walgreens.  What could possibly go wrong?

The situation that the incoming Biden team inherited was chaotic and underperforming everyone's expectations.  But the underlying problems were not that complex.  Can vaccine manufacturers accurately forecast their production rates?  The answer seems to be "yes".  That's the foundation underlying everything else.

As is typical, the Federal Government is actually doing very little itself.  Others "do" while the Federal Government directs and tracks.  Companies like Pfizer and Moderna manufacture the vaccine.  Companies like FedEx and UPS ship it.

It gets more complicated than that as we move vaccine doses closer and closer to people's arms.  But it is still a situation where this company or department performs a certain function.  The vaccine needs to be tracked as it moves down the chain.  Then patient information needs to move back up the chain so that we can track what's going on.

One current problem seems to be that long, elaborate, forms need to be filled out for each injection.  That's because in the early going health insurance companies and health care providers, the people who have the information the forms demand, were cut out of the loop.  That is starting to change.

I, for instance, am getting my vaccinations through my regular health care provider.  It already has all the information the forms require in its computer system.  I know others who have been able to work through their health care provider to schedule and receive their shots too.  That doesn't work for everyone.  But it works for most people.

We all know that the data is going to eventually end up in a computer somewhere.  Any data on a paper form will have to be keyed in at some point.  So why not do a computer-to-computer transfer in the first place?  It's faster, cheaper, and more accurate.
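As a purely hypothetical sketch (the field names and record layout are invented, and a real system would use healthcare data interchange standards and proper authentication, none of which is shown), a computer-to-computer report of one vaccination could be as simple as building and transmitting a small structured record:

```python
import json

# Entirely hypothetical record layout, for illustration only.
vaccination_record = {
    "patient_id": "example-12345",
    "vaccine": "COVID-19",
    "manufacturer": "ExampleCo",
    "lot_number": "LOT-0001",
    "dose_number": 1,
    "administered_on": "2021-02-03",
    "site": "Example Pharmacy #42",
}

payload = json.dumps(vaccination_record)
print(payload)
# In a real system this payload would be transmitted directly to the
# state or federal registry -- no paper form, no re-keying.
```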

Things are getting ironed out.  Some of this "ironing out" actually began before Trump left office.  But I expect things to accelerate.  Coordinating vaccine distribution is the easy part.  Compared to getting the healthcare.gov web site working, it is a trivial undertaking.

And collecting and reporting vaccination statistics accurately, and in a timely manner, is also not very complicated.  I expect all of these problems to be ironed out by the end of February.

Getting the vaccine to the states is already working pretty well.  Getting it from there to people's arms is a much more difficult problem.  We have seen progress in this area but much more needs to be done.  Only about 60% of shipped doses have been used, according to the most current CDC statistics.  On the other hand, people have had horrendous experiences trying to find and schedule an appointment.

One big contributing factor is that demand currently vastly outstrips supply.  There is no healthcare.gov-style one stop web site, for instance.  But the time has passed when it would make sense to create one.  Still, lots can be done that does not involve a federal web site.

The first thing the Federal government can do is to help states defray the cost.  There is money in the pipeline for this.  And more is coming if the Democratic "COVID" bill is enacted into law.  Even the Republican alternative contains additional funds to help the states with this.

But the federal government can also help with advice and various kinds of technical assistance.  With healthcare.gov, the Federal government went so far as to build the state piece for the states that wanted it to.  Many states took the Federal government up on the offer.  That's not possible in this situation.  But there is a lot the federal government can do to help.  One way or another, I expect this problem to be largely solved by the end of March.

That leaves the biggest problem of all, vaccine availability.  This is totally a Federal responsibility.  And it is the one that will take the longest to solve.  Vaccine makers know that they can sell everything they can make.  So they are making all they can already.

The Federal government can use the Defense Production Act to help the companies out.  While there's nothing that can be done immediately, there is lots that can be done over time.  Ramping up production can only be done so fast, no matter what you throw at the problem.  But the government can be very helpful down the line.

The amount of vaccine that will be produced is pretty much baked in for the next few months.  Production should increase substantially in the second quarter (April-June).  It can continue to increase in subsequent quarters.  I expect that supply will be pretty much in alignment with demand by the Fourth of July.  If we are lucky, we will be able to reach that goal by Memorial Day.

That should mean that everybody in the U.S. can get vaccinated before the Summer is over.  And it looks like the same will be true for Europe.  But the combined population of the U.S. and Europe constitutes only about 10% of the population of the world.  And it is the richest and most heavily resourced 10%.  This pandemic will not be under control until the world is vaccinated.  

The vaccines in use in the U.S. are expensive and hard to administer.  They are not the right tools for use in most of the world.  We need vaccines that are equally effective but much cheaper and easier to use.  There are some candidates.  But effectiveness is still a question.  As is cost.  And current world vaccine manufacturing capacity is woefully inadequate.

So, it looks like it will be 2022 or 2023 before the world is shot of this scourge.  And that's a big problem for all of us.  Variants are now popping up all over the place.  Currently the variant of most concern is one that was first identified in South Africa.  All of the vaccine candidates that have been tested against it show substantially reduced effectiveness.

So all is lost, right?  Actually, no.  First, current vaccines are very effective at keeping people out of hospital and, more importantly, at keeping them from dying.  The data currently available indicates that this is true even when the new variants are involved.  Secondly, vaccines can be tweaked.

Many of the vaccines and candidates are based on new technology.  Both the Pfizer and the Moderna vaccines use "mRNA" technology.  As such, they needed to be subjected to more scrutiny than would have been appropriate for a vaccine candidate that worked the old fashioned way.

We are now field testing these new approaches by injecting these vaccines into a lot of people, including me.  If, as expected, vaccines based on mRNA and other new technologies turn out to be safe and effective, then the technologies they are based on become not "new" but "proven".  Heightened scrutiny will no longer be appropriate.

A vaccine needs to be targeted.  One of the big advantages of these new vaccine technologies is that they can be targeted more precisely and more quickly than vaccines based on old technologies.  Vaccine makers that use new technology say that they can quickly and easily retune their vaccines to improve their effectiveness against the variants that are now popping up.

The approval process should take far less time once the basic approach has been proven out.  That means that vaccine makers think they can turn out "new and improved" versions of their vaccines within a few months.  And I believe them.

There is already talk that people like me, who will soon have completed the current process, may need a "booster" in six months to a year.  "New and improved" vaccines that are highly effective against the new strains, and the capacity to produce them at scale, should be ready by then.

I don't know whether this optimistic forecast will apply to the less wealthy parts of the world.  Work is moving forward on vaccines that are effective but also are cheap to make and easy to administer.  They just aren't ready yet.  When they do become available, some of their characteristics will be critical.

I'm not talking about the necessary attributes of being cheap and easy to administer.  I am talking about other attributes.  Will they come pre-tuned for the new variants?  Will they be easy to retune?  Will periodic booster shots be required?

This last attribute may be the whole game.  Periodically administering booster shots in the U.S. and Europe is relatively easy to pull off.  Having to periodically administer boosters to the entire world looks to be nigh on impossible.

There's hope.  But we are still a long way from being out of the woods on this one.