Friday, July 2, 2021

A Brief History of the Motion Picture

 This is something I like to do.  I am going to take you on a trip through the history of something.  But all I am going to do is talk about the evolution of the technology that underpins it.  Its positive or negative contributions to society; who does it well and who does it badly; what are good and bad examples of its use; all those questions I leave to someone else.

My subject, of course, is the moving picture.  And even if we include the entire history of still photography, the history we will be talking about only goes back about 200 years.  The technology that has enabled pictures to move has an even shorter history.  For most of this history, it has been at the bleeding edge of what was available at the time.  In order to establish some context I am going to start with a necessary precursor technology, photography.

The earliest paintings are tens of thousands of years old.  However, the ability to use technology instead of artistry to freeze and preserve an image only dates back to the early 1800s.  The key idea that started the ball rolling was one from chemistry.  Someone noticed that sunlight alone could change one chemical into another.  It soon became apparent that chemical compounds that contained silver were the best for pulling off this trick.  From that key insight, chemistry based photography emerged.

In the early days it quickly went through several iterations.  But by the middle 1800s one method had come to dominate still photography.  A thin layer of transparent goo was applied evenly to a piece of glass.  This was done in a "dark room".  The prepared glass plate was then inserted into a "magazine" that protected it from stray light.  The "film magazine" could then be stored, transported, and inserted into a "camera".

The meaning of the word "Computer" changed over time.  Originally, it meant a person who performed repetitive arithmetical and mathematical calculations.  In the mid-1900s its meaning changed to instead mean a machine that performed repetitive arithmetical and mathematical calculations.  The word "camera" underwent a similar transformation.

It started out referring to a simple device for focusing an image onto a surface.  By the mid-1800s it began being used exclusively to refer to a device used in photography.  A photographic camera consisted of an enclosed volume that was protected from stray light.  Its back was designed to accommodate the film magazine and its film.

At the front, and opposite the magazine area, was where a lens and a "shutter" were located.  The shutter normally remained closed but could be opened for short periods of time.  This would allow light to pass through the lens and land on the film at the back.

Cameras, film magazines, and the rest were in common use by the start of the Civil War in 1861.  The camera assembly was used to "expose" the film to an appropriate scene.  The film magazine was then used to transport the film back to the darkroom.  There it was "processed" so as to produce the final result.

Exposed film doesn't look obviously different from unexposed film.  Several processing steps are required to produce a picture of the original scene.  In the darkroom the goo side of the film is first exposed to a chemical bath that "develops" the film.  This causes the parts of it that had been hit with light in the camera to turn dark while the other areas remain transparent.  The developed goo is next exposed to a chemical bath containing a "fixer".  This step "fixes" the film so that subsequent exposure to light will not change it.

The result of these processing steps is film with an image of the original scene showing.  But it is a "negative" image.  The dark parts in the original scene are light and the light parts dark.  The image is also a "black and white" image.  It only contains shades of grey, no color.  And while this negative image is apparent and useful in some circumstances, it doesn't look like the original scene.

Fortunately, the fix is simple: put the film through additional processing steps.  Take a photograph of the negative, develop it, and fix it.  The result is a negative of a negative, or a "positive".  Black and white images can be very beautiful and emotionally evocative.  It took more than fifty years for photographers to be able to pull off color photography.

But what we have at this point is "still" photography.  Nothing is moving.  But the first "movie" soon appeared.  The initial method was developed in order to settle a bet.  When a horse is galloping is there any point when all four feet are off the ground?  A group of rich people decided that they were willing to pay good money to find out.

The man they hired tried out a lot of different things.  He quickly concluded that a galloping horse does spend some of its time with all four feet off the ground.  But how could he convincingly prove that?  The obvious answer was photography.  But he found that, while still pictures settled the question, they did not do it in a convincing manner.  More was needed.

So he set up a rig where a galloping horse would trip a bunch of strings.  Each string would be attached to its own camera.  As the horse galloped along it hit each string in sequence causing a photograph to be taken at that point.  One of those photographs showed the horse with all its feet off the ground.  But, as previously noted, simply viewing that photograph was not sufficiently convincing.

He then came up with a way of displaying his still pictures that was convincing.  He set up a device that would flash each still photograph in sequence.  And each photograph would only be illuminated for a fraction of a second.  He set his device up to continuously cycle through the entire set of photographs over and over.

If he operated his device at the right speed the horse appeared to be moving.  More than that, it appeared to be moving at the speed of a normal galloping horse.  By cycling through his roughly dozen photographs over and over he could get the horse to gallop as long as he wanted.  Then he could slow things down and "freeze frame" on the one picture that showed the horse with all four feet off the ground.  That made for a convincing demonstration.

This is considered to be the world's first moving picture.  But, from a practical point of view, it's a gimmick.  Still, something very important was learned.  If you flash a series of pictures on a screen at the right rate, then the eye, working in concert with the brain, will stitch everything together.  The brain can't tell the difference between a continually moving scene and a series of similar still pictures flashed one after another.

From here it was just a matter of putting all the right pieces together.  The first piece was "celluloid" film.  Cellulose is a natural component of plants.  If you start with the right kind of cellulose and process it with the right chemicals you get a thin sheet of transparent material.  It was possible to manufacture very long ribbons of celluloid film.

The same goo that had been applied to glass plates can be applied to celluloid.  The result is a long ribbon of celluloid film onto which images can be placed.  It is necessary to "advance" the film between exposures so that each separate photograph of the scene ends up on a separate adjacent part of the long ribbon of film.

And celluloid is somewhat flexible.  It could be wound up on a "reel", a spool of film.  It could also be fed through gears and such so that it could be "run" through a "movie camera" or a "film projector".  And it was much cheaper than glass.  It soon became the preferred material to make photographic film out of.  One problem solved.

The next problem was to come up with a mechanism that would quickly and precisely advance the film.  Edison, among others, solved this problem.  The key idea was one that had been around for a while.

If you fasten a rod to the edge of a wheel it will move up and down as the wheel rotates.  More complexity must be added because you want the film to advance in only one direction.  And you want it to advance quickly then freeze, over and over again.  But those were details that Edison and others figured out how to master.

So, by the late 1800s Edison and others were using moving picture cameras loaded with thin ribbons of celluloid film to take the necessary series of consecutive still pictures.  A matching projector would then do the same thing the horse device did, throw enlarged images of each picture on the film onto a "screen" (any suitable flat surface), one after the other.  The projector needed to be capable of projecting consecutive pictures onto the screen at a lifelike rate.  In the silent era that rate was roughly 16 frames per second; it would later be standardized at 24 frames per second.

And with that the "silent movie" business came into existence.  ("Moving picture" got quickly shortened to "movie".)  At first, a movie of anything was novelty enough to draw crowds to "movie houses", later "movie theaters", and still later just "theaters".  But people's tastes evolved rapidly.

Movies capable of telling stories soon appeared and quickly displaced the older films as the novelty of seeing something, anything, moving on a screen wore off.  "Title cards" were scattered throughout the film.  They provided fragments of dialog or short explanations.  Accompanying music, anything from someone playing a piano to a full orchestra, was also soon added.

The result was quite satisfactory but fell far short of realism.  The easiest thing to fix was the lack of sound.  Edison, of course, is most famous for inventing the light bulb.  It consists of a hot "filament" of material in an enclosed glass shell.  All the air must be evacuated from the shell for the lightbulb to work.  That's because the filament must be heated to a high enough temperature to make it glow.  If there is any air near the hot filament it quickly melts or catches fire.

Edison's key achievement was the invention of a high efficiency vacuum pump.  With a better vacuum pump the filament could be heated to the temperature necessary to make it glow without it melting or burning up.  His original filament material was a thin thread of burnt carbon.  Others quickly abandoned it for tungsten, but no one would have succeeded without the high quality vacuum Edison's pump was capable of.

Edison was an inveterate tinkerer.  Once he got the lightbulb working he continued tinkering with it.  Electricity was used to heat the filament.  It turns out that electrons were boiling off of the filament.  Edison added a "plate" off to the side of the filament and was able to use it to gather some of these electrons.  Moving electrons are what makes electricity electricity.  And this invention, a light bulb with a plate off to the side, was the foundation of the electronics industry.

Others took Edison's experiment a step further.  They added more stuff into the light bulb.  If a metal mesh "grid" was inserted between the filament and the plate, then if the grid was sufficiently charged with an electrical voltage it could completely cut off the electron flow.  If it had no charge then the electrons would pass through it freely.  If it was charged with a suitable lower voltage, then the flow of electrons would be reduced but not completely cut off.

Edison's "light bulb + plate" device  was called a diode because it had two ("di" = 2) components.  This new device was called a triode because it had three ("tri" = 3) components.  Charging the grid appropriately could stop and start an electric flow.  Intermediate amounts of charge cold allow more or less flow to happen.  Not much electric power needed to be applied to the grid to do this.  This is a long way of indicating that a triode could be used to "amplify" (make louder) an electric signal.

More and more complex devices were built with diodes, triodes, and newer "tubes", light bulbs with more and more components stuffed into them.  Soon, "electronics" could be made to do truly amazing things.  For instance, the signal from a "microphone", invented by Bell, the telephone guy, could be sent through electronics to loudspeakers (invented by lots of people) to create a "public address" system.  Now an almost unlimited number of people could simultaneously hear a speech or a theatrical performance.

Another device Edison invented was the "phonograph".  His original version was purely mechanical.  The energy in the sounds of a person speaking caused a wavy line to be etched in wax.  Later, a needle traveling along that same wavy wax line could be connected to a horn.  This arrangement would allow the original sounds to be reproduced at another time and place.

This was amazing but ultimately unsatisfactory for a number of practical reasons.   The first thing to be replaced was the wax.  Vinyl was sturdier.  Edison used a cylinder.  That got replaced by a platter.  Finally, the mechanical components got replaced by electronics.

Now a clearer and more complex sound like a full orchestra or a Broadway show could be played and replayed at a later time and in a later place.  Also, the "record" could be duplicated.  Different people could now listen to the same record at different times.   But people could also listen to different copies of the same recording.  A mass audience could now be reached.  By the late 1920s all this was in place so that it could be used to add sound to movies.

And, at first, that was what was done.  A phonograph record containing the sound part to the movie was distributed along with the film.  If the film and the record were carefully synchronized, and if a public address system was added to the mix, then the sound movie became possible.  The first successful example of pulling all this off was The Jazz Singer.

It was terrifically hard to pull off everything that was necessary to create the record.  The process of making the necessary recordings, then combining them appropriately into a finished record, was very difficult.  It also turned out to be hard to keep the film and the record in sync while the movie was playing.

As a result, The Jazz Singer is more accurately described as a silent movie with occasional sound interludes than it is as a true sound movie.  Much of the movie was just a traditional silent movie.  But every once in a while, the star would burst into song.  For those parts the audience heard not local musicians but Al Jolson, the star of the movie.  So, while it wasn't a very good movie, it was a terrific proof of concept.

This process used by The Jazz Singer and other early "talkies" was called "Vitaphone".  The "phone" part harkened directly back to the phonograph part of the process.  But something better was needed.  And it was needed quickly.  The success of The Jazz Singer had caused audiences to immediately start clamoring for more of the same.

Fortunately, the electronics industry soon came riding to the rescue.  Another electronic component that had been invented by this time was the "photocell".  A photocell would measure light intensity and produce a proportional electric signal.  Adding a photocell aimed at part of the film could turn how light or dark that part of film was into something that could be amplified and fed to speakers.

That solved the "theater" end of the process.  What about the other end?  Here the key component had already been invented.  A microphone could turn sound into a proportional electrical signal.  It was easy to turn this electrical signal into an equivalent pattern of light and dark on a part of the film.  Of course, electronic amplifiers (already invented) had to be added into the process at the appropriate points.

In the transition from silent to sound two changes were made to how film was put to use.  First, the film itself was sped up.  Instead of the roughly 16 frames per second of the silent era, 24 frames per second are used in sound films.  Second, a small portion of the film got reserved for the "sound track".

By having the projector shine a bright light through a narrow slot in front of the sound track part of the film, and by then amplifying the result and feeding it to speakers in the movie theater, a talkie would get its "sound track" from the film itself.  A separate record was no longer necessary.
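For the code-minded, here is a toy sketch of the optical sound track idea, written in Python and bearing no resemblance to the real hardware: audio amplitude is stored as how transparent the sound track strip is, and reading that transparency back at a constant rate as the film slides past the slit recovers the waveform.  Every name in it, and the 440 Hz test tone, is made up for illustration.

import math

def make_sound_track(samples):
    # Pretend "exposure": map audio samples (-1..1) to film transparency (0..1).
    return [(s + 1.0) / 2.0 for s in samples]

def read_sound_track(transparency):
    # Pretend "photocell": map transparency (0..1) back to audio samples (-1..1).
    return [2.0 * t - 1.0 for t in transparency]

# A 440 Hz test tone "recorded" onto the track and "played back".
rate = 24_000                       # samples per second sliding past the slit
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
recovered = read_sound_track(make_sound_track(tone))
assert max(abs(a - b) for a, b in zip(tone, recovered)) < 1e-9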

There was one little problem left.  The film must go through part of the projector in a herky-jerky fashion.  We move a picture into position, stop the film, open the shutter, leave it open for a while, close it, then quickly move on to doing the same thing for the next picture in line.  The sound track, however, requires that the film move past the pickup slot at a constant speed.  The solution turned out to be simple.

An extra "loop" of film is put in the gap between the part of the projector that unspools film off of the feed reel. and the shutter/lens area.  Another extra "loop" of film is put between the shutter/lens area and the part of the projector that feeds the film to the take-up reel.  The sound pickup slot is located just after this second feed point.  At that point the film is moving at a smooth, even speed.

This "extra loops" design has the advantage that the piece of film that has to move fast then stop is short.  This makes it easier for that mechanism to operate at the necessary speed.  All that is necessary is to place the sound that goes with an image a few inches ahead of it on the film.

On the other end of the process, the sound is handled completely separately from the pictures.  Even a camera used on a "sound" picture does not record sound; that is handled by separate equipment.  That's why Hollywood has used something called a "slate" for years.  It has a flat area on it where the name of the film, the "scene" number and the "take" number are marked.  Waving the slate in front of the camera before the actual scene is filmed makes it easy for the "editor" to know where a piece of film is supposed to go in the finished picture.

But with the advent of sound an extra piece called the "clapper" was added.  The last thing the person waving the slate does before he pulls it out of frame is to "clap" the clapper.  The moving clapper piece is easy to see in the film.  The intentionally loud "clap" noise made by the clapper is easy to hear in the sound recording.  This makes it easy to "sync" sounds to the pictures they go with.

During the phonograph era of sound movies all too often there was a delay between when a person's lips moved and when the audience heard the words they were saying.  This was caused by the record getting out of sync with the film.  Moving the sound from the record to a sound track on the film combined with the clapper system eliminated this problem.  It's too bad this problem didn't stay fixed.  I will be revisiting the "sync problem" below.

By about 1930 almost all of the movies coming out of Hollywood included a sound track.  And it turns out that some "color" movies came out in the period before Hollywood made the transition to sound.  There were only a few of them because the technique used was fantastically difficult and expensive to pull off.

Film itself doesn't care what color the images it carries are.  You shine a bright light through the film and whatever isn't blocked out ends up on the screen.  If the light passes through film that has some color in it, then that color will appear on the screen.  If there is no color in the film then what appears on the screen will all be in shades of black and white.

To make these early color movies artists hand-painted the color onto the film print.  That meant that every frame of the film had to be colored by hand.  And each print had to separately go through this difficult and time consuming process.  It was done but not often.  More practical alternatives were eventually developed.

The first relatively practical color process was called "three-strip Technicolor".  In the camera a device split the picture into three identical copies.  Each copy went a different path.  One path ended on film that had goo on it that was only sensitive to red.  Another path ended on film featuring green goo.  Still another path ended on film featuring blue goo.

The reverse was done on the projection end.  The process was complicated and hard to pull off.  It was eventually replaced by a process that needed only a single piece of film.  The film had multiple layers of goo on it.  There was a red layer, a green layer, and a blue layer.

The process of shooting the film, processing the film, and making prints of the film was difficult and expensive.  But nothing special was needed on the theater end.  They just ran the fancy film through their same old projector and a color picture appeared on the screen.

While all this was going on a separate effort was being made to replace all this film business with an all electronic system.  The decade of the '30s was consumed with making this all-electronic process work.  By the end of the decade limited success had been achieved.

Theoretically, the technology was already in place.  The photocell could act as a camera.  And a light bulb being fed a variable amount of voltage could stand in for the projector.  But neither were really practical.  You see, you'd need about 300,000 of each, one for each pixel.

The word "pixel" is now in common usage.  "Pixel" is shorthand for picture element.  If you divide a picture into rows and columns then, if you have enough of them, you can create a nice sharp picture by treating each separate point independently.  The first PC I owned had a monitor that had 480 rows, each consisting of 640 dots.  That means that the screen consisted of 307,200 pixels.

So with only 307,200 photocells and only 307,200 light bulbs a picture with a resolution similar to that of an early TV set could be duplicated.  And, of course, this would have to be done something like 24 to 30 times per second.  But that's not practical.  Something capable of standing in for those 307,200 photocells and those 307,200 lightbulbs would have to be found.  It turned out that the lightbulb problem was the easier of the two to solve.
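Here is the back-of-the-envelope arithmetic behind those numbers, as a few lines of Python (the frame rates are just the plausible range discussed above):

columns, rows = 640, 480
pixels = columns * rows
print(pixels)                  # 307,200 picture elements

for frames_per_second in (24, 30):
    # how many individual elements would need updating every second
    print(frames_per_second, pixels * frames_per_second)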

Start with a large "vacuum tube" (generic term for a lightbulb with lots of special electronic stuff jammed inside of it) with a flat front.  Coat the inside of the flat front with a phosphor, something that fluoresces when struck by a beam of electrons.  Add the components necessary for producing and steering an "electron beam" into the other end of the same vacuum tube.

Creating an electron beam turns out to be pretty easy.  Remember that the filament in a light bulb boils off electrons.  A custom filament can boil off a lot of electrons.  Electrons are electrically charged so they can be steered with magnets.

Connect the electron beam generating and beam steering components inside the vacuum tube to suitable electronics outside the vacuum tube but inside the TV set.  When fed suitable signals, they will steer the electron beam so that it can be made to repeatedly sweep across the screen in a series of lines.  The lines are made to sweep down the screen.  The intensity of the electron beam will also need to be precisely controlled.  And the whole process will have to be repeated many times per second.

The intensity of the electron beam is changed continuously in just the right way to "paint" an image on the flat part of the vacuum tube thirty times per second (in the U.S.).  This specialized vacuum tube came to be called a TV Picture Tube.  Add in some more electronic components, components to select only one TV "channel", pull the "video" and "audio" sub-signals out of the composite signal, etc., and you have a TV set circa 1955.

The other end is a variation on the same theme.  Again a vacuum tube with a flat front is used.  This time a special coating is used that is light sensitive.  As the electron beam sweeps across it, the coating is "read" to determine how much light has struck it recently.  More light results in more electrons residing at a specific spot.  These electrons are carefully bled off.  More light on a particular spot causes more electrons to bleed off when that spot is swept.

Making all this work was very hard.  But it was all working in time to be demonstrated at the 1939 New York World's Fair.  The advent of World War II put a halt to rolling it all out for consumer use.  Efforts resumed immediately after the end of the War in 1945.

Initially, none of this worked very well.  But as time went by every component was improved.  The first TV standard to be set was the British one.  They based it on what was feasible in the late 1930s.  So British TV pictures consisted of only 405 lines.  Pretty grainy.  The U.S. came next.  The U.S. standard was set in 1941.  U.S. TV pictures consisted of 525 lines.  The French came later and were able to set an 819 line standard.  So French TV pictures were much sharper than U.S. pictures.  And U.S. pictures were significantly sharper than British pictures.

But what about color?  The first attempt echoed the "three strip" idea that was originally used to make color movies.  It was developed by CBS.  They essentially threw the old black & white standard in the trash.  Their system separated the picture into red, green, and blue versions and sent them one after the other, in sequence.  On the other end the TV set would process each one separately before finally combining them back together into a color picture.

This system would have worked just fine if it had been adopted.  But it would have meant eventually replacing everything at both ends of the process.  And TV stations would have to broadcast separate black and white and color signals on separate frequencies until the old "black and white" TV set were a rarity.  Who knows?  Maybe we would have been better off if we had taken that route.  But we didn't.

But NBC was owned by RCA and RCA was the dominant player in the making and selling of TV sets, cameras, and the rest of the equipment needed to do TV.  If it could be done, they wanted to come up with a "compatible" way to do color.  They came up with a way to do it.

First, they found a way to sandwich additional information into the signal TV stations were broadcasting.  Critically, black and white TV sets would be blind to this additional information.  So, when a TV station started sending out this new signal, it looked just like the old signal to black and white TV sets.  They would keep working just as they always had.

But new Color TVs would be able to see and use this additional information.  The additional information consisted of two new sub-channels.  A complicated subtraction scheme was used that took the black and white signal as a starting point.  Color TVs were capable of performing the gymnastics necessary to put a color picture on the screen.
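For the curious, the core of that subtraction scheme looks roughly like the little Python sketch below.  The real NTSC encoding (YIQ) adds a scaling and rotation step that I am skipping, but the idea is the same: send a black and white signal plus two color-difference signals, let a color set solve for red, green, and blue, and let a black and white set simply use the black and white signal and ignore the rest.  The function names are mine, invented for illustration.

def encode(r, g, b):
    # Luminance: the weighted mix NTSC used for the black and white picture.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, b - y, r - y                    # Y plus two color-difference signals

def decode(y, b_minus_y, r_minus_y):
    b = y + b_minus_y
    r = y + r_minus_y
    g = (y - 0.299 * r - 0.114 * b) / 0.587   # solve the luminance equation for green
    return r, g, b

r, g, b = decode(*encode(0.8, 0.4, 0.2))
assert abs(r - 0.8) < 1e-9 and abs(g - 0.4) < 1e-9 and abs(b - 0.2) < 1e-9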

This probably made color TV sets more complicated than they would otherwise have needed to be had the CBS standard been used.  But by the mid '60s color TVs at a low enough price point for many consumers to manage became available.  And the "compatible" scheme allowed lots of people to stick with their old Black and White TVs well into the '70s.

At this time (mid '60s) RCA made NBC broadcast all of their prime time shows "in living color".  The other networks were forced to follow in short order.  The early sets delivered washed out color.  But it was COLOR so people put up with it.  By the mid '70s sets that delivered decent color were ubiquitous and cheap.  Unfortunately for RCA and the rest of the U.S. consumer electronics industry, many of these sets came from other countries.  Japan was in the forefront of this invasion.

Japan started out making "me too" products that duplicated the capabilities of products from U.S. manufacturers like RCA.  But they soon started moving ahead by innovating.  Japan, for instance, pioneered the consumer VCR market.  Betamax and VHS were incompatible VCR standards.  Both came out of Japan.  Betamax was generally regarded as superior but it was also more expensive.  VHS came to dominate the consumer market while Betamax came to dominate the professional market.

By this time the computer revolution was well underway and there was a push to go digital.  But the first successful digital product came out of left field.  Pinball machines had been popular tavern entertainment dating back at least to the '30s.  For a long time they were essentially electro-mechanical devices.  They were devoid of electronics.

But computers had made the transition from vacuum tube based technology to "solid state" (originally transistors, later integrated circuits) starting in about 1960.  By 1970 solid state electronics were cheap and widely available.  A company called Atari decided to do electronic pinball machines.

When making a big change it is smart to start with something simple, then work your way up from there.  So an engineer named Allan Alcorn was tasked to come up with a simple pinball-like device, but built using electronics.  He came up with Pong.  It consisted of a $75 black and white TV connected to a couple of thousand dollars worth of electronics.  Importantly, it had a coin slot, just like a pinball machine.

The Atari brass immediately recognized a hit.  They quickly rolled it out and revolutionized what we now call arcade games.  Arcade games started out in taverns.  You would put one or two quarters in and play.  The tavern arcade game business was small beer compared to what came after.  But grabbing a big chunk of that market was enough to make Atari into an overnight success.

And the technology quickly improved.  Higher resolution games were soon rolled out.  More complex games were soon rolled out.  Color and more elaborate sounds were soon added.  Soon the initial versions of games like Donkey Kong, Mario Brothers, Pac Man, and the like became available and quickly became hits.

The "quarters in slots in taverns" model soon expanded to include "quarters in slots in arcades", as arcades were open to minors.  But the big switch was still ahead.  The price of producing these game machines kept falling.  Eventually home game consoles costing less than $100 became available.  You hooked them up to your TV, bought some "game cartridges" and you were off to the races.  The per-machine profit was tiny compared to the per-machine profit of an arcade console.  But the massive volume more than made up the difference.

All this produced a great deal of interest in hooking electronics, especially digital electronics, up to analog TV sets.  This produced the "video card", a piece of specialized electronics that could bridge the differences between analog TV signals on the one side and digital computer/game electronics on the other.

In parallel with this was an interest in CGI, Computer Generated Images.  This interest was initially confined to Computer Science labs.  The amount of raw computer power necessary to do even a single quality CGI image was astounding.  And out of this interest by Computer Scientists came the founding in 1981 of a company called Silicon Graphics.  One of its founders was Jim Clark, a Stanford University Computer Science prof.

SGI started out narrowly focused on using custom hardware to do CGI.  But it ended up being successful enough to put out an entire line of computers.  They could be applied to any "computer" problem, but they tended to be particularly good at problems involving the rendering of images.  I mention SGI only to indicate how much interest computer types had in this sort of thing.

Meanwhile, things were happening that did not have any apparent connection to computers.  In 1982 Sony rolled out the Audio CD, also known as the Digital Audio Compact Disc, or the CD.  This was a digital format for music.  And it was intended for the consumer market.  Initially, it did not seem to have any applicability to computers or computing.  That would subsequently change.

The CD was not the first attempt to go digital in a consumer product.  It was preceded by the Laserdisc, which came out in 1978.  Both consisted of record-like platters.  Both used lasers to process dots scribed in a shiny surface and protected by a clear plastic coating.  The Laserdisc used a 12" platter, roughly the size of an "LP" record.  The CD used a 4 3/4" platter, similar to but somewhat smaller than a "45" record as a "45" is 7" in diameter.

In each case the laser read the dots, which were interpreted as bits of information.  The bits were turned into a stereo audio signal (CD) or a TV signal complete with sound (Laserdisc).  The CD was a smash success right from the start.  The Laserdisc, not so much.

I have speculated elsewhere as to why the Laserdisc never really caught on, but I am going to skip over that.  I'll just say that I owned a Laserdisc player and was very happy with it.  Both of these devices processed data in digital form, but eventually converted it into an analog signal.  When first released, no one envisioned retaining the digital characteristic of the information or connecting either to a computer.  The CD format eventually saw extensive use in the computer regime.  The Laserdisc never did.

So, what's important for our story is that digital was "in the air".  Hollywood was also interested.  Special effects were very expensive to pull off.  The classic Star Trek TV show made extensive use of the film based special effects techniques available when it was shot in the late '60s.  But the cost of the effects was so high that NBC cancelled the show.  It was a moderate ratings success.  But the ratings were not high enough to justify the size of the special effects budget.

When George Lucas released Star Wars in 1977 little had changed.  He had to make do with film based special effects.  There are glaring shortcomings caused by the limitations of these techniques that are visible at several points in the film.  But you tend to not notice them because the film is exciting and they tend to fly by quickly.

But if you watch the original version carefully, and you are on the lookout, they stick out like sore thumbs.  He went back and fixed all of them in later reissues.  So, if you can't find one of the original consumer releases of the film, you will have no idea what I am talking about.

He made enough money on Star Wars to start doing something about it.  He founded ILM - Industrial Light and Magic, with the intent of making major improvements in the cost, difficulty, and quality of special effects.  ILM made major advances on many fronts.  One of them was CGI.

Five years later, in 1982, a CGI heavy movie called Tron came out.  It was the state of the art in CGI when it was released.  Out of necessity, the movie adopted a "one step up from wire frame" look in most of its many CGI rendered scenes.  The movie explained away this look by making its very primitivity a part of the plot.

Tron represented a big improvement over what had been possible even a few years before.  Still, in spite of the very unrealistic rendering style, those effects took a $20 million supercomputer the better part of a year to "render".  At the time, realistic looking CGI effects were not practical for scenes that lasted longer than a few seconds.

CGI algorithms would need to improve.  The amount of computing power available would also have to increase by a lot.  But technology marches on and both things eventually happened.  One thing that made this possible was "pipeline processing".  The Tron special effects were done by a single computer.  Sure, it was a supercomputer that cost $20 million.  But it was still only one computer.

Computer Scientists, and eventually everybody involved, figured out how to "pipe" the output of one computer to become the input into another computer.  This allowed the complete CGI rendering of a frame to be broken down into multiple "passes".  Each pass did something different.  Multiple computers could be working on different passes for different frames at the same time.

If things could be broken down into enough steps, each one of which was fairly simple to do, then supercomputers could be abandoned in favor of regular computers.  All you had to do was hook a bunch of regular computers together, something people knew how to do.  The price of regular computers was plunging while their power was increasing.  You could buy a lot of regular computers for $20 million, for instance.  The effect was to speed the rate at which CGI improved tremendously.
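A toy stand-in for that arrangement, in Python: each frame goes through several "passes", and a pool of worker processes (standing in for the separate cheap machines of a render farm) chews through the frames in parallel.  The pass functions here are placeholders I made up, not real CGI.

from multiprocessing import Pool

def geometry_pass(frame_number):
    return "frame %d: geometry" % frame_number

def lighting_pass(scene):
    return scene + " + lighting"

def compositing_pass(lit_scene):
    return lit_scene + " + composite"

def render_frame(frame_number):
    # The output of one pass is "piped" in as the input of the next.
    return compositing_pass(lighting_pass(geometry_pass(frame_number)))

if __name__ == "__main__":
    with Pool(processes=8) as farm:                    # eight "machines"
        finished = farm.map(render_frame, range(240))  # ten seconds at 24 frames per second
    print(finished[0], "...", finished[-1])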

A particularly good demonstration of how fast CGI improved was a TV show called Babylon 5.  It ran for five seasons that aired from 1993 to 1998.  The show used a lot of CGI.  And it had to be made on a TV budget, not a movie budget.  Nevertheless, the results were remarkable.

The season 1 CGI looks like arcade game quality.  That's about what you would expect from a TV sized CGI budget.  The images are just not very sharp.  But year by year the CGI got better and better.  By the time the last season was shot the CGI looked every bit as crisp and clear as the live action material.  The quality of CGI you could buy for a fixed amount of money had improved spectacularly in that short period.

So, that's what was happening on the movie/TV front.  But remember SGI and the whole Computer thing?   As noted above, the first home computer I owned used a "monitor" whose screen resolution was only a little better than a black and white TV.  Specifically, it had a black and white (actually a green and white, but still monochromatic) screen.  The resolution was 640x480x2.  That means 640 pixels per line, 480 lines, and 2 bits of intensity information.

PCs of a few years later had resolutions of 800x600x8.  That's 800 pixels per line, 600 lines, and 8 bits of color information.  A clever scheme was used to allow this "8 bit resolution" to support a considerable amount of color.  For reference, a modern PC has a resolution of 1920x1080x24.  That's 1920 pixels per line, 1080 lines, and 24 bits of color information.  Typically, 8 of those bits are used to set the red level to one of 256 values.  The same 8 bit scheme is also used for green and for blue.  That's comparable in picture quality to a good "HD" TV.  But back to our timeline.
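The arithmetic behind those "x8" and "x24" figures, in Python (the frame size calculation at the end is my own back-of-the-envelope addition):

levels_per_channel = 2 ** 8             # 8 bits gives 256 levels for each of red, green, blue
total_colors = levels_per_channel ** 3  # 16,777,216 distinct 24 bit colors

width, height, bits_per_pixel = 1920, 1080, 24
bytes_per_frame = width * height * bits_per_pixel // 8
print(total_colors, bytes_per_frame)    # 16,777,216 colors; 6,220,800 bytes per frame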

The video capabilities of PCs increased rapidly as the '80s advanced.  Their capabilities soon easily surpassed the picture quality of a standard TV.  And SGI and others were rapidly advancing the state of the art when it came to CGI.  The later installments of the Star Wars films started using more and more CGI.  Custom "Avid" systems also became available.  They were built from the ground up for digital editing of film and video.

Meanwhile, custom add in "graphics" cards started to appear in high end PCs.  By this time games had leapt from custom consoles to the mainstream PC market.  And gamers were willing to spend money to get an edge.  As one graphics card maker has it, "frames win games".  If your graphics card can churn out more sharp, clear frames per second, then you will gain an advantage in a "shoot 'em up" type game.

These graphics cards soon went the SGI route.  They used custom "graphics processor" chips that were optimized for doing CGI.  And, as is typical of solid state electronic devices, they started out expensive.  Top of the line graphics cards are still quite expensive.  But they deliver spectacular performance improvements.  On the other hand, a decent graphics card can now be had for $50.

And, in another call back to SGI, which is now out of business, some supercomputers are now being built using graphics processor chips instead of standard "general purpose" processor chips.  Supercomputers built around graphics chips are not as fast as supercomputers made using general purpose chips.  But they are still damn fast, and they are significantly cheaper.

All these lines of development converged to produce the DVR.  TiVo brought out one of the first successful DVRs built for the consumer market in 1999.  It was capable of processing a TV signal as input.  It even had a "channel selector" like a regular TV.  It was also capable of outputting a standard TV signal.  What was in the middle?  A standard PC disk drive.  The TiVo translated everything to and from strings of bytes, which could be stored on disk.

The TiVo was a big improvement over a VCR.  A "guide" listing every showing of every episode of every show got updated daily.  This was possible because it had a standard PC style processor chip built into it.  All this made possible commands like "Record Jeopardy!".

It could also record one thing while you watched something else.  And you could watch shows you recorded in a different order than you had recorded them in.  And you could stop the show then restart it later without missing anything if the phone rang or someone came to the door.  And you could fast forward through the commercials.

Subsequent models permitted multiple shows to be recorded at once, even though they were being broadcast on separate channels.  Other features were added.  But the point is that, with the advent of the TiVo DVR, anything that could be done with analog TV equipment could now be done with hybrid analog/digital computer based equipment.

Leave that aside for the moment so that we can return to movies.  Recall that in 1977 an effects heavy movie like the original Star Wars was made without recourse to CGI.  But thanks to ILM and others, advances were starting to be made.  By 1982 a movie like Tron could be made.  What came later?  I am going to use the work of James Cameron as a roadmap.

Cameron was a brilliant artist who also understood technology thoroughly.  As a result, The Abyss, a movie released in 1989, seven years after Tron, showcases a spectacular CGI feat.  It included a short scene featuring a large worm-shaped alien.  The alien appeared to be a tube made entirely of clear water.

You could see through it to a considerable extent.  And bright things that were near it could be seen partially reflected in its surface.  And did I mention that the alien moved in an entirely realistic manner?  The alien was completely believable at all times.  The sophistication necessary to achieve this was beyond anything ever seen before.

The requirement for both translucency and reflectivity required much more computation per frame.  That's why he had to keep the scene short.  If he hadn't, the time necessary to make all those computations would have been measured in years.  As it was, it took months and a blockbuster sized movie budget to pull it off.

Two years later he was able to up the ante considerably.  Terminator II (1991) made extensive use of what appeared to be a completely different CGI effect.  When damaged, which turned out to be a lot of the time, the bad guy had a highly reflective silver skin.  In his silver skin form he was expected to run, fight, and do other active things.  And he had to move like a normal human while doing them.

The necessary computer techniques, however, were actually quite similar to those used for his earlier water alien effect.  Fortunately, by the time Cameron made Terminator II, he was able to create a CGI character who could rack up a considerable amount of screen time.  And he could do it while staying within the normal budget for a blockbuster, and while hewing to a production schedule typical for a movie of that type.  

The CGI infrastructure had gotten that much better in the interim.  And it continued to get better.  He wanted to make a movie about the sinking of the Titanic.  Previous movies about the Titanic (or any other situation where a real ship couldn't be used) had always used a model ship in a pool.  Cameron decided to use a CGI version of the ship for all the "model ship in a pool" shots.  Nowhere in Titanic (1997) are there any shots of a model ship in a pool.

It turned out to be extremely hard to make the CGI version of the ship look realistic enough.  The production ran wildly over budget.  The production schedule slipped repeatedly.  It seemed for a while like the movie would never get finished.   But, in the end it didn't matter.  Titanic was eventually finished and released.  It was wildly popular, so popular that it pulled in unbelievable amounts of money at the box office.

That experience ended up giving Cameron essentially Carte Blanche.  He used that Carte Blanche to create Avatar in 2009.  Again, making the movie cost fantastic amounts of money, most of which went to creating the CGI effects.  It was released in 3D and Imax.  Realistic visuals that stood up under those conditions were seemingly impossible to pull off.  But he did it.  And the movie was even more successful than Titanic.   It too earned more than enough money to pay back all of its fantastically high production cost.

But Titanic and Avatar were in a class by themselves due to their cost.  What about a movie with a large but not unlimited budget?  What did CGI make it possible to do in that kind of movie?  Two movies that came out within a year of each other answered the question.

The movies were What Dreams May Come (1998) and The Matrix (1999).  Both had large but not Cameron-esque budgets.  Regardless, both made heavy use of CGI.  But the two movies used CGI in very different ways.  Creative and unorthodox in each case, but very different.  Both movies affected their audiences strongly, but also in very different ways.

I saw both of them when they first came out.  After seeing them the conclusion I drew was that, if someone could dream something up, and then find the money (enough to fund an expensive but not super-expensive movie), then CGI was now capable of putting that something into a movie, pretty much no matter what it was.

And CGI has continued to get better, especially when it comes to cost.  Now movies and TV shows that make extensive use of CGI are a dime a dozen.  In fact, it is now cheaper to shoot a movie or TV show digitally than it is to use film.  This is true even if it has little or no need for CGI.

It is shot using high resolution digital cameras.  Editing and other post processing steps are done using digital tools.  It is then distributed digitally and shown in theaters on digital projectors or at home on digital TV sets (or computers or tablets or phones).  By going digital end-to-end the project is cheaper than it would have been had it been done using film.

Does that mean that there is nowhere else for the digital revolution to go?  Almost.  I can think of one peculiar situation that has arisen as CGI and digital have continued to get cheaper and cheaper, and better and better.

It had to do with the making of the movie Interstellar in 2014.  You see, by that point Hollywood special effects houses had easy access to more computing power than did a well connected and well respected theoretical physicist, somebody like Kip Thorne.

Thorne was so well thought of in both scientific and political circles that he had almost singlehandedly talked Congress into funding the LIGO project, the project that discovered gravitational waves.  LIGO burned through over a billion dollars before it made its first detection.  Congress went along with multiple funding requests spanning more than a decade based on their faith in Thorne.

Thorne's specialty was Black Holes.  But no one knew what a Black Hole really looked like.  The amount of computations necessary to realistically model one was a giant number.  The cost of that much computation was beyond the amount of grant money Thorne could get at one time.  And nobody else had any better luck getting approval to spend that much money, at least not to model a Black Hole.

But his work as a consultant on Interstellar granted him entrée to Hollywood special effects houses (and a blockbuster movie sized budget to spend with them).  The effects houses were able to run necessary computations and to use CGI to turn the results into video.

Sure, the ostensible reason for running the calculations was for the movie.  And the videos that were created were used in the movie, so everything was on the up and up.  But the same calculations (and video clips) could and did serve the secondary purpose of providing answers to some heretofore unanswerable serious scientific questions.  The work was serious enough that Thorne had no trouble getting it published in a prestigious scientific journal.

So we have now seen how movie production and TV production went digital.  That only leaves broadcast television.  The change was kicked off by consumer interest in large format TV sets.  Practicalities limit the size of a picture tube to around 30".  Even that size is hard to produce and very heavy.  Keeping a vacuum of that size under control requires strong, thick walls.  That makes them heavy.  The solution was a change in technology.

Texas Instruments pioneered a technology that made "projection TV" possible.  It soon reached the consumer market.  Front projection units worked not unlike a movie projector.  They threw an image onto a screen.  Front projection TVs just substituted a large piece of electronics for the movie projector.

Rear projection units fit the projector and the screen into a single box by using a clever mirror arrangement.  Rear projection systems could feature a screen size of up to about 60".  Front projection systems could make use of a substantially larger screen.

The color LCD - Liquid Crystal Display screen came along at about the same time.  Color LCD TVs became available in the late '80s.  Initially, they were based on the LCD technology used in laptop computers, so the screens were small.  But, as time went by affordable screens grew and grew in size.

But the important thing for our story, however, is that both technologies made it hard to ignore the fact that a TV image wasn't very sharp and clear.  And the NTSC standard that controlled broadcast TV made it impossible to improve the situation.

It was time to move on to a new standard that improved upon NTSC.  The obvious direction to move in was toward the PC.  With no NTSC standard inhibiting them the image quality of PCs had been getting better and better right along.  And the PC business provided a technology base that could be built upon.  The first serious move was made by the Japanese.

In the early '90s they rolled out a "high definition" system that was designed as the successor to NTSC and other TV standards in use around the world at that time.  This scared the shit out of American consumer electronics companies.

By this time their market share had shrunk and they were no longer seen as leading edge players.  They mounted a full court press in D.C.  As a result, the Japanese system was blocked for a time so that a U.S. alternative could be developed.  This new U.S. standard was supposed to give the U.S. consumer electronics companies a fresh chance to get back in the game.

U.S. electronics companies succeeded in developing such a standard.  It was the one that was eventually adopted the world over.  But they failed to improve their standing in the marketplace.  The Japanese (and other foreign players) had no trouble churning out TVs and other consumer electronics that conformed to the new standard.  The market share of U.S. consumer electronics companies never recovered.

That standard was, of course, SD/HD.  Actually, it wasn't a single standard.  It was a suite of standards.  SD - Standard Definition was a digital standard that produced roughly the same image quality as the old U.S. NTSC standard.  HD - High Definition produced a substantially improved image.  Instead of the roughly 640x480 picture of NTSC and SD, the HD standard called for 1920x1080.

And even this "two standards" view was an oversimplification.  HD was not a single standard.  It was a family of related sub-standards.  There was a low "720p" 1280x720 sub-standard, a medium "1080i" 1920x1080 (but not really - see below) sub-standard, and a high "1080p" 1920x1080 sub-standard.

The 1080i sub-standard used a trick that NTSC had pioneered.  (Not surprisingly, the TV people demanded that it be included.)  Even lines were sent during one refresh and odd lines were sent on the next refresh.  That means that only 1920x540 worth of picture needed to be sent for each screen refresh.  NTSC had actually sent only about 263 lines per screen refresh.  It used the same even lines then odd lines trick to deliver 525 lines by combining successive screens.

The 1080p "progressive" sub-standard progressively delivered all of the lines with each screen refresh.  That's how computers had been doing things for a long time by this point.  And this "multiple sub-standard within the full standard' idea turned out to be important.  It allowed new sub-standards to be added later.  Since then a "4K" (3840x2160 - 4 times the data but it would have been more accurate to call it "2K") and an "8K" (7680x4320) sub-standard have been added.

The original Japanese specification would have required the bandwidth dedicated to each TV channel to be doubled.  But the U.S. standard included digital compression. Compression allowed the new signal to fit into the same sized channel as the old NTSC standard had used.  

There is a lot of redundant information in a typical TV picture.  Blobs of the picture are all the same color.  Subsequent images are little changed from the previous one.  The compression algorithm takes advantage of this redundancy to throw most of the bits away without losing anything important.  The computing power necessary to decompress the signal and reproduce the original HD picture was cheap enough to be incorporated in a new TV without adding significantly to its price.
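To make the redundancy argument concrete, here is a toy Python illustration.  A scan line with long runs of identical pixels collapses into a short list of (value, run length) pairs.  The actual broadcast standard uses MPEG-style compression, which is enormously more sophisticated, but it exploits the same kind of redundancy within a frame and between frames.

from itertools import groupby

def run_length_encode(pixels):
    # Collapse runs of identical values into (value, run length) pairs.
    return [(value, len(list(run))) for value, run in groupby(pixels)]

scan_line = [17] * 500 + [203] * 1000 + [17] * 420   # 1,920 pixels, three flat "blobs"
print(len(scan_line), "pixels ->", run_length_encode(scan_line))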

The first commercial broadcast in the U.S. that used the new 1080i HD specification took place in 1996.  U.S. TV stations stopped broadcasting the old NTSC signal in 2009.  Adapters could be used that down converted HD signals into NTSC.  But few people bothered.  It was easier to just replace their old NTSC capable TV with a new cheap HD capable TV.

The widespread and rapid acceptance of HD resulted in an unexpected convergence.  A connector cable specification called HDMI came into wide use in the 2003-2005 time frame.  It was ideal for use with HD TV sets.  And the 1080p HD standard turned out to work well for computer monitors.

As a result, HDMI cables have become the cable of choice for both computer and TV applications.  HDMI cables rated to handle TV signals at "4K" resolution, or even "8K" resolution, are widely available.  They are well suited for use with even the most ultra-high resolution computer monitor.

It took a while, but we are all digital now.  Unfortunately this brought an old problem back.  In the digital world we now live in, the picture and the sound are back to taking different paths.  If everybody along the way is careful then everything is fine.  But all too often the sound and the picture get out of sync.

It most often happens on a live show where one or more people are talking from home.  Zoom, or whatever they use, lets the sound get out of sync with the picture.  If the segment is prerecorded this problem can be "fixed in post".  That can't be done if it is a live feed.  And, even if it can be fixed in post, all too often nobody bothers to do so.

I find it quite annoying.  But lots of people don't seem to even notice.  Sigh!