Saturday, August 22, 2020

COVID test pooling

 To be perfectly honest, this post exists to justify the writing of a computer program.  I have written two computer programs recently and, being weird, have thoroughly enjoyed the process.  I want to do more, but to do more I need a reason.  The reason just needs to be good enough to allow me to talk myself into writing the program.  Here's my new reason.

Pooling COVID tests is now a thing.  (In this post, if I say "COVID" I mean COVID-19, the disease caused by the SARS-CoV-2 virus, and not the other coronaviruses that exist.)  The FDA has recently authorized a commercial lab to pool up to four samples when testing for COVID.  So what does that mean and why bother?  Let's start with the "why bother" part.

The consensus among the experts is that we are not testing enough.  What's enough?  You want to test everyone entering a hospital who might be COVID-positive so that you can both treat them appropriately and know if hospital personnel need to take extra precautions to stay safe.  But that's the absolute minimum.

You also want to contain the spread of the virus because it is deadly to some people.  To do that you need to test the people who have come in contact with known COVID-positive people.  You also want to test people who come in contact with high-risk people.

Beyond that, hospitalization and other medical costs are very high for those who get seriously sick.  If you can find COVID-positive people early while they are not very sick maybe you can keep some of them from getting seriously sick.  If nothing else, this would help to minimize medical costs.

Then there are various behavioral measures.  No one likes any of them but the impact varies tremendously between measures.  It would be nice to know which measures work and which are a waste of time.  That means you need to know who is getting infected and how they got infected.  That requires testing.

And ideally you need to test people you don't suspect of being positive.  That's the only way you discover that something unexpected is happening.  The experts have crunched the numbers.  They know how much testing is needed and who should be tested if we want to get a handle on this.

We aren't doing either.  We aren't testing enough and we aren't testing the right people.  As a result the U. S. is far and away the country that is doing the worst job of containing the virus.  The devastating impact this has on the economy is just the beginning of the damage this is causing.

Everybody wants to see the introduction of medical measures that would help.  A vaccine that would completely block infection is what most people are focusing on.  But if we had measures that reduced the rate at which people need to be hospitalized, reduced the severity of the care they need once they get there, or reduced the death rate, any or all of these would help a lot.

But we have made at best modest progress on any of these measures.  And it looks like we aren't going to have access to any new game changing measures soon.  It could easily take months, maybe even a year or more, for the situation to improve substantially.

Testing is the first component in an effective containment strategy that is based almost entirely on non-medical measures.  You need to test everyone who might have the virus.  Then you need to isolate them so they can't spread it to anybody else.  You also need to do contact tracing, figuring out who that person has been in contact with so you can test them.

Right now, the U. S. is employing a "none of the above" strategy.  We aren't testing enough people.  We are not effectively isolating those who test positive.  We aren't contact tracing effectively.  All three of these things are much easier to do if you only have a few cases to deal with.  But we have more than 50,000 confirmed new cases per day and who knows how many additional unconfirmed new cases.

For the remainder of this post I am going to ignore much of what's going wrong and concentrate on how we can increase our testing capacity.  Right now, about half of the roughly 700,000 tests we are doing per day are done by commercial labs.

Demand is so high that it is taking them, on average, the best part of a week to return results.  I am going to skip over all the reasons why such slow turnaround is bad and concentrate on what could be done to increase capacity.  Hopefully, more capacity would result in quicker turnaround.

The obvious strategy is "more".  Just increase the capability of doing what we are currently doing so that we can do more of it.  That has been in the works since at least March and, while these labs have been able to ramp up capacity, they haven't been able to ramp it up fast enough to keep up with rising demand.  So we want to do as much of this as we can but we also need to think about trying other things too.

Before I delve into test pooling, the subject that justified my latest foray into computer programming, I am going to dive into biology.  I am going to discuss the biology of COVID.  It's a virus.  That means it can't exist and reproduce on its own.  It needs the cells of a plant or animal.  So let's dive into a little cell biology.

Cells are incredibly complex things.  I avoided Biology like the plague while I was going to school because everything in Biology is complex.  I am going to spare you as much as I can by only talking about a few key things about cells.  Cells have an inside and an outside.  In human cells the two are separated by a cell membrane.  (Plants and bacteria have cell walls; our cells make do with the membrane.)  Like everything else in Biology, the cell membrane is very complex all by itself.  One of its big jobs is to act as a gatekeeper.

Cells need to take stuff in if they are to do their thing.  They also need to excrete other stuff.  I am going to ignore the latter and only talk about the former.  There is lots of stuff outside the cell, much of it bad or dangerous.  One of the jobs of the cell membrane is to keep that stuff out while letting in the stuff the cell needs.  This is done by "receptors", specialized parts of the cell membrane.

A typical cell membrane has dozens of different types of receptors.  Each receptor has a specialization.  And they all use a "lock and key" system.  If a chemical has the right "key" it inserts it into the "lock" part of the receptor.  This causes the receptor to behave like a door.  Once activated, the door opens, sucks the key and whatever is attached to it into the cell, then immediately slams back shut.

Viruses typically have a "spike protein" poking out of them.  It's the "key" part.  The spike protein on the COVID virus fits the lock part of a receptor with the scientific nickname of "ACE2".  (Trust me -- you don't want to know what the real name of this particular receptor is.)  Different viruses have different keys, different spike proteins.  These keys operate the locks of different receptors.  That is one of the things that makes one virus different from another virus.

Once a virus finds a cell with the right lock it uses its key to enter the cell.  It then takes over the cellular machinery and reprograms it to make copy after copy of itself.  When enough copies have been made the cell literally bursts apart and that frees the many viruses that the cell has manufactured to roam around looking for another cell with the right receptor on its surface.

Viruses make us sick in two different ways.  Most obviously, a cell that drops what it would normally do so that it can instead make copies of a virus is not doing what it is supposed to do.  That's bad.

But the body has various systems that are designed to notice this sort of thing and react accordingly.  If these reaction mechanisms work the way we want them to then the body contains the damage and things go back to normal.  Obviously, the body can underreact and fail to contain the infection.  But the body can also overreact.

For instance, part of the reaction may be to raise the body's temperature, giving the patient a fever.  This often helps the body to fend off an infection.  But, if your body's temperature gets raised too high for too long, this "fever" response can kill you.  Inflammation, another common bodily reaction to invasion by an infectious agent, can also get out of control and kill you.  The list goes on.

This is how viruses work and COVID is no exception.  One of the things that makes COVID particularly hard to deal with is that there are ACE2 receptors on many types of cells in the human body.  COVID doesn't care what kind of cell it encounters.  It only cares whether it has an ACE2 receptor or not.

The two most common places COVID finds cells that have ACE2 receptors are cells in the lining of the nose and cells in the lungs.  But it sometimes finds itself attacking cells that are found in the liver, the kidneys, the walls of blood vessels, the brain, and more.  To paraphrase Willie Sutton, the famous bank robber: "that's where the ACE2 receptors are".

Now let's apply what we now know to COVID tests.  The current "Gold Standard" test is shorthanded as the "PCR test".  PCR is a lab technique the test uses.  The technique can be tuned to an extreme degree to look for a specific kind of genetic material.  One bottleneck associated with PCR COVID tests is that only reagents that have been fine-tuned to respond to COVID and nothing else can be used.

A swab collects some goo from the walls of your nose.  Deeper is better for some reason so they swab deep into your nasal passages.  This swab is then subjected to the COVID specific version of the PCR process.

The result is that, if the swab contains COVID genetic material, then that material will be amplified to millions of times its original concentration.  At that point it is easy to detect.  And the PCR process is unbelievably specific.  It only amplifies the specific genetic material it has been customized to amplify and nothing else.

This process is complicated and takes time.  But it is a standard process.  Lots of labs use it routinely to study all kinds of things.  You need the right kind of machine.  You need the specific stuff that makes the process home in on COVID and nothing else.  But that's it.  Typically, the process takes a couple of hours.

So what's the problem?  It turns out that there are plenty of machines around.  But the supplies, often referred to as "reagents", are hard to come by.  And the machines can be used in lots of different ways.  If you are running COVID tests you are not running other kinds of tests and vice versa.  But apparently, the COVID testing bottleneck is reagent supply and nothing else.

Commercial labs can't get the quantities of reagents they need so they ration.  And the way rationing manifests itself is in its effect on turnaround time.  They hold up processing swabs until they have the reagents they need.  I got a PCR based COVID test.  I was swabbed at 2:15 PM on a Tuesday and I viewed my result by entering a code into a web site at 9:30 AM the next day.

My test was processed locally and not by a commercial lab.  The lab that processed my test used the same equipment, reagents, and process, that a commercial lab would, so the result was just as accurate and reliable.  But my test only had to travel from one place in Seattle to another place in Seattle.  There they had sufficient supplies on hand to process it immediately.

With commercial labs, samples typically need to be shipped across the country.  Fast turnaround in this environment is 24-48 hours.  That should be possible but currently is not.  And now that the commercial labs have a reputation for slow turnaround, fewer samples are being sent to them.  It's a vicious cycle at this point.

And if you have COVID in your nasal passages but nowhere else you are, at worst, only slightly sick.  The PCR test does not determine if you have COVID in your lungs, or anywhere else, for that matter.  There are other tests for that.  But before discussing them I want to talk about another "nasal" test.  It's the test the White House is using and it's called an "antigen" test.

The body's response to invasion by a virus or other "foreign invader" is complex.  A quick terminology note: an "antigen" is a telltale piece of the invader itself, typically a protein sticking out of the virus's surface.  The body responds to antigens by creating antibodies, specialized chemicals whose job is to latch onto a specific antigen and take out the foreign invader.  And they are more or less customized.  You wouldn't want them attacking things that are supposed to be in your body.

Scientists know a lot about antigens and how the body's antibody response works.  And antigen-detecting chemicals, essentially lab-made antibodies, can be manufactured in advance.  This means it is possible to build a chemical that "lights up" when it comes in contact with a particular type of antigen.  Done right, the antigens on the COVID virus itself are what should set this chemical off.  So they set about making such chemicals.

One thing they can do is to design the chemical so that it includes something that fluoresces when activated.  It is easy to detect light of a certain color even when there isn't much of it.  And the "color" doesn't even need to be visible to the human eye.  It can be, for instance, in the ultraviolet band.  As long as there is a gadget that can detect it at low levels we are good.

So an "antigen test" contains chemicals that have been customized to look for COVID antigens and have been designed to do something that is easily detected when the encounter happens.  The problem is that these tests are not 100% accurate.  They can be set off by antigens that are similar to COVID antigens but not only and exactly COVID antigens.

That apparently is what happened recently.  A governor tested positive according to the White House antigen test but tested negative according to a PCR test.  The advantage of antigen tests is that they are quick and cheap.  These kinds of problems are why these tests have not been widely adopted.  Should they be?  Maybe.  Maybe not.

Both the PCR and the White House test have other problems.  If a person has been infected but it's early days then both will give a negative result.  There isn't yet enough of what they are looking for to set them off.  They have another potential problem.  What if the disease has run its course?  Then there is no remaining virus and a PCR test will return a negative result.  I'm not sure whether the White House test will also come up negative in this situation or not.

These two tests also assume that everybody who has COVID has COVID viruses in their nasal passages.  That's mostly true.  But the virus can go straight to the lungs without transiting through the nasal passages.  This scenario is unlikely but not impossible.

In fact, if it were easy to contract COVID by touching surfaces there should be lots of paths to infection that do not involve the nose.  We are constantly bombarded with entreaties to continuously wash surfaces, hands, etc.  But very few cases have turned up where there is no COVID in the nasal passages.  That's solid evidence that the vast majority of COVID infections are caused by airborne transmission.

There is still another kind of test.  Like the others, it has its advantages and its disadvantages.  It is called an "antibody" test.  It is like the antigen based test in that it takes advantage of the complex response the body has to infection.  It looks for antibodies manufactured by the immune system to deal with infections.

Antigen, antibody, what's the difference?  And, while I am at it, let me throw another word at you, "enzyme".  For our purposes, these are all the same.  The specifics are quite different.  But they are similar in the ways we care about.  They are cheaper than a PCR based test.  They deliver an answer much more quickly.  Many of them use equipment and supplies that are widely available.  In other words, they scale up easily.  And lastly, they tend to be far less accurate than a PCR test.

I'll get back to them later but let me now return to the PCR test.  It is currently the most used test, in part because it produces a highly reliable result.  And, because it is so accurate, only results from this test are included in the widely reported "number of COVID tests performed" number that news reports often feature.

And, if you have been following the news, the reported number of tests is trending in the wrong direction.  Experts agree that we are not performing nearly enough tests.  So we want this number to be increasing rapidly instead of decreasing.  What can we do?

That's where the computer program I wrote comes in.  One widely circulated idea is called "sample pooling".  Let's say you mix together some of the raw material from ten swabs into one blob, then you run a PCR test on it.  If the test comes back negative then you know immediately that all ten people are negative.

You have, in effect, multiplied the reach of performing a single PCR test by a factor of ten.  So why don't we set about pooling right away?  After all, it would immediately increase the effective number of PCR tests we can do by a factor of ten.

Well, things are only this simple if the test from the pooled sample comes back negative.  In the real world sometimes it will come back positive.  What happens then?  Is there still a benefit to sample pooling and, if so, how much?  The answer is "it depends".  And the dependency is complicated.  And that's why I wrote my computer program.

My program performs what is called a Monte Carlo simulation.  That's right.  The technique is named after the famous Casino at Monte Carlo, where the rich and famous go to gamble (and gambol) in Europe.  Why?  Because the technique is based on gambling.  In this case, instead of a roulette wheel we use a random number generator.

I use my computer program to answer "what if" questions.  In the real world we want to know who has COVID and who doesn't.  Going in we don't know which ones do and which ones don't.  We don't even know what percentage of a larger group are COVID-positive.  We have to test everybody to find out what the answers are.  But we currently can't test everybody.

Fortunately, I only wanted to find out if sample pooling is a good idea or not.  Additionally, it would be nice to find out which situations it is useful in and which situations it isn't.  I devised a computer program that shed some light on those questions.

In setting about writing my program I had an intuition that the usefulness of pooling would depend on what percentage of "patients" that we were "testing" were COVID-positive.  So I said "let's assume that X% are positive.  How well would pooling work in that situation?"  I wrote a computer program that would give me the answer to that question.

This may seem artificial but bear with me.  An area where I was able to make the program behave in a very realistic manner had to do with how I had it determine which "patients" were COVID-positive and which weren't.  I fed some parameters to it that allowed the program to calculate how many "patients" it would need.  Let's say that it calculated that it would need 1,000 "patients".

For each of these 1,000 "patients" I had the program individually compute a "random" number.  (I have a reference book that devotes 150 pages of dense prose to the business of having computers generate what are technically called pseudo-random numbers.  Let me leave it at, the process I used was sufficient to the task.)  The "random" number was then manipulated in such a way that the likelihood of it coming up "positive" exactly matched the "Pct Pos" (percent positive) number that I had input.

So each "patient" was randomly assigned a status of either positive or negative.  But the percentage of "patients" in the group as a whole that ended up being flagged as COVID-positive ended up close to the target percentage I had input.  If the process is truly random then you rarely hit your target dead center.  So, for a group of 1,000 patients, I would expect about 100 of them to come up positive, if my "Pct Pos" was 10 (10%).
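A rough Python sketch of this patient-generation step might look like the following.  (This is illustrative only, not the actual program; the function name and the use of a seed are my own choices.)

```python
import random

def make_patients(n, pct_pos, seed=None):
    """Randomly flag each of n simulated patients as COVID-positive
    with probability pct_pos (e.g. 0.10 for a "Pct Pos" of 10%)."""
    rng = random.Random(seed)
    return [rng.random() < pct_pos for _ in range(n)]

# 1,000 simulated patients at a 10% positivity rate.  The count of
# positives lands near, but rarely exactly on, 100.
patients = make_patients(1000, 0.10, seed=42)
print(sum(patients))
```

Fixing the seed makes a run reproducible, which is handy when you want to rerun the same scenario while debugging.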

This process mirrored the real world where there is a certain amount of randomness associated with whether any specific person gets COVID.  And we have historical data that tells us what percentage of various large groups of people have ended up testing positive for COVID.

My program simulated this situation with a high degree of fidelity.  This method of applying a random number generator to individually model some attribute of each item in a group of items is what the Monte Carlo technique consists of.

I ran a bunch of scenarios through my program.  In each scenario I input specific values for the parameters (like "Pct Pos") that I was investigating.  That allowed me to try out a bunch of different scenarios to see what happened in each.

And I didn't run each scenario I investigated just once.  I wanted to see what happened "on average".  So, I ran each of my scenarios ten times and then averaged the results.  And I ran 21 different scenarios.  In all, I ran a total of 210 independent Monte Carlo simulations by the time I was done.

In each case, the program went through a setup stage where it created as many "patients" as it needed for a particular scenario.  It then modeled how things would turn out using sample pooling.  Specifically, it calculated how many tests would be required to determine the COVID status of all "patients".  This number was then compared to how many tests would be required to just test everybody once.

There are a number of subtle effects that come into play so the results were sometimes not what you would think they would be.  (This kind of unexpected behavior is why Monte Carlo simulation is a popular way of exploring any situation complex enough to be hard to analyze directly.)  But the simulations answered the basic question, "how much could we stretch our limited testing capacity by doing things this way or that way".

For instance, in one scenario I assumed that the positivity rate was 10%.  Why?  Because it's a nice round number.  (It is also lower than the positivity rate we have seen in some real world cases.)  I used a "Pool size" of 20.  That means I was simulating mixing 20 samples together and then performing a single test on the resulting mixture.  Then I simulated 50 pools.  That meant I had 1,000 (20 x 50) simulated patients.

The program divided the "patients" up into 50 pools of 20 and then examined each pool individually to see if it contained at least one "patient" who had tested positive.  If so, then I assumed the real world equivalent would have caused that pool sample to come up positive.  Then the program counted up how many of the pools turned out to have at least one positive "patient" in it.

In the real world it would take fifty tests to check all the pools.  All the patients who were in a pool that came out negative could confidently be declared negative without even needing to do any further testing.  So, at that point in our simulation we have done 50 tests.  But we are not done yet.

In the real world we would have had to go back and test every individual who was in one of the pools that came up positive.  The positive pool tests said that somewhere between one and twenty of the people in that specific pool were positive but it didn't tell us who was who.  (I had the program conveniently forget that it actually knew which "patients" were positive and which weren't because in the real world the test lab wouldn't have had this information.)  Thus, I had the computer program calculate that it would take another 20 tests for each "positive" pool.

Computers are good at math.  So, all this took the blink of an eye.  I had the program use this logic to tote up how many tests would be required for each of my simulated sets of patients.  I then compared this to the simple method of just testing all the patient samples individually in the first place.  The improvement was the amount by which the calculated number of tests differed from the number of patients in the group.
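The test-counting logic just described can be sketched in a few lines of Python.  (Again, this is a sketch of the idea, not the actual program.)

```python
def pooled_test_count(patients, pool_size):
    """Tests needed under single-level pooling: one test per pool,
    plus pool_size follow-up tests for every pool that contains at
    least one positive patient."""
    tests = 0
    for i in range(0, len(patients), pool_size):
        pool = patients[i:i + pool_size]
        tests += 1              # one PCR test on the mixed sample
        if any(pool):           # the pooled sample comes back positive...
            tests += len(pool)  # ...so everyone in it gets retested
    return tests

# 40 patients, exactly one positive, pools of 10:
sample = [False] * 40
sample[3] = True
print(pooled_test_count(sample, 10))  # 4 pool tests + 10 retests = 14
```

Compare 14 tests to the 40 that individual testing would need and you can see where the savings come from, and also how they evaporate once lots of pools come up positive.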

When I actually ran the simulation I have just described ten times then averaged the results the outcome was disappointing.  Going through the elaborate pooling process only cut the number of tests we would need by 9%.  Hardly worth the trouble.

But with my handy computer program I could plug different numbers in and see what happened.  If we cut the percentage of our patients that were positive down to 5% then we got a 31% improvement.  Better, but not exactly life changing.  So let's look at some other scenarios.  BTW, this is what scientists and engineers call "exploring the parameter space".  What if we made this bigger?  What if we made that smaller?  As you are about to learn, it is often hard to guess whether a change will make things better or worse.

So, if we stick with a 10% positivity rate but change the pool size to 10 samples per pool then the improvement goes from 9% to 22%.  The FDA has approved pooling four samples together in the case of a particular test.  So going all the way down to a pool size of 4 but leaving the other parameters the same gives us a 42% improvement.  Tests go almost twice as far.

If we return to assuming the positivity is 5% then moving from a pool size of 20 to a pool size of 10 increases our efficiency to 51%.  Tests now actually go twice as far.  Decreasing the pool size to 4 increases our efficiency modestly.  It goes to 55%.  See!  Things change in unpredictable ways.

What if our positivity rate is only 2%?  Then if we use a pool size of 20 we get a 61% improvement in efficiency.  This is the best result yet.  However, a pool size of 10 gives us a 73% improvement.  That's almost a 4 - 1 improvement.  So going down to a pool size of 4 must be a good idea, right?  Nope.  The efficiency drops to 67%.  See!  You don't know how well it's going to work until you do it (or simulate it, in our case).

I also explored what happened if the positivity rate was only 1%.  For a pool size of 20 I got a 73% improvement.  For a pool size of 10 I got a 79% improvement, almost 5 - 1.  Again, going down to a pool size of 4 was a disappointment.  The improvement was 71%.  Live, or in this case simulate, and learn.
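For what it's worth, these simulated numbers track a simple closed-form expectation reasonably well.  (This is my own back-of-the-envelope check, not something the simulation computed.)  With positivity rate p and pool size k, the expected number of tests per patient is 1/k for the pool test plus 1 - (1-p)^k for the chance the whole pool must be retested:

```python
def expected_improvement(p, k):
    """Expected percentage reduction in tests per patient for
    single-level pooling with positivity rate p and pool size k."""
    tests_per_patient = 1.0 / k + (1.0 - (1.0 - p) ** k)
    return round(100 * (1.0 - tests_per_patient))

# Rows: positivity rate; columns: pool sizes 20, 10, 4.
for p in (0.10, 0.05, 0.02, 0.01):
    print(p, [expected_improvement(p, k) for k in (20, 10, 4)])
```

The formula also makes it clear why there is a sweet spot: big pools amortize the pool test over more patients, but they are also more likely to catch a positive and trigger a full retest.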

While I was at it I wanted to explore another idea that was floating around.  What about pooling two different ways?  We would have two sets of pools, the "A" set of pools and the "B" set of pools.  A part of each sample would be added to a pool in each set.

Call all the patients in a specific "A" or "B" pool a "cohort".  There are various ways to make sure that for every patient there is no overlap between the other members of the cohort of whatever "A" pool the patient ends up in and the other members of the cohort of whatever "B" pool the patient ends up in.  Ideally, this would allow us to figure out which patients are positive without having to do any patient level testing.

Let's create an entirely artificial situation in order to demonstrate how this works.  Let's say we have 100 patients and exactly one of them is positive.  Let's say further that patient #1 is our positive patient and patients 2-100 are negative.  This forces us to use a pool size and pool count of 10.  In the "A" pool let's say patient #1 ends up in pool A-1 so it comes up positive when it is tested.  Pools A-2 through A-10 all come up negative so we immediately know that patients 11-100 are negative because we used a simple "group by tens" rule to decide which "A" pool a patient ends up in.

Now let's assume that patient #1 ends up in pool B-5.  (Here we used a more complicated rule.  It doesn't matter what it is as long as it works.)  We have carefully arranged things so that patients 2-10 all ended up in some pool other than B-5.  So when we untangle things we find that we have done 20 tests.  And those tests have allowed us to determine that patient #1 is positive and all the other 99 are negative.  Twenty tests have allowed us to tell the tale for all 100 patients.  We have improved how many patients we can test by a factor of 5.  That would be great.
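This grid idea is easy to sketch in code.  (My sketch uses zero-based indexing and a simple "index mod 10" rule for the "B" pools, which is a bit simpler than the arrangement described above, but it has the same no-overlap property: two patients who share an "A" pool never share a "B" pool.)

```python
def grid_pools(patients, n):
    """Assign n*n patients to "A" pools (group by n's: rows) and
    "B" pools (index mod n: columns), then report which pools
    contain at least one positive patient."""
    a_pos = [any(patients[r * n + c] for c in range(n)) for r in range(n)]
    b_pos = [any(patients[r * n + c] for r in range(n)) for c in range(n)]
    return a_pos, b_pos

# 100 patients, only patient #0 positive, in a 10 x 10 grid:
cohort = [False] * 100
cohort[0] = True
a_pos, b_pos = grid_pools(cohort, 10)

# With a single positive patient, exactly one pool in each set
# lights up, and their intersection pinpoints the patient:
positive_patient = a_pos.index(True) * 10 + b_pos.index(True)
print(positive_patient)  # 0 -- identified with just 20 pool tests
```

Note that this clean "intersection" trick only works when positives are rare; with several positives the intersections become ambiguous and some individual retesting is needed.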

But, of course, the real world, and even my simulated world, is more complicated.  So, I am going to have pity on you.  I'll spare you the details and just cut to the chase.  (The "chase" is where I tell you how things came out.)  It turns out that things are a lot simpler if we use identical numbers for both the pool size and the pool count.

This is not as big a problem as it sounds.  We can just run a bunch of batches.  If we want to test 1,000 patients and we are using a pool size and count of 10 then each batch contains 10 x 10 = 100 patients.  To get to the same 1,000 total patients we just divide the 1,000 patients into 10 batches of 100 each and proceed.  So, in the end, making things work when the pool size and count are different does not turn out to be worth the trouble.  So I didn't do it.

I incorporated the more complex logic that handles things correctly for two pool scenarios into my computer program and ran some two pool scenarios through it to see what would happen.  As I had before, I ran the same scenario ten times and averaged the results.  I started with a 25 x 25 scenario, 25 pools of 25 samples each.

If our positivity rate is 10% then we actually go backwards.  It takes 1% MORE tests to get a result for all 625 patients than it would if we just tested each one of them individually in the first place.  Things get better when we drop the positivity rate down to 5%.  Then we get a 26% improvement.  Not much but better than going backwards.  Dropping the positivity rate to 2% gives us a 54% improvement.  Dropping it to 1% gives us a 70% improvement.  Now, we're getting somewhere.

If we use 10 x 10 pools, then at a 5% positivity rate we get a 43% improvement.  At a 2% positivity rate we get a 76% improvement.  But, if the positivity rate drops to 1% we only match the 76% improvement we got at 2%.

I then moved on to a 4 x 4 scenario.  It made no sense to try high positivity rates as there are only 16 patients in the pool so I only tried 2% and 1%.  A 2% positivity rate yields an improvement of 55% and a 1% positivity rate of 73%.

So what does all this tell us?  It tells us that sample pooling is a big waste of time if the positivity rate is high.  Using the single pool scheme we got about a 2 for 1 increase with a positivity rate of 5% and pool sizes of 10 or 4.  A pool size of 20 was a waste of time.  At 2% and 1% positivity rates we could get about a 4 to 1 improvement using pool sizes of 10 or 4.

And the double pooling scheme looks like a waste of time unless the positivity rate is very low.  The lowest rate I tried was 1%.  My program couldn't handle fractions of a percent so that's the lowest I could go.

But, if you are double pooling and there are zero positives in a block then you only need to run tests on one pool to determine this.  So, if you are using a 10 x 10 batch, that means you only need to run 10 tests to determine that 100 patients are negative.  If the positivity rate is well below 1% then most of your blocks are going to have no positives in them.
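That shortcut is worth a quick sketch.  (The function below is illustrative and deliberately simplified: it only distinguishes "batch cleared by the A pools alone" from "B pools needed too", and ignores any individual retests a real scheme might still require after that.)

```python
def batch_tests_two_stage(patients, n):
    """Tests for one n x n double-pool batch when the "A" pools are
    run first: if they all come back negative, the whole batch of
    n*n patients is cleared with just n tests."""
    a_pos = [any(patients[r * n:(r + 1) * n]) for r in range(n)]
    if not any(a_pos):
        return n        # all-negative batch: n tests clear n*n patients
    return 2 * n        # otherwise the "B" pools get run as well

print(batch_tests_two_stage([False] * 100, 10))  # 10 tests clear 100 patients
```

When positives are rare enough that most batches are all-negative, the average cost per batch stays close to 10 tests per 100 patients, which is where the big savings at very low positivity rates come from.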

So, we now know just how far a pooling scheme can stretch a limited number of PCR tests.  The short answer is "not far enough".  So is all lost?  Maybe not.

There was a very interesting article in the August 7, 2020 issue of Science Magazine.  The title of the article is "Fast, cheap tests could enable safer reopening".  The "fast, cheap" tests they are talking about are the non-PCR tests I discussed above.

The authors acknowledge the accuracy problems that seem to be unavoidable with these tests.  Their analysis concludes, however, that if you test people over and over, and if you retest frequently enough, then the accuracy problems can be overcome.

Now, the rate of testing they recommend, multiple times per week, may seem extreme to some.  And it would be if each test cost $100, as a PCR test apparently does.  But Yale University has created a test that costs about $4 per test.  The Yale test just got emergency approval from the FDA.

Yale is planning on publishing DIY instructions so that anyone with the required expertise, and lots of labs have that level of expertise, can create their own version of the test without needing outside help, a license, or any of the usual folderol.  That means that they may be able to figure out ways to drive the cost of this test down significantly.

Even more interestingly, the Israelis have apparently developed a quick, easy test that costs twenty-five cents a pop.  It's not FDA approved, and I don't know how accurate it is.  But if it, or something similar, becomes widely available, the idea of testing a large percentage of the population on something like a "twice a week" schedule becomes completely practical.

And the best thing to do is probably a hybrid solution.  If we test lots of people using these "fast, cheap" tests we will likely get lots of positives.  But, as the example with the Governor and the White House test above demonstrates, that may lead to a lot of confusion and unnecessary concern because many of the "positives" may turn out to be false positives.  But what if we stop widely administering PCR tests and instead reserve them for people who test positive using a "fast, cheap" test?

We already have the capacity to handle that rate of administering PCR tests.  The current regime is generating about 50,000 positives per day.  If the "fast, cheap" tests produce one false alarm for each true positive that would mean we would need to PCR test about 100,000 people per day.  Even if we double or triple the number of people who turn up positive by testing widely and often, we can handle the 600,000 tests per day that would require.  
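The back-of-the-envelope arithmetic behind those numbers, using the post's stated assumptions, is simple enough to write down:

```python
daily_true_positives = 50_000   # confirmed new cases per day (from the post)
false_per_true = 1              # assumed: one false alarm per true positive
pcr_confirmations = daily_true_positives * (1 + false_per_true)
print(pcr_confirmations)        # 100,000 PCR confirmations needed per day

# Even a several-fold increase in the number of positives found by
# wide, frequent screening stays inside a PCR capacity of roughly
# 600,000 tests per day:
headroom = 600_000 // pcr_confirmations
print(headroom)                 # 6x headroom over today's confirmation load
```

The exact multiples matter less than the conclusion: confirmation-only PCR testing is a far smaller load than screening-everyone PCR testing.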

But 600,000 tests per day is a worst case scenario.  Likely we would need to turn around far fewer tests.  And that should mean that the PCR infrastructure would no longer be overloaded.  And that would mean that the PCR tests infrastructure should be able to reliably produce results in 24, or at most 48, hours.  That puts us in a far better situation than the one we now occupy.
