We’ve done many videos and Substack articles about false positives in diagnostic test results and, in particular, in the Covid-19 PCR tests. Unfortunately, many people, including scientists and top legal scholars, still do not really get it.
Someone on Twitter just posted this article about a Portuguese court ruling against the legality of the PCR test when used to quarantine healthy people who happened to test positive – in this case some German tourists, who brought the case.
In fact the story is from Nov 2020. Since we believe the Covid PCR tests should never have been used, we completely agree with the ruling, but it still infuriated me. That’s because, although the ruling was correct, the judges still did not fully understand the extent of the false positive problem. The judges said that the two most important reasons a positive test does not correspond to a Covid case are that
“the test’s reliability depends on the number of cycles used” and that “the test’s reliability depends on the viral load present.”
Now, while these are well known and common reasons for false positives, the judges missed the elephant in the room, just as most people seem to do: the impact of the underlying population infection rate (ignoring it is called the ‘base rate fallacy’). Even if you could fix systemic problems such as cycle thresholds set too high, the wrong number of positive genes being considered, cross-reactivity with other or dead viruses, etc. (all things we have written about on this Substack), these problems are dwarfed by the effect of the underlying infection rate. Even if the false positive rate is low, when the infection rate is very low almost all positive results will be false positives.
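The base-rate effect is easy to verify with Bayes’ theorem. Here is a minimal sketch (the numbers are illustrative assumptions, not figures from the article) computing the positive predictive value – the probability that a positive result reflects a true infection:

```python
def ppv(prevalence, sensitivity, specificity):
    """P(truly infected | positive test), by Bayes' theorem."""
    true_pos = prevalence * sensitivity          # infected and correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # uninfected but flagged anyway
    return true_pos / (true_pos + false_pos)

# Same test quality (99% sensitivity, 99% specificity), different prevalence:
print(ppv(0.10, 0.99, 0.99))   # 10% infection rate  -> about 0.92
print(ppv(0.001, 0.99, 0.99))  # 0.1% infection rate -> about 0.09
```

With the identical test, a positive result is trustworthy at 10% prevalence but roughly 90% likely to be a false positive at 0.1% prevalence – the test didn’t change, only the base rate did.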
For those who still don’t ‘get it’ this short video hopefully will make it clear.
I usually use a silly example to make the point.
Suppose you have Prof McBonkers who invents a test for a virus that **doesn't even exist**. The test is rather good. Should the virus ever come into existence it has a 99% sensitivity and specificity.
He decides to test 100,000 people and, lo and behold, finds that 1,000 people are "infected".
Every single one of these is a false positive, because the virus doesn't even exist. At the 'population' level there is a 100% false positive rate, even though at the 'test' level the false positive rate is only 1%.
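The McBonkers scenario can be simulated directly. This sketch (my own illustration, with an arbitrary random seed) tests 100,000 people for a virus nobody has, using a test with 99% specificity:

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

# The virus doesn't exist, so every subject is uninfected.
# With 99% specificity, each uninfected person still tests positive 1% of the time.
positives = sum(random.random() < 0.01 for _ in range(100_000))
print(positives)  # roughly 1,000 positives - and every single one is false
```

The simulation lands near 1,000 "cases" out of 100,000, exactly as in the story, despite a true infection count of zero.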
Another way I use to illustrate this is to think of a binary communication channel in which only the symbol '0' is transmitted. Errors on the channel mean that some of those end up at the receiver as the symbol '1'. Every single one of those is an error. All of them.
You can then adapt this model to the occasional transmission of the symbol '1' (representing the prevalence). Intuitively you can see that if you're sending roughly one '1' symbol for every 100 transmissions, then an error rate of 1% on the channel means that any '1' that is received is only about 50% likely to have come from a genuine signal.
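The channel intuition above can be checked with a few lines of arithmetic. In this sketch (my own notation, assuming a symmetric channel), `p1` is the probability a '1' is transmitted and `eps` is the flip probability:

```python
def p_genuine_given_received_one(p1, eps):
    """P(a '1' was actually sent | a '1' was received), symmetric channel."""
    genuine = p1 * (1 - eps)   # a real '1' that arrived intact
    flipped = (1 - p1) * eps   # a '0' corrupted into a '1'
    return genuine / (genuine + flipped)

print(p_genuine_given_received_one(0.01, 0.01))  # -> 0.5
```

With a 1% prevalence of '1's and a 1% error rate, the two terms are equal, so a received '1' is genuine exactly half the time – which is the 50% figure above.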