Dirty Deeds Done Dirt Cheap
Our 'cheap trick' paper is rejected by a mainstream journal, confirming that captured peer review survives unabated
Readers will be familiar with our efforts to expose the nefarious and commonplace practice of intentionally miscategorising people vaccinated against Covid as unvaccinated if they contract ‘covid’, suffer a side effect, or die from the vaccine within (usually) 14 days of vaccination. The motivation for employing the cheap trick is that it massively inflates vaccine efficacy, even if the vaccine were a placebo. This bias makes it a mathematical certainty that any vaccine will be estimated as highly effective, no matter how bad it actually is.
We dubbed this malpractice the ‘cheap trick’ to help bring it to the public’s attention. It is also known by other, less memorable, names, such as case-counting window bias.
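To see why the trick guarantees apparent efficacy even for a placebo, consider the following minimal sketch (our own toy model with hypothetical parameters, not the simulation from the paper itself). Everyone carries an identical weekly infection risk, yet cases arising within the post-dose window are credited to the unvaccinated column while the cohort denominators are left whole:

```python
# Minimal sketch of miscategorisation bias acting on a pure placebo.
# All parameters are hypothetical; infected people are not removed from
# the population, a simplification that is not what drives the effect.
POP = 1_000_000     # study population
RISK = 0.01         # weekly infection risk, identical for everyone (placebo)
ROLLOUT = 100_000   # doses given per week
DELAY = 2           # weeks post-dose during which cases count as 'unvaccinated'
WEEKS = 8

vax_cases = unvax_cases = 0.0
vax_pw = unvax_pw = 0.0            # person-weeks at risk
ever = 0                           # ever vaccinated so far
for week in range(1, WEEKS + 1):
    ever = min(ever + ROLLOUT, POP)
    within = min(ROLLOUT * DELAY, ever)   # dosed fewer than DELAY weeks ago
    past = ever - within                  # counted as vaccinated
    never = POP - ever
    vax_cases += past * RISK
    unvax_cases += (never + within) * RISK   # window cases -> unvaccinated column
    vax_pw += ever      # unadjusted convention: whole cohorts as denominators
    unvax_pw += never

ve = (1 - (vax_cases / vax_pw) / (unvax_cases / unvax_pw)) * 100
print(f"apparent VE of a placebo: {ve:.0f}%")   # ~56% here, despite a true VE of 0%
```

In this toy model, lengthening the window or speeding up the rollout pushes the apparent efficacy of the placebo higher still.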
Readers might be unaware that this practice is not new. Categorising the vaccinated as unvaccinated is a trick that has been used to manipulate vaccine mortality statistics for well over a century, being first documented by Alfred Wallace in 1889.[1]
Encouraged by the fact that two papers covering aspects of the cheap trick had been published in the ‘Journal of Evaluation in Clinical Practice’ - one by Peter Doshi and colleagues and another by Raphael Lataster - we decided to submit a much more detailed paper on the subject to the same journal. Our thinking was that the journal must be free of the influence of the vaccine peddlers, and hence our paper might receive a fair hearing and an even-handed peer review.
But it turns out we were wrong. With the first round of peer review, things looked promising. The feedback we received was balanced, fair and objective. We addressed all of the issues and concerns raised by the two reviewers and resubmitted the paper, expecting a fast turnaround. But, after a six-month delay, we were informed that two entirely new reviewers had been appointed and that, because these reviewers could not agree, a third reviewer was to be appointed to adjudicate their disagreement. This is highly unusual; we have never witnessed anything similar at any point in our academic careers.
Shortly after telling us of this odd decision, the editor informed us that the paper had been rejected. The rejection was based on feedback from two completely new reviewers. Neither was involved in the initial round of reviews, nor did it look like they were the two reviewers who disagreed with each other so much that adjudication was required.
We would summarise the reasons for rejection as “The ‘cheap trick’ is common practice in vaccine ‘science’. We all do it. We will continue to do it. It is in our interests to protect it. We will censor any attempts to reveal it”.
The cheap trick is a key component in the armoury protecting the vaccine complex, so much so that our attempts to expose the practice have been censored or pilloried by the establishment at every turn.
Professor Michael Loughlin is Editor in Chief of the ‘Journal of Evaluation in Clinical Practice’ and, as editor, he manages the peer review process employed by the journal.
The paper
The latest version of our paper can be found on medRxiv. Apart from some very minor points of clarification, it is the revised version that was submitted to the journal. You can judge the quality of the work for yourself and determine whether, based on the review comments, it was given a fair hearing.
The abstract for the paper communicates what we aimed to achieve with the work and what we showed:
It is recognised that many observational studies reporting high efficacy for Covid-19 vaccines suffer from various biases. Systematic review identified thirty-eight studies that suffered from one particular and serious form of bias called miscategorisation bias, whereby study participants who have been vaccinated are categorised as unvaccinated up to and until some arbitrarily defined time after vaccination occurred. Simulation demonstrates that this miscategorisation bias artificially boosts vaccine efficacy and infection rates even when a vaccine has zero or negative efficacy. Furthermore, simulation demonstrates that repeated boosters, given every few months, are needed to maintain this misleading impression of efficacy. Given this, any claims of Covid-19 vaccine efficacy based on these studies are likely to be a statistical illusion or are exaggerated.
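The abstract’s point about boosters is worth making concrete. In a small variant of the sketch shown earlier (again our own toy model with hypothetical weekly parameters, not the paper’s simulation), the apparent efficacy of the placebo collapses to its true value of zero shortly after dosing stops, which is why repeat doses are needed to keep the illusion alive:

```python
from collections import deque

# Variant of the earlier placebo sketch: print the apparent weekly VE while
# dosing continues for 8 weeks and after it stops. Hypothetical parameters.
POP, RISK, DELAY = 1_000_000, 0.01, 2
schedule = [100_000] * 8 + [0] * 8      # weekly doses: rollout, then nothing

ever = 0
recent = deque([0] * DELAY, maxlen=DELAY)   # doses given in the last DELAY weeks
for week, doses in enumerate(schedule, start=1):
    ever += doses
    recent.append(doses)
    within = sum(recent)                # still inside the miscategorisation window
    past, never = ever - within, POP - ever
    rate_vax = past * RISK / ever
    rate_unvax = (never + within) * RISK / never
    print(f"week {week}: apparent VE {(1 - rate_vax / rate_unvax) * 100:+.0f}%")
# ~+100% early in the rollout, drifting down to ~+62%, then +0% within
# DELAY weeks of the final dose: only fresh doses sustain the illusion.
```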
The reviews and our responses are set out below.
First round of reviews
The comments from reviewer 1 are pretty straightforward, in that they identify minor errors and omissions and make helpful suggestions on points that required improved communication or clarity. This is a ‘normal’ review.
One thing they specifically wanted from us was a protective scenario in which the vaccines are assumed effective.
Here are the comments:
Page 1 Line 46. This is not selection bias. Study participants have already been selected for the study and the assignment of their follow-up time as exposed and unexposed is unrelated to study participation. See Rothman, page 134.
Page 2 Line 8. This sentence contradicts the previous one. Artificially lowering the infection rate for the unvaccinated cohort would make the vaccine appear less effective.
Page 2 Line 20. Why are these biases being introduced into these studies? For miscategorisation, the focus of this paper, I’m surprised that there’s no discussion of the time it takes to mount an immune response given that’s likely the reason for the delay in switching vaccinated people from the unexposed group to the exposed group.
Page 2 Line 39. Do you have a citation for the statement that miscategorisation selection biases inevitably exaggerate vaccine efficacy? I would expect nondifferential exposure misclassification to move the effect estimate towards the null while differential exposure misclassification could bias the estimate in any direction. See Rothman, page 137.
Page 2 Line 59. Given the Introduction focuses on effectiveness, why is safety a required keyword?
Page 4 Line 45. You have “exclusion (c)” here but excluded is type d on page 3.
Page 4 Line 46. Why are the scenarios only for no effect and harmful? Shouldn’t there be a protective scenario?
Page 4 Line 38. I’d like more information on the simulation methods. For example, how are you calculating vaccine effectiveness? And how do you treat the infected population – do they recover, are they eligible for reinfection?
Page 5 Line 39. Can you explain why in Figure 1, Panel A, the infection rate would increase for the unvaccinated in week 2? The unvaccinated group is composed of vaccinated people and unvaccinated people because of the delay but both groups have been assigned an infection rate of 1% right? And why is the rate for the unvaccinated group consistent across time regardless of the amount of delay used? Similarly, why is the rate in the vaccinated group lower than the assigned 1%? A table of the population states (weekly counts of exposed and unexposed and the number of outcomes in each group) would help readers follow what they’re seeing in the figures, even if it’s only supplied as a supplemental file.
And here are the comments from reviewer 2:
This paper will potentially be an important contribution to JECP and will arouse interest. Consequently it needs to report sound methods, specifically providing the details required for replicability. To this end, please provide more detail in the Methods on the review described in Section 3. Whilst I understand that the review per se was not the focus of the paper, more information is critical to establish the soundness of your science in establishing your dataset. Please follow PRISMA guidelines for reporting the review methods and results (including a PRISMA flow chart). If you critically appraised the papers, please provide details on methods and the critical appraisal scores. More detail on how you determined the five types of miscategorisation bias in the literature you identified in the search is required (not only the criteria for determining any (the step prior to the list you provided in Section 4), but also the robustness of your determination (independent evaluation of it, blinding, differences in findings between reviewers, and how these were resolved etc).
A positive review and one that recognises the importance of the work.
So, both of these reviews look like a ‘pass’.
Our resubmission
Adhering to normal practice, we resubmitted the paper with the required changes, along with a covering letter detailing what changes were made and where. The key points from the covering letter follow.
The main changes we made included making it clear that the paper is focused on miscategorisation bias and not selection bias, and documenting the role that the invalid ‘immune response’ justification plays in producing biased estimates of effectiveness. Here is what we said on this latter topic:
A consistent bias in studies of Covid-19 vaccine effectiveness arises from the assumption that it is necessary to allow an incubation period (typically up to 21 days) for an immune response to take effect. Under this assumption subjects are not categorised as vaccinated until this period has lapsed. This is justified, for example in (Polack et al, 2020), with data that indicate that a divergence in Covid-19 cases between the vaccinated and unvaccinated only occurs after at least 12 days. However, in (Lauer et al, 2020) the authors admit that “Our current understanding of the incubation period for COVID-19 is limited.” However, from observational studies, such as (Pilishvili et al, 2021), it is known that a disproportionately larger number of Covid-19 cases are detected, in the vaccinated cohort compared to the unvaccinated cohort, within the first 10-14 days after vaccination. They reported that for the period of 0-9 days after receipt of the first dose, vaccine effectiveness was 12.8% and vaccine effectiveness at 10-13 days after receipt of the first dose was 36.8%.
Clearly, this is not an indication of an ‘effective vaccine’. Indeed, we might consider a hypothetical scenario where this incubation period assumption operates and where every vaccinated person is infected within the first 21 days yet is not categorised as vaccinated and is instead categorised as unvaccinated. Logically, if at least one genuinely unvaccinated person is infected within the period of the study, we would then conclude the vaccine, despite offering zero protection against infection, is 100% effective.
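Worked through with invented cohort sizes (purely to show the logic, not data from any study), the arithmetic is as stark as it sounds:

```python
# The extreme hypothetical above: a placebo-like vaccine, yet every case in
# the vaccinated cohort lands in the unvaccinated column. Invented figures.
vaxed = unvaxed = 10_000
window_cases = vaxed       # every vaccinated person infected inside 21 days,
                           # all credited to the unvaccinated column
true_unvax_cases = 1       # at least one genuinely unvaccinated case

rate_vax = 0 / vaxed       # nothing is ever counted as a vaccinated case
rate_unvax = (true_unvax_cases + window_cases) / unvaxed
ve = (1 - rate_vax / rate_unvax) * 100
print(ve)                  # 100.0: 'perfect' efficacy from zero protection
```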
We also added the protective scenario, as requested by one of the reviewers - not because we believe it, but because it is only fair to be seen to be even-handed.
Next, we addressed the thorny issue of denominators:
“The method for calculating vaccine effectiveness and all scenario results are provided in supplementary materials. Note that two methods of calculating the vaccine effectiveness denominators are possible, one where the population denominators are adjusted to remove those cases excluded from, or transferred between, categories and one where they are not. The latter method (denominators unadjusted) is the convention, and our results are calculated using that method. Note that when the infection rates are low and the population is large, one method approximates the other.”
Unknown newly appointed reviewers radically disagree, and a mystery referee is appointed
Given that we had addressed all of the reviewers’ concerns and were able to turn the paper around so quickly, we expected it to be accepted.
We waited approximately six months for a response and finally received one in early December 2024. Note that it is normal practice for the reviewers appointed in the initial round of peer review to scrutinise the resubmission and decide whether we had adequately addressed their concerns, and done so sufficiently well to merit acceptance for publication. This did not happen.
Two (new?) peer reviewers were appointed, and a third mystery referee was also now on board to supposedly adjudicate (unspecified) disagreements between these two entirely new reviewers.
Here is the correspondence from the editor Prof. Loughlin:
06-Dec-2024
Dear Professor Neil,
Re JECP-2024-0172.R1 - 'The extent and impact of vaccine status miscategorisation on covid-19 vaccine efficacy studies'
I am writing to apologise for a delay in the processing of your paper. I hope you will appreciate my being frank with you, but the problem is that we have a radical disagreement between two very well qualified reviewers, and I have decided to appoint a third reviewer for this paper. I am hoping to have a decision for you soon, but I thought I should explain the situation to you, as I regret keeping you and your co-authors waiting for a decision for so long.
Yours sincerely,
Michael Loughlin
Journal of Evaluation in Clinical Practice
We immediately smelled a rat and wondered: Who were these ‘well qualified’ reviewers? Were they the old reviewers, and if not, why were they appointed as new reviewers? And why did they need a referee? Who was the referee?
Needless to say, this email is a gigantic red flag. It is highly unusual for new peer reviewers to be appointed midway through a peer review, and this obviously deviates from any accepted norms. At the risk of mixing our metaphors, this decision moved the goalposts by changing horses mid-race!
Paper rejection
We were not much surprised to receive an email from Prof. Loughlin on 27 December 2024 informing us that our paper had been rejected:
27-Dec-2024
Dear Dr Neil,
Manuscript ID JECP-2024-0172.R1 entitled "The extent and impact of vaccine status miscategorisation on covid-19 vaccine efficacy studies".
I am writing to you regarding the above manuscript which you submitted to the Journal of Evaluation in Clinical Practice.
Your manuscript was sent for review to expert referees whose reports are available at the end of this email. While the referees expressed some interest in this manuscript, they have raised significant concerns about the experimental design and presentation and interpretation of the data. In view of these criticisms, I am unable to accept your manuscript for publication in Journal of Evaluation in Clinical Practice.
I hope you find the referees' comments to be constructive. I would like to take this opportunity to thank you for submitting your work to Journal of Evaluation in Clinical Practice and I hope that you will consider sending further manuscripts to us in the future.
With my best wishes,
Professor Michael Loughlin
Editor in Chief
Journal of Evaluation in Clinical Practice
And here are the reviewers’ comments.
First, reviewer 1:
Thank you for providing the supplementary materials – this clarifies your simulation methods and explains your results. Unfortunately, you have made a major error in your simulation design.
Using Scenario A, Case 1 as an example, in week 2, you calculate the infection rate among the vaccinated as Cases in fully vaccinated (>1 week) divided by Cumulative ever vaccinated (i.e., 100/210,000). The denominator of this calculation is incorrect since the 200,000 newly vaccinated individuals in week 2 should not count as vaccinated people – their outcomes are attributed to the unvaccinated population so they should also be part of the unvaccinated population, in accordance with both fundamental study design principles and as stated in the specific study design decision that is the topic of this paper. The correct calculation for the vaccinated rate in week 2 is Cases in fully vaccinated (>1 week) divided by Total fully vaccinated (>1 week) which is 100/10,000 resulting in a rate of 1%, which is consistent with your simulation specifications. This also corrects the calculations for the unvaccinated population – you currently do (new cases + cases in newly vaccinated)/Total unvaccinated when the correct calculation is (new cases + cases in newly vaccinated)/(Total unvaccinated + Newly vaccinated). This corrected calculation is (7900+2000)/(790000+200000) resulting in a rate of 1%, which is again consistent with your simulation specifications. Notably, with a rate of 1% in both your vaccinated and unvaccinated groups, this reproduces a vaccine effectiveness of no effect as intended by your simulation. This mistake is also seen in the other scenarios.
In making this mistake, your simulation diverges from the study design decision that you claim introduces bias – when researchers delay assigning individuals to the vaccinated group for 14 days after a vaccination, they assign both any outcomes that occur and these people’s follow-up time to the unvaccinated group until 14 days have passed. Your simulations do not recreate this scenario.
The reviewer concludes that we made a simple arithmetical error in our simulation design for Scenario A - miscategorisation when a placebo is used as the vaccine. They are not explicit, but we think it is fair to assume they are referring to RCTs and ignoring the vast majority of observational studies we review. In observational studies, because the focus is on cases, typically only those people infected within the exclusion period are reclassified as unvaccinated. In contrast, in RCTs we accept that cases can be excluded from both the placebo and treatment arms within the exclusion period, but this is still a source of exclusion bias, since cases that should remain in the trial are leaving it. All of the simulations we modelled reflect this practice, and the purpose of the (we hope) fictional placebo Scenario A was simply to illustrate how loaded the game is in favour of the vaccines! We were not arguing that we have evidence that RCTs used placebos in all trial arms, as should have been obvious to even the most casual reader, but this seems to have eluded the reviewer, who focused on this nit-picking point to the exclusion of everything else.
Clearly, the reviewer expected us simply to accept and encode the bias in our simulation studies and confirm that the vaccines are highly effective and that no trickery took place. To them, the only allowable study is one that supports and perpetuates the miscategorisation bias. Indeed, any simulation that does not simply recreate and confirm the correctness of the corrupt process would be, and was, rejected as an ‘error’.
The reasoning deployed here is entirely circular and more akin to that deployed at the papal trial of Galileo.
We think these comments expose this reviewer as a new appointee and confirm they were not involved in the first round of reviews, as normal practice would require. Nor does the reviewer appear to be the mystery referee supposedly appointed by the editor to adjudicate the dispute between the opposing reviewers. So, who were they, and why were they appointed as a reviewer?
As a technical aside, the reviewer brought up the topic of denominators, so it is worth pointing out that any dispute about how these were handled in the calculations was covered in the paper’s supplementary materials, where we computed the denominators both ways, either adding the ‘newly’ vaccinated to the denominator or not. This reflected the variation in prevailing practice.
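To make the dispute concrete, here is the week-2 calculation done both ways, using the figures the reviewer quotes (a sketch of what the supplementary tables contain, not a substitute for them):

```python
# Both denominator conventions applied to the week-2 figures quoted by the
# reviewer: 790,000 unvaccinated (7,900 cases), 200,000 newly vaccinated
# (2,000 cases) and 10,000 fully vaccinated (100 cases).
never_vaxed, newly_vaxed, fully_vaxed = 790_000, 200_000, 10_000
never_cases, newly_cases, fully_cases = 7_900, 2_000, 100

def ve(rate_vax: float, rate_unvax: float) -> float:
    return (1 - rate_vax / rate_unvax) * 100

# Unadjusted (the convention we argue the studies follow): window cases are
# credited to the unvaccinated numerator, but the denominators stay whole.
unadjusted = ve(fully_cases / (fully_vaxed + newly_vaxed),
                (never_cases + newly_cases) / never_vaxed)

# Adjusted (the reviewer's preferred calculation): the window person-time
# moves to the unvaccinated denominator along with the miscategorised cases.
adjusted = ve(fully_cases / fully_vaxed,
              (never_cases + newly_cases) / (never_vaxed + newly_vaxed))

print(f"unadjusted: {unadjusted:.0f}%   adjusted: {adjusted:.0f}%")  # 96% vs 0%
```

The gulf between 96% and 0% is the whole argument in miniature: which denominator convention a study adopts determines whether a null product looks spectacular or inert.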
Here are the comments from reviewer 2:
The main claim in this paper is that the COVID19 vaccine is not effective because of so called mischaracterization bias. The authors say "A consistent bias in studies of Covid-19 vaccine effectiveness arises from the assumption that it is necessary to allow an incubation period (typically up to 21 days) for an immune response to take effect. Under this assumption subjects are not categorised as vaccinated until this period has lapsed. This bias takes the form of miscategorisation, whereby study participants who have been vaccinated are miscategorised as unvaccinated up to and until some arbitrarily defined time after vaccination occurred (typically up to 14 or 21 days)."
Putting aside the well-documented physiology of the immune response (it does typically take 2-3 weeks for antibodies to develop), this characterization does NOT apply to RCTs, particularly placebo-controlled, double or triple masked trials. Conceivably, miscategorization could be a problem in observational and single arm studies. But, in a triple-blind RCT (where clinicians, patients, outcome assessors and analysts were masked), and in which the incubation period is equally applied to BOTH arms, miscategorization is expected to occur at similar rates in the vaccine and placebo arm, respectively. Therefore, the entire premise for the analysis in this paper seems faulty to me.
Also, the authors’ review of biases is also not accurate. For example, they define using RRR vs absolute ARR as outcome reporting bias. This is not correct. Some other statements also need to be more carefully reviewed.
I am sorry to be negative on this occasion but hope this feedback will be useful to the authors as they reconsider their hypothesis.
The reviewer refers to RCTs with the statements “this characterization does NOT apply to RCTs” and “…but, in triple-blind RCT”, whilst acknowledging the problem in observational studies. However, of the seven RCTs we reviewed, only two were double blinded and none were triple blinded, so their comment, even if it were true, which it is not, would only be relevant to a small subset of the papers we reviewed. Also, as is well known, a single or double blinded RCT does not automatically prevent “outcome assessors and analysts” from intervening during the course of the trial; hence cases can easily be subject to all kinds of recategorisation or exclusion for any number of reasons.
They also say that “…the incubation period is equally applied to BOTH arms” in the RCTs. However, this remains a form of bias we highlight in the paper - exclusion, where cases leave the study altogether. After all, a vaccine arm that suffers a higher infection rate than the placebo arm during the exclusion period would benefit from this practice, and vaccine efficacy would thus be overestimated.
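A toy example (our own invented trial numbers, not data from any actual RCT) shows how an exclusion window applied identically to both arms can still flip a harmful product into an apparently effective one:

```python
# Exclusion bias sketch: the same 14-day window is applied to both arms,
# but the product causes extra infections early on. All figures invented.
arm_size = 20_000
vax_window_cases, vax_later_cases = 400, 100   # excess early cases in vaccine arm
plc_window_cases, plc_later_cases = 200, 200   # placebo arm

def ve(vax_cases: int, plc_cases: int) -> float:
    return (1 - (vax_cases / arm_size) / (plc_cases / arm_size)) * 100

excluding = ve(vax_later_cases, plc_later_cases)        # window cases dropped
counting = ve(vax_window_cases + vax_later_cases,       # every case counted
              plc_window_cases + plc_later_cases)

print(f"VE excluding window cases: {excluding:+.0f}%")  # +50%
print(f"VE counting all cases:     {counting:+.0f}%")   # -25%
```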
Their review doubles down with the usual ‘immune response’ excuse used to prop up the miscategorisation fraud. The reviewer then offers a seemingly penetrating comment on absolute versus relative risk, a topic we have covered extensively on this Substack and in our published academic work. But this topic is completely irrelevant in this context and is nowhere mentioned in our submitted paper in any way related to outcome reporting bias.[2] Why offer comment on irrelevant material that is not in the paper, if not to muddy the waters?
Again, from the comments we think it is clear this reviewer is also a new appointee.
What happened?
We only have feedback from the two new reviewers. There is no feedback from the third mystery reviewer supposedly appointed to adjudicate between the two ‘well qualified’ reviewers who disagreed. So, what happened to this referee’s review comments?
We can only speculate about what might have happened. Perhaps the first round of reviews was seen as highly politically inconvenient, and a decision was taken to bin them and start again. Two entirely new pro-vaccine reviewers were appointed who were predisposed against the revelations in the paper. To justify this decision, perhaps a procedural pretext was created, involving an invented dispute between the original reviewers requiring a third party to adjudicate. Maybe it was assumed we would be naïve enough not to spot that the final reviews could not have come from the original reviewers. If true, then it was all pretty sloppy.
What can be done?
After three years of self-sacrifice and countless hours of unpaid effort it is clear that we have made little to no headway. The corruption continues unabated and those involved remain unabashed.
The only hope on the horizon is the confirmation of Robert F. Kennedy Jr. as Secretary of Health and Human Services (HHS). If he is confirmed in post - and it is no certainty that he will be - he must be allowed a free hand to eradicate academic corruption, such as that evidenced here.
If he is not confirmed, or his hand is stymied at every turn, we fear this corruption will not end. Sadly, if so, perhaps our writing on this topic will be discovered a hundred years hence by future vaccine researchers, much in the same way as we considered ourselves pioneers exposing what we thought to be a new corruption, only to discover the self-same evil exposed in the writings of Alfred Wallace from 1889.
[2] Relative risk is embedded in the vaccine efficacy formula: VE = (1 − RR) × 100%. For example, a relative risk of 0.05 yields a claimed efficacy of 95%.