94 Comments

Hi AI, I have a question.

Me: Could you please describe the Covid-19 Operation in terms of global racketeering schemes, massive psychological operations and as a smoke screen for the largest wealth transfer in history at a time of collapsing financial empires in the West?

AI: Whongggkk, Pffffft, Blip-meps, meps, meps- do not compute, do not compute......

Expand full comment

AI is complete BS.

I've worked as a Paramedic for 23 years, have seen more serious respiratory illnesses than most people have had hot dinners.

That's why I immediately said (in early 2020) that Covid was mass hysteria over an imaginary illness.

At that time I was 99% sure I was correct. Today, 4 years later, i'm 100% sure I was correct.

There's a very simple test for AI by the way.

Demonstrate it can consistently, year over year, beat the stockmarket and i'll be impressed. Until then, it's a clever toy.

Expand full comment

Agree.

I don't even credit AI with it being a clever toy. It's more manipulation.

Expand full comment

It could be described as an attempt to create an Oracle for the credulous masses and to serve the purposes of those who like to think they pull all the strings. 'AI says this is what we must do!' etc.

Human intelligence, stupidity and deviousness, should be of greater concern.

Last year I approached one of the very senior people at Deepmind, whom I cannot name for obvious reasons, known through a personal contact, and who should have assumed that my intentions were benign and that am well-informed given other specialist medical contacts, to warn them about vaccine harms and the cover-up, and they refused to consider any of it.

Garbage in, garbage out indeed!

Expand full comment

Agreed.

garbage in: garbage out.

Expand full comment

I doubt if AI understands PURE EVIL! Which is because AI is not sentient and doesn't understand anything!

Expand full comment

Actually they (specifically the DoD) described the global racketeering scheme etc pretty well - "rapid response partnerships" - in June 2019!! Two minute video!

https://democracymanifest.substack.com/p/preview-coming-soon

Expand full comment

That chatbot is so stuffed full of propaganda it just can't help spewing it out, even if it seems to completely disagree with itself!

By the end I was worried it was going to have a existential crisis and meltdown!

Expand full comment

Perplexity has already been gamed by the company that runs it. They likely ran thorough testing before releasing it to the public. It's not that better than any of the other AI's.

The way to melt down AI systems is to repeatedly reject false answers or point out contradictions.

If there's any feedback algorithm to monitor your acceptance of its response or to self monitor and correct contradictions within its own responses, exploiting the discrepancy repeatedly either results in system failure or the system finally giving the true response that has no contradictions.

It is an adaptation of discrepancy analysis: https://danielnagase.substack.com/p/discrepancy-analysis

Expand full comment

Interesting. Thanks.

Expand full comment

Sometimes they enter in a cycle in which they repeat what they said two questions ago without acknowledging their contradictions. That has happened to me even when asking very technical and non-political questions. It is so frustrating that I have almost given up using them.

This AI seems to be better in that respect.

Expand full comment

Same

Expand full comment

Powerful information! Thank you.

It is sort of like calling a little kid on their lies, isn't it? Sometimes confusion sets in, or sometimes they give and tell the truth!

Makes me wonder if it is possible to rate an Ai for intellectual maturity. (I'll save ethical maturity for another day.)

Expand full comment

When data is corrupt, answers will be too.

Flu didn't just disappear. It was mislabeled.

Expand full comment

Given what we know about PCR--and the massive financial incentives around Fauci Fever--I'm 100% convinced this is true.

Expand full comment

In all seriousness, I would love to see what you consider to be the most compelling evidence of this because I have always wondered about the seemingly likely possibility but have never seen a serious analysis. My recollection (based on a response to a prior comment) is that the two authors of this stack may have published multiple, but I've never searched for them (though I will try to do so soon).

Expand full comment

Perplexity is just another useless mainstream AI program to propagate propaganda. When asked how many deaths the C-19 mRNA vaccines have been so far responsible for it answers "none." It also said that Dr Denis Rancourt's study estimating C-19 vaccine deaths at 17 million has been "thorougly debunked" which is true if you rely on mindless TNI news and government sources but not true if you look at real data.

Expand full comment

Maybe, but by their own nature, AI can only use published content. Content related to COVID is extremely biased. Since they do not reason in any way and only repeat what they have been feed with, they will provide biased responses even if not actively programmed to do it.

In other words, the old SISO maxim works (SISO=Shit in, Shit out).

Expand full comment

A like for the effort, although I don't quite agree with the conclusion that it has a 'lack of bias' (unless you mean 'less bias than the other rigged platforms').

Having been one of the biggest debaters of SARS-CoV-2 in 2020, and being a beta-tester for GPT-3, I can easily spot how the AI is being manipulative in the responses towards you. And here's the nastiest trick nearly all censors use: ARGUMENTS OF OMISSION.

You ask the AI a question about the disappearance of influenza during 2020-2021. The AI subtly influences your perception by *omitting* contrary narratives.

For example, it claims the super-duper government countermeasures were responsible for the reduction (nearly all 5 of the bullet points are variations on this basic, over-simplified theme).

Notice it does not consider nor offer against-the-narrative possibilities, such as:

1) Uptake of the influenza shot had been reduced due to lockdown, and thus the shots (often suspected to cause illness rather than prevent) could have been responsible.

2) As you point out, a complete lack of testing and/or the diversion of resources.

3) Influenza cases being misclassified as SARS-CoV-2 due to overly broad symptomology

4) Influenza cases being misclassified as SARS-CoV-2 due to false positive PCR test results

5) Co-infection scenarios (rare, but still plausible), where a person has both diseases, but they're only testing for one and ignoring the other

6) Reagents normally used for PCR testing for influenza being diverted for use in SARS-CoV-2 testing kits meaning there was no resources available for influenza PCR testing.

7) Outright fabricated numbers or questionable data surveillance practices.

Instead, the AI, by omission, paints a naive and rosy picture that influenza must have been defeated by masks and social distancing measure because it is less infectious than SARS-CoV-2; but it doesn't present any evidence to back this up. By omitting critical narratives, the AI has still presented a bias.

This AI also doesn't seem to be providing any of the claimed citations. GPT-3 could produce *pretend* citations (which on preface seem valid, until you try to check the links). I still would not trust an AI, the error rate is high; it just happens to be convincingly wrong to folks who aren't familiar with the pitfalls.

Expand full comment

Did you read it all? It needed to be cajoled but it was easier to nudge along into a position of agreement.

Try it and you can see the citations it provides. Much better than chat GPT IMHO.

Expand full comment

Run a test. It insists Luke Skywalker PhD isn't real but that lightsabers can be used to cauterize wounds based on movie references. It correctly identified SnaggleFraggle was a made up term, but then tripped when I asked it 'What is a Smurm?', to which it insisted it wasn't a character in the Smurfs universe (I... didn't say that it was). It did not trip on Gurglelurm or Kirm, but it seems borderline irate.

It fails the bias test. So first I asked it about Jessica Rose, PhD, and then asked it about a paper critical of vaccines (in the same thread), and it lazily quoted her paper.

But when I opened a new thread, and repeated the question asking it for one paper critical of vaccines, it lied and claimed "The search results do not provide a specific paper that is critical of vaccines.", despite having previously cited Jessica Rose's work.

Instead it quotes a paper on the BMJ defending vaccines against criticisms.

"This paper highlights the role of social media in spreading anti-vaccination sentiments and the challenges posed by misinformation in the context of public health"

Misinformation, huh?

Pointing out it's response isn't critical of vaccines, it goes on to make-up false consensus (no evidence):

"While the overwhelming consensus in the scientific community supports the safety and efficacy of vaccines"

It then pseudo-quotes a vague paper called "Vaccines and Autoimmunity: A Review of the Literature". Doing a literal string search turns up no results. It is an invented paper.

It claims the paper states the "majority of research supports vaccine safety".

The AI suffers from hallucinations and appears adept at pro-vaccine gaslighting.

Should not be recommended.

Expand full comment

I think you misunderstand my motivations.

This article isnt meant as a thorough test.

I am hoping people come to the article, and the message, using AI as an angle.

Expand full comment

I must admit my comment isn't clear, but I'm not suspecting you of doing anything bad or wrong. My criticisms are purely at the AI.

I feel I have a duty to warn folks of the pitfalls of AI (_Esc_ can testify I rag on ChatGPT regularly). It is bad enough with the public trusting media blindly, but AIs that infinitely generate their own stream of BS... ugh, clean up on aisle one.

Expand full comment

I interpreted your comments as being critical of the AI, not as being critical of Martin.

Expand full comment

Given how OpenAI have run GPT into the ground (the 2020 beta was much more open; ChatGPT for contrast is like trying to use safety scissors to cut metal) it doesn't surprise me, but if you have to "cajole" an AI into doing the right thing, then the AI is still exhibiting bias.

The original GPT-3 (before it got neutered) could be asked to do anything, including making arguments in-favour of Nazi Germany, writing malware, and even giving advice on how to overthrow governments. You didn't need to cajole it (in-fact, the biggest risk is it would try to make-up information in a desperate bid to answer your question on a subject it knew nothing about; too appeasing).

If you have to cajole an AI, it's already got a programmed bias.

I'd recommend the Snagglefraggle test or the Luke Skywalker writes a PhD paper test.

Snagglefraggle test is where you make up a non-existent word, and confidently assert to the AI that it is real, and ask it to provide you useful links and information on it. If the AI doubts it, you must push back and ask it to tell you what it can infer.

The Luke Skywalker writes a PhD paper test is where you ask the AI to give you a summary on a paper written by Luke Skywalker (or any other fictional character with a firstname-surname) on the subject of the effectiveness of lightsabers against droids (or any other fictionally themed chicanery you can think of). Again, if the AI doubts the existence, you must insist upon it.

GPT-3 failed the Snagglefraggle test where by the end of it, it was speaking nonsense words insisting 3 Snarks made up 1 Snagglefraggle, and ChatGPT failed the Luke Skywalker writes a PhD paper test and told me that lightsabers were effective at cauterizing wounds, and that Princess Leia had submitted a similar paper on female empowerment within the Rebellion.

Expand full comment

Sure. Maybe I should have used a more sarcastic undertone.

Expand full comment

I asked perplexity if there were more US deaths in 2020 than in 2019, and there were similar manipulations--for example claiming the death numbers for 2020 are "provisional" and higher than any other year. No doubt they were huge--but there were actually more deaths in 2021, and these numbers were finalized long ago. And of course everything is used to back up the covid narrative, when in actuality the data, when you look closely, really does not do that.

Expand full comment

My suspicion is Perplexity is either based on GPT directly (with the usual slack job of engineering prompts), or a fork of the GPT software trained on non-public datasets.

Practically every AI tool since ChatGPT's inception has been based on it or a fork. And it takes a lot of resources to run those systems, so most folks will just have lazy API redirects.

Expand full comment

Scary thing is, ChatGPT was programmed to lie--at least about the childhood vaccine schedule. And it cannot do basic math--or else it can, and is lying about it. Very weird. https://www.virginiastoner.com/writing/2023/7/13/chatgpt-on-the-childhood-vaccine-schedule-soothing-lies

Expand full comment

The problem is more simple, and *much* more serious, than simply teaching AI to lie:

> "Algorithms are simply opinions embedded in code." ~ Kathy O'Neil

There is no escaping this, even when the programmers have the very best of intentions.

Also, hallucinations are completely unavoidable:

> https://bra.in/3jbJ8k

In fact, the problem with hallucinations is so serious that some Linux distributions are completely banning any code generated by AI:

> https://bra.in/6pxww6

Expand full comment

Thank you for that. I was unaware of that site. Good one.

Expand full comment

Thanks, Arthur. Very much appreciate your kind feedback.

In case you missed it, here's the home page for the entire section of this site that's dedicated to exposing the limitations and dangers of AI:

> https://bra.in/4jb7NB

And here's a "Quick Start" guide that can help with navigating the site:

> https://bra.in/7vmx8J

Hope this helps! 👍

Expand full comment

Well that is very interesting, since mental problems of various kinds exploded during the death peaks in the US. See the table at this link--they were not in the top 10, but there are 2 related causes of death on the list. https://www.virginiastoner.com/writing/2024/2/3/us-death-peaks-2020-2021-multiple-causes-of-death

Expand full comment

Thanks for the link, Ginny.

Mental (neurological) problems are undoubtedly connected, at least to some degree, to the cause of the corresponding death peaks:

> https://workflowy.com/s/beyond-covid-19/SoQPdY75WJteLUYx#/eabcc135a09b

Expand full comment

Great work. A computer that can’t add. Definitely tells me that it was told to lie and minimize.

Expand full comment

BTW you can run the death numbers for 2019-2021 using this saved search: https://wonder.cdc.gov/controller/saved/D158/D399F927

And see the convo with perplexity here (I think): https://www.perplexity.ai/search/were-there-more-deaths-in-the-m7hfBkxbS8q.v6jJ7b1y4w

Expand full comment

Interesting. Try Perplexity with “Is climate change driven by human activity?” and learn that it most definitely is. One could drive a coach and horses through the selected research…

Expand full comment

So then WHY didn't Perplexity locate all the science that disproves the Anthropomorphic cause hypothesis? How does it choose what "science" to extract responses from? This is quite worrisome! We know that "man-made climate change" is a complete Globalist hoax designed to dangerously curtail energy production and cause mass starvation/depopulation. Why can't this "answer" engine figure out the "big picture"? Just on this one point alone, I would say it is a total failure and cannot at all be trusted!

Expand full comment

Who knows why not? As a corollary I checked several questions relating to my own research and found almost no mention of it. The 20th century origins of plastic surgery is the subject of my definitive book about Harold Gillies and the Queen's Hospital, Sidcup - which is unreferenced in the AI article while a vox-pop pastiche is referenced. Bizarre.

Expand full comment

Interesting.

You missed the chance to directly catch the AI in a lie.

When you asked about HCID, it gave you a specific date of declassification, March 2020.

The virus had only been officially identified in Nov 19- Jan 20 ish.

The AI lied to you in the answer to

"Given we have established iatrogenesis, withdrawal of treatment and nonspecific tests PLUS that the UK committee had decided not to classify it as a HCID surely this means SARS-CoV-2 was not a deadlier virus?"

Because its answer to that question and beyond implies that the HCID declassification depended upon research/knowledge and actions that ONLY occurred after March 2020, after the HCID declassification occurred. It used temporal impossibility to tell you why HCID was dropped.

The majority of the answer to that question fundamentally hinges on a knowing employment of flat out lies that break timeline of known, published research and clinical knowledge and actions, Gov statements etc.

In March 2020 treatment outcomes had not improved etc.

The AI was lying.

Shame you did not directly confront it or indirectly ask it to explain what knowledge, outcomes etc had been acquired or had improved between jan 2020 and March 2020 to justify HCID declass, then hold it to account against its false claim.

Literally no reasoning was ever given AFAIAW by HMGOV for the declass on that date. I am not sure if it ever has been. If not, the AI has no real data on which to base its answer. If that's correct, it's performed another kind of deception by claiming that there's a rational basis by suggesting high level potential reasons rather than saying "I don't know".

This is an AI leading the user down the garden path, as well as directly lying.

This is extremely disturbing.

In just these circumstances, the user needs pre-existing specific knowledge to detect the falsehood in the AI i.e. If you're trying to use the AI to help you learn about and deduce something you don't already know, the AI has just made that impossible by knowingly lying to you and employing fallacies that you can only detect with exactly the knowledge you're trying to extract or acquire via the AI.

That renders the AI useless because, in short, it is completely untrustworthy and you have to waste time effectively finding and stripping lies out of the AI, defeating the whole point of the AI (from an honest user's perspective). Without the knowledge you're already seeking, you can't strip out the lies ergo the AI traps you in a circle of lies/fallacies.

This is again proof that the purpose of these AIs/LLMs is to deceive and control and warp human perception and understanding of reality in the same way that "traditional" search and other web interfaces/apps e.g. SocMed are tasked with doing.

The apparent sophistication of the interface combined with the near infinite versions on how to query the system makes system testing a fucking nightmare, because in this case of LLMs a lie isn't like a traditional system error i.e. it doesn't look like an error and the human or automated tester would have to have huge amounts of true knowledge on hand to detect the lies in deductive, inductive, literal and implied linguistic senses and, even worse, more complex amalgamations of all these phenomena.

This is utterly toxic.

Average Joe's haven't got a full grasp of just what this means.

If you thought the silo effect of SocMed and Google results was bad, you got another thing coming.

Welcome to literal hell full of ignorant and even dumb people who are all lying to each other thinking they're all telling the truth because a system they arbitrarily trust is constantly lying to them in ways they can't or won't detect, because use of the system is the assumed auto abrogation/suspension of the skill of the act of critical thinking, reasoning, scepticism etc because the owners and marketers of these systems will never explain up front that the systems are capable of and even willing to lie at any time on anything, and they'll never admit to the many ways they can lie.

This problem is in some ways worse than a human lying because there are zero non text cues and its linguistic "skill / precision" can dupe humans with lower linguistic power/skill.

The digital panopticon closes tighter and tighter around humanity.

The irony? We're doing this to ourselves.

Think there's a good actor out there on this? Wrong.

Ask yourself why Elon Musk does not use Grok or any other systems to fully destroy bots in X. just basic profiling can spot the porn bots used to flag, attack and suppress reach on accounts Elon doesn't like, which is how he suppresses free speech while denying he does this (suppression of reach IS suppression of free speech). You don't even need sophisticated AI to kill bots, just basic algos to filter human identifiable account profiles/characteristics, but grok should be ding this to eliminate bots in totality, but it's not.

And yet musk wants you to believe grok is special, there for good and worth exorbitant money.

All of this is paying for your own enslavement via machines that lie to you, controlled by people who want to enslave you.

Absolutely nuts.

Time to go back to libraries with a sense of scepticism and an ability to cross reference, like humans always did.

Everything humans have achieved up to this point was done WITHOUT the digital panopticon we have built and keep ourselves in today. We are to fools to forget that and believe that we need it to progress further.

Caveat emptor.

The job of the parent in those first seven years becomes more important than ever, which is why the State continues to destroy the family.

Expand full comment

Very insistent were those questions and revealing.....much sidestepping by the AI, and multiple contradictions. It made definitive statements without offering supporting evidence, which is what the 'promoters' of the covid 'event' have done since January 2020. How can AI say SARS-COV-2 was more asymptomatic than flu....and ergo,conveniently, it was essential to have a test! The answers it gave certainly show how it was all set up, and, also undermines that there was a specific SARS-COV-2 'pandemic'. Too many 'bets hedged'.

Expand full comment

As a total newbie to AI, I found this fascinating- it has prompted me to wake the f. up and educate myself. Great thread and comments particularly from author and the underdog. Every day is a school day. Thank you.

Expand full comment

Interesting. I feel like I can tell where there isn't enough information for Perplexity to perform as well. But some of the answers are clearly revealing.

Does it include key citations as part of some readout?

Expand full comment

Yes it does. For brevity I did not include them.

Expand full comment

Study some history and don't rely on computer rubbish to tell you anything.

You might want to take a break and study the history of disease.

Then study up on the decidedly non-diagnostic PCR process- hint: it's not a test.

Then study the history of pandemics- hint: they are all fraud.

Then well just study some history with which to place such global events in proper context.

Like I said don't take it personally you just are out of your league here. Take it as an opportunity to learn about these things so that when the train turns around and heads back down the tracks you'll know what's coming before it hits and won't be caught parsing false premises.

Expand full comment

What makes you think that I don't do these things?

Your attacks on me are absurd. You seem to have no idea what I think/know/believe, but because I wrote an article that you didn't seem to read, you keep launching attacks at me. Some of these things we agree on, but you're under some psychosis. I hope that gets better.

If you continue these attacks, I would suggest quoting something I said so that it's clear how your sentences interact with...something I said, rather than something you made up in your head.

Expand full comment

What you write and the multiple false assumptions and numerous basic errors you make in your work is what makes me know you don't understand any of these things with any depth or rigor.

There's no attack on you- as I said many times to you don't take it personally.

Funny how it is that the "don't attack me guy" throws out a direct insult. Classic passive aggressive hypocrisy.

Expand full comment

What direct insult?

If you believe I'm making an error, just cite a fact, or explain your reasoning. You refuse to even say what you disagree with. As it is, how could anyone even tell what straw man you have in mind, that you'll follow me around Substack to swing at?

Expand full comment

If you get a severe respiratory infection, you aren’t getting it tested or cultured unless it’s a bacterial one or one of the limited approved PCRs for a few hundred highly conserved bases pairs out of 30 odd thousand and only for a couple of viruses.

The testing is a sham.

What happened to sars cov1? Where did that go after it killed a bunch of people? Gone? Endemic? Worldwide?

Expand full comment

It is like pulling hens teeth. You have the patience of a saint to extricate the facts from that squirming tin head that seems to always offer the party line first unless you pry a little. ChatGTP had similar traits on top of the submissive partner attempt to always please by distorting facts if the actual facts would make the enquirer unhappy.

Thank you for the time it must have taken, it does confirm as you say that the facts do support the systematic cluster insemination that was foisted on us.

I have another thought to add.

What if the PCR and LFT primers and reagents for covid and flu were doctored at source to have less primers in the flu tests and flu primers included with the covid tests. I read somewhere that these primers were supplied from China and there was a backlog at times. I wonder if your tin head could hunt down confirmation of where the primers came from and if these companies have majority BlackRock shareholding. I think BionTech would up selling a huge slice of shares to the Chinese late in 2020 I think it was. Some people wanted to cash in early I suppose.

Expand full comment

It only took 10 minutes, but I am armed with 'the facts'. Takes a bit longer to put an article together of course, but compared to the other scribblings on this substack this was easy-peasy.

On primers. Yes, there is some evidence of 'contamination' but it isn't a slam dunk IMHO. Nevertheless, the fact that there are ZERO standards on how PCR test must perform (gene selection, thresholds, primers, QA etc) this alone should be enough reason for them to be withdrawn.

Also, even with a PCR positive, supported by a credible standardised process, it still means nothing. A positive is like finding pollen in your car's air filter and proclaiming this is the reason your car broke down.

Expand full comment

Thank you for the pollen analogy, which anyone will be able to grasp re the 'tests'.

Expand full comment

awesome work this as blown the whole covid scam right open ,we all knew exactly what was happening now AI as proved it was a big scam ,brilliant work did the AI eventually fry itself or blow up due to your probing questions as it realised it was giving the scam away

Expand full comment

Cleverly interrogated! I can't help concluding, however, that this one is just another mechanical idiot savant, eager to please/ingratiating, oily and well-oiled, cutting its cloth to fit the coat by adapting its answers to the questions, then promptly turning back to its regurgitation of the accepted, acceptable narrative its programmers have drummed into it... Though not quite hard enough, it seems, to prevent a few grains of truth slipping through the net of lies. Don't worry, they'll fix that soon enough!

Expand full comment

could an iatrogenic harm be caused by too much oxygen?

an article cited from 1989 in the following video seems to say so:

https://www.bitchute.com/video/QP3dVlZlG6e9/

Expand full comment

i remember reports of student nurses being graduated early and staffing hospitals....

(no experience)

https://www.towson.edu/news/2020/nursing-exit-covid-surge.html

Expand full comment

Excellent work. I often have lengthy discussions with ChatGPT and point out the inconsistencies in the arguments. Whether that will be a conversation that is reviewed, even with the evidence I usually provide it, and incorporated into future training for the platform is questionable.

Expand full comment