Prebunking of malinformation.
Just sit with these concepts for a second and reflect on them. Is there any conceivable way to interpret them in a non-Orwellian sense? I doubt it.
We are now to be “inoculated” against misinformation and harmful narratives by being given warnings and preemptive counterarguments against specific unwanted ideas or positions, as outlined by Jigsaw, a subsidiary of Google, in its collaborative project with the BBC and Cambridge University. Jigsaw advertises itself as “using technology for countering extremism”, not least through AI. One system developed by Jigsaw is an AI surveillance system called Perspective, whose purpose is to automatically detect and suppress “toxic” comments and identify problematic users.
(Remind me of the objective criteria for detecting “toxicity” as an independent variable in discourse without regard to context.)
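For what it's worth, Perspective's publicly documented API does exactly this: it takes an isolated snippet of text and returns “toxicity” as a single probability score, with no view of the surrounding conversation. A minimal sketch of the request shape, based on the public API documentation (the API key is a placeholder, and no network call is made here):

```python
# Sketch of a Perspective API toxicity request, based on the publicly
# documented endpoint and request shape. API_KEY is a placeholder; we
# only build the JSON payload here, we don't send it.

ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=API_KEY"
)

def build_toxicity_request(text: str) -> dict:
    """Build the JSON body Perspective expects: the raw comment text
    plus the attribute(s) to score. Note that only the isolated text
    is sent; the scorer sees none of the surrounding discussion."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_toxicity_request("Your comment here.")
print(payload)
# The response nests the score at
# attributeScores.TOXICITY.summaryScore.value, a probability in [0, 1].
```

The point being that “toxicity” is operationalized as a context-free number attached to a string of text, which is precisely the parenthetical objection above.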
Jigsaw is headed by Yasmin Green, a fellow of the Aspen Institute (itself funded in part by the Gates Foundation) and a participant in its Commission on Information Disorder (Orwellspeak again), which is basically a think tank focusing on the strategic use of the new digital technologies for the purposes of propaganda. One of the first fruits of this endeavour is the broad-scope prebunking project.
In terms of the overall background, there seems to have been a transition from marketing censorship and content suppression through the targeting of “hate speech” in the pre-2020 period, to addressing these issues in terms of misinformation, false narratives or hostile (Russian and Chinese) propaganda since the Covid event.
This is a significant epistemic shift. In suppressing “hate speech”, we’re only dealing with an ethical categorization of certain speech acts, symbols or modes of communication that are considered harmful or unjust when they target specific persons or groups. Truth doesn’t factor into the discussion in any significant sense, and the suppressions aren’t based on anyone’s claim to possess the actual and final truth of the matter.
After 2020, on the other hand, we’re seeing a much blunter approach: suppressing undesirable ideas, perspectives and positions based on their purported truth-value. And this is connected to the long-term purpose of a strategic reproduction of preferred narratives in the information sphere.
We can divide this effort into two categories. On the one hand, we have the outright suppression of information, the downranking in feeds, the removal of channels, the blocking of user accounts and so forth, for the purpose of removing or limiting unwanted statements, opinions or facts or what have you.
The second category is this currently developing type of responsive and adaptive pre-propaganda exemplified by the “prebunking” project, but also by related experimental efforts (hello logically.ai) at nipping emergent “disruptive narratives” in the bud through surveillance and real-time destabilization efforts via targeted counter-information.
This activity is framed as a sort of “inoculation” against disinformation, harmful content, various sorts of manipulation and so forth, with the specific purpose of “building preemptive resilience to misinformation”. One would assume that this regards the inculcation of critical thinking or principles for sound analysis, criticism and empirical assessment, but what we seem to be dealing with here are instead techniques of propaganda, especially for branding particular narratives or categories of narratives as tainted or taboo, and otherwise dissuading people from engaging with them.
The following excerpt is incredibly telling. Let’s just say that they don’t really consider us sovereign human persons capable of drawing independent conclusions from our own rational inquiries. Only they are the arbiters of the correct perspectives, only they get to say what’s true or not, and while their attempts at strategic opinion formation are honorable and beneficial, any and all dissenting positions are potentially “misinformation narratives” characterized by manipulation techniques:
At best, you get a bowdlerized summary of a position (a “microdose”, or “weakened example”, an immunizing stereotype of the forbidden narrative, to reconnect with the medicalization of the political discourse) so you can recognize it, and you get a few superficial arguments that lead you to believe that the position or narrative in question is debunked or insufficiently supported, and it gets labelled in the back of your head as untouchable, taboo or low-status. Not really an in-depth, charitable and critical review of a position.
Compare with Jacques Ellul’s reflections on how propaganda intentionally limits and mutilates our capabilities of critical reflection:
To begin with, what is it that propaganda makes disappear? Everything in the nature of critical and personal judgment. Obviously, propaganda limits the application of thought. It limits the propagandee’s field of thought to the extent that it provides him with ready-made (and, moreover, unreal) thoughts and stereotypes. It orients him toward very limited ends and prevents him from using his mind or experimenting on his own.
It determines the core from which all his thoughts must derive and draws from the beginning a sort of guideline that permits neither criticism nor imagination. More precisely, his imagination will lead only to small digressions from the fixed line and to only slightly deviant, preliminary responses within the framework.
(Ellul, J. Propaganda.)
And compare with Goebbels’ principles of propaganda (from Doob, L. 1950):
Although his basic attitude toward enemy propaganda was one of contempt, Goebbels combed enemy broadcasts, newspapers, and official statements for operational items. Here he was not motivated by the somewhat defensive desire to reply to the enemy, but by offensive considerations: words of the enemy could help him reach his propaganda goals.
…
Again and again Goebbels placed great stress upon phrases and slogans to characterize events. At the beginning of 1942, for example, he began a campaign whose purpose was to indicate economic, social, and political unrest in England. He very quickly adopted the phrase “schleichende Krise” - creeping crisis - to describe this state of affairs and then employed it “as widely as possible in German propaganda” … To achieve such effects, phrases and slogans should possess the following characteristics:
a. They must evoke desired responses which the audience previously possesses. If the words could elicit such responses, then Goebbels' propaganda task consisted simply of linking those words to the event which thereafter would acquire their flavor.
b. They must be capable of being easily learned. "It must make use of painting in black-and-white, since otherwise it cannot be convincing to people," Goebbels stated with reference to a film he was criticizing.
This principle of simplification he applied to all media in order to facilitate learning. The masses were important, not the intellectuals. All enemy "lies" were not beaten down, rather it was better to confine the counter-attack to a single "school example". Propaganda could be aided, moreover, by a will to learn.
(Doob, L. (1950). “Goebbels’ Principles of Propaganda”, The Public Opinion Quarterly.)
To return to the epistemic shift I mentioned: as opposed to simple “hate speech”, which is anchored in ethical injunctions against racism and structural injustice, unwanted information is in contemporary discourse categorized as disinformation, misinformation, and now also “malinformation”, in a sort of descending hierarchy running from outright false information down to information that is simply unwanted because it is disruptive from a certain perspective. Not kidding, these are literally the operative definitions.
Disinformation, then, is somehow and on some grounds categorized as false and deceptive information spread to intentionally mislead.
Misinformation is a much vaguer concept. In distinguishing it from disinformation, its epistemic status is unclear: it’s not necessarily false, yet for some reason, and in some sense, it is considered misleading relative to certain interests, truth-claims and/or objectives. It’s also generally characterized as not being intentionally misleading, whereas disinformation supposedly is.
Malinformation, on the other hand, is true. It concerns facts and verifiably true statements whose dissemination, for some reason and in somebody’s judgment, is considered harmful.
And the erection of automated structures for the suppression of information is of course decidedly problematic even when it only concerns ostensibly false, malicious or misleading statements or sets of propositions.
For one, there is the implicit assumption that we can establish the epistemic status of a proposition, that is, whether it’s true or false, probable or improbable, with such a degree of certainty that we can conscientiously suppress it without risk of being in error. This is philosophically and scientifically astonishing.
There are countless examples of scientific theories and hypotheses that were disregarded as false for decades or even centuries, and which later, when they were allowed to have their say once more, were actually found to be true in the light of new evidence – or found to be useful, or to contain important observations, categories and definitions that helped move knowledge forward.
And keep in mind that this is within the relatively narrow framework of science, where the epistemic and evidentiary rules are pretty clear.
So to then move beyond this relatively unambiguous realm and start talking about propositions in general, statements in general, and to assume that we have a reliable, objective mechanism or method for determining whether statements are true or false at such a level of certainty and discursive unambiguity that we can suppress them through automated structures of moderation and censorship?
This is literally preposterous. It’s patently insane. It’s also incredibly dangerous for countless reasons.
Not least because these kinds of structures will certainly cause the stagnation of public discourse, just as banning rejected scientific theories outright (such as the recovered and repurposed atomic theory) would have caused the stagnation of physics. In a centralized information environment such as ours, the institutionalization of these measures means the rapid and almost universal establishment of an authoritarian power structure.
And we’re still only talking about “misinformation”. In other words, these problems are inevitable even if we just try to ban perspectives and viewpoints that most of us could agree are false or probably false.
If we then move to the other end of the spectrum and allow for the censoring of so-called “malinformation”, that is, the censoring of true or probably true and factually well-supported positions – something several corporate actors in the digital media sphere (logically.ai) are actively engaged in at this very moment – then we are basically dismantling the very foundations of the open society as we know it.
Can you think of a set of objective, non-partisan grounds, compatible with freedom of speech and an open society, that would allow us to properly assess whether a set of factually true propositions are harmful and should be suppressed or not?
Of course not.
To bald-facedly suppress true statements through an automated surveillance structure reduces public discourse to power. It disregards evidentiary considerations and issues of epistemology entirely, and absolutely subsumes the expression of truth and the search for knowledge under the structures of authority.
+++
I’ll just leave this here.
It's not working.
Recently I had a friend asking about the true cause of polio.
She's not into the alternative, but she is now!
She searched for the factors that caused paralysis at the time.
All the search engines gave fact checks saying that there's no connection between polio and ddt/pesticides! 😂 🤡
Because of that, she suspected it.. before that, she had no idea that DDT was one of the reasons for the issues that was blamed on the bullshit virus lol.
Because of how blatant the fact checkers try to squeeze 10 lbs of shit in a 5 lb bag, it becomes obvious today to even her that there's something obscured.