Panicked elites are only too happy to throw the democratic baby out with the disinformation bathwater.
The highest court in Romania has cancelled the presidential election that was underway, calling the results of the first round null and void because of Russian bots peddling misinformation and supporting an outsider candidate – who surprised everybody and won. Fresh elections are expected but the date has not been set yet. Nor has it been made clear, as far as I know, what will happen if the wrong guy wins again.
The problem, we are told, is that the Russians ran a disinformation campaign on social media: some 25,000 TikTok accounts were deemed by intelligence services to be part of a coordinated campaign, with a “state-like foreign actor” named as the orchestrator, leaving others to join the dots and name Russia. And, according to the experts interviewed here, no particular agents or operatives have been named. A criminal investigation is expected to follow, where such details might come to light.
Alongside this online disinformation campaign, there have been allegations of cyberattacks on election infrastructure, sometimes described together as a “hybrid” attack. But it has not been made clear, anywhere I’ve seen, that the cyberattack component succeeded in changing vote counts.
It’s what the Russians did to voters’ minds that’s the problem. They hacked the people, apparently.
The Romanian courts (ranked 51 out of 102 by the World Justice Project’s Rule of Law Index) are being applauded by figures in the US foreign policy establishment, and press coverage of the decision has been largely favourable.
Writing for the Foreign Policy Research Institute, Antonia Colibasanu suggests that their actions provide a “blueprint for defending democracy”. Having respectable thinkers with fellowships at major think tanks discussing courts cancelling elections to defend democracy is, to say the least, a sign we live in interesting times. Colibasanu clearly knows the details, down to the fact that the court cited “Article 50, paragraph 3, of Romania’s electoral law”, which apparently covers this exact scenario.
But my goal here is not to pretend I know what’s going on in Romania. I don’t.
I suggest Colibasanu’s article and the interview linked in the second paragraph (from an English-language broadcaster funded by the Polish government) if you want to get into the details.
My point is that this approach, identifying and reacting against disinformation in a top-down, whack-a-mole fashion, is an extension of a wrongheaded response to the general epistemological crisis brought on by the digital age.
What struck me about the Romanian court’s decision is that it is an extreme application of the default approach to the “fake news” problem. This approach views disinformation as an intrusion into what is, or was, an otherwise healthy and legitimate public discourse. We could call this the pathogen theory of disinformation. If we can kill the virus, cut out the cancer, stop the flow of poison, then health will return.
An example of this approach would be the six-part disinformation schema proposed by Alan Jagolinzer, who runs the Cambridge Disinformation Summit. In this paradigm, a disinformation campaign has six components.
Of these points, it’s probably point 5 that causes me the most concern. After all, who’s to say who’s vulnerable? Who gets to be the surgeon, and who has to be the patient, and go under the scalpel?
It seems to me that such a framework is designed for exactly this kind of top-down intervention (and “intervention” is a word that Jagolinzer and the Cambridge Summit have used). But when I challenged him on social media about it, he insisted that this was not top-down, but a set of tools that anyone could use to “deconstruct” disinformation, including, for example, a teenager viewing a post on Instagram.
Imagine a teen getting a message from an influencer then using the framework to evaluate the influencer and the message. This allows the teen to try to deconstruct the message and the influencer’s intentions. Then the teen can independently make decisions. That’s one framing.
— Alan Jagolinzer (@jagolinzer.bsky.social) 11 December 2024 at 14:48
And of course, I don’t know his motivations, or anyone else’s.
But the same problem, identifying motive, will plague anyone attempting to tell whether a false claim they encounter is being made intentionally or by accident, and anyone attempting to draw lines between those who are lying and those who are wrong in good faith.
What about claims originally made by Russian bots, but now circulating in the Romanian community, propagated by misled but earnest citizens? How do we take that salt out of the soup?
There are many variations on this approach, which seeks to weed the bad actors out of the public garden. They are all misguided, resting on a mistaken view of the pre-internet media landscape, which causes them to misunderstand the internet’s impact on that landscape.
There is an assumption, usually unarticulated and implicit, of an otherwise healthy and orderly public discourse, into which disinformation has intruded. This is the foundational error upon which the whole approach rests.
To understand pre-internet, 20th century media, the best place to start is with Noam Chomsky’s statement that “the smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum.” During this apparent (but illusory) era of consensus, the furthest left of the major publications and the furthest right provided bookends to the library of acceptable thought.
Chomsky’s point was that our media was still part of the oppressive state-corporate complex, and not an ideologically neutral space, in which all voices were equally or fairly represented. My point is that this allowed for an illusion of consensus, since those outside the mainstream spectrum were invisible, even to each other, and often to those in power.
This apparent consensus might include untrue statements like “American foreign policy is guided by benevolent intentions.” That was Chomsky’s point. But it also included true statements like “Vaccines are safe and effective.”
This illusion of consensus obviously wasn’t a perfect situation, and deserved to be criticized. But it had some upsides too. Now, information chaos has become a tool of regressive forces – as is apparently the case in Romania, where, for the record, anti-vaxxer candidates also had a good run during the most recent elections.
So the problem we face is not so much one of identifying and suppressing disinformation as of achieving a sufficiently broad and visible consensus – something that requires excluding certain viewpoints from serious consideration – without reverting to the era of institutional gatekeepers, which would be a bad idea even if it were possible.
The solution we propose here at Stone Transparency is to create a new, more accountable, kind of journalism. The central tool for doing that is what we call a video bibliography, or vibliography. Here’s an example of what that looks like in practice:
That’s a research portal. It contains highlights from my research for this story. If you click “see research”, you’ll see the whole research project, not just the highlights.
I started the Stone app before I got to work on the story, and throughout, it took one screenshot per second, backing up the source materials I encountered, in the state that I encountered them, at the time that I encountered them, by default. That’s how the bulk of the research gets recorded. Then at key moments, when I found something interesting or made a decision about how to proceed (which threads to follow down which rabbit holes), I turned on my webcam and recorded a short commentary of 30 seconds at most. That’s how the highlights shown in the portal above get recorded.
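The per-second capture described above can be sketched as a simple timed loop. To be clear, this is an illustrative sketch, not Stone’s actual code: `capture_fn` is a hypothetical stand-in for real screenshot capture, and the injectable `clock` and `sleep` parameters are my addition so the scheduling logic can be exercised deterministically.

```python
import time
from datetime import datetime, timezone

def capture_loop(capture_fn, interval=1.0, duration=3.0,
                 clock=time.monotonic, sleep=time.sleep):
    """Call capture_fn roughly once per `interval` seconds for `duration` seconds.

    Each capture is tagged with a UTC timestamp, so the source material is
    preserved in the state encountered, at the time encountered.
    """
    records = []
    start = clock()
    next_tick = start
    while clock() - start < duration:
        stamp = datetime.now(timezone.utc).isoformat()
        records.append((stamp, capture_fn()))
        # Schedule the next tick relative to the start, not the previous
        # capture, so slow captures don't accumulate drift.
        next_tick += interval
        delay = next_tick - clock()
        if delay > 0:
            sleep(delay)
    return records

# Usage with a stub in place of real screenshot capture:
frames = capture_loop(lambda: "frame", interval=1.0, duration=3.0)
```

The fixed-schedule design (advancing `next_tick` rather than sleeping a flat second after each capture) is what keeps the record at one frame per second even when an individual capture takes a noticeable fraction of that second.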
The great thing about this tool is that journalists mostly hate it, and cannot use it, because their research process is dogshit, or non-existent. They are just adding noise to the system, not signal, and if we have a way to exclude them from serious consideration, all the better. Stone users will, when it inevitably catches on, form an elite tier of high-trust media, above the fray of online he-said-she-said.
Whereas the disinformation-as-pathogen approach seeks to fix the problem at the bottom, by protecting “vulnerable communities” from disinformation and malign actors, we seek to fix it at the top, by making journalism as good and as evidence-based as it can be, not just as good as it used to be.
Follow Austin on Bluesky at @austingmackell.bsky.social and on threads at @austinmackell