I have spent the last 3 years warning about this and here we are.
This is not yet 1984, but all the bricks are in place. A thought police that was practically impossible 10 years ago is now on the horizon with AI. A few more technical innovations and we have pre-crime; put AI on drones and it's Terminator. The two together...
Authored by David James via The Brownstone Institute,
Laws to ban disinformation and misinformation are being introduced across the West, with the partial exception of the US, where the First Amendment means the techniques of censorship have had to be more clandestine.
In Europe, the UK, and Australia, where free speech is not as overtly protected, governments have legislated directly.
The EU Commission is now applying the ‘Digital Services Act’ (DSA), a thinly disguised censorship law.
In Australia the government is seeking to provide the Australian Communications and Media Authority (ACMA) with “new powers to hold digital platforms to account and improve efforts to combat harmful misinformation and disinformation.”
One effective response to these oppressive laws may come from a surprising source: literary criticism. The words being used, which are prefixes added to the word “information,” are a sly misdirection. Information, whether in a book, article, or post, is a passive artefact. It cannot do anything, so it cannot break a law. The Nazis burned books, but they didn’t arrest them and put them in jail. So when legislators seek to ban “disinformation,” they cannot mean the information itself. Rather, they are targeting the creation of meaning.
The authorities use variants of the word “information” to create the impression that what is at issue is objective truth, but that is not the focus. Do these laws, for example, apply to the forecasts of economists or financial analysts, who routinely make predictions that are wrong? Of course not. Yet economic or financial forecasts, if believed, could be quite harmful to people.
The laws are instead designed to attack the intent of the writers to create meanings that are not congruent with the governments’ official position. ‘Disinformation’ is defined in dictionaries as information that is intended to mislead and to cause harm. ‘Misinformation’ has no such intent and is merely an error, but even establishing that requires determining what is in the author’s mind. ‘Mal-information’ is considered to be something that is true but shared with an intention to cause harm.
Determining a writer’s intent is extremely problematic because we cannot get into another person’s mind; we can only speculate on the basis of their behaviour. That is largely why in literary criticism there is a notion called the Intentional Fallacy, which says that the meaning of a text cannot be limited to the intention of the author, nor is it possible to know definitively what that intention is from the work. The meanings derived from Shakespeare’s works, for example, are so multifarious that many of them cannot possibly have been in the Bard’s mind when he wrote the plays 400 years ago.
How do we know, for example, that there is no irony, double meaning, pretence or other artifice in a social media post or article? My former supervisor, a world expert on irony, used to walk around the university campus wearing a T-shirt saying: “How do you know I am being ironic?” The point was that you can never know what is actually in a person’s mind, which is why intent is so difficult to prove in a court of law.
That is the first problem.
The second one is that, if the creation of meaning is the target of the proposed law – to proscribe meanings considered unacceptable by the authorities – how do we know what meaning the recipients will get? A literary theory, broadly under the umbrella term ‘deconstructionism,’ claims that there are as many meanings from a text as there are readers and that “the author is dead.”
While this is an exaggeration, it is indisputable that different readers get different meanings from the same texts. Some people reading this article, for example, might be persuaded, while others might consider it evidence of a sinister agenda. As a career journalist I have always been shocked at the variability of readers’ responses to even the simplest of articles. Glance at the comments on social media posts and you will see an extreme array of views, ranging from strong approval to intense hostility.
To state the obvious, we all think for ourselves and inevitably form different views, and see different meanings. Anti-disinformation legislation, which is justified as protecting people from bad influences for the common good, is not merely patronising and infantilising, it treats citizens as mere machines ingesting data – robots, not humans. That is simply wrong.
Governments often make incorrect claims, and made many during Covid.
In Australia the authorities said lockdowns would only last a few weeks to “flatten the curve.” In the event they were imposed for over a year, and there never was a “curve.” According to the Australian Bureau of Statistics, 2020 and 2021 had the lowest levels of deaths from respiratory illness since records began.
Governments will not apply the same standards to themselves, though, because governments always intend well (that comment may or may not be intended to be ironic; I leave it up to the reader to decide).
There is reason to think these laws will fail to achieve the desired result. The censorship regimes have a quantitative bias. They operate on the assumption that if a sufficient proportion of social media and other types of “information” is skewed towards pushing state propaganda, then the audience will inevitably be persuaded to believe the authorities.
But what is at issue is meaning, not the amount of messaging. Repetitious expressions of the government’s preferred narrative, especially ad hominem attacks like accusing anyone asking questions of being a conspiracy theorist, eventually become meaningless.
By contrast, just one well-researched and well-argued post or article can permanently persuade readers to an anti-government view because it is more meaningful. I can recall reading pieces about Covid, including on Brownstone, that led inexorably to the conclusion that the authorities were lying and that something was very wrong. As a consequence, the voluminous mass media coverage supporting the government line just appeared to be meaningless noise. It was only of interest in exposing how the authorities were trying to manipulate the “narrative” – a debased word that was once used mainly in a literary context – to cover their malfeasance.
In their push to cancel unapproved content, out-of-control governments are seeking to penalise what George Orwell called “thought crimes.”
But they will never be able to truly stop people thinking for themselves, nor will they ever definitively know either the writer’s intent or what meaning people will ultimately derive.
It is bad law, and it will eventually fail because it is, in itself, predicated on disinformation.