The centrepiece of the Guardian’s new “the web we want” section is a piece of alleged statistical research into the “70m comments left on its site since 2006.”
Published under the ominous title The Dark Side of Guardian Comments, its most significant conclusions are trumpeted from the article’s subhead:
As part of a series on the rising global phenomenon of online harassment, the Guardian commissioned research into the 70m comments left on its site since 2006 and discovered that of the 10 most abused writers eight are women, and the two men are black.
Since this claim fits so very conveniently into the precise narrative being sold by the Guardian and the government right now as a reason to further regulate the internet, we need to look pretty closely at how this research has been conducted, and how its conclusions were drawn. Most importantly of all – how did the authors of this research define the “abuse” they were quantifying?
Well, as it happens, they are good enough to tell us the answer to that. You could overlook it if you were not reading closely, because the construction of the piece is chaotic and tends to meander back and forth between unsupported assertions and random gobbets of “data.” But if you look – there it is: a clear and unambiguous explanation of how this report defines “abuse.”
For the purposes of this research, therefore, we used blocked comments as an indicator of abuse and disruptive behaviour. Even allowing for human error, the large number of comments in this data set gave us confidence in the results.
Let’s take that again…
For the purposes of this research, therefore, we used blocked comments as an indicator of abuse and disruptive behaviour…
Surely not. There must be some mistake. Are these people really saying what they seem to be saying?
Yes. They are. These perfectly Orwellian “researchers” have gotten their statistics on abuse by simply assuming every comment ever removed was “abusive or disruptive”. And that’s totally justified, because they have “confidence” they’re right.
Oh and because…
The Guardian’s moderators don’t block comments simply because they don’t agree with them.
Well, I don’t know about you, but I’m convinced.
Let’s be clear. What they are selling to you as statistics about the rate of abuse are actually statistics about the rate of censorship, and nothing else. Jessica Valenti doesn’t get more abuse than other columnists, she has more comments removed – possibly even at her own insistence.
The centrepiece of their campaign, the alleged “database” on the abuse of vulnerable minorities, is an absolute lie. They are simply hoping you won’t notice the bait and switch.
With that in mind, when you read the article just replace each instance of the words “abuse” or “disruption” with the word “censored” and you’ll get a clearer picture, as here:
We also found that some subjects attracted more abusive or disruptive [censored] comments than others. World news, Opinion and Environment had more than the average number of abusive or disruptive [censored] comments… Conversations about crosswords, cricket, horse racing and jazz were respectful [uncensored]; discussions about the Israel/Palestine conflict were not.
Remove the weasel words and this is what we are being told. Political discussions get censored. Crosswords don’t.
Now let’s go back to that subhead and make it represent their actual data:
As part of a series on the rising global phenomenon of online harassment, the Guardian commissioned research into the 70m comments left on its site since 2006 and discovered that of the 10 most abused writers [the writers whose articles have the most comments removed], eight are women, and the two men are black.
You can see why they decided to sex it up, can’t you?