In the past decade there hasn’t been a year without a politician calling for real names on the internet. Some even want to force people to use real photos as profile pictures. All in the name of stopping online hate, even though enforcing real names has long been shown to make the problem worse.
This article presents a different solution, one that has been proven to keep communication friendly even in the most anonymous environment of the fully decentralized Freenet project.
And it works without enabling the censorship and harassment that requiring real names would.
The Web of Trust (WoT) was conceived when Frost, one of the older forums on Freenet, broke down due to intentional disruption: some people realized that full anonymity also allowed for automatic spamming without repercussions. For several months they drowned every board in spam, so people had to spend so much time ignoring spam that constructive communication mostly died.
Those spammers turned censorship resistance on its head and used spam to censor, similar to people who claim that free speech gives them the right to shout down everyone who disagrees with them. Since the central goal of Freenet is censorship resistance, something had to be done. The problem with Frost was that everyone could always write to everyone. Instead of entering an arms race of computing power or bandwidth, Freenet developers encoded decentralized reputation into communication, focused on stopping spam.
To make your messages visible to others, you have to be endorsed by someone they trust. When someone answers one of your messages without marking you as a spammer, that counts as an endorsement. To get initial visibility, you solve CAPTCHAs, which makes you visible to a small number of people. This is similar to having moderators with a moderation queue, except that users choose their own moderators.
If someone then starts spamming, users who see the messages mark the sender as a spammer. To decide whose messages to see, users sum up all the endorsements (positive) and spam marks (negative), weighted by the closeness in social interaction of those who gave them. If the total is negative, the spammer’s messages are not even downloaded.
That method still provides full anonymity, but with accountability: you pay for misbehavior by losing visibility. This is the inverse of Chinese censorship: in China you are punished if your message reaches too many people. In Freenet you become invisible to the people you annoy, and to those who trust them but not you (their own decision always wins).
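The visibility rule described above can be sketched as follows. This is a minimal illustrative model, not the actual WoT implementation (which uses trust values, ranks, and capacities); the user names, weights, and data layout here are hypothetical.

```python
# Hypothetical marks[rater][author]: +1 for an endorsement, -1 for a spam mark.
marks = {
    "alice": {"carol": +1, "mallory": -1},
    "bob": {"mallory": -1},
}

# Hypothetical closeness[viewer][rater]: how much a viewer weights each
# rater's opinion, based on social interaction. The viewer weights their
# own marks at the maximum, so their own decision always wins.
closeness = {
    "dave": {"dave": 1.0, "alice": 0.6, "bob": 0.3},
}

def score(viewer: str, author: str) -> float:
    """Weighted sum of all marks on `author`, from `viewer`'s point of view."""
    total = 0.0
    for rater, weight in closeness.get(viewer, {}).items():
        mark = marks.get(rater, {}).get(author)
        if mark is not None:
            total += weight * mark
    return total

def is_visible(viewer: str, author: str) -> bool:
    """A negative total means the author's messages are not even downloaded."""
    own = marks.get(viewer, {}).get(author)
    if own is not None:          # the viewer's own mark overrides everyone else
        return own > 0
    return score(viewer, author) >= 0
```

In this toy example, dave has never rated mallory himself, but because alice and bob (whom he interacts with) both marked mallory as a spammer, mallory’s messages stay invisible to dave, while carol’s remain visible.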
But wait, does that actually work? Turns out that it does, because it punishes spammers by taking away visibility, the one currency spammers care about.
It is the one defence against spam which inherently scales better than spamming. And it keeps communication friendly.
I have now repeated the claim three times that the WoT keeps communication friendly (including in the title). Let’s back it up: why do I say that?
For the last decade, Freenet has provided three discussion systems side by side. One is Frost, without a Web of Trust. One is FMS, with user-selected moderators as its Web of Trust. And the third is Sone, with propagating trust as its Web of Trust. On Frost you see what happens without these systems: insults fly high and the air is filled with hate and clogged with spam. It is therefore very likely that FMS and Sone are targets of the same users. With no centralized way of banning anyone, they face a harder challenge than most systems on the clearnet (though with much less financial incentive for spammers).
Yet discussions there are friendly, constructive and often controversial. Anarchists, agorists, technocrats, democrats, LGBT activists and religious zealots discuss without going for each other’s throats.
And since this works in Freenet, where very different people clash without any fear of real-life repercussions, it can work everywhere.
How can this be applied to systems outside Freenet — for example federated microblogging like GNU social?
You can translate the required input to the Web of Trust as described in the scalability calculation to use information available in the federation:
Together, these translations reduce global moderation to moderation on a smaller instance plus calculations based on existing social interaction.
(Finally written down while listening to a Techdirt podcast about content moderation)