The Freenet Web of Trust keeps communication friendly with total anonymity

In the past decade there hasn’t been a year without a politician calling for real names on the internet. Some even want to force people to use real photos as profile pictures. All in the name of stopping online hate, even though enforcing real names has long been shown to make the problem worse. This article presents another solution, one that has actually been shown to keep communication friendly, even in the most anonymous of environments: the fully decentralized Freenet project.

And it does it without enabling censorship.

The Web of Trust (WoT) was conceived when Frost, one of the older forums on Freenet, broke down due to intentional disruption: some people realized that full anonymity also allowed automated spamming without repercussions. For several months they drowned every board in spam, forcing people to spend so much time ignoring spam that constructive communication mostly died.

Those spammers turned censorship resistance on its head and censored with spam, much like people who claim that free speech gives them the right to shout down everyone who disagrees with them. Since the one central goal of Freenet is censorship resistance, something had to be done. The problem with Frost was that everyone could always write to everyone. Instead of entering an arms race of computing power or bandwidth, Freenet developers set out to encode decentralized reputation into communication, focused on stopping spam.

To make your messages visible to others, you have to be endorsed by someone they trust. When someone answers some of your messages without marking you as a spammer, that counts as an endorsement. To get initial visibility, you solve CAPTCHAs, which makes you visible to a small number of people. This is similar to having moderators with a moderation queue, but users choose their own moderators.

That still provides full anonymity, but with accountability: you pay for misbehavior by losing visibility. This is the inverse of Chinese censorship: in China you get punished if your message reaches too many people. In Freenet you become invisible to the ones you annoy — and to those who trust them but not you (your own decision always wins).
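
To make that rule concrete, here is a minimal Python sketch of such a visibility decision. The function and the single-hop averaging are my own illustration under simplified assumptions, not the actual WoT scoring algorithm, which propagates trust over several hops.

    # Illustrative sketch only: the names and the single-hop rule are assumptions,
    # not the real Web of Trust implementation.
    def is_visible(reader, author, own_trust, peer_trust):
        """Decide whether reader sees messages from author.

        own_trust:  dict (truster, trustee) -> trust value in [-100, 100],
                    set explicitly by the truster.
        peer_trust: same shape, holding trust values published by others.
        """
        # 1. An explicit decision by the reader always wins.
        own = own_trust.get((reader, author))
        if own is not None:
            return own >= 0

        # 2. Otherwise ask the peers the reader trusts positively.
        votes = [value
                 for (truster, trustee), value in peer_trust.items()
                 if trustee == author
                 and own_trust.get((reader, truster), -1) > 0]

        # 3. Nobody the reader trusts knows the author: stay invisible until
        #    the author earns visibility, e.g. by solving a CAPTCHA.
        if not votes:
            return False
        return sum(votes) / len(votes) >= 0

    # Example: Alice trusts Carol, and Carol marked Mallory as a spammer.
    own = {("alice", "carol"): 80}
    peers = {("carol", "mallory"): -100}
    print(is_visible("alice", "mallory", own, peers))  # False
    print(is_visible("alice", "carol", own, peers))    # True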

But wait, does that actually work? Turns out that it does, because it punishes spammers by taking away visibility, the one currency spammers care about.

It is the one defence against spam which inherently scales better than spamming. And it keeps communication friendly.

Now I repeated that claim three times (including the title). Let’s back it up. Why do I say that the WoT keeps communication friendly?

For the last decade, Freenet has been providing three discussion systems side by side. One is Frost, without Web of Trust. One is FMS, with user-selected moderators as Web of Trust. And the third is Sone, with propagating trust as Web of Trust. On Frost you see what happens without these systems: insults fly high and the air is filled with hate and clogged by spam. Consequently, FMS and Sone are very likely targets of the same users. With no centralized way of banning someone, they face a harder challenge than most systems on the clearnet (though with much less financial incentive).

Yet discussions are friendly, constructive and often controversial. Anarchists, agorists, technocrats, democrats, LGBT activists and religious zealots discuss without going at each other’s throats.

And since this works in Freenet, it can work everywhere.

Further reading:

Can this be applied to systems outside Freenet — for example federated microblogging like GNU social?

You can translate the input required by the Web of Trust (as described in the scalability calculation) into information that is already available in the federation:

  • As WoT identity, use the URL of a user on an instance. It is roughly controlled by that user.
  • As peer trust from Alice to Bob: if Alice follows Bob, use a trust of 100 - (100 / number of messages from Alice to Bob), as sketched in the example below.
  • As negative trust use a per-user blacklist (blocked users).
  • For initial visibility, just use visibility on the home instance.

These together reduce global moderation to moderation on a smaller instance and calculations based on existing social interaction.
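
As a rough illustration, here is a small Python sketch of that mapping. The data shapes (profile URLs, follow sets, message counts, blocklists) are assumptions chosen for the example, not an existing GNU social or Web of Trust API.

    # Illustrative sketch: input shapes are assumed, not a real federation API.
    def wot_input(profile_url, follows, message_counts, blocklist):
        """Translate one user's federation data into Web of Trust input.

        profile_url:    the user's profile URL on their instance (the identity).
        follows:        set of profile URLs this user follows.
        message_counts: dict profile URL -> number of messages sent to that user.
        blocklist:      set of blocked profile URLs (negative trust).
        """
        trust = {}
        for other in follows:
            n = message_counts.get(other, 0)
            # Trust approaches 100 the more often this user writes to `other`.
            trust[other] = 100 - 100 / n if n > 0 else 0
        for other in blocklist:
            trust[other] = -100  # blocked users get full negative trust
        return {"identity": profile_url, "trust": trust}

    # Example: ten messages to a followed user yield a trust of 90.
    print(wot_input("https://example.social/alice",
                    follows={"https://example.social/bob"},
                    message_counts={"https://example.social/bob": 10},
                    blocklist=set()))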

(Finally typed up while listening to a Techdirt podcast about content moderation.)
