A Trust and Moderation System for the Decentralized Web

Public social networks, comment sections, and discussion forums are broken. The internet promised limitless access to knowledge and thoughtful, intelligent conversations with people from all over the world. Instead, we have trolls, manipulators, outrage, and spam. 

It might seem like decentralization would make these problems worse. After all, most decentralized or unmoderated social networks turn into dumpster fires of low-quality content over time. Without moderators or the threat of being banned, the worst users feel emboldened to spread their drivel across the entire network. As quality degrades, the best users are turned off and the network becomes less useful and engaging.

But there is a solution that gives power back to the people, allowing them to moderate their own world, working with friends to keep it free from low-effort, thoughtless discourse. It works by mirroring the real world: give everyone a reputation and the ability to ignore those with a poor reputation.

In this article I’m going to show how a decentralized moderation system can not only match existing systems but surpass them, creating a higher-quality social network.

But first, let’s take a step back and see what centralized moderation gets wrong.

Centralized Moderation and its flaws

Centralized moderation refers to social networks or forums where one central entity, such as a corporation or the owner of the site, has sole moderation authority over the content. There is usually one set of rules that all users must follow; if those rules are skirted, the offending content is deleted and repeat offenders are banned.

The flaws center on content that some users enjoy and others do not, and on how that content is handled.

With centralized moderation we all have to play by the same rules, so sites have to do the best job they can for everyone at once. If they are overly harsh in moderating they’ll have a good signal-to-noise ratio, but the content may become overly partisan or boring. If they’re too lenient the content gets noisier, which makes it less enjoyable as there are more low-effort posts to sort through, and some of the worst content will turn many people off.

Content Spectra

Imagine rating every piece of social media content along the following content spectra:

  • Low Quality → High Quality
  • Low Offense → High Offense
  • Accurate → Inaccurate

Consider where the content you enjoy most fits on these spectra. It’s likely not a fixed point, but rather a range of what is considered acceptable and a smaller range of what is considered ideal.

The worst content generally lies at the extremes of these spectra: spam is extremely low quality, haters are highly offensive, and manipulators are frequently inaccurate.

So, a site has a good signal-to-noise ratio when a large share of the content falls into one’s ideal zone.

Users want to see content that falls into their ideal zone, rather than have to hear everyone’s opinion. If too much falls outside their ideal zone they’ll often become bored and leave the site.

Sometimes heavy moderation of content that falls outside of one’s “ideal zone” is a good thing, because it means you’ll enjoy the content you read more, without having to wade through merely acceptable content.

The central flaw of centralized networks is that by having a set of rules that all users must abide by, sites are presetting the allowable content spectra. These preset spectra not only decide the kind of users that will be allowed on the site, but also those attracted to it, because the users whose own content spectra more closely match the site’s will tend to stick around.

It turns out centralized services have thought about this problem over the past decade and have come up with a solution: algorithmically determined feeds. If everyone has their own feed, an AI algorithm can learn what one’s ideal content spectrum is and show the content that most appeals to them. This feed is a filter on the world that helps one view the site in a curated, “ideal” light.

Algorithmically determined feeds are the next stage in the evolution of social networks, but these feeds have their own set of problems.

The Algorithmically Chosen Feed Problem

Having a social feed driven by an algorithm can be useful for surfacing all the content you’d like to see most on a social network, but it also has dark sides that most people rarely think about.

The biggest problem with utilizing algorithms to customize social feeds is that the user has no insight into the decisions the algorithm makes. With the rise of neural networks, at times even the developers of these algorithms are unable to explain their internal logic.

When a post is made, even one into which a lot of time and effort has been put, the algorithm may decide that it’s low quality, rank it low, and cause it to never appear in friends’ algorithmically chosen feeds. Under this paradigm, neither the user’s nor their friends’ opinions matter; the algorithm has decided and its decision is final.

Furthermore, because these algorithms are applied to millions or even billions of people, there is a large financial incentive for manipulative companies or organizations to reverse engineer them. If they can figure out how these algorithms work, companies can tailor their content so that the algorithms prioritize it, pushing it to the top of as many feeds as possible. This becomes worse if the social network’s financial incentives are aligned with the company’s. Even if the social media network itself is completely benevolent, this will always be an issue.

One final flaw of these algorithms is the incentive for social networks to keep you hooked on their product. This means that the algorithms are optimized to display content you like and hide the content you don’t like. Though useful in moderation, this can quickly turn dangerous if optimized for dopamine hits and short term emotional appeal over mental health and well-being. 

Who chose the moderators?

Centralized moderation has a fundamental, rarely discussed issue: the idea that a third party is needed to moderate users’ content.

Why can’t we decide that for ourselves? Why do we have to be subject to the whims of covert moderators and opaque algorithms that decide who is allowed and what content is acceptable?

Decentralized worlds do not have to be structured in this way. Decentralized networks have the potential to fulfill the original intent of the internet: worlds open to everyone without global rulers in which everyone is able to participate and moderate their own domain. Users choose who to trust, who to ignore, and how one’s feed is ordered. 

The Solution: A Decentralized Moderation System that scales to everyone

To have the best social experience possible, moderation must be personalized. Individuals must be able to decide who can be a part of their world and who cannot. It’s not possible for any company to make policies that millions approve of and that do not have downsides and unintended consequences.

This doesn’t mean that global moderators are no longer necessary. They are still incredibly useful, just no longer mandatory. Instead, moderators should be opt-in, leaving an individual, or group of individuals, to choose the moderator for their “world.” This allows users to benefit from companies that specialize in detecting and eliminating spam, bot networks, or astroturfing, without being forced to use them.

How it works

In real life trust comes from friends and family, the people someone already trusts to help determine who can and cannot be trusted. When a friend begins to associate with bad people one either tries to steer them away or one’s trust in them falls. 

This is exactly how the new decentralized trust system works. A user rates the trustworthiness of some of the people encountered online, and then trust of everyone else is calculated based on these ratings. In this implementation the trust ratings must be public, so that everyone can calculate the trust of friends of friends. In the future a more private trust system could be developed along similar lines. 

The second part is a filter that a user can apply to their social feed to hide everyone with a trust rating below a particular threshold.

Now this isn’t a finalized spec; it’s a rough outline of how this system can work. This essay is about why we should build this and how it could work, more than exact implementation details.

To bootstrap the system the user would first give a rating to the people they trust or distrust on the network. These might be people known in real life or not. Ratings can be between -100 and 100. This rating should reflect how much a user trusts them to post content that falls within their ideal content spectrum, because these ratings will shape the content seen on the social network.

When a user encounters new people online they can give them +/- 1 votes based on individual pieces of content posted. For obvious spammers or trolls one could immediately give them a negative rating. 

So, while individual pieces of content can be rated, the trust rating is given to a person. This is because most people post content within a similar content spectrum. However, it is important that trust is also linked to content so that when a user asks “Hey, why does my friend distrust this guy?”, they can see the exact piece of content that caused the distrust.

After a few trust ratings have been given, the algorithm will go on to calculate the trustworthiness of friends of friends (two degrees of separation away) and even their friends (three degrees away). It does this by taking all the people a user trusts and all the people those people trust, then multiplying the trust levels together to come up with the user’s own personal trust rating of everyone else. Even if a user only rates ten people, if those ten each rate ten others and those rate ten more, the user ends up with trust ratings for over 1,000 people on the network.

When a trust rating is given it is fixed and not affected by the ratings of others.

The people a user distrusts have no effect on their trust rating of others. This is just like in real life, where when someone encounters someone they don’t like, they don’t care who their friends or enemies are. It also prevents malicious users from gaming the system.

Now instead of having to individually block spammers, trolls and haters from your world, as soon as a trusted friend rates someone poorly, they’ll be hidden automatically, making moderation of obvious trolls or spammers much easier.

The Algorithm

The general algorithm is: for anyone who does not already have a direct rating, their trust rating is the square root of (how much the user trusts a mutual friend multiplied by how much that mutual friend trusts the stranger). If the product is negative, the negative sign is ignored when calculating the square root (so there are no imaginary numbers) and reapplied afterwards. When a user has multiple mutual friends, the products are summed before the square root, and the result is divided by the total number of raters (so that someone who’s popular isn’t automatically more trusted).
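Written out, using the dot notation from Appendix A (User.F is how much the user trusts a mutual friend F, and F.S is how much F trusts the stranger S), this amounts to roughly the following, with the cap described below applied afterwards:

Trust(S) = \frac{\operatorname{sign}(x)\,\sqrt{|x|}}{\text{Total Raters}}, \quad x = \sum_{F}(User.F)(F.S)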

There is a small flaw with this algorithm. If a user has an acquaintance Alice whom they rate at 10, and Alice has a best friend Bob whom she rates at 100, the user’s score for Bob will be \sqrt{(10)(100)} \approx 31.6.

This is weird because the user doesn’t trust Alice that much, so why would they trust Bob more than her? We can make an improvement to address this: the trust level is capped at how highly the mutual friend has been rated. Similarly to real life, if an acquaintance brings a friend to a party, it doesn’t matter how much they vouch for them; the friend still won’t be trusted more than the acquaintance.
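To make the description concrete, here is a minimal sketch of a single propagation step in JavaScript (the same language as the demo below). The function name, the data shapes, and the choice to cap against the highest-rated contributing friend are my assumptions rather than a finalized spec; deeper degrees of separation would apply the same step again using the derived ratings, as in Appendix A.

```js
// Minimal sketch: derive a trust rating for `strangerId` from direct ratings.
// `myRatings`: { userId: score } for people the viewer has rated (-100..100).
// `ratingsBy`: { userId: { userId: score } } — everyone's published ratings.
function calculateTrust(myRatings, ratingsBy, strangerId) {
  // Direct ratings are fixed and never recalculated.
  if (strangerId in myRatings) return myRatings[strangerId];

  let sum = 0;    // sum of (my trust in friend) * (friend's trust in stranger)
  let raters = 0; // number of mutual friends who rated the stranger
  let cap = 0;    // highest-rated contributing friend (assumed cap)

  for (const [friendId, myTrust] of Object.entries(myRatings)) {
    if (myTrust <= 0) continue; // distrusted people get no say in others' ratings
    const theirRatings = ratingsBy[friendId];
    if (!theirRatings || !(strangerId in theirRatings)) continue;

    sum += myTrust * theirRatings[strangerId];
    raters += 1;
    cap = Math.max(cap, myTrust);
  }
  if (raters === 0) return 0; // nobody the viewer trusts has rated them

  // Square root of the summed products (sign kept outside the root),
  // divided by the number of raters so popularity alone doesn't inflate trust.
  const score = (Math.sign(sum) * Math.sqrt(Math.abs(sum))) / raters;

  // Never trust a friend of a friend more highly than the mutual friend.
  return Math.min(score, cap);
}
```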

Here’s a demo to showcase this system in action, with randomly generated users that randomly trust other users at random levels:

[Interactive demo controls: Total Users, Avg Ratings Per User, Trust Depth]

You can adjust the total users and average ratings per user to see how the system works as more people join in and use it. You can also adjust the depth, or degrees of separation, that the trust ratings are calculated for.

The arrows represent how much each user trusts the user the arrow is pointing to. The color of each user represents how much they are trusted by the selected user (the blue one), calculated via the algorithm described in this post. Bright green represents 100 trust and bright red -100 trust, with a gradient for scores in between.

The source code for this demo is open source on GitHub. I’ve implemented a simple version of the trust algorithm in JavaScript which you can use for your own social network if you wish, and it’s already fast enough for most use cases.

Technical Implementation

This can be implemented on top of Scuttlebutt, or any other open source social network. With Scuttlebutt the only requirement is a trust plugin which allows users to post trust-rating messages that contain a user/feed ID and their rating of that user. With that information, the algorithm can calculate the trust rating of all users based on the trust-rating messages everyone has posted. 

In terms of user experience, this plugin could add upvote/downvote buttons to each piece of content that increase or decrease trust in the user who posted it. The trust filter could be a simple drop-down at the top right of the feed: when a trust level is chosen, all posts by users with a trust rating below that level are hidden.
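As a rough sketch of what that plugin might publish and do, here is one possible shape for a trust-rating message and the feed filter. The message type, field names, and IDs are illustrative assumptions, not an existing Scuttlebutt plugin API:

```js
// Hypothetical trust-rating message appended to the rater's own feed.
const trustMessage = {
  type: 'trust-rating',                    // assumed message type
  target: '@<feed id of the rated user>',  // placeholder feed ID
  rating: -100,                            // -100 (distrust) .. 100 (trust)
  reason: '%<id of the post that prompted the rating>' // optional link to content
};

// Trust filter: given a lookup built from everyone's trust-rating messages
// (e.g. something like the calculateTrust sketch above), hide authors below
// the chosen threshold.
function filterFeed(posts, trustOf, threshold) {
  return posts.filter(post => trustOf(post.author) >= threshold);
}
```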

In terms of performance, the demo above utilizes a simple algorithm in JavaScript, and even this is able to re-calculate the trust of 100,000+ people in just a few milliseconds. You can see the benchmarks here.

This can scale to hundreds of millions of users because each user only needs to calculate the ratings of a few hundred to tens of thousands of people. Because it’s open source this algorithm can be updated and improved over time if better ways of calculating trust are discovered. A written text example of this algorithm is in Appendix A.

Additional Benefits of Decentralized Trust

Choose your own moderators

It’s still a good idea to have large organizations become moderators, because they have more resources and clout than individuals. The difference is that in a decentralized world, one can choose their own moderators. Imagine a future where the New York Times has its own “Trust Beacon” user on the platform. Whenever it reports on fake news or fake content, this trust beacon can mark the content, and those who post or create it, at say -10 trust.

If a user trusts the New York Times, they can rate it at, say, 50 trust, and its trust ratings will be applied to their social feed automatically. It’s the same as trusting any other user.

These trust beacons could be set up by any media company, fact-checking website, or even a company like NetNanny that wants to filter out content unsafe for kids. People can trust anyone they like and those trust ratings will be applied to their feed. This way a big network of friends is not needed to help filter content; these companies can do so if desired.

Trust can be linked to posts or images

This trust system doesn’t need to apply only to users. It can be applied to individual posts or pieces of content. For example, on Scuttlebutt every piece of media is stored as a blob, where the filename is a hash of its contents. This hash is essentially a fingerprint of an image, video, song, or other file on the network.

A trust beacon such as Snopes that debunks false information can have an account that flags images that are false. Because that image can be identified across the network by its hash, if a user subscribes to the Snopes trust beacon it can automatically flag and filter this content for them even if that image was reposted 100 times before they saw it. 

This could also be done with pornographic or violent content that’s unsafe for kids. Services such as NetNanny could maintain a list of these adult-only images and videos and automatically block them for children, creating a safer space for them to explore than even the major networks can manage.
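A sketch of how a subscriber’s client might apply such flags, assuming a beacon simply publishes negative ratings keyed by blob hash (the message shape and names here are hypothetical):

```js
// Hypothetical flag message from a trust beacon for one specific blob.
// Because a blob is addressed by the hash of its contents, the same image is
// recognizable no matter who reposts it.
const flagMessage = {
  type: 'content-rating',
  blob: '&<hash of the flagged image>',
  rating: -100,
  reason: 'doctored image'
};

// Hide any post embedding a blob that subscribed beacons have rated below the threshold.
// `flaggedBlobs` is a Map from blob hash to the beacon's rating.
function hideFlaggedContent(posts, flaggedBlobs, threshold) {
  return posts.filter(post =>
    !(post.blobs || []).some(hash => (flaggedBlobs.get(hash) ?? 0) < threshold)
  );
}
```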

You choose what trust means to you

The interesting thing about this system is that a user doesn’t have to trust people based on truthfulness. Trust is an arbitrary number that can be used to represent anything one would like to see more of in one’s social world. If a user wants trust ratings to be based on shared beliefs, positivity, or even geographical proximity, that’s up to them.

Imagine a world where instead of everyone having one social profile, they have one profile for every topic they’re interested in. One could have a profile for programming, for politics, for games, for decentralization, etc. If this becomes commonplace then one’s trust in someone can be directly related to their area of expertise. Trust is often not in the person but in one’s confidence that they know about whatever they’re talking about at the time. 

Alternatively, it could be implemented in such a way that people have multiple trust ratings, each tagged by topic. However that would add a lot more complexity, especially around identifying what topic each post belongs to. 

Creating Small Communities again

Most of the time friends are found through serendipity. It’s been shown that the secret to a friendship blooming is multiple random encounters. One reason people rarely make friends via major social networks (Twitter, Facebook, Reddit, etc.) is that they are so large and spread out that people rarely encounter the same person twice if they’re not already friends.

Imagine if your social feed showed everyone you follow, as well as everyone they follow, with their rank based on trust ratings. You may discover some good friends of friends that you’ve never met posting content that you really like. Over time you could comment on their posts, contribute to the conversation, and, eventually, form a relationship. 

The trust system would allow the best friends of friends to appear in a user’s feed more often than a random crazy uncle, posts from groups, or barely known acquaintances. This could lead to new connections and many amazing friendships with people who were previously just out of reach. The more interests and people in common, the more users will see each other online and the more chances they’ll have to connect.

This is internet scale moderation

The future will be full of AIs and bots that create hundreds of thousands of accounts in order to take over social networks. With decentralized trust all it takes is one friend or trust beacon to notice these bots and distrust them. It needs to be possible to scale moderation to the level where it can combat thousands or even millions of fake users and bots. It can’t scale by having designated moderators, it can only scale if everyone participates in moderating some portion of the world. 

Downsides of Decentralized Trust

Now there are potential downsides to this system. As with anything, when people are given freedom, some are going to use it poorly. That’s okay. People shouldn’t be told what to do anyway.

The first obvious issue is the creation of echo chambers or online cults, where instead of choosing truth and seeking the best information, people trust those who confirm their existing beliefs.

This can be somewhat mitigated by setting the trust filter to a low level, even a negative one, to see posts and opinions one may not usually be exposed to. Again, it is up to the end user whether they wish to expose themselves to differing opinions or not.

It is up to individuals to determine what’s best for them. Of course, one would hope they make good choices, but it is not society’s job to save people from themselves. It’s far better to have this freedom and the power to use or misuse it, than to allow companies to build the echo chamber behind closed doors. 

Conclusion

The implementation will be open source and open to anyone who wants to participate. Implementation has already begun, but there is a lot of work to do. If you know of anyone building something similar, it would be wonderful to get in touch so this idea can be brought to fruition.

It’s exciting thinking about the possibilities this system would bring. There is potential for more stimulating conversation threads without trolls trying to ruin the fun. With the current state of the social internet this is hard, but it’s something worth doing.

Appendix A:
Trust system example

Let’s walk through an example, starting with Tom.

  • Tom has a best friend Alice, whom he trusts at 100
  • Tom also has a friend Mike, whom he trusts at 50
  • Alice knows a guy, Dave, who’s a compulsive liar; she trusts him at -20
  • Alice has a workmate Jeremy who’s an OK guy, but she doesn’t know him well, so she trusts him at 10
  • Alice has a workmate Sophie, who constantly shares false outrage but is otherwise a good person; she trusts her at -5
  • Mike is best friends with Jeremy and trusts him at 40
  • Mike knows Sophie and likes her, and trusts her at 15
  • Dave has a best friend Barry, whom he trusts at 100
  • Sophie has a best friend Emily, whom she trusts at 100

While Tom has only rated 2 people, the Trust Software will now calculate his personalized trust ratings for all 7 people here:

First there are the immediate friends. If Tom has rated someone directly, that rating is fixed and isn’t influenced by anyone else:

Alice=100

Mike=50

Then the secondary friends:

Dave = \sqrt{(Tom.Alice)(Alice.Dave)} = -\sqrt{(100)(20)} \approx -45

Jeremy = \frac{\sqrt{(Tom.Alice)(Alice.Jeremy) + (Tom.Mike)(Mike.Jeremy)}}{\text{Total Raters}}

= \frac{\sqrt{(100)(10) + (50)(40)}}{2} \approx 27.4

Sophie = \frac{\sqrt{(Tom.Alice)(Alice.Sophie) + (Tom.Mike)(Mike.Sophie)}}{\text{Total Raters}}

= \frac{\sqrt{(100)(-5) + (50)(15)}}{2} \approx 8

Then the tertiary friends:

Barry is unrated as Tom.Dave < 0

Emily = \sqrt{(Tom.Sophie)(Sophie.Emily)} = \sqrt{(8)(100)} \approx 28

= \min{(Tom.Sophie, 28)} = \min{(8, 28)} = 8

There are a few interesting things happening here. First, Sophie ends up with a slightly positive trust rating despite Tom’s best friend Alice saying she shares too much false outrage. This is because Mike still trusts her. If Tom notices that she does share a lot of false information, he can either rate her himself, which means his friends’ ratings no longer apply, or he can rate Mike lower because it’s becoming obvious Mike doesn’t care about truthfulness as much.

We also see that Barry’s rating is not calculated at all (so it would default to 0). This is because if we don’t trust someone, we also don’t trust anything they have to say about the reputation of others, so the system can’t be gamed by malicious actors. Similarly, if Tom rated Sophie at a negative number, her rating of Emily would no longer apply and Emily would have the default rating of 0.

Lastly we see Emily’s rating is capped to the same rating as Tom.Sophie, because we never rate friends of friends higher than the mutual friend.

All the people above are now in a big discussion about politics and Tom can see this discussion happening in his feed. As Tom wants to be well informed without liars being involved, he sets his trust filter level to 10, so everyone with a rating below 10 will be hidden. This allows Tom to filter his world without investing much effort in the process. In this feed he would only see posts from Alice, Mike and Jeremy; posts from Sophie, Emily, Dave and Barry would be hidden.
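For anyone who wants to check the arithmetic, here is a small self-contained script using a compact variant of the calculateTrust sketch from The Algorithm section. Applying the step degree by degree, with derived ratings feeding the next hop, is my reading of the rules above:

```js
// Everyone's direct ratings from the example.
const ratingsBy = {
  Tom:    { Alice: 100, Mike: 50 },
  Alice:  { Dave: -20, Jeremy: 10, Sophie: -5 },
  Mike:   { Jeremy: 40, Sophie: 15 },
  Dave:   { Barry: 100 },
  Sophie: { Emily: 100 }
};

// One propagation step: derive a rating for `stranger` from ratings Tom already has.
function derive(known, stranger) {
  let sum = 0, raters = 0, cap = 0;
  for (const [friend, myTrust] of Object.entries(known)) {
    if (myTrust <= 0) continue;                  // distrusted raters are ignored
    const theirs = ratingsBy[friend];
    if (!theirs || !(stranger in theirs)) continue;
    sum += myTrust * theirs[stranger];
    raters += 1;
    cap = Math.max(cap, myTrust);                // never exceed the mutual friend
  }
  if (raters === 0) return 0;
  const score = (Math.sign(sum) * Math.sqrt(Math.abs(sum))) / raters;
  return Math.min(score, cap);
}

// Start from Tom's fixed ratings, then expand one degree at a time.
const trust = { ...ratingsBy.Tom };
for (const person of ['Dave', 'Jeremy', 'Sophie']) trust[person] = derive(trust, person);
for (const person of ['Barry', 'Emily']) trust[person] = derive(trust, person);

console.log(trust);
// ≈ { Alice: 100, Mike: 50, Dave: -44.7, Jeremy: 27.4, Sophie: 7.9, Barry: 0, Emily: 7.9 }
```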