A few nights ago, I came across Tom Coates' post titled Social whitelisting with OpenID… about how to handle moderating an online forum when the amount of content that needs moderating outweighs the capabilities (time, sanity, etc.) of any one person or small group of administrators. The post was written in early 2007, but as far as I know the system it advocates, building a web of trust among your friends, was never built.
It did remind me, though, of a moderation tool that a couple of other tech folks and I designed and built right around that same time for a TV station's website overhaul.
One golden rule of the web is “don't read the comments”, because, as a nearly absolute rule, they bring out the worst in the worst people. On a news site, this goes triple. In polite company, religion, politics, and money are topics to tread carefully around even with close friends, but combine the veil of anonymity the Internet provides with a website that deals daily in stories of religion, politics, and money that often directly affect the reader, and you have a damned near-perfect recipe for explosive, hateful, and irrational comments.
As part of this TV site's redesign, there was to be more of a focus on contributions from viewers, which took the form of photo and video submissions, guest blogging, a simple “wall” where members could leave comments for one another, and, of course, comments on news articles.
As a hard rule, beside every piece of member-added content we put a link where anyone could report abusive content. Reports went into a moderation queue in our administrative tools, and our editors could act on them, either marking the content as abusive or marking it as “seen but okay”. For a news site with millions of visitors a day, this system wasn't manageable on its own, as there were as many false positives as there were valid reports of abuse. For some members, “abusive” meant that someone disagreed with them and they couldn't find a way to logically defend their argument. It was overwhelming for a tiny editorial staff.
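To make that flow concrete, here's a rough sketch of what a report in that queue might have looked like. This is purely illustrative; names like AbuseReport and ReportStatus are my own invention, not what the actual system called things.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class ReportStatus(Enum):
    PENDING = "pending"        # sitting in the moderation queue
    ABUSIVE = "abusive"        # an editor agreed: the content was abusive
    SEEN_OK = "seen_but_okay"  # an editor looked and disagreed


@dataclass
class AbuseReport:
    content_id: int   # the member-added content being reported
    reporter_id: int  # the member who clicked the "report abuse" link
    created_at: datetime = field(default_factory=datetime.utcnow)
    status: ReportStatus = ReportStatus.PENDING
```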
So, we had to devise something to at least bubble up the true offenders in the moderation queue. (Now, this was 5-6 long years ago and I'm sure it's evolved since I left, but here's how I vaguely remember it working.) The idea we came up with was this: a member reporting abuse is right if we agree with their judgment, and people who report abuse “correctly” more often earn a certain amount of our trust in their judgment. If someone reported abuse 100 times and 100 times we agreed with them, there's a really good chance their next report will be correct as well.
So we assigned every user a starting trust score, and every time they reported abuse that we deemed valid, we'd bump up their trust score. On every abuse report, we'd look at the trust score of the person who filed it, and if it met some threshold, we'd silently remove the reported content from the site. The abuse report would still exist in the system, but there was less time pressure to go through the abuse queue, as after a while a small army of reliable reporters would be moderating the site for us.
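Building on the sketch above, the intake side might have looked roughly like this. The starting score, the threshold, and the helper names are all assumptions on my part, reconstructed from memory rather than from the actual code.

```python
STARTING_TRUST = 10       # every member begins here (value is illustrative)
AUTO_HIDE_THRESHOLD = 50  # trusted enough that we act before an editor looks

trust_scores: dict[int, int] = {}         # reporter_id -> current trust score
moderation_queue: list[AbuseReport] = []  # reports awaiting an editor


def hide_content(content_id: int) -> None:
    """Placeholder: the real system would unpublish the offending content."""


def handle_report(report: AbuseReport) -> None:
    """File a new abuse report, silently hiding content for trusted reporters."""
    score = trust_scores.setdefault(report.reporter_id, STARTING_TRUST)
    moderation_queue.append(report)  # every report still lands in the queue
    if score >= AUTO_HIDE_THRESHOLD:
        hide_content(report.content_id)  # removed without waiting for an editor
```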
On the flip side, if an abuse report was deemed wrong, the reporter's score would be drastically reduced, halved if I remember correctly. We were fine with the penalty being that severe, as good users would build themselves back up, and introducing a little chaos into the system was nice: different editors had slightly different standards, and a lot of the time judging whether something was abuse was a judgment call. Chaos was inherent in the system from the start.
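The editor's side of the loop, again as a hedged sketch with made-up numbers, would then move the reporter's score in both directions:

```python
VALID_REPORT_BONUS = 5  # illustrative bump for a report we agreed with


def resolve_report(report: AbuseReport, editor_agrees: bool) -> None:
    """An editor rules on a report; the reporter's trust score moves accordingly."""
    score = trust_scores.setdefault(report.reporter_id, STARTING_TRUST)
    if editor_agrees:
        report.status = ReportStatus.ABUSIVE
        hide_content(report.content_id)  # valid report: the content comes down
        trust_scores[report.reporter_id] = score + VALID_REPORT_BONUS
    else:
        report.status = ReportStatus.SEEN_OK
        trust_scores[report.reporter_id] = score // 2  # the drastic halving penalty
```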
These scores were obviously secret, shown only to the editors, and I honestly can't remember if we were actually doing the silent removals by the time I left, but I do think reports from trusted users at least got priority in the moderation queue, and when going through thousands of reports, that was incredibly helpful.
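Prioritizing the queue that way is nearly a one-liner once the scores exist; something along these lines, though again the details are my guess:

```python
def prioritized_queue() -> list[AbuseReport]:
    """Order pending reports so the most trusted reporters get reviewed first."""
    pending = [r for r in moderation_queue if r.status is ReportStatus.PENDING]
    return sorted(
        pending,
        key=lambda r: trust_scores.get(r.reporter_id, STARTING_TRUST),
        reverse=True,
    )
```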
I like to view this kind of system as sort of an ad-hoc Bayesian filter where your moderation efforts are amplified, rewarding and ultimately giving some weight to people that moderate like you do.
So, the social whitelist begins by allowing a subset of users to post, while the trust score model involves dynamically building a list of good moderators who agree with you on what counts as abusive content.
I still love the idea of social whitelisting, or of building up a set of trusted users to help you with moderating, as both are more organic approaches to moderation, forcing you, as the person in charge of a community, to actually make decisions about what kind of discourse you want on your site.
This is also why it saddens me a bit today as more and more blogs just drop in web-wide, generic commenting systems like Facebook's. While that enables almost everyone to quickly log in and start adding comments, it's horrible for site owners who are trying to build an intimate community. Every decent community probably has a baseline standard of what's acceptable: no hate speech, no physical threats, no illegal content, etc. That baseline is what Facebook provides, and nothing more.
Any community worth moderating is nuanced and has a voice and a direction. Facebook doesn't offer that, so every blogger who drops in its commenting system is trading the ability to effectively manage a community for ease of user engagement. I'd like to see more sites go back to these more hands-on, think-hard approaches to moderating and directing their communities instead of relying on someone else's standards of what constitutes a good contribution.