"A person is smart. People are dumb, panicky, dangerous animals" – "K," Men In Black
If I'm smart, and you're like me, you're smart. We're both smart, and other people like us must also be smart. In fact, we're smarter than the self-anointed media gatekeepers that trumpet inanity while burying important news in the interest of ratings. What we need is to be the new gatekeepers, together. Working together, the smartest people will be highlighting the news, rather than the dumbest.
Or so the theory goes. In reality, I'm smart and you're smart, but some of you like pictures of tattoos and second-rate web comics and third-rate political candidates. Worse, some of you are conspiracy theorists, celebrity gossip hounds, or Mac users. Worst of all, some of you just don't vote like you should. This site sure isn't as good as it used to be, before all the newbies showed up.
There are problems with the current wave of user-driven sites, like Reddit, Digg, and Netscape, but are the problems inherent to the model, or can software tweaks fix them? Are there even really problems?
The Wisdom of Crowds
Victorian scientist Francis Galton observed that a collection of individuals, acting independently, managed to achieve what even experts could not: averaged together, their answers were right, even though no individual guess came as close as the average did. In his case, the exercise was estimating the weight of a slaughtered ox; James Surowiecki's 2004 book The Wisdom of Crowds suggested that the same principle holds in many other cases. It might, but achieving a technical result (the weight of the ox) is a different type of exercise than rating quality.
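The averaging effect is easy to see in a toy simulation. The numbers below are invented for illustration (the true dressed weight of Galton's ox was about 1,198 pounds, but the guessers here are just random noise, not his actual data): give each person an independent, noisy, unbiased guess, and the average lands far closer to the truth than a typical individual does.

```python
import random

random.seed(2)

TRUE_WEIGHT = 1198  # pounds, roughly the dressed weight of Galton's ox

# Each guesser is independently noisy; the spread and crowd size here
# are invented purely to illustrate the averaging effect.
guesses = [TRUE_WEIGHT + random.gauss(0, 120) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)

print(f"crowd average: {crowd_estimate:.0f}")
print(f"error of the average: {abs(crowd_estimate - TRUE_WEIGHT):.1f} lb")
```

The individual guesses are off by around 120 pounds on average, yet the crowd's mean error shrinks roughly with the square root of the crowd size. The catch, as the rest of this piece argues, is the word "independently."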
While weight-guessing carries no obvious penalty or reward for guessing too high or too low, humans rate the quality of things based on non-obvious factors, producing surprising patterns. Does a horseshoe curve mean that a book is worth reading, or not? With equal numbers of one-star and five-star ratings, can we conclude that it's a three-star book? Or, if we plot the ratings over time, we might find that the one-star votes are attempts to drag the visible aggregate rating down, and the five-star votes attempts to drag it up. Making that aggregate rating visible to everyone makes sense from an online bookseller's perspective, but it undermines one of the fundamental elements of Galton's observation: the individuals in this case are not acting independently.
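The horseshoe problem is just arithmetic. A hypothetical polarizing book and a hypothetical mediocre one (both invented here for illustration) can carry exactly the same average while telling completely different stories:

```python
# Two hypothetical books with identical 3.0-star averages but very
# different rating distributions: the mean alone cannot distinguish
# a polarizing book from a mediocre one.
polarizing = [1] * 50 + [5] * 50   # the horseshoe curve
mediocre = [3] * 100               # everyone shrugs

def mean(ratings):
    return sum(ratings) / len(ratings)

print(mean(polarizing), mean(mediocre))  # both print 3.0
```

Any site that reduces a distribution to a single visible number throws away exactly the information that would tell you which kind of book you're looking at.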
On Reddit, up-votes and down-votes seem to carry equal weight, and no reason need be given for either. On Digg, the value of individual actions is closely guarded by secretive and mysterious people, but burying an article appears to carry more weight than digging it, though one does need to supply a reason for a bury. On Netscape, I have not spent enough time to weigh the relative merits of sinks and votes, though the influence of human editors there sidesteps some of this in any case. Of these three sites, only Reddit makes any attempt to truly harness the wisdom of crowds, hiding (though not well) the aggregate votes on links for the first hour after posting. With the total score unknown, people may act independently, but once that first hour has passed, other factors begin to intervene. In fact, hiding the aggregate number doesn't work as well as it might, both because the numbers are only a click away and because the order of links is still based on that number. You're only acting independently if you're looking at a strictly time-ordered listing (which exists on Reddit, Digg, and Netscape) and ignoring the existing score, if you can.
How many people do that? Jason Calacanis, who ought to know as a former owner of Netscape, suggests an 80/19/1 rule: only 1% of site visitors submit new content, an additional 19% vote, and the remaining 80% simply consume. Most of the voters presumably vote only on what's already on the front page, making popular things more popular but doing little to actually direct the content of the site. Again, Reddit captures the efforts of these people best by mixing new content in with very popular content, though still with built-in indicators of how other people rate each link.
The biggest problem seems to be visibility. When someone submits a link to one of these sites, how many people will actually see it? If the first few people who see it don't value it, most people never will, so control of the site falls to those able to spend the most time on it. Buried or down-voted links go unseen, at least by most people, while very popular links are seen by nearly everyone. Digg makes it easy to give a link the equivalent of a five-star rating but more difficult to give it a one-star rating, so links naturally collect many more five-star ratings than one-star ratings. Reddit makes both easy, but links hit with one-star ratings right away simply disappear before they're widely seen. Worse, people use the one-star rating for multiple purposes, not just to rate the quality of a link, resulting in over-emphasis on down-votes.
The definition of "mob" I'm using here can usually be summarized as "large group of people who disagree with me." Let's face it, that's the problem. There may be some people out there who are perfectly happy with every link they see on these sites! I doubt it, but it's possible. People with short-term memories like seeing links that were just there a month ago. Young people like seeing links that were popular three years ago. Some people love seeing undated bits of trivia, or the same collection of photos stolen from random websites, or still more evidence that fire can't melt steel except when it can. Different individuals in the collection have different priorities, or different agendas, and sometimes the same people may vote differently from day to day, depending on what they had for breakfast. One day a person may vote based on what she thinks other people will like; the next day she votes only on what she likes. One morning someone votes against links with typos in the title, or overly long titles, regardless of the link itself, but then something happens, and a long title with a typo gets his vote. All of these things should average out, given enough input, but they don't.
In fact, we come back to the issue of the people with the most time shaping the content. Not to put too fine a point on it, the people with the most time are not often the people with the most experience or wisdom. Those people are, we hope, creating content more than voting on it! How is it that anyone thinks that social news sites will avoid the Peter Principle? In a hierarchy — and make no mistake, the 80/19/1 rule makes each of these sites a hierarchy — every employee tends to rise to his level of incompetence. That the employees are paid in egoboo rather than cash makes no difference. Over time, each of these sites faces increasing problems of incompetent individuals in its collection, and struggles with how to limit their effect.
How is it that anyone thinks that social news sites will avoid the problems inherent to committees? When a sufficiently large group of individuals is added to a committee, the result more or less exactly fails to please anyone (with apologies to Douglas Adams).
On May 1, we saw what happens when a mob turns ugly, as Digg first attempted to comply with the legal demands made against it and then bent to the will of the mob, though the mob clearly lacked the ability to see the potential consequences of its actions, or to care about them.
Harnessing the Crowd
Is there a way to harness the wisdom of crowds without falling into the trap of mob rule? A way to avoid the Peter Principle? A way to keep social news sites from turning into loosely-coupled committees? If so, the approach must lie away from the direction these sites are going. Can a site truly harness the wisdom of crowds and still be popular? Part of what makes these sites popular is what also contributes to mob rule: popularity.
The key to Galton's observation about crowds is that each person acted individually. Had there been a list of previous guesses available, most people would have clustered around existing guesses, even if those initial guesses were completely off-base. If the guesses of the experts had been broken out separately, people would have clustered even more heavily around those, though they turned out to be just as wrong as everybody else. While technical estimates are a different type of problem than ratings, some of the same principles apply.
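The clustering effect can also be sketched in a toy model. All the numbers here are invented: a crowd of unbiased guessers, a badly wrong early guess left visible, and later guessers who lean on the running average of what they can see rather than trusting their own estimate. The visible anchor drags the whole crowd off target.

```python
import random

random.seed(7)

TRUE_VALUE = 1000   # the "right answer" in this toy model (invented)
FIRST_GUESS = 1500  # a visible, badly wrong early guess (also invented)

def private_estimate():
    # Each voter's own noisy but unbiased estimate.
    return TRUE_VALUE + random.gauss(0, 150)

# Independent crowd: nobody sees anyone else's guess.
independent = [private_estimate() for _ in range(500)]

# Anchored crowd: each guesser leans 70% on the running average of the
# visible earlier guesses and only 30% on their own private estimate.
anchored = [FIRST_GUESS]
for _ in range(499):
    visible_avg = sum(anchored) / len(anchored)
    anchored.append(0.3 * private_estimate() + 0.7 * visible_avg)

print(f"independent average: {sum(independent) / len(independent):.0f}")
print(f"anchored average:    {sum(anchored) / len(anchored):.0f}")
```

The independent crowd lands close to the true value; the anchored crowd stays pulled toward the first guess long after hundreds of honest estimates have arrived. The 70/30 split is arbitrary, but any nonzero weight on the visible average produces the same bias, only more slowly.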
Reportedly, 95% of Kevin Rose's submissions reach the front page of Digg, and many of the top submitters appear over and over again; both demonstrate this type of clustering. If social news sites didn't list the submitter of a link, that would help cut down on clustering. Not listing the top submitters, as Digg has recently done, is a step in the right direction.
Presenting each voter with a random list of links, including some very popular links and some unpopular links, with no clues about their popularity, would result in voting more likely to harness wisdom, though each voter would have a less pleasant experience than if they were only perusing popular links. That's the trade-off: by decreasing the pleasure of the 19%, these sites can increase the pleasure of the 80%. The problem with this is that it punishes the people who contribute the most to the site in favor of the people who contribute the least. Of course, the site owners might consider the source of their revenue, as well. That seems to be what's driving Netscape more than Reddit or Digg.
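None of these sites works this way, but the random-ballot idea above is simple enough to sketch. Everything here is hypothetical: the link pool, the scores, and the mix of half popular, half unpopular links are all invented parameters. The one essential feature is the last line, which withholds the scores from the voter.

```python
import random

random.seed(0)

# Hypothetical link pool: (id, vote_score). Scores are invented.
links = [(f"link-{i}", random.randint(-5, 500)) for i in range(200)]

def ballot(links, size=10):
    """Return a rating ballot: a shuffled mix of popular and unpopular
    links, stripped of their scores so the voter sees no popularity cues."""
    ranked = sorted(links, key=lambda link: link[1], reverse=True)
    cutoff = len(ranked) // 4
    popular = random.sample(ranked[:cutoff], size // 2)
    the_rest = random.sample(ranked[cutoff:], size - size // 2)
    picks = popular + the_rest
    random.shuffle(picks)
    return [link_id for link_id, _score in picks]  # scores withheld

print(ballot(links))
```

Votes gathered this way would come closer to independent judgments, at the cost described above: the most active users spend part of their time rating links they would never have chosen to look at.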
Another thing that might help with the purity of the results is counting only the votes cast from those "random list" pages, and discounting votes arriving from the front page or from other websites. Again, this has negative effects in other ways, as content producers are less likely to link from their own websites back to social news sites if doing so won't directly benefit them.
A Fool's Errand?
In fact, there are good reasons for each of the choices these sites have made, though most of those choices actually disrupt the wisdom of crowds. If it isn't possible to build a big site that truly harnesses the wisdom of crowds, or if that isn't the highest priority (and it clearly isn't), then what are these sites, exactly, and how long are they sustainable? Can a site devoted to delivering quality results to the 80% attract enough of a 20% community to survive?
With the current crop of social news sites, the emphasis is on the "social." I look forward to the day when the emphasis shifts to the "news."