"A person is smart. People are dumb, panicky, dangerous animals" - "K," Men In Black
If I'm smart, and you're like me, you're smart. We're both smart, and other people like us must also be smart. In fact, we're smarter than the self-anointed media gatekeepers that trumpet inanity while burying important news in the interest of ratings. What we need is to be the new gatekeepers, together. Working together, the smartest people will be highlighting the news, rather than the dumbest.
Or so the theory goes. In reality, I'm smart and you're smart, but some of you like pictures of tattoos and second-rate web comics and third-rate political candidates. Worse, some of you are conspiracy theorists, celebrity gossip hounds, or Mac users. Worst of all, some of you just don't vote like you should. This site sure isn't as good as it used to be, before all the newbies showed up.
There are problems with the current wave of user-driven sites, like Reddit, Digg, and Netscape, but are the problems inherent to the model, or can software tweaks fix them? Are there even really problems?
The Wisdom of Crowds
The 19th-century scientist Francis Galton observed that a collection of individuals, acting independently, managed to achieve what even experts could not: averaged together, their guesses were right, though no single guess came as close as the average did. In his case, the exercise was estimating the weight of a slaughtered ox, but James Surowiecki's 2004 book The Wisdom of Crowds suggests that the same principle holds true in many cases. And it might, but achieving a technical result (the weight of the ox) is a different type of exercise than rating quality.
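The averaging effect is easy to simulate. This is a toy sketch, not Galton's actual data: the crowd size and the error range are invented, and only the 1,198-pound figure comes from his published account. Each guesser is independently wrong by up to about 15%, yet the mean lands far closer to the truth than a typical individual does.

```python
import random

random.seed(1)

TRUE_WEIGHT = 1198  # pounds; the figure Galton reported for the dressed ox

# Hypothetical model: 800 fairgoers guess independently, each off by
# up to ~15% in either direction. (Both numbers are invented here.)
guesses = [TRUE_WEIGHT * random.uniform(0.85, 1.15) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

# Median individual error, for comparison with the crowd's error
errors = sorted(abs(g - TRUE_WEIGHT) for g in guesses)
typical_error = errors[len(errors) // 2]

print(f"crowd off by {crowd_error:.1f} lb")      # a few pounds
print(f"typical guess off by {typical_error:.1f} lb")  # tens of pounds
```

The individual errors cancel because they are independent; the moment guessers can see each other's answers, that cancellation breaks down, which is exactly the point the rating discussion below turns on.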
While weight-guessing carries no obvious penalty or reward for guessing too high or too low, humans rate the quality of things based on non-obvious factors, and the results show surprising patterns. Does a horseshoe curve (ratings piled up at one star and five stars, with few in between) mean that a book is worth reading, or not? With the same number of one-star and five-star ratings, can we really conclude that it's a three-star book? And if we plot the ratings over time, some may turn out to be strategic: one-star votes cast to drag the visible aggregate down, five-star votes cast to pull it back up. Making that aggregate visible to everyone makes sense from an online bookseller's perspective, but it undermines one of the fundamental elements of Galton's observation: the individuals are no longer acting independently.
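The horseshoe problem in miniature, with invented ratings: a book everyone calls middling and a book readers either love or hate can have exactly the same average, so the single aggregate number hides the shape of the disagreement.

```python
from statistics import mean, stdev

# Hypothetical ratings for two books, both "three-star" on average
consensus = [3, 3, 3, 3, 3, 3]   # everyone agrees it's middling
horseshoe = [1, 1, 1, 5, 5, 5]   # readers love it or hate it

print(mean(consensus), mean(horseshoe))    # identical averages: 3 and 3
print(stdev(consensus), stdev(horseshoe))  # spread of 0 vs. more than 2
```

Any summary that reports only the mean collapses these two very different books into the same number; showing the full distribution (as some booksellers now do with rating histograms) preserves the information the horseshoe carries.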