Is there really wisdom in the crowd?
Presented here for discussion is a summary of a current article published with permission from Knowledge@Wharton, the online research and business analysis journal of the Wharton School of the University of Pennsylvania.
As predictive analytics become more sophisticated, companies are increasingly relying on aggregated data to help them with everything from marketing to new product lines. But how much should firms trust the wisdom of the crowd?
In his latest research, Wharton marketing professor John McCoy proposes a new solution for crowdsourcing that can help create better, more accurate results: Instead of going with the most popular answer to a question, choose the answer that is “surprisingly popular.”
“If you think about doing something like majority vote, what you’re doing is just taking the most popular answer,” said Prof. McCoy in an interview with Knowledge@Wharton. “You’re taking the most frequent answer that people give. We say instead that the right thing to do is take what we call the surprisingly popular answer.”
His paper, titled “A Solution to the Single-question Crowd Wisdom Problem,” suggests the crowd should be asked for two responses to a query: their own answer and their predictions about the answers of others in the crowd. Prof. McCoy added, “Then, taking the surprisingly popular answer means looking at both the actual vote frequency and the predictive vote frequency, and choosing the answer where the actual vote frequency exceeds predictive vote frequency.”
As an example, he noted that only a third of a crowd of MIT students correctly answered that Philadelphia is not the capital of Pennsylvania. (It's Harrisburg.) But the crowd predicted that even fewer, 23 percent, would answer that Philadelphia wasn't the capital.
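The selection rule can be sketched in a few lines of Python. This is a minimal illustration of the idea as described above, not the paper's implementation; the function name and data layout are hypothetical.

```python
# Sketch of the "surprisingly popular" rule: pick the answer whose
# actual vote share most exceeds its average predicted vote share.
# Each respondent supplies an answer plus a prediction of what
# fraction of the crowd will choose each option.

def surprisingly_popular(answers, predictions):
    """answers: list of chosen options, e.g. ["yes", "no", ...].
    predictions: list of dicts mapping each option to the fraction
    of the crowd that respondent expects to choose it."""
    options = sorted(set(answers))
    n = len(answers)
    # Actual vote frequency for each option.
    actual = {o: answers.count(o) / n for o in options}
    # Average predicted vote frequency for each option.
    predicted = {o: sum(p.get(o, 0.0) for p in predictions) / len(predictions)
                 for o in options}
    # Choose the option where actual frequency exceeds predicted
    # frequency by the largest margin.
    return max(options, key=lambda o: actual[o] - predicted[o])

# Philadelphia example, using the figures from the article: 33% answer
# "no" (correct) but the crowd predicts only 23% will say "no", so "no"
# is surprisingly popular even though "yes" wins the majority vote.
answers = ["no"] * 33 + ["yes"] * 67
predictions = [{"no": 0.23, "yes": 0.77}] * 100
print(surprisingly_popular(answers, predictions))  # prints "no"
```

Note that a simple majority vote over the same data would return "yes", the wrong answer; the comparison against predictions is what recovers the minority's knowledge.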
Prof. McCoy said some companies are already refining how they extract crowd-sourced information, for example by weighting votes by competence or by using prediction markets. Seeking two inputs — the individual's own answer and their prediction about the answers of others — can remove some of the bias that comes from relying on each respondent's answer alone.
As the "Is Philadelphia the capital of Pennsylvania?" question shows, the two-question technique also helps identify the correct answer to queries where the majority of the public answers incorrectly. Said Prof. McCoy, "I think one big lesson is that the crowd is a lot smarter than many people give it credit for."
DISCUSSION QUESTIONS: Does it make sense that crowd-sourced “surprisingly popular” answers are often more accurate than the majority vote? When is crowdsourced data most useful for research purposes?