Online reviews are under scrutiny again by the folks at Cornell. Last month, RetailWire ran a story on research conducted by Trevor Pinch, a professor at the school, who surveyed top reviewers at Amazon and discovered they tended to offer a higher percentage of positive reviews than others lower on the site's ranking list.
Now, new research shows that opinion spam -- phony reviews written to prop up or disparage a product -- is hard for the humans tasked with weeding it out to detect. Software developed by Cornell's researchers, however, identified fake reviews with close to 90 percent accuracy, according to the Cornell Chronicle.
Myle Ott, a Cornell graduate student involved in the research, said that human editors have a "truth bias." They begin by assuming reviews are honest unless shown otherwise. Once that bias has been shaken, however, they tend to overcorrect, assuming many reviews are fake when they are not.
A computer analysis of false and legitimate reviews found some key distinctions between the two. Fake reviewers tend to use more verbs and fewer nouns than those on the up-and-up.
An analysis of hotel reviews found that honest reviewers were more concrete in their language, with specific references to terms such as "bathroom" or "price." Those looking to skew results leaned instead on terms such as "business trip" or "vacation."
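The distinction the researchers describe can be illustrated with a toy word-counting sketch. This is not Cornell's actual software, and the word lists below are hypothetical examples inspired by the terms cited above; it simply scores a review by how many concrete, property-specific words it uses versus experience-framing words:

```python
# Toy illustration (not the Cornell researchers' system): score a review
# by comparing concrete, hotel-specific terms against experience-framing
# terms. Both word lists are hypothetical examples for demonstration.
CONCRETE_TERMS = {"bathroom", "price", "location", "room", "floor"}
EXPERIENCE_TERMS = {"vacation", "business", "trip", "husband", "wife"}

def concreteness_score(review: str) -> float:
    """Return the share of flagged words that are concrete (0.0 to 1.0)."""
    words = [w.strip(".,!?").lower() for w in review.split()]
    concrete = sum(w in CONCRETE_TERMS for w in words)
    experiential = sum(w in EXPERIENCE_TERMS for w in words)
    total = concrete + experiential
    return concrete / total if total else 0.5  # 0.5 when no cue words found

print(concreteness_score("The bathroom was small and the price too high."))  # 1.0
print(concreteness_score("A wonderful vacation with my husband, great trip!"))  # 0.0
```

A real classifier would of course use far richer features (parts of speech, word frequencies, and so on), but the sketch captures the intuition: genuine reviewers talk about the hotel, while deceptive ones talk about the occasion.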
Mr. Ott said the software developed by Cornell could be used as a "first-round filter" for sites that have high levels of fake reviews.
"Ultimately, cutting down on deception helps everyone," he told the Chronicle. "Customers need to be able to trust the reviews they read, and sellers need feedback on how best to improve their services."
How often do you think consumers question the honesty of online reviews?