Will AI mimicry ruin online user-generated reviews?

Researchers at the University of Chicago have trained a neural network, or artificial intelligence (AI) system, to write fake reviews on Yelp and have found the AI-generated reviews to be virtually the same as those written by humans.

The AI software learned to mimic review writing from publicly available Yelp restaurant reviews. A customization process, which included feeding in details on specific restaurant dishes, tailored each review to a specific restaurant.
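The study does not publish its code, but the two-stage process described above (generate generic review text from a model trained on Yelp data, then swap in restaurant-specific details) can be sketched. In this toy illustration, a character-level Markov chain stands in for the researchers' neural network, and the corpus, context length and dish substitutions are made-up values:

```python
import random
from collections import defaultdict

# Toy stand-in for the researchers' character-level neural network:
# a character-level Markov chain trained on sample review text.
CORPUS = (
    "My family and I are huge fans of this place. The staff is super "
    "nice and the food is great. The chicken is very good and the "
    "garlic sauce is perfect. Highly recommended! The food here is "
    "freaking amazing, the portions are giant. The service was fast. "
    "Our favorite spot for sure! We will be back!"
)
ORDER = 4  # characters of context used to predict the next character

def train(text, order=ORDER):
    """Map each `order`-character context to the characters seen after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, length=120, order=ORDER):
    """Stage one: sample generic review text one character at a time."""
    out = random.choice(list(model))
    for _ in range(length):
        nxt = model.get(out[-order:])
        if not nxt:
            break
        out += random.choice(nxt)
    return out

def customize(review, dish_map):
    """Stage two: swap generic food words for the target restaurant's dishes."""
    for generic, specific in dish_map.items():
        review = review.replace(generic, specific)
    return review

model = train(CORPUS)
print(customize(generate(model), {"chicken": "pad thai"}))
```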

Among the five-star reviews the AI system came up with for a New York City buffet restaurant:

  • “My family and I are huge fans of this place. The staff is super nice and the food is great. The chicken is very good and the garlic sauce is perfect. Ice cream topped with fruit is delicious too. Highly recommended!”
  • “The food here is freaking amazing, the portions are giant. The cheese bagel was cooked to perfection and well prepared, fresh & delicious! The service was fast. Our favorite spot for sure! We will be back!”

Test subjects found the AI-generated reviews to be “effectively indistinguishable from those produced by humans,” and plagiarism detection software rarely flagged them. On usefulness, the test subjects gave the AI reviews an average score of 3.15 versus 3.28 for genuine reviews.
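Plagiarism detectors generally look for long shared substrings or overlapping n-grams with known text. A model that composes reviews character by character produces mostly novel sequences, which suggests why such checks rarely fire. A crude illustration of that idea (not the detection software the study actually used):

```python
def char_ngrams(text, n=5):
    """Set of overlapping character n-grams, the unit copy-detectors compare."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def similarity(a, b, n=5):
    """Jaccard similarity of two texts' n-gram sets: 1.0 means identical."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# A generated review shares vocabulary with its training data but few long
# character sequences, so scores stay well below copy-detection thresholds.
print(similarity("The garlic sauce is perfect.", "Garlic sauce was perfection!"))
```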

The researchers said AI-generated reviews will only become more sophisticated and pose a bigger threat than fakes written by humans because automated reviews can be produced rapidly and don’t require monetary compensation.

Beyond reviews, the researchers said the expansion of AI-generated content will increasingly cause society to question what’s real or fake on platforms such as Twitter and online discussion forums. The researchers said in the study, “We hope these results will bring attention to the problem and encourage further analysis and development of new defenses.”

In a statement to The Verge, Yelp discounted the findings because the study focused on text. Yelp stated, “Yelp’s recommendation software employs a more holistic approach. It uses many signals beyond text-content alone to determine whether to recommend a review.”

BrainTrust

"Invest in being great. Not in being fake."

Dave Nixon

Retail Solutions Executive, Teradata


"Where have all the humans gone? Covered with flowers every one..."

Cynthia Holcomb

Founder | CEO, Female Brain Ai & Prefeye - Preference Science Technologies Inc.


"This is the beginning. When AI becomes mature enough to spout RetailWire discussion comments, then I will be most impressed!"

Dan Frechtling

CEO, Boltive


Discussion Questions

Are AI-generated reviews a bigger threat to the credibility and overall value of online user-generated reviews than human-written fake reviews? Will plagiarism detection software and other defensive measures be enough to offset increasingly sophisticated AI reviews?

Comments
Mark Ryski
Noble Member
6 years ago

It’s becoming increasingly difficult to trust once-trusted news sources, so online reviews don’t stand a chance. Everything online needs to be taken with a healthy dose of skepticism and this is particularly the case with reviews. Fake reviews are very difficult if not impossible to discern and I see little hope in plagiarism detection or other methods to thwart this. First-hand accounts from people you know are one of the only truly fool-proof ways of getting a reliable review.

Dave Nixon
6 years ago

They ARE! Solely for the fact that as the cost of AI drops, due to new advancements in hardware and software, its use to generate a higher volume of fake marketing will increase, sadly. This, coupled with human-generated reviews, will ultimately lead to shopper distrust in the review and feedback system for brands. A perfect example of technology misuse without a focus on the real need … a great customer experience drives real engagement and loyalty. Invest in being great. Not in being fake.

Bob Amster
Trusted Member
6 years ago

AI is a threat as much as it is an advantage. In a country that decries restrictive laws and too much government intervention, something like having an AI application write fake reviews is going to elicit more regulations to protect the innocent from those who would abuse the technology (why else do we impose regulations?). I can see it coming, and we will have brought it on ourselves.

Chris Petersen, PhD.
Member
6 years ago

If the data reported are sound, this will mean the creation of “fake news” at its worst.

While some customer reviews are definitely suspect, there are a reasonable number for the customer to sort through. If AI can flood the reviews and stack the opinion ratings, then the written customer review becomes very suspect and loses credibility.

Maybe the time has come for vlogs and video reviews. It would be much more difficult to fake a video of a customer in a restaurant or using a product. In time AI will probably be able to do that as well, but we are probably safe for a few years from credible AI vlogs.

Ben Ball
Member
6 years ago

Fake reviews are already a problem online. Until now, we relied mainly on the number of reviews available for a product or service to decide how much credibility to put on them. The theory being that fake reviews were less likely to be a meaningful percentage of total reviews posted. AI makes that a useless tactic.

Brandon Rael
Active Member
6 years ago

This development is very disconcerting, as online reviews contribute significantly to the final purchase decision for a large portion of today’s digital-native consumers. Unfortunately, even with all the perceived benefits of AI and machine learning, this is an example of how the online review system could be corrupted, especially considering how close the AI-generated comments are to actual human-written reviews.

For every AI innovation, seemingly, there are downsides to this equation. Consumers have depended a great deal on objective peer reviews. Once that objectivity is compromised, or even threatened, and consumer trust becomes eroded, pure e-commerce and hybrid brick-and-mortar retailers have to be extremely proactive to protect the integrity of the online review system.

Plagiarism software and other protective measures may not be enough. It may necessitate a two- or three-factor sign-in process to fight against the AI machine’s fake review threats.

Lee Kent
Member
6 years ago

With the widespread adoption of AI, I believe we are going to start seeing more instances in which people are required to prove they are human. Many of these techniques are quite simple and not so disruptive as to be annoying to the person. Let’s see where this goes. For my 2 cents.

Neil Saunders
Famed Member
6 years ago

Yes, in a sense it is a threat. However, it’s ultimately self-governing. If there is an increasing mismatch between what reviews say and the subsequent experience, people will lose faith in reviews and ignore them. Equally, if sites become swamped with rather generic reviews (which lack detail or descriptive color), then reviews become boring and a turn-off.

Ken Lonyai
Member
6 years ago

The misappropriation of AI for things like fake reviews is a much bigger threat than human fakery. AI is still nascent and improves fairly rapidly due to machine learning capabilities. Humans can’t do that and fake reviewers sometimes leave suspicious clues that a mature AI system likely will not.

It’s totally wrong to state that AI-generated reviews “don’t require monetary compensation” — someone will get paid for the use of their software, but it would be easier to coordinate than paying individual fraudsters.

The best defense is twofold:

  1. “Verified purchase” labels on product reviews will partially reduce fraud;
  2. The rise of video reviews with the reviewer on camera (a win for YouTube!).

Art Suriano
Member
6 years ago

AI-written fake reviews were bound to happen. Yes, we will have software for a while that attempts to prevent them, but most likely, once the public catches on, they will pay less attention to fake reviews.

With every new technology, there are many benefits and, unfortunately, often many scammers who know how to manipulate the tool. It’s a shame that we have to think this way, but when there’s a way to scam the public for more profit, it happens. If businesses focused on delivering the best service and quality, they would not have to worry about using fake AI reviews. If it were my business, I would post the human-written reviews alongside a campaign indicating that ALL our reviews are real and invite customers to come in and experience things for themselves.

Jasmine Glasheen
Member
6 years ago

It isn’t which outlet is being used to generate fake reviews that poses the real threat to online credibility. It’s the fact that so many online retailers have turned to fallacious consumer reviews, instead of putting in the work of modifying their product or customer service to merit better consumer feedback.

I’ve recently found that even on Amazon it’s been more difficult to leave negative constructive feedback about an online retailer. This skews reviews towards positive and ultimately results in the consumer having no idea what they’ll be getting in the mail.

The real question is whether an increasingly faulty online review system will cause customers to return to brick and mortar, where at least they can be sure what they’re paying for.

Doug Garnett
Active Member
6 years ago

It’s incredible to me that some companies lack the wisdom to put their own brakes on approaches like this. It’s our job as industry experts to clearly, and unequivocally, oppose any and all creation of fake reviews.

So yes, AI mimicry is a very bad idea. Today most consumers scan reviews to rule out those that lack the “truth” one reads in a real review. Real reviews have surprises — comments that show someone approached the product merely wanting it to work and described their experience as such. And consumers hear those surprises most clearly.

We’re able to have reviews be valuable despite today’s shill reviews because real reviews can be found amid the corrupt ones. Were AI to be used, companies would be able to overrun consumers with so many fake reviews we’d never be able to find the real ones.

The result? Consumers would start to fully ignore reviews. And that is bad for everyone.

Charles Dimov
Member
6 years ago

Authentication will become a big thing in reviews. We need to start thinking about how to confirm, verify, credit and authenticate reviews by REAL humans — reviews that are themselves real (there are plenty of human fake reviews out there too).

It’s a tricky issue. But it is a great opportunity for a whole new authentication industry to help retailers and shoppers alike.

Ralph Jacobson
Member
6 years ago

The scary part of AI is that true machine learning will allow these “reviewers” to continually alter their text outputs based upon changing consumer trends. AI is a real positive in general — however, as in this case, it can of course be abused.

Cathy Hotka
Trusted Member
6 years ago

This is a huge threat to the believability of reviews. Soon the only way we’ll know they’re real is if there are typos in them!

Camille P. Schuster, PhD.
Member
6 years ago

This is just another example of fake reviews making reliance on reviews problematic for consumers. Figuring out ways to evaluate and use reviews is and will continue to be a challenge for consumers.

Ed Dunn
Member
6 years ago

The robots are our friends. The growth of AI-based stylometry is an indirect win for retailers and a big loss for Yelp and other online review sites. The fake reviews automated by a robot undermine the fake human reviews posted on external sites like Yelp where there is no record or verified purchase — this is a good thing.

By zero-summing the external review sites, retailers can offer “verified reviews” from their own customers to provide a more accurate picture of a product or service. Yelp is pretty much left with an obsolete business model thanks to the fourth Industrial Revolution and the shift of focus back to retailers, who can verify their customer voice.

Dan Frechtling
6 years ago

The security teams at UGC-reliant sites indeed need to address this problem. Algorithms can rank AI likelihood the way they rank spam likelihood and push questionable reviews down the list. The Chicago research already cited length, character distribution and typos as red flags.

It takes AI to fight AI, but as with fighting fire with fire, that alone is not enough. There are many ways to assess authenticity, such as requiring photos and rewarding video, frequent contributors and verified purchases with a higher ranking.

This is the beginning. When AI becomes mature enough to spout RetailWire discussion comments, then I will be most impressed!
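A minimal sketch of the spam-style scoring described above, built on the red flags the Chicago research cited (length, character distribution, typos). The weights, thresholds and word list here are illustrative placeholders, not values from the study:

```python
import math
from collections import Counter

# Placeholder lexicon; a real system would use a full dictionary.
COMMON_WORDS = {"the", "food", "is", "was", "and", "great", "amazing",
                "service", "we", "will", "be", "back", "place", "staff"}

def char_entropy(text):
    """Shannon entropy of the character distribution (bits per character);
    machine-generated text tends to cluster in a narrower band than human text."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicion_score(review):
    """Combine the cited red flags: length, character distribution, and
    (lack of) typos. All weights and cutoffs are illustrative."""
    words = review.lower().split()
    typo_rate = sum(w.strip(".,!?") not in COMMON_WORDS
                    for w in words) / max(len(words), 1)
    short = 1.0 if len(words) < 25 else 0.0              # short, generic reviews
    uniform = 1.0 if char_entropy(review) < 4.0 else 0.0  # unusually regular text
    clean = 1.0 - typo_rate                               # suspiciously typo-free
    return 0.4 * short + 0.3 * uniform + 0.3 * clean

# Reviews scoring above some threshold get pushed down the ranking
# rather than deleted outright.
print(round(suspicion_score("The food here is freaking amazing!"), 2))
```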

gordon arnold
6 years ago

There is a good deal of tested information pointing to consumer habits and word of mouth as the chief motivators for doing business with a retailer. 21st century technology has improved these motivators, making it possible for retailers to compete anew. Instead of searching weekly hard-copy advertising and coupons, we now search online. When persuasive facts are easy to attain, compellingly supported options are decided upon. Social media has allowed the consumer to discuss needs, wants and findings openly and quickly before and after a buy.

Artificial intelligence is kind of handy at sorting out what we think is important. It can also readily accept a challenge to disclose some previously unanticipated events. It is still up to management to stay focused on the needs and goals of the business and to ensure that the right decisions are made with accurate and relevant information. Computers can only tell what has happened and is happening with absolute accuracy. Computer projections are nothing more than a guess.

To learn more about the reliability of statistically supported projections, visit past NFL drafts and the thousands of warehouses across the country stuffed with product that never did and never will sell.

Craig Sundstrom
Noble Member
6 years ago

Would you rather be stabbed, shot or poisoned? Come on, some questions just seem unnecessary to ask. We’ve already discussed — endlessly — the many problems with reviews generated by humans (including what might be charitably called “legitimate” ones, but from clueless people) and they’re often enough to sink the whole concept. So is it meaningful that yet another avenue has been found to dilute the results? Probably not.

Cynthia Holcomb
Member
6 years ago

Where have all the humans gone?
Long time passing
Where have all the humans gone?
Long time ago
Where have all the humans gone?
Covered with flowers every one
When will we ever learn?
When will we ever learn?

Cynthia Holcomb
Reply to  Cynthia Holcomb
6 years ago

Credit to Pete Seeger.

Ricardo Belmar
Active Member
6 years ago

This really throws into question the entire premise of user-based reviews. If you can’t verify that the author of a review legitimately purchased the item being reviewed (or experienced the service, venue, etc.), then what is its value? Unless you’re reading a “professional” review site, all you have is a number that tells you how many reviews are favorable vs. not favorable. Perhaps the real question is who would use such an AI-based fake review generating solution. It says more about the business that uses this technology than anything else.

Although it was only a matter of time for AI to do this, let’s face another fact — positive reviews are pretty easy to fake, as so many of them are generic in nature and offer no specifics. “The service was outstanding and the food delicious! I would come back here every day if I could!” Or how about: “Great product! I was looking for something like this for weeks and came across this — works exactly as advertised, I’d recommend it to everyone!” How hard can it be for an AI system to produce that?

Hilie Bloch
6 years ago

There is a battle going on with bot-generated content in retail and beyond. As soon as companies and organizations put the latest protections into place, bad guys come up with something new. When one retailer realized that bots never mentioned the company’s name and weeded out those reviews that didn’t have it, a hacker came up with a solution to incorporate the name. Fortunately, some companies have gotten smarter and are setting up defensive measures that not only address the immediate problem, but make it worthless for the bots to attack.