Are consumer studies hopelessly biased?

Casting doubt on the methodology behind many well-regarded consumer studies, researchers have found that more than half of the published scientific studies they examined could not be replicated to produce the results reported in the original papers.

A team of 270 scientists tried reproducing 100 studies focused on cognitive and social psychology that had been published in three top U.S. journals in 2008. Only 39 percent came out with the same results as the initial reports, according to findings in the journal Science. The low rates occurred despite the researchers working in close collaboration with the original authors.

Few of the redone studies contradicted the original ones, but their results were not as strong as originally claimed.

The project is part of an effort from the Center for Open Science, a nonprofit technology company directed by Brian Nosek, a psychology professor at the University of Virginia, that aims to increase transparency and reproducibility in scientific research.

Beyond small sample sizes, a main critique of consumer studies from companies or consultants is that they can be slanted by skewed survey questions or by interpretations biased toward a favored result. The psychology researchers, whether intentionally or unintentionally, were likewise found to be motivated to tweak their results, although for somewhat different reasons.

Research bias

Seeking tenure, grants or professional acclaim, researchers are under pressure to deliver results that regularly appear in reputable and high-profile journals.

"Not everything we do gets published," Prof. Nosek told the Daily News. "Novel, positive and tidy results are more likely to survive peer review and this can lead to publication biases that leave out negative results and studies that do not fit the story that we have."

To get a more significant result, researchers sometimes tweak experiments or pick only the most favorable data for analysis. By random chance, some experiments may produce results that appear significant. In the reproduction project, geographical factors, such as interviewing people in one country versus another, as well as the time a study took place, were said to account for some of the weaker results.

The study called attention to how much bias drives publication in psychology. It pointed to the growing need for more replication, although double-checking results doesn’t earn much coverage or accolades. Also urged were more extensive explanations of research methods, adequate sample sizes, and wider reporting of studies that show null results or that didn’t support the hypothesis.

BrainTrust

"Interesting question. I would argue that the kind of scientific rigor needed for psychology surveys is not quite the same as what we need in consumer studies. I think we look more for directional accuracy rather than very low standard deviations."

Paula Rosenblum

Co-founder, RSR Research


"I have found over the years that it is better to focus on what shoppers actually do than what they say. In observational and data-derived research, the results are very often similar over time if the methodology and the audience being observed remains stable."

Mark Heckman

Principal, Mark Heckman Consulting


"First of all, it is a logical — or more accurately, illogical — leap to say that because studies of one kind are flawed then studies of another kind must also be in error. That’s known as REALLY bad social science."

Ryan Mathews

Founder, CEO, Black Monk Consulting


Discussion Questions

Do you suspect that consumer-based studies suffer from the same biases as found in psychology studies? What steps should research teams take to reduce any natural biases and confirm the validity of their results?

21 Comments
Camille P. Schuster, Ph.D.
8 years ago

Absolutely, there are similar biases in consumer studies. Validating questions and instruments across different consumer populations is difficult, tedious work. With increasingly diverse subgroups, the assumption that one set of questions will yield similar results across all populations is problematic and must be tested. If it is not, the generalizability of the results is questionable, and replications will produce different findings. Validating research tools across diverse populations needs more attention.

Paula Rosenblum
8 years ago

Interesting question.

I would argue that the kind of scientific rigor needed for psychology surveys is not quite the same as what we need in consumer studies. I think we look more for directional accuracy rather than very low standard deviations.

I can also appreciate how the world of academia would look for new, fascinating, statistically significant results and stop the analysis when they found what they were looking for.

I think in the world of consumer studies, “weaker results” or directional accuracy is adequate. Look at it this way: we’ve been asking consumers about mobile payments for years (literally). In some form or another, consumers have been saying they are not particularly interested. We’ve gone forward, and guess what? It turns out consumers are not particularly interested. I could cite several other technologies, but the mobile payment story makes my point.

Close enough.

David Biernbaum
8 years ago

The barrage of consumer studies is one of my favorite pet peeves in our industry because I can tell, even intuitively, that the results are often biased, contradictory, misguided and dangerous because marketing practitioners are using erroneous findings to guide their decision-making.

In these digital times we live in, consumer studies are often flawed because in the process the science of truth in questions and responses gets lost. Way too many reasons for that to explain in this short space but please take my word for it, it’s so true.

It also goes without saying that with the way these studies are executed today, so fast and furious, and with so many of them, consumers will respond to the questions haphazardly, or even more often will misunderstand or misinterpret certain words and phrases. I have also noticed that too many surveys these days are using words such as “never” or “always” and consumers are taking these phrases quite literally.

Plus, we all know that so many of the results depend on the motive for the study itself. As a marketer I can purchase a study to pretty much come up with whatever results I need. Oh, but I would never do that. Just sayin’!

Zel Bianco
8 years ago

Consumer-based studies almost certainly suffer from the same biases — if not more so because they are tied so closely to commercial interests. If you are going to use data from consumer-based studies you must be prepared to accept the data as it is. It is so easy to twist results into what you want to hear and what goes along with your plan, but long-term growth and success must be considered. If you are going through the trouble of researching how your shoppers act and respond then you might as well actually listen to the results.

Herb Sorensen, Ph.D.
8 years ago

I don’t suspect the same flaws; I am quite confident of the absurdity of large numbers of “studies.” The reason is that the studies are based on asking people who do not know the “truth” questions about things they can make up answers for on the spot. That and the bias of the study designs virtually guarantee fairy land reports.

This does NOT mean I never see any value in the “ask” type research commonly conducted. But it does explain why, as a scientist, I moved to observe and measure types of research years ago. I would rather observe what is in people’s minds by observing their behavior, which is driven by what is in their minds.

And the least confusing type of that data is tracking both their physical movements around the store, as well as what is coming into their eyes, and especially exactly what and when their eyes focus on, followed by specific behavior — like a PURCHASE! See: “From Opportunity to Final Purchase.” 

Dr. Stephen Needel
8 years ago

Depends on what we mean by consumer-based studies. If it’s the crappy survey results we are often shown (even in this space) then yes, it is likely that those results are biased as well.

We were taught in grad school about the joys of replication, even though it’s not what leads to tenure. And it was not uncommon to replicate a part of a study, then run the improvement or extension of that study. The ability to replicate gave the new findings more weight.

There are books written about how to reduce bias and ask questions in more than one way in order to get at the “truth.”

Mark Heckman
8 years ago

I have found over the years that it is better to focus on what shoppers actually do than what they say. In observational and data-derived research, the results are very often similar over time if the methodology and the audience being observed remains stable.

When dealing with cognitive or social psychology, I am not surprised that consistency is an issue. Bias is not limited to the researcher or their instruments.

Even in fairly simple “straight-up” consumer survey research, shoppers often answer questions in more of an aspirational manner than actually reflecting reality. One example of this is the consistent stated importance of health and nutrition in their list of shopping criteria, but yet the same shoppers actually index high for salty snack and carbonated beverage purchases.

Consumer research is a valuable tool, but the devil is in the details of the questionnaire, the methodology, and the ability of the analyst to derive an unbiased conclusion from the results.

Ian Percy
8 years ago

Consumer-based studies are psychology studies. And they will always be hard to replicate because you’re dealing with people. Unless you are researching very primitive or mechanistic things like “Are you afraid of snakes?” or “Is parking within a mile of the store important to you?” it will always be so. Any life form with a heartbeat and a soul will not always comply with the scientific need for replicability no matter who the researcher is.

Most of this supposed “research” has been developed by research companies trying to make themselves indispensable to desperate retailers trying to figure out how to increase sales. It’s retail’s version of “Publish or Perish.”

The constant emphasis during graduate research way back in the seventies was to check the difference each question structure and each word made to results. We did factor analysis and analysis of variance calculations to make sure we understood what the subjects were telling us and, most importantly, what factors influenced them toward that perspective. Most of my formal research was on patients’ perception of hospital care — which is also consumer research.

In what is popular research today (especially political research) you’re lucky to find any more detail than X percent said this and Y percent said that. Then you get an estimate of the margin of error which is usually huge, scientifically speaking.

It seems from this article that not much has changed over the history of research and the higher our mountain of data, the less likely we’ll be able to make meaningful sense of it. Research that gives you only percentage results is, at best, anecdotal. IMHO a “poll” is not research. Take a look at it, sure, and then do what nature intended … trust your gut.

Ryan Mathews
8 years ago

First of all, it is a logical — or more accurately, illogical — leap to say that because studies of one kind are flawed then studies of another kind must also be in error.

That’s known as REALLY bad social science.

That said, anyone familiar with psychological testing and measurement understands notions such as survey bias, an incomplete range of choice options (parenthetically why so many companies have such high “range of good to great” responses), interpretive bias, sample size bias, etc.

As regular readers know by now, I am fond of attacking many of the studies that appear here on RetailWire for these and myriad other sins.

Surveys only tell you how a group of people responded to a set of questions at a point in time.

Directionally useful? Maybe.

Truth, writ large and handed down on tablets? Not hardly.

The problem doesn’t stop with the limitations of the instrument. There’s always the interpretation issue.

If Donald Trump, for example, gains the support of 35 percent of the Iowa electorate, he will interpret that as a victory and a clear signal people love him and his policies.

On the other hand, that same result can be spun into conclusions such as, “65 percent of Iowa voters reject Trump;” “A majority of Iowa voters dislike New Yorkers;” “Iowans respond poorly to men with hair like an orangutan,” etc.

Sure, there are ways to make surveys more credible, but as long as they are written and answered by people there will be — as they say in statistics — a significant margin for error.

Ben Ball
8 years ago

My colleague Ray Jones (a career researcher/consultant) and I (a career brand management exec/consultant) used to have very lively discussions about this.

Ray could point out the “flaws” in survey designs instantly. I would retort that it is only a “flaw” if it is unintended or accidental.

Ray would say “you were one of those — ‘never ask a question you don’t want to know the answer to’ guys, weren’t you?” I would say “only until I learned that the right approach is to never ask a question you don’t already know the answer to.”

While my story is for fun — the point is for real. Studies are exactly what you design them to be, assuming you know what you are doing to begin with.

Nikki Baird
8 years ago

There is definitely bias, you pretty much can’t help it. The only way to reduce the impact of bias is to have multiple validation points, both within the data itself (repeat the same study in different ways at different times), and across cohorts (compare retailers to consumers, for example). And not to expect exact numbers. “32.8 percent of consumers” just shouldn’t even be talked about unless you have a truly statistically valid sample. Most consumer studies I see (and RSR has done them too) look at “1,000 U.S. consumers.” That’s directionally valid — if a majority agree, that’s probably good enough to base decisions off of. But it’s not a result that can be used to predict a precise future.
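The “1,000 U.S. consumers” point above can be made concrete with the standard sampling margin of error for a proportion. A minimal sketch, assuming a simple random sample (which real consumer panels rarely are, so the true uncertainty is usually larger):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion p with n respondents,
    at 95% confidence (z = 1.96), assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# A headline like "32.8 percent of 1,000 consumers" carries roughly a
# +/-2.9-point margin of error, so only the first digit is meaningful.
print(f"+/-{margin_of_error(0.328, 1000) * 100:.1f} points")
```

At the worst case (p = 0.5) the margin for n = 1,000 is about ±3.1 points, which is why such samples support directional reads but not decimal-point precision.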

Gene Detroyer
8 years ago

In the very first consumer study I ever saw, in my very first job, with my very first company, I asked the question at a presentation of a study: “Just the fact that certain people have stopped to answer these questions or participate in this study, doesn’t that define the sample as something different from the normal customer?”

When I asked the question, the response was silence from the presenters.

Gordon Arnold
8 years ago

Investments in market plans or solutions require support for the host to feel compelled to participate. The presentation must be carefully scrutinized to demonstrate a blend of positive and negative support arguments. The selection of these arguments is always in proportion to the desired or planned outcome of the presenter. In order to maintain credibility the presenting party must provide information that is assembled using standardized test methods with accredited test samples. This is getting more and more arduous in society today. The problem with statistical studies is understanding how to gather a group of individuals that will provide a look into the subject matter and offer an opinion that represents the vast majority on any topic. We are seeing how wide the disconnect really is in our country’s election polls and results over the past decade. The great depression of the 20th century completely placed the social and economic geographical demographics in turmoil for a period of almost 20 years. That very same turmoil is present today, for the very same reasons, and is producing similar patterns in the results. The net result is always a widening of reliability issues until it is almost like a throw of the dice.

Doug Garnett
8 years ago

We should be far more worried about the mediocrity that defines so much consumer research … I see it all the time. Poorly-worded questions. Superficial analysis. Weak research design. Inflated expectations for the detail research can discover.

But this is far more insidious because it’s not biased — just executed with mediocrity.

After all, consumer research should surprise us with unexpected truths. The findings should be reliable in leading to actions. And that takes exceptional research work.

It’s one of those funny ironies — the bad and biased research is easy to identify. It’s the mediocre research that carries all the trappings of “well-executed” but isn’t.

I think the problem is worse because corporations decide to research far more than they need to. Research studies in mass quantity decrease dramatically in value and usefulness.

My recommendation: focus on a small number of well-crafted research studies, and take care with the results if all they do is confirm everything you wanted confirmed — consumers are not tame in that way.

Naomi K. Shapiro
8 years ago

To save time and space, I believe fellow BrainTrusters Herb Sorensen and Mark Heckman nailed it with their answers.

In other words, when it comes to consumer behaviors, “watch what I do, not what I say.” And … answering in an aspirational manner rather than reflecting reality.

And when it comes to research behavior, watch what the researcher wants, not what he/she gets.

Ralph Jacobson
8 years ago

Consumer studies—like most any study—will capture people in that specific moment in time. If they had a recent bad experience, their sentiment may be stronger than at other times. The broader the target audience, the better, of course. The other challenge is to “question the questions” being asked. Are they leading in any way? Do they offer actionable responses?

Concerned about bias? Studies are studying humans. We all have biases. Don’t get too hung up on that.

James Tenser
8 years ago

Yes, bias reigns in commercial market research. And the savviest researchers are clever enough to anticipate and apply that to deliver the “right” results.

It’s true that survey respondents tend to answer aspirationally about both their opinions and their reported behaviors. It’s true that respondents may be inherently different than non-respondents, and therefore not fully representative of the population. It’s true that survey respondents tend to shade their answers to please the researcher. It’s true that question design can subtly steer subjects toward certain results.

Research focused on measuring actual behaviors (like purchases, page views or dwell time) is more likely to yield reliable measures. But while those results establish present patterns, they do not necessarily provide insights into shopper attitudes.

Tony Orlando
8 years ago

It is just about impossible to do a complete survey without some bias or agenda mixed in. There are many different ways to make the results of your survey seem to back up the results that you are looking for, and as mentioned above, nobody is better than the politicians at doing this.

Global cooling, global warming. I can give you any facts, i.e. BS you want from the same survey to back both positions. Consultants understand marketing, and getting the desired result of their surveys makes sense to satisfy the clients, which all of us want.

You can also choose the pool of people ahead, knowing how they think, based on their profile, which surely will give you the desired result, and it all looks believable. I could go on, but sometimes you have to reach inside your own head, and trust your instinct as a retailer to make the right decision about your future plans. There are no crystal balls in business, but experience, knowledge, action, and yes—even some luck—can create success.

Michael Greenberg
8 years ago

Ralph made the key point. These studies were one-time snapshots of behavior, designed to evaluate an hypothesis that is very hard to measure directly. Most retailer studies are much more basic and repeated over time, and it’s the changes between these readings that start to tell us what we need to know.

Vahe Katros
8 years ago

Academic research seeks to establish the truth (or non-truth) behind a hypothesis. It usually does not allow one to say to the source of funding, “Hey, you know that longitudinal study you gave us $250k to do? We found a much more interesting hypothesis we want to explore.” Business studies are not in search of truth as much as they are an opportunity. Perhaps the following saying applies: “Don’t go looking for Mr./Ms. Right. Look for Mr./Ms. Right Now.”

So yes, there is a bias—to find a consumer bias towards a behavior or find a way to bias a behavior.

Mark Price
8 years ago

Clearly, market research studies have the potential to suffer the same fate as psychology studies. Pressure to provide meaningful, positive results can lead researchers and marketers to skew findings in a way that draws the desired conclusion from the work. When was the last time you heard that a market research study was either inconclusive or failed?

The benefit that consumer research has over academic research is budgets sufficient to provide a sample size that has a lower margin of error than low-budget academic projects. Just by sheer size, consumer research is more likely to be able to be replicated.
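The budget-and-sample-size point above can be turned around: given a target margin of error, the same normal-approximation formula yields the worst-case (p = 0.5) number of respondents required. A rough sketch under that assumption:

```python
import math

def required_sample_size(moe: float, p: float = 0.5, z: float = 1.96) -> int:
    """Respondents needed to keep a proportion estimate within +/-moe
    at 95% confidence (z = 1.96); p = 0.5 is the worst (widest) case."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

print(required_sample_size(0.03))  # ~1,068 respondents for +/-3 points
print(required_sample_size(0.01))  # ~9,604 respondents for +/-1 point
```

Halving the margin of error quadruples the required sample, which is where bigger commercial budgets translate directly into tighter estimates—and where low-budget academic studies fall behind.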