When surveys behave badly … and what to do about it

Discussion
Oct 17, 2014

Through a special arrangement, presented here for discussion is an excerpt of a current article from the Joel Rubinson on Marketing Research Consulting blog.

Bad survey results are toxic. They lead to false beliefs, inaccurate measurement and bad consumer segmentation that injure rather than nurture innovation and media strategies. "Fixing" a survey starts with good survey design principles, but it doesn’t end there. In the digital age, we need to "think beyond the survey" and integrate behavioral information.

Here are three reasons that surveys behave badly and what you can do about it.

Problem: Telescoping. Surveys consistently elicit overstatement of brands bought over the past year, past three months, etc., leading to inflated estimates of market penetration and misidentification of users.

The fix: Anchor in behavior. Minimize telescoping by first asking about longer timeframes, then following up with the shorter timeframe you are interested in. To remove remaining discrepancies, calibrate (rim weight) the data to known brand incidence. Selected suppliers now offer the ability to survey respondents whose behavior is known from scanned purchases, frequent-shopper data matching, etc., creating a true single-source approach. In subscription or retail businesses, you can also survey your own customers and match in transaction and clickstream data.
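As a rough illustration of the calibration step, the sketch below deflates the weights of respondents who claim a purchase until the weighted penetration matches a known incidence. All numbers are hypothetical, not from the article; for multiple calibration targets you would iterate across dimensions (raking), which this one-dimensional sketch omits.

```python
def calibrate_weights(claimed_buyer, true_incidence):
    """Weight respondents so the weighted share of claimed buyers
    matches a known (e.g., scanner-panel) brand incidence."""
    surveyed = sum(claimed_buyer) / len(claimed_buyer)  # overstated incidence
    w_buyer = true_incidence / surveyed                 # deflate claimed buyers
    w_non = (1 - true_incidence) / (1 - surveyed)       # inflate non-buyers
    return [w_buyer if b else w_non for b in claimed_buyer]

# Hypothetical example: 40 of 100 respondents claim a past-year purchase,
# but matched purchase data says true penetration is 25 percent.
claims = [1] * 40 + [0] * 60
weights = calibrate_weights(claims, 0.25)
weighted_share = sum(w * b for w, b in zip(weights, claims)) / sum(weights)
# weighted_share is now 0.25, matching the known incidence
```

The same weights would then be applied to every other answer those respondents gave, so the telescoping bias does not propagate into penetration or profiling estimates.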

Problem: Random answering of attitudinal questions when beliefs are weakly held. Attitude-and-usage studies often ask questions the respondent doesn’t really know how to answer, but guess what? They answer anyway! When you build a consumer segmentation from such data, the segments seem to make sense, but a test-retest reliability experiment may show that the same respondent has only a 50 percent chance of falling into the same segment the second time around. Such a segmentation is practically useless for media targeting.

The fix: Anchor segmentation in digital and shopping behaviors. Survey people on fewer but meaningful statements and about media and purchase behaviors that are known. Create segments that are maximally different in purchasing behaviors and media habits as well as attitudes. That way, key segments can be targeted and personalized differently in digital media and will hold up over time. Finally, did you know that you can now listen to social media specifically for a segment of interest? The technology now exists to do profile matching to accomplish this.

Problem: Inaccurate recall of time spent on various media behaviors. In my experience, people underestimate the time spent watching TV and overestimate the time spent on various digital behaviors, notably social media.

The fix: Go single source. Conduct survey research among people whose behaviors are metered or recorded in some way. An award-winning study I consulted on, "Seven Shades of Mobile," did this; we surveyed people whose smartphone behavior was being metered.

The new news? In a digital age, think beyond the survey. Conduct surveys among people whose clickstream behaviors, transaction and shopping behaviors, social media profiles or Facebook likes are measured.

What do you see as the most common causes of inaccuracies in surveys? Will the use of social media and digital technologies enhance survey effectiveness?

Join the Discussion!

10 Comments on "When surveys behave badly … and what to do about it"

David Biernbaum
Guest
3 years 1 month ago

At David Biernbaum & Associates LLC, we agree with every point made in this article. In fact, I would go so far as to say that most consumer goods surveys today are non-scientific, far too haphazard and, yes, often drawn from a single convenient pool of respondents. For example, surveying only around the office, only online, or only those who volunteer to be surveyed introduces serious flaws. If your goal is truly accurate results, engage a survey expert, ideally from outside your organization, and follow the criteria mentioned in this article.

Dr. Stephen Needel
Guest
3 years 1 month ago

Joel’s points above suggest that surveys are inherently inaccurate. It’s not because people are lying (although that happens); it’s that they don’t recall the correct answers to our questions, or we’ve asked the questions badly. So there’s a cautionary note for all you DIYers.

I don’t think social media and digital technology per se help. Measuring behavior unobtrusively does, though, and sometimes that behavior is social media based or digitally based. When we test shopping behavior using virtual reality, we are all about the shopping, not a question about the shopping.

Roger Saunders
Guest
3 years 1 month ago
Joel’s common causes accurately point to issues that lead to flawed survey results, be they custom or syndicated. Other flaws creep into the equation for marketers as well. Too often they fail to take a longitudinal approach. If an issue is important enough to warrant deep insight into a problem or opportunity, consider building a trackable database to see whether your target audience behaves consistently over time. Poorly written questionnaires lead to interviewer bias and respondent fatigue. Read and follow Princeton psychologist Daniel Kahneman’s work on System 1 thinking in “Thinking, Fast and Slow.” Don’t make the question complex, the respondent… Read more »
Gajendra Ratnavel
Guest
3 years 1 month ago

The inherent issue with surveying is that you get the answers a person wants to tell you, not necessarily what you want to know. Surveys can work well in some cases, but they need to be designed carefully, with the areas likely to produce inaccurate responses identified up front.

Survey results should also be backed up or supplemented with other data where possible before any important decisions are taken.

Ralph Jacobson
Guest
3 years 1 month ago

Survey questions must be reviewed by a team of editors to minimize ambiguity in the line of questioning. Only ask open-ended questions if you truly want “open-ended” responses. Otherwise, make the questions tactical and focused, with little or no room for interpretation of intent.

Will the use of social media and digital technologies enhance survey effectiveness? Not necessarily. I see these channels as additional vehicles for survey delivery, however the content of the survey itself is the key.

Kevin Leifer
Guest
3 years 1 month ago

Great points in each of the comments here. My biggest word of caution would be to keep the survey focused on a truly defined objective and avoid the tendency to ask “a few extra questions” someone may be curious about. “Surveys by committee” often grow out of control and dilute the essence of the objective.

Jonathan Marek
Guest
3 years 1 month ago
Lots of interesting points here, but I think commenters haven’t hit on the most fundamental issue: with respect to most of the surveys we conduct in retail and CPG, most survey respondents don’t themselves know why they behave the way they do or what will change their behavior. This is unlike much simpler surveys about clear decisions, e.g., for whom will you vote? And we all know how even those polls are far from perfect. As a result, marketers cannot get what they really want from surveys—an understanding of what levers will drive consumer behavior. They might get ideas, but… Read more »
W. Frank Dell II
Guest
3 years 1 month ago

Wording of the questions is the biggest problem after sample design. Too many surveys lead the respondent, through the way the question is formed and then asked. For really critical issues, it is best to ask the question two ways. Another problem with digital surveys is getting a valid sample for projection; the survey must achieve broad reach to be valid.

Joel Rubinson
Guest
3 years 1 month ago
Wonderful comments, thanks all. Let me give another dimension, which is the action levers uniquely created by integrating digital and social data. Currently, a CPG marketer might conduct a BASES test to determine the sales potential of a new product and present that to the retailer as evidence for listing, but that is pretty much the extent of it. Suppose a retailer had a research panel of shoppers where surveys could be linked to both frequent shopper data and clickstream behaviors. Now we could test a concept and then passively capture who the triers are for that new product. Then,… Read more »
Gordon Arnold
Guest
3 years 1 month ago

Questions that leave no room for the respondent’s own opinion yield limited or no usable results. This is common, especially in quick surveys. Small or mismanaged samples create large possibilities for error. Combine this with management teams poorly schooled in statistics, and you have the reason marketing teams produce so many surprises.


Take Our Instant Poll

How much more or less effective will surveys be through the use of social media and digital technologies?
