Bad survey results are toxic. They lead to false beliefs, inaccurate measurement, and bad consumer segmentation that injure rather than nurture innovation and media strategies. When we ask questions in surveys that people can’t accurately answer, we get wrong ideas like these:
Actually, hours spent viewing TV were 4-5 times greater in 2010 based on Nielsen data, and TV is still the number one media behavior today.
Annoyance with TV advertising is increasing…the relationship with much of conventional advertising is slowly and insistently degrading.
Eating the Big Fish
Annoyance with advertising is increasing? Ask about movie trailers or the latest Apple commercial. The implication is that marketers waste their money on advertising, yet TV advertising consistently shows sales impact, and ad budgets, digital especially, continue to grow.
More than half of shoppers globally think more physical stores will become merely showrooms by 2020…based on interviews with 16,000 consumers from 16 countries.
Paris-based consulting firm Capgemini (2012)
Actually, showrooming affects only a few percent of transactions. That’s what happens when you survey people about the distant future and expect valid information in return. It goes to show you can’t compensate for bad questions with huge sample sizes!
Here are four reasons that surveys behave badly and what you can do about it.
1. The telescoping problem. Surveys consistently elicit overstatement of brands bought over the past year, leading to inflated estimates of market penetration and misidentified users; net/net, wrong conclusions.
What you can do about it: First and foremost, know what the right answer is! You can do this by referencing household panel data or by triangulating from other marketing facts, like market share. This will tell you whether you have a telescoping problem. Within the survey, you can minimize telescoping by asking about a longer timeframe than the one you care about, which traps the telescoping effect, then following up with the shorter timeframe you actually need. If discrepancies remain, weight the data to match known brand incidence.
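As an illustration of that last weighting step, here is a minimal sketch (with hypothetical numbers) of post-stratification weighting so that claimed brand incidence in a survey matches a benchmark from household panel data:

```python
# Illustrative sketch (hypothetical data): weight respondents so the surveyed
# brand incidence matches a known benchmark, e.g., from a household panel.

def incidence_weights(responses, benchmark_incidence):
    """responses: list of bools (claimed to have bought the brand in-period).
    benchmark_incidence: true share of buyers per panel data."""
    claimed = sum(responses) / len(responses)
    # Telescoping inflates claimed buyers, so weight buyers down and
    # non-buyers up until the weighted incidence equals the benchmark.
    w_buyer = benchmark_incidence / claimed
    w_non = (1 - benchmark_incidence) / (1 - claimed)
    return [w_buyer if r else w_non for r in responses]

# Example: the survey overstates a 25%-penetration brand at 40%.
resp = [True] * 40 + [False] * 60
weights = incidence_weights(resp, 0.25)
weighted_incidence = sum(w for r, w in zip(resp, weights) if r) / sum(weights)
# weighted_incidence is now 0.25, matching the panel benchmark
```

The same weights would then be applied to any cross-tabs built off the brand-incidence question.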
If the timeframe is “instant” and “in the moment”, take advantage of mobile research to get in-the-moment measurement. Especially for ephemeral, often semi-conscious behaviors like in-aisle shopping, long surveys conducted after the fact will not work.
If you need to cross-tab survey questions by more complicated behavior patterns (e.g., measuring attitudes among loyal buyers of a given brand), go single-source: administer the survey to those whose purchase behavior is already known via household panels or frequent-shopper data matching. A different path is to keep timeframes short, even instantaneous, using the short mobile “in the moment” surveys described above.
2. Under- and over-reporting of a choice response. Share of response is influenced by the share of choices on the list. That’s why politicians like to appear on the ballot under two party lines! For example, if you show respondents a list of media touchpoints that might influence their purchase, with one TV choice and ten digital choices, you will get under-reporting of TV’s importance. (Of course, some digital media companies do this on purpose in self-serving surveys.)
What you can do about it: Break the question into part A and part B. Part A is higher level (e.g., TV, digital, social media, print, radio…); if respondents choose digital, part B then offers them more granular choices. For even more precision, go single-source. For example, you can administer surveys to those whose behavior is metered on their computers and even smartphones/tablets.
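The part A / part B design can be sketched in a few lines. The category and sub-choice labels here are hypothetical examples; the point is that every category gets one slot at the top level, so no channel is over-represented on the first screen:

```python
# Hypothetical two-stage choice design: a balanced high-level list first,
# then granular follow-ups only for the category the respondent picked.

TOUCHPOINTS = {
    "TV": ["Live TV ad", "DVR/time-shifted ad"],
    "Digital": ["Search ad", "Display banner", "Online video", "Retailer site"],
    "Social media": ["Facebook", "Twitter", "Pinterest"],
    "Print": ["Magazine", "Newspaper"],
    "Radio": ["AM/FM spot", "Streaming audio ad"],
}

def follow_up_choices(part_a_answer):
    """Return the granular part-B list only for the category chosen in part A."""
    return TOUCHPOINTS.get(part_a_answer, [])

# A respondent who picks "Digital" in part A sees only digital sub-choices:
follow_up_choices("Digital")
```

TV and digital each occupy exactly one slot in part A, so neither can crowd the other out of the top-level share of response.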
3. Random answering of attitudinal questions when beliefs are weakly held. This often ruins consumer segmentation. You ask questions the respondent doesn’t really know how to answer, but guess what? They answer anyway! They then give answers to other unfamiliar questions that are rationally consistent with that first random answer. Run a consumer segmentation off such data and you will get segments that seem to make sense, but sadly, a test-retest reliability experiment might show that the same respondent has only a 50% chance of falling into the same segment the second time.
What to do about it: First, rethink segmentation altogether. Base segments on behaviors as much as attitudes, so that the segments are maximally different in purchasing behaviors and media habits. You can now conduct surveys among panelists whose clickstream behaviors, shopping behaviors, or Facebook likes are matched in. Because media and purchase behaviors are recurring and habitual, this ensures that segments replicate over time and are targetable using programmatic advertising approaches. As for attitudinal questions, wording drawn from naturally occurring social media conversation is more likely to carry true meaning and is less subject to random answering.
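One practical detail when mixing behavioral and attitudinal inputs: put them on a common scale before clustering, or the variable with the larger units dominates the segments. A minimal sketch with hypothetical feature columns:

```python
# Minimal sketch (hypothetical data): z-score behavioral and attitudinal
# columns so both contribute equally to whatever clustering routine follows.

from statistics import mean, stdev

def zscore(column):
    m, s = mean(column), stdev(column)
    return [(x - m) / s for x in column]

# A behavioral column (e.g., weekly purchases) and an attitudinal column
# (e.g., 1-5 agreement with a statement):
purchases = [0, 2, 5, 1, 8]
agreement = [4, 2, 5, 3, 1]

features = list(zip(zscore(purchases), zscore(agreement)))
# `features` can now feed k-means or any other clustering step without the
# purchase counts swamping the attitude scale just because of their units.
```

This is only the scaling step; the clustering itself would run on `features` with whatever tool you already use.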
The problem in the “Eating the Big Fish” example is that a weak attitudinal question leads to an unsupported assumption about whether advertising is working to build sales. If that is what you need to know, use experimentation or marketing mix modeling to find out.
4. Inaccurate recall of time spent on various media behaviors. In my experience, people underestimate the time spent watching TV and overestimate the time spent on various digital behaviors, notably social media. Some combination of telescoping and inaccurate media recall produced the showrooming claim above.
What to do about it: If time spent consuming media is an important part of the survey, you must ask questions of people whose behaviors are metered or recorded in some way (a number of suppliers now offer this), instead of asking people to recall it in a long survey. In the showrooming case, this would have been easy to address by pairing metered use of certain apps, with geo-coordinates captured at the time of use, against a database of store locations.
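The geo-matching step is straightforward: flag an app-use event as in-store when it falls within a small radius of a known store. A sketch with hypothetical coordinates and a hypothetical 100-meter threshold:

```python
# Illustrative sketch (hypothetical data): flag a metered app-use event as
# potential in-store "showrooming" when it occurs near a known store location.

from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6371 km

def in_store(event, stores, radius_m=100):
    """event: (lat, lon) of app use; stores: list of (lat, lon) locations."""
    return any(haversine_m(event[0], event[1], s[0], s[1]) <= radius_m
               for s in stores)

stores = [(40.7580, -73.9855)]          # a hypothetical store location
in_store((40.7583, -73.9857), stores)   # app used a few dozen meters away
```

Counting flagged events against total transactions gives a behavioral showrooming rate, with no recall or forecasting question asked at all.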
Some parting words of advice:
- Know what the behavioral truth is.
- Consider big data approaches that have integrated behavior monitoring with survey taking.
- Ask questions that people can answer…ask yourself, “Would my mom answer this question the same way twice?”
- Think like a behavioral economist acting as a choice architect, as Profs. Thaler and Sunstein would say in the book “Nudge”. If certain answers are underreported compared to some measure of truth, like purchasing of store brands (which is almost always underreported), make that answer more prominent in the choice list and/or via the lead-in wording. You are not creating bias; you are removing bias from the way consumers answer surveys in otherwise “predictably irrational” ways.