Trump Ads Won the Election (and what marketers and advertisers can learn from it)

The campaign messages of the two candidates played a major role in the outcome of the 2016 Election

By Parry Bedi & Adhil Patel

The election is now firmly behind us. Over the next few months and years, we will continue to hear arguments and counter-arguments about how and why the polls missed the mark, unexpected voter-turnout issues, what the Clinton campaign could and should have done differently, and so on.

However, one thing is for sure – Trump voters were extremely motivated. They were motivated by issues, promises, and, perhaps not surprisingly, by a strong distaste for Hillary Clinton (as we also covered in a two-part GlimpzIt blog post). One logical question to ask is, “Where did this motivation come from?” Was it the result of a complex set of tradeoffs and decisions that voters made before they pulled the lever for one of the two candidates?

Based on our research we can say with confidence that the campaign messages of the two candidates played a major role in the outcome of the 2016 Election. How the messages were internalized, and the emotional responses they evoked, are the factors that tipped the scales.

It would be incorrect for us to say that we saw this stunning upset coming. However, like the recent article in Ad Age by Simon Dumenco, or this one in Mashable by Peter Allen Clark, we did notice quite early on that there was something off with Clinton’s advertising and messaging; it simply did not resonate with voters. In fact, we presented these findings at IIEX 2016 under the heading: What does Presidential mean to you? You can watch the video of that presentation here.

To better understand the appeal candidates had to voters, we analyzed their campaign ads, which we used as proxies for their core message, to identify the reactions they elicited and the emotional connections they made with voters.

Using TNS’s ConversionModel, we tested ads from Clinton, Trump, and Sanders on a general population sample that included potential voters of all political persuasions, and focused on three key elements: Novelty – how original the ad was; Affective value – how emotionally moving it was; and Relevance – how relevant the ad was to the viewer.

Our initial analysis showed that Trump’s ads performed around the norm, not excelling in any particular dimension. The Sanders ad struck a particularly emotive chord, stirring up patriotic and human values. Clinton’s ads were considered Relevant and Affective but ranked low on Novelty.
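
To make this kind of scoring concrete, here is a minimal sketch of norm-indexed ratings on the three dimensions. The dimension names come from the article; the 1–10 scale, the made-up ratings, and the norm-equals-100 indexing are assumptions of the sketch, not TNS’s actual ConversionModel methodology.

```python
# Minimal sketch: index each ad's mean rating per dimension against the
# all-ad norm (norm = 100). Data and scales are invented for illustration.
from statistics import mean

DIMENSIONS = ["novelty", "affective_value", "relevance"]

# Hypothetical 1-10 respondent ratings per ad and dimension.
ratings = {
    "trump_immigration": {"novelty": [6, 5, 7], "affective_value": [6, 7, 5], "relevance": [5, 6, 6]},
    "clinton_fighting":  {"novelty": [3, 4, 3], "affective_value": [7, 6, 7], "relevance": [7, 7, 6]},
    "sanders_america":   {"novelty": [7, 6, 8], "affective_value": [9, 8, 9], "relevance": [6, 7, 7]},
}

for dim in DIMENSIONS:
    norm = mean(mean(ad[dim]) for ad in ratings.values())  # average across all ads
    for name, ad in ratings.items():
        index = 100 * mean(ad[dim]) / norm  # 100 = at norm, >100 = above norm
        print(f"{name:20s} {dim:16s} index = {index:5.1f}")
```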

To understand why Clinton’s ads scored low on Novelty, we asked a sample of 400 randomly chosen people to watch Trump’s Immigration ad and Clinton’s Fighting for You ad, and to submit an image and a text response that came to mind after watching each.

All of this unstructured data, along with accompanying demographic information and stated voting preferences, was then fed into GlimpzIt’s AI engine, which revealed the following insights. The top insight was that visual associations with Trump’s messaging were very specific and tied directly back to his narrative: the images submitted in response to his Immigration ad related directly to the issues raised in the ad itself.

Visual associations with Clinton’s messaging, by contrast, were much vaguer. People struggled to recall the key message in her ads.

When we analyzed the text responses and placed them into different categories, we saw another clear pattern: Clinton’s message also failed to evoke a direct emotional response. While her ads contained a number of sub-messages targeted at various groups, the lack of novelty and originality meant people simply tuned them out and could not connect the issues to the ads. For example, for the Clinton ad, we saw the following distribution of the most commonly used phrases and categories:

Notice how the basic human emotion of Empathy ranks well below other, much more issue-based and cerebral categories, such as Specific Details.
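
To illustrate the categorization step, here is a toy keyword-based classifier that tallies open-ended responses into categories like the ones above. GlimpzIt’s actual engine is proprietary; the keyword lists, the sample responses, and the categorize helper are all invented for this sketch.

```python
# Toy keyword tagger for open-ended ad responses; real systems use far
# richer NLP. Categories echo the article; keywords are invented.
from collections import Counter

CATEGORIES = {
    "empathy":          {"care", "caring", "compassion", "kids", "families"},
    "specific details": {"wall", "border", "jobs", "taxes", "immigration"},
    "skepticism":       {"liar", "doubt", "fake", "distrust"},
    "hope":             {"hope", "better", "future", "change"},
}

def categorize(response: str) -> list[str]:
    words = set(response.lower().split())
    hits = [cat for cat, keys in CATEGORIES.items() if words & keys]
    return hits or ["uncategorized"]

responses = [
    "He will secure the border and bring back jobs",
    "I doubt anything she says, total liar",
    "Gives me hope for a better future for my kids",
]

counts = Counter(cat for r in responses for cat in categorize(r))
print(counts.most_common())
```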

However, Trump’s ads had something for everyone: they evoked an entire spectrum of basic human emotions, ranging from skepticism to hope, and triggered a subconscious, visceral response.

In fact, they were extremely effective in getting people to sit up, take notice, and even take action.

But did this messaging lead to higher engagement? To find out, we analyzed the social media conversations relating to both ads on Twitter and Facebook. The results: Trump’s specific and memorable messaging struck such a chord with the audience that it completely dictated and dominated the social media conversation.

From our Twitter analysis, one of the key themes of the election – immigration – exemplifies how much Trump and his supporters controlled the narrative through share of voice. We analyzed 624,539 tweets from 221,760 accounts over nine months and discovered that not only were Trump supporters much more vocal (as measured by number of tweets), but most of the content shared centered on Trump’s actual statements.
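
A share-of-voice tally of this kind can be sketched in a few lines. The keyword rule and the sample tweets below are a deliberate oversimplification of the actual nine-month, 624,539-tweet analysis.

```python
# Classify each tweet by which campaign it references, then tally shares.
from collections import Counter

TRUMP_TERMS = {"trump", "maga", "buildthewall"}
CLINTON_TERMS = {"clinton", "hillary", "imwithher"}

def side(tweet: str) -> str:
    words = set(tweet.lower().replace("#", "").split())
    t, c = bool(words & TRUMP_TERMS), bool(words & CLINTON_TERMS)
    return "both" if t and c else "trump" if t else "clinton" if c else "neither"

tweets = [
    "Trump said he will fix immigration #MAGA",
    "#ImWithHer all the way",
    "Trump vs Clinton debate tonight",
]

counts = Counter(side(t) for t in tweets)
total = sum(counts.values())
for s, n in counts.most_common():
    print(f"{s:8s} share of voice: {n / total:.0%}")
```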

This was further borne out by our analysis of the Clinton and Trump campaigns’ official Facebook pages. While Trump posted less to his page (66 posts versus 100 posts by Clinton), he achieved far more engagement, garnering substantially more likes, shares, and comments than Clinton.
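
The underlying arithmetic is simply interactions per post. The post counts below are from the article; the interaction totals are placeholders, since the raw figures were not published.

```python
# Engagement per post = (likes + shares + comments) / posts.
# Post counts from the article; interaction totals are placeholders.
pages = {
    "Trump":   {"posts": 66,  "likes": 1_500_000, "shares": 400_000, "comments": 300_000},
    "Clinton": {"posts": 100, "likes": 600_000,   "shares": 120_000, "comments": 90_000},
}

for name, p in pages.items():
    interactions = p["likes"] + p["shares"] + p["comments"]
    print(f"{name:8s} {interactions / p['posts']:>10,.0f} interactions per post")
```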

 

One of the dominant themes of the election has been the polarization seen across social media. Conventional wisdom therefore suggests that there would be little crossover between supporters of the two campaigns – conversations happening in echo chambers. While this was true on Twitter, it was not true on Facebook. Trump’s messaging was extremely successful in breaking through the silos: people commenting positively on Clinton’s Facebook page were almost eight times more likely to also comment on Trump’s page (albeit negatively).
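
One way to compute that crossover lift is to compare the rate at which one page’s commenters appear on the rival page against the overall commenter base. The sets below are toy data, so the printed lift will not reproduce the article’s eight-fold figure, and the choice of baseline is our assumption.

```python
# Crossover lift: P(comments on Trump page | commented on Clinton page)
# relative to P(comments on Trump page) among all observed commenters.
clinton_commenters = {"amy", "ben", "cat", "dan", "eve", "fay", "gus", "hal"}
trump_commenters   = {"amy", "ben", "cat", "dan", "ivy", "joe"}
all_commenters     = clinton_commenters | trump_commenters | {"kim", "lee", "max", "ned"}

p_cross    = len(clinton_commenters & trump_commenters) / len(clinton_commenters)
p_baseline = len(trump_commenters) / len(all_commenters)

print(f"crossover rate: {p_cross:.2f}, baseline: {p_baseline:.2f}, "
      f"lift: {p_cross / p_baseline:.1f}x")
```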

Trump’s messaging prompted action both from people who supported it AND from those who adamantly opposed it – thereby ensuring that every major conversation on social media was about Trump’s messaging.

So where does that leave us? Trump’s ads were much more potent. They directly motivated people on an emotional level, which in turn made them more memorable and inspired action. The ability to tap into raw human emotion is key to connecting with and influencing people. This is how an outspoken and controversial candidate like Trump, talking about highly emotive topics such as immigration, was able to connect with, mobilize, and at the same time antagonize a large spectrum of people with a simple message. People were more or less forced to choose sides and take action. This in turn meant that people across the political spectrum were actively engaging with, and thus exponentially increasing the reach of, Trump’s messaging on social media.

Neuromarketing: Identifying the Fact From the Fiction

Neuromarketing has seen its fair share of pseudoscience. How can you determine the real from the fake?

By Michelle Niedziela

One of the striking narratives that plagued 2016 was the emergence of fake news. With the decline of the newspaper and the growth of viral news, more people are getting their information from social media than from older, more reliable news sources. Many are quick to accept what they read online as fact, and many more don’t read past the headline or check the source before accepting the message. The growth in fake news has been so large that it may even have influenced the 2016 presidential election.

Fake news, however, is not the only problem. There has also been a surge in the spread of pseudoscience, ranging from the hilariously ridiculous (Bigfoot sightings) to the dangerous (homeopathic cures for cancer).

Unfortunately, the persistence of fake news and pseudoscience affects not only our entertainment, but can also have legal ramifications (http://www.telegraph.co.uk/technology/11834670/Woman-who-claimed-she-was-allergic-to-Wi-Fi-gets-disability-allowance-from-French-court.html).

Neuromarketing has seen its fair share of pseudoscience. There are no easy-to-use gadgets that can “read consumers’ minds.” The human brain is far too complicated to be reduced to a simple piece of plastic sitting on top of your head (that’s not to say that physiological measures can’t tell us something about consumers’ reactions to products and communications, but that’s not the same as mind reading).

If you are looking for a simple solution like that and are not interested in its legitimacy, let me redirect you here: https://www.google.com/search?q=mood+ring

Perhaps a great New Year’s resolution for 2017 is to be sure to think critically and dismiss fake news and pseudoscience.

But how can you identify neuro-fact from neuro-fiction?

My first piece of advice is to know that there is no ONE perfect tool for studying human response. Different research questions and settings require different methodologies and technologies. So if your research provider is suggesting that their widget can do everything anywhere, you are dealing with a widget salesperson who will only ever sell you a widget, not a scientist helping you to understand your consumer. And to that point, if your research provider cannot tell you the limitations of their widget, then they are not being honest with you.

But when seeing “scientific” news about neuromarketing, here are a few steps to help you to sort through the muck:

1. The use of Psychobabble

Psychobabble is the use of words that sound scientific but are not. Neuromarketers have a habit of tacking the word “neuro” onto the front of anything to make it sound like real neuroscience. The use of these neuro-words or neuro-brands is really no more than “neuro-hype.” Often these words are just a marketing scheme to get you to believe in a product or company.

This is why we at HCD prefer the term “Applied Consumer Neuroscience.” We believe it better describes the process of using a combination of neuroscience, psychology, and traditional consumer research methods to understand the consumer experience. Sure, it’s just a name, but we don’t believe that neuro-measures are meant to replace traditional research; rather, the addition of neuro-measures is an evolution and advancement of the already existing field of market research.

2. Reliance on anecdotal evidence

In place of published studies, many neuromarketing companies offer case studies and most do not validate their tools or methods with any scientific research. If you are not paying for a validated tool, then what exactly are you paying for?

Many people become interested in using physiological or neuropsychological measures because they believe these will be more accurate than traditional measures. They believe that participants won’t be able to lie as they might on a survey, or that difficult-to-articulate emotional reactions may be revealed through neuropsychological measures. And while that may be true, anecdotal evidence is not evidence. Any new measure, or new application of a neurological tool, must be validated before being used (and sold).

While case studies can be very informative and lead to great research ideas, thoughtful research must still be done to validate a methodology. In the world of pharmaceuticals, for example, this is critical: just because one participant improved more on a drug than on a placebo does not mean the drug should be approved. It still needs to be thoroughly tested; otherwise you risk relying on a false positive result.

Many neuromarketers provide anecdotal evidence as proof that their tool works. However, if your research provider cannot provide you with real evidence (published peer-reviewed papers, or at least blinded case studies with real statistical analyses), then you would do best to be cautious. Buyer beware.

3. Extraordinary claims in absence of extraordinary evidence

The human brain is a complicated organ, so complicated that it can’t be duplicated and many aspects of it are still not understood. Academic neuroscience, for example, is still trying to explain even simple, vital, everyday things we do such as eating (see recent publications here: https://www.ncbi.nlm.nih.gov/pubmed/?term=food+intake, at time of writing, 187,055 recent publications still can’t tell us why we eat or stop eating).

So when I see a claim that this or that tool or approach can “read the subconscious” or predict something as complicated as consumer behavior, I raise an eyebrow. Unless they can show you evidence that the measure is linked to the behavior, the claim is not predictive. It is imperative that neuromarketers do the background research to prove that their tools can be used in the specific ways they claim, rather than just in ways that sound interesting.

4. Claims which cannot be proven false

When making claims about neuro-methodologies, researchers often fall into the trap of hindsight bias. Hindsight bias (https://en.wikipedia.org/wiki/Hindsight_bias) is the research mistake of asserting that your finding is true and predictive after the event has occurred. It’s the act of seeing the final score of the Super Bowl and then telling everyone you predicted it beforehand. No one can prove that you didn’t, and it can make you seem very smart. But it hinders the scientific process of moving the neuromarketing field forward. If we are not using real findings and making real discoveries, then we are not accomplishing anything of value.

But more importantly, this doesn’t help our clients. The problem with this approach is that it doesn’t give credit to what applied consumer neuroscience is best used for: helping us to better understand the consumer. It’s not a replacement for current market research methodologies, so being “predictive” of something that could simply have been asked is not helpful. But when used as an addition to, rather than a replacement for, traditional measures, applied neuroscience can be a valuable complement to current research.

The question, then, is not whether neuromarketing could have predicted liking. If we want to know whether someone liked something, we can simply ask them. The better research question for applied neuroscience is “Why do they like it?”

5. Claims that counter scientific fact

Again, it’s not currently possible to “read the mind” with any tool. However, a recent academic study got close (sort of). Participants watched the same movie repeatedly over three months while undergoing fMRI scans. After three months of training on the same movie, researchers were able to identify which movie a participant had viewed by recognizing a similar pattern in brain activity. But this is not the same as “reading the mind”: they trained people to exhibit a response and then identified that response in testing. Further, certain patterns are identifiable as synchrony rhythms in brain activity driven by blinking, which is often cued by how scenes are cut together. Definitely not mind reading.

Brains are really complicated (neuro-understatement of the year). They control our breathing, eating, standing, walking… everything. So there’s a lot going on up there even when we don’t appear to be doing anything but sitting quietly and still. Now imagine the amount of activity happening while you are walking through a store, and how different your brain activity might look from another person’s as they walk through the same store. You might hear different sounds or notice different people; your experiences would be different, and so the activity in your brain would be different too. This makes studying this sort of behavior with neuroscience tools very difficult. Walking, breathing, and staying upright (balance) are very complicated things we do without having to consciously think about them, but they require a significant amount of brainpower, creating a lot of noise in the data if what you care about is not how well someone is walking but what they are seeing in a store. Real-time, naturalistic experiences are not well suited to neuro-measures and require a great deal of attention to proper research design. This is the fact of the situation, and if your research provider ignores these facts, again, buyer beware.

6. Absence of adequate peer review

One of the biggest problems in neuromarketing is the absence of peer review (though some are trying to correct this). The scientific method is about testing hypotheses, but even more, it’s about replicating results and presenting your research to the larger scientific audience for critique. However, criticism is not something that many in the neuromarketing community encourage, and the lack of a legitimate scientific peer review process for proposed methodologies has allowed many companies to get away with peddling non-validated widgets unchecked.

Because neuromarketing companies don’t provide the key details of the analysis techniques they use, it’s hard to evaluate them objectively.

7. Claims that are repeated despite being refuted

If it sounds too good to be true, it probably is.

While it would be amazingly convenient to measure neuro-responses while a consumer walks through a store, this simply is not a valid methodology. And while it would be great if we could really read the mind, it’s simply not possible. As discussed earlier, the brain is complicated, so when we measure it we need to use validated tools and thoughtful research design. It is possible to use applied neuroscience to better understand consumer response.

Making claims from brain response is highly difficult. Labeling a set of brain data as a signal of attention or anxiety based on one set of data is similar to saying “tomatoes are red, this apple is red, therefore this apple is a tomato” and continuing to state that an apple is a tomato despite evidence to the contrary.

We see this in neuromarketing frequently, probably due to the lack of a strong peer-reviewed scientific process and the drive to sell methodologies. For example, while academic research has found that social setting – whether a person is alone or in the presence of others – can influence facial emotional response (see: http://psycnet.apa.org/journals/dev/32/2/367/, http://psycnet.apa.org/journals/emo/1/1/51/), many neuromarketers use facial coding in group settings such as focus groups.

Unfortunately, the tendency of neuromarketers to keep methods secret hampers serious evaluation. This does not, however, mean that all the data is bad. With a properly designed study, it is possible to look for meaningful (statistical) changes between stimuli or products, as well as meaningful changes from baseline measures. And it’s possible to make inferences from those changes in a well-designed study, but those claims need to be made cautiously and be backed up by research.
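
As a sketch of what a “meaningful change from baseline” check can look like, here is a paired t-test between baseline and stimulus readings for the same participants. The readings are fabricated, and a real study would also need adequate sample sizes and corrections for multiple comparisons.

```python
# Paired comparison of a physiological measure: baseline vs. stimulus,
# same participants. Fabricated data for illustration only.
from scipy import stats

baseline = [0.52, 0.48, 0.51, 0.47, 0.55, 0.50, 0.49, 0.53]
stimulus = [0.61, 0.55, 0.58, 0.50, 0.66, 0.57, 0.54, 0.62]

t, p = stats.ttest_rel(stimulus, baseline)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Meaningful change from baseline (at alpha = 0.05)")
else:
    print("No evidence of change; don't over-interpret")
```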

So let’s all resolve to do better in 2017.

What Makes a Great Super Bowl Ad?

Here are some hard facts about what makes Super Bowl ads different from other TV ads and how to make yours stand out.

By Michael Wolfe

Advertisers have a lot at stake when they invest upwards of $5 million for a single Super Bowl ad.  You’d expect their ad creative to be superb, but on average, many Super Bowl ads underperform compared to everyday TV ads.  Here are some hard facts about Super Bowl advertising, what makes it different from other TV ads, and how to make sure yours breaks through and generates the buzz that builds your brand.

The Underlying Metric for Ad Effectiveness

For this exercise, we are going to define ad effectiveness using ad-creative scores from ABX (Advertising Benchmark Index).  We selected this company’s metrics because we have proven, through advanced Marketing-Mix Modeling, that the ABX Index is directly linked to brand performance and retail sales.  This linkage, we believe, makes their ad scoring system more relevant and credible.  The data sourced here covers the 2013, 2014, 2015, and 2016 Super Bowls.

The ABX Index

The ABX metric, or index, is based on a general-population stratified random probability sampling design, with respondents evaluating ads in all media types including TV.  The metric covers five critical functions and aspects of an ad: (1) correct awareness, or linkage of the ad with the brand name; (2) whether the ad’s core message was clear and understood; (3) whether the ad has a positive impact on brand reputation; (4) whether the ad was deemed relevant to the customer; and (5) whether the ad elicited some action or behavior such as website visits, discussions, store visits, or purchase intent (Call to Action).  A sixth element, ad likeability or dislike, is also measured but is not part of the overall ABX score, because evidence does not link it with actual brand performance.  Unfortunately, and all too often, the most important and sometimes sole criterion for selecting a particular Super Bowl ad is subjective likeability.  Evidence shown in this paper indicates that this often leads to the development of many mediocre and poor-performing ads.
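
To make the structure concrete, here is an illustrative composite built from the five components above. ABX’s actual weights and scaling are proprietary, so the equal weighting, the 0–100 component scales, and the norm value are all assumptions of this sketch.

```python
# Illustrative composite: equal-weighted mean of the five components,
# indexed so that the norm score maps to 100. Not ABX's actual formula.
COMPONENTS = ["brand_linkage", "message_clarity", "brand_reputation",
              "relevance", "call_to_action"]

def composite_index(scores: dict[str, float], norm_score: float) -> float:
    """Equal-weighted mean of component scores, expressed so norm = 100."""
    raw = sum(scores[c] for c in COMPONENTS) / len(COMPONENTS)
    return 100.0 * raw / norm_score

ad = {"brand_linkage": 72, "message_clarity": 65, "brand_reputation": 58,
      "relevance": 60, "call_to_action": 40}  # hypothetical 0-100 scores
print(f"index = {composite_index(ad, norm_score=54.0):.0f}")  # >100 beats the norm
```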

Overall Super Bowl Ad Performance

Over the last four Super Bowls, a total of 290 ads were evaluated, or about 73 per event.  Looking at overall performance by event, the 2015 Super Bowl had the strongest ads, while the 2013 Super Bowl had the weakest.  All of the Super Bowl averages are below the ABX TV norm of 109.

The top scoring individual ads for these Super Bowl events can be seen below.  Of the top 10 ads, all but three are food products or restaurants.  Interestingly, of the traditional heavy spenders, no auto insurance, beer or soft drink ads made the list; and only one auto brand did so.

When we looked at what made these ads stand out relative to other ads, respondents rated each of them (1) very high on positive brand reputation, (2) strong on positive purchase intent, (3) strong on intent to talk about the ad and desire to see it again, and (4) low on dislike.  These top ads scored over 2X the normative levels in all of these areas, demonstrating the core benefit brands achieve by making it into this exclusive club.

Super Bowl Ad Differentiators

We are all pretty aware that Super Bowl TV ads tend to be different from the run-of-the-mill ads we see every day.  Certainly, companies find Super Bowl advertising sufficiently attractive and beneficial to warrant large cost premiums.

When we look at the key ingredients of ads, as perceived by the TV audience, certainly some of these benefits stand out.

As shown above, Super Bowl ads stand out because (1) people talk about them and are interested in seeing them again, (2) they have a higher level of “likeability,” and (3) they have a higher incidence of positively impacting brand reputation.  For brands that need to build credibility and establish themselves with a larger audience, these ads do deliver.  However, the evidence is not conclusive that an ad’s likeability is connected to its effectiveness or to business performance.

Next, we contrast how top-performing ads differ from bottom-performing ads on the same criteria (we selected the top and bottom 20 ads).  As you can see below, the overall differences are substantial and significant across the board.  Of particular note are the elements with the largest gaps: top-performing ads show extreme positive differences on factors such as positive shift in brand reputation and positive purchase intent.  In addition, the poorest-performing ads stand out as being significantly more “disliked” than the top performers.

What ingredients are most important in Super Bowl ad effectiveness?

Using logistic regression, we built a model to determine the relative importance of key components or features of an ad to its overall ABX creative-effectiveness score.  The results below show the relative importance of the drivers of Super Bowl ad effectiveness.
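
For readers curious about the mechanics, here is a minimal sketch of that driver-importance approach: regress a binary “effective ad” flag on component scores and compare standardized coefficients. The data is simulated and the feature set is an assumption; the article’s actual model specification is not published.

```python
# Sketch: logistic regression on simulated ad-component scores, with
# standardized coefficients read as relative driver importance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 290  # number of ads evaluated across the four Super Bowls
X = rng.normal(size=(n, 3))  # brand_linkage, message_clarity, call_to_action
# Simulate effectiveness driven mostly by brand linkage (our assumption).
latent = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = latent + rng.normal(scale=0.5, size=n) > 0

model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)
features = ["brand_linkage", "message_clarity", "call_to_action"]
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:16s} relative importance: {abs(coef):.2f}")
```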

Creating great ad creative is not easy.  It is always a challenge and certainly, there is no fixed formula with respect to creating great Super Bowl TV ads.  However, there are certain fundamentals that simply need to be achieved for any Super Bowl ad to be truly effective and impactful.

  1. Creating awareness and brand linkage. This is the most important factor. Good ads must cement the brand identity. This is fundamental; without it, the ad will not be effective.
  2. Message Clarity. Each effective Super Bowl ad must also deliver a message that is clear and understandable to the audience.
  3. Developing a clear and responsive “call-to-action.” The ad must elicit some sort of brand-affirmative behavior, whether a direct purchase or another positive behavior such as store visits, word-of-mouth conversations, visiting the brand website, or recommending the brand to others. All of these behaviors affirm the ad’s impact on the audience and end consumer.

In sum, evidence shows that Super Bowl advertising has some specific and unique benefits for advertisers.  Whether these are sufficient to generate a positive ROI, however, can probably only be known case by case.  Certainly, the ads that rise to the top of the effectiveness list generate fame and limelight.  To achieve this lofty level of effectiveness, however, an ad must generate strong fundamental ratings on brand linkage, message clarity, and its ability to generate responses to defined “calls-to-action.”

Likewise, we see some important and interesting differences between top- and bottom-performing ads on the ABX scale.  In particular, top-performing ads show much higher levels of positive shift in brand-reputation perceptions and much higher ratings on positive purchase intent.  For the few brands that can climb to the top of the chain for Super Bowl ad effectiveness, the benefits appear substantial, and the likelihood of a positive ROI on their investment is high.

By contrast, not all Super Bowl ads are strong performers.  In fact, slightly more than 44% of all ads fall below “all advertiser television norms” by more than 10% and score less than half the normal rate on such critical measures as positive purchase intent.  The case can therefore certainly be made that many Super Bowl ads fail to achieve a positive ROI and can probably be considered wasted spend.

The lesson here is that there are certain fundamentals that Super Bowl ads must deliver.  Achieving them requires fact-based tools and deep insight into what ingredients go into a good Super Bowl ad.  mjw@global-analytics-partners.com

Jeffrey Henning’s #MRX Top 10: AI Reshaping CX, MRX, and IT

Of the 4,126 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted...

By Jeffrey Henning

Of the 4,126 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…

  1. Britons’ Predictions for 2017: Brexit, Economic Concerns, and a Royal Engagement? – A survey of 1,003 Britons in mid-December found that 79% believe the UK government will trigger Article 50 in 2017 to start the formal process for leaving the European Union; 66% believe inflation will increase and 45% think unemployment will rise; and 52% think Prince Harry will get engaged.
  2. 6 Ways Artificial Intelligence Is Reshaping Customer Experience (CX) – AIs will target marketing based on consumer behavior, provide faster customer service through chat bots, facilitate orders through intelligent agents, leverage the Internet of Things for self-service, and push human agents into tier-2 service and support.
  3. The Latest GRIT Report Has Arrived! – The newest GRIT Report surveyed 1,583 market researchers (80% suppliers, 20% buyers) in the last two quarters of 2016 about key issues driving the industry, including the role of Artificial Intelligence, client-satisfaction with suppliers, planned usage of non-MR data sources, and hiring profiles for researchers in the future.
  4. Five Market Research Trends for 2017 – Ray Poynter identifies five key trends driving the research industry this year: embracing automation; finding the story in the data; translating insights into action; incorporating implicit measurement; and experimenting with Artificial Intelligence.
  5. IDC: Top 2017 Predictions – Among IDC’s predictions for IT: by 2019, 40% of digital transformation initiatives and all Internet of Things initiatives will be powered in part by AI; 67% of enterprise IT infrastructure will be cloud-based.
  6. In the Blink of an Eye – Writing for Research Live (free registration required), Jane Bainbridge reports on a Journal of Consumer Psychology article that reveals that ease of eye movement when examining a product is typically misinterpreted by consumers as liking the product itself.
  7. NPD and Ipsos Veteran Deitch is New Leader for Bakamo – Bakamo, a social media monitoring firm, has appointed Jonathan Deitch as Chief Executive.
  8. The Future of Market Research Data Collection – Research Now CEO Gary Laben writes about moving beyond screeners and panel profiles in the form of long questionnaires towards profiles based on mining behavioral data.
  9. How A Global Insurance Brand Turns Customer Insights Into Long Term Success – Writing for Forbes, Steve Olenski interviewed Saras Agarwal of AXA about how the firm has innovated within the insurance industry by focusing on customer insights.
  10. Qual: There’s An App For That – Sean Campbell of Cascade Insights interviewed Over the Shoulder co-founder Ross McLean about their app, which enables researchers to capture consumers’ qualitative and media feedback during decision-making processes.

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. Only links with a research angle are considered.
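
A toy version of that tallying rule might look like the following. Using follower counts as the influence measure is an assumption, as is the related-accounts list; the note does not specify the underlying metric.

```python
# Tally a link's influence as the summed influence of accounts sharing
# it, skipping retweets that come from closely related accounts.
shares = [
    {"link": "grit-report", "account": "alice", "followers": 12000, "retweet_of": None},
    {"link": "grit-report", "account": "bob",   "followers": 3000,  "retweet_of": "alice"},
    {"link": "grit-report", "account": "carol", "followers": 8000,  "retweet_of": None},
]
related = {("bob", "alice")}  # e.g. two accounts run by the same company

influence: dict[str, int] = {}
for s in shares:
    if s["retweet_of"] and (s["account"], s["retweet_of"]) in related:
        continue  # ignore retweets between closely related accounts
    influence[s["link"]] = influence.get(s["link"], 0) + s["followers"]

print(influence)  # {'grit-report': 20000}
```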

Trends in Insights from Research with Moms

Incorporating the voices of moms into relevant product development may help brands resonate more successfully with this powerful consumer force.


By The GutCheck Team

Oh, moms; where would we be without them? The person we each have to thank for life on this earth is also the consumer whom many brands have to thank for keeping them in business. Though it may seem sexist to saddle women with the childcare, clothing, and grocery shopping, the reality is that moms are often in charge of these household-related purchases. (After all, she does know best.) But the busy life of a mother is far more than the sum of her purchases, so brands looking to appeal to this crucial—yet often hard to reach—target audience must first gain a better understanding of how their products fit into the lifestyle, values, and functional needs of moms everywhere. Below are some of the most pertinent and eye-opening consumer insights we’ve gleaned from qualitative research targeting parents and moms in particular. Incorporating the voices of moms into relevant product development may help brands resonate more successfully with this powerful consumer force.

Moms Are Highly Skeptical of Health-Conscious Food Claims

When it comes to eating better, moms are looking out not only for their own well-being, but the well-being of their children and loved ones as well. In our Consumer Packaged Goods-centered market research exclusively targeting women with children in the household, we found that moms define healthy foods most often by what is left out: artificial ingredients, GMOs, sodium, fat, etc. This means they’re attracted to packaged food products with minimal processing, and know that these can usually be found in the perimeter of the grocery store.

“Even sugar-free and fat-free can mean that chemicals have been used to make these products this way. GMO is another definite red flag.” – Female, 44, Charlotte Hall, MD

But respondents reported a lot of confusion surrounding the terminology found on health-conscious food packaging, including vague terms such as “GMO,” “all natural,” and “organic.” Though attracted to the implications of this front-of-box language, moms place the majority of their trust in nutrition fact labels, verifying any supposed benefits therein. Though there are certain telltale signs of a healthier food product, including bright colors and nature-based imagery, moms agreed that certification and/or evidence that reinforces labels would greatly help assuage their general mistrust of CPG health claims. A compilation of moms’ tenuous understandings of health buzzwords can be found in the table below, and further implications for food product positioning and packaging can be found in the full report.

And They Prefer Creative Games That Can Grow with Their Kids

Keeping children entertained can feel like a near constant battle for parents, who would also love for their kids to somehow benefit mentally from their playtime. So when your child starts begging for a video game, how do you find one that’s stimulating, not too violent, and requires some brainpower? Our investigation into the preferences of parents of children ages 7-12 years old revealed that Minecraft is valued for its customizable experience and intellectual stimulation.

“Both my daughters always pick Minecraft first. They rush to their Kindles as soon as they are allowed.” —Heather, Children younger than 4, 7-9, 10-12

Parents found that Minecraft’s collaborative nature encourages interaction with friends and family and enhances creative and strategic development. Minecraft is also praised for its lack of explicit violence and its promotion of problem-solving skills—qualities parents feel are lacking in most video games. Yet perhaps the most valuable characteristic of Minecraft is that it can be accessed from multiple devices and is constantly being redesigned and rebuilt by the players, making for a game that grows with kids and always feels fresh. Parents believe Minecraft should pursue even more customizable experiences, including the way packages are priced. Parents also shared more about their attitudes and concerns about electronic games in general, found in the full executive summary.

Moms Want Toys That Will Last—Just Like the Ones They Hand Down

In our exploratory investigation of what parents look for when toy shopping for children six years old and younger, we discovered that oftentimes the best toys were actually bought for the parents—when they were kids, that is.

“My girls would and have played with all the toys I enjoyed. I kept many of my ponies and Barbies. Each of my girls has also revived a cabbage patch doll.”
– Female, 36, IA, children aged 10, 13, 16

Parents enjoy handing down sentimental favorites like Barbies, action figures, and toy cars; they are also willing to buy newer versions of older classics. This is in keeping with parents’ general aversion to fads when it comes to toys, placing a premium instead on original, unique finds that will make for lasting memories. Legos were deemed the favorite by parents and kids alike, namely for their unisex appeal, promotion of imagination, educational stimulation, and position as a proven classic.

“I make sure they are age appropriate. I look at how sturdy they are. I do not like to buy plastic junk. Also, whether or not they will outgrow them quickly. I like to buy toys that allow them to use their imaginations and be creative.” – Female, 40, FL, children age 4, 6, 8

Overall, when shopping for toys, moms are in it for the long haul: they want something that will not only last for years, but also hold their child’s attention, emphasizing creativity and/or cognitive development, as well as fostering social interaction and/or communication. Building blocks, play kitchens, art sets, and Big Wheels were commonly mentioned, while screen time was often limited, and dolls/action figures with unrealistic bodies were widely criticized. Parents lament that in the age of electronics, cheap plastic, and franchising, toys that meet the criteria above are often hard to find, so brands would do well to keep the standards of those who are buying the toy—not just playing with it—in mind. To learn more about what parents look for when shopping for their little ones, check out the report summary here.

And They’re Reluctant to Blow Their Cash on a Halloween Costume

Halloween is usually a blast for kids, but it can be a stressful, expensive nightmare for parents. When we asked parents for their thoughts on Halloween costume shopping, they insisted that having fun is the primary motivation, but staying on budget is a challenge. Even with respondents split between those who prefer to DIY and those who prefer to shop, all agreed that pre-made costumes from retailers are almost always overpriced.

“Cost is the biggest factor. Typically, if you purchase a pre-made costume, it’s flimsy material (not ideal for Ohio weather in late October), and it’s pretty costly. I can’t bring myself to spend $30 on an outfit [that] they will freeze in and wear only once. That would be $120 for my kids to wear for one night.” – Female, 32, DIY Group

Each group also had pain points specific to their approach. The DIY group enjoys the creation of the costume, but gets frustrated by the time it takes to assemble, especially if they can’t find the supplies needed, or the end product doesn’t turn out right. Meanwhile, the shoppers enjoy hunting for the right costume with their children, but get annoyed searching for the right size, as well as trying to find decent quality for a reasonable price. But both groups are willing to shop most anywhere to find what they need, including big-box stores, Halloween pop-ups, and fabric/craft stores, both online and in-person. And both draw their costume inspiration from a wide variety of pop culture resources, though the final idea comes to them in different ways.

Overall, both groups are looking to minimize costs as much as possible, with DIYers re-purposing items they own, and shoppers putting more effort into comparing selections and prices. All respondents requested more money-saving options from retailers, like discounts and sales, and one consumer even suggested a costume exchange of sorts. You can read the full executive summary to learn more about which aspects of Halloween shopping parents enjoy and which they would like to improve.

Whether you’re looking to help your shopper marketing resonate or just want to boost your market intelligence for future product innovation, keeping the voice of moms alive and active will help your brand meet their forward-looking, cost-conscious needs.

The Solution for Polling Accuracy: Less Logic, More Xenophobia!

The recent US election landed a crushing blow to the research industry’s credibility. So what’s to be done?

By Nick Drew

So, it’s happened again. After the British general election in 2015 and the Brexit referendum, now comes the latest blow to the reputation of the polling industry with Trump’s ‘unexpected’ win in the US presidential race. And, as ever, the opprobrium has already started, with the world seemingly placing the blame at the feet of those pesky pollsters. “Ohhhh, they got it wrong again!”, “Can’t they do anything right?!”.

It’s enough to make me wish I’d chosen a different career; one where I could quietly do a slapdash job, safe in the knowledge that when my and my colleagues’ failings came to light, nobody would care, nor suggest that they knew better than us. Something like working in a telephone help centre; being a quality-control checker on German diesel cars; or a rocket scientist. Around SpaceX’s latest pre-takeoff rocket explosion, there seemed to be quite a lack of people asking, really, what are they all doing? It’s not that hard.

But after this latest crushing blow to the research industry’s credibility, and assuming that people are right – that interpreting the polling figures differently would have led to the right prediction – what’s the problem?

Well, there’s a clear trend of unconscious observer bias. A recent WSJ article demonstrated how the same set of polling figures can lead to quite different conclusions, with the specific predicted outcome depending upon the statistical models and personal interpretations applied by a pollster. And unconscious bias plays a large role in this.

Researchers are fairly smart people: educated, with a head for numbers, reasonably articulate, and able to understand the idea of a multi-cultural world. They’re also generally employed, and the industry is becoming, on average, younger and more female over time. But these very attributes are inherently limiting, and shape the way researchers think. Polling firms didn’t predict a vote for Brexit because to any logical person, the idea of the UK seceding from its continent is utterly ridiculous. Likewise, a Trump victory wasn’t widely predicted because the idea of a xenophobic bigot winning the most important job in the world through a popular vote is unfathomable – at least to logical, educated, reasonably articulate people who can understand the idea of a multi-cultural world.

So what’s to be done? Fortunately, the answer is clear and, indeed, easy. In order to break out of this limited mindset, and become better at predicting elections and referenda, research firms need to have greater diversity in their ranks. Forget women and ethnic minorities (those are so last year): the research industry needs to be employing more angry, old white people; those who didn’t finish school; men who like to grab women by the unmentionables; those who don’t like to talk through their problems, but instead want to rage at how the system is fixed. Most importantly, perhaps, we need to do better at hiring people who think that foreigners are to blame for everything, and for whom a weird mix of national isolationism and imperialism provides the ideal solution to all the world’s problems. Only then can polling firms break away from the tyranny of the logical approach, and start to better understand and more accurately predict the views of the electorate.

Nick Drew is VP, Strategy & Insights at Fresh Intelligence. The views above are his own and not intended to be taken seriously.

Which 15 Cool Companies Get A Shot At $20K In Cold Cash?

Here are the final results from the open voting phase of the most recent Insight Innovation Competition, and details on what happens next.

 

The Insight Innovation Competition has been one of my absolute favorite initiatives since Ray Poynter and Pravin Shekar suggested it as part of the very first Festival of NewMR six years ago.  The idea of developing a research-centric innovation competition for young companies to gain exposure, support, and capital was something new for our space, but from its inception the response from the industry has been phenomenal. To date, over 150 companies have entered and 57 have made it to the final round, with 10 going on to win the prize.  Many of the participating companies have received funding or been acquired, with even more going on to organic success through new clients and partners.

In short, the IIC is making a difference for all stakeholders in the marketing insights space, and that has always been the goal. We’re thrilled it continues to evolve and deliver on that promise!

In this most recent round, 15 companies officially threw their hats in the ring, and it is an amazing group of participants pushing the boundaries of innovation in market research.

Here are the final results from the open voting phase. Click here to go to the site and check out each of these great entrants!

Idea Votes
Innovative Library Management Using Holographic Projection Technology 252
PROMPT: Predictive Test Marketing 243
delvv.io 208
Conjoint.ly 150
EyeSee Research – Neuromarketing in cloud 113
CAVII-Retail, powered by OSG ASEMAP and IBM WATSON 63
NeuronsHub — A Dashboard Solution for Neuroscience Research 40
Seedling 34
weseethrough 28
Cross-platform TV and radio attribution analytics 25
Plotto 20
Cognitive Brand and Consumer Insights from Unstructured big data 8
Unomer 5
Revuze – Analyst in a Box 4
Branded Mobile Communities Engage Audiences with Content and Reward Loyalty 4

The crowd voting is just the first part, though. Five finalists and one wildcard will now go on to the judging round at IIeX EU in Amsterdam, and here is what they are competing for:

WINNER PRIZE
  • $20,000 cash award.
  • Exposure to a large international audience of potential prospects, funding partners, venture capitalists and angel investors
  • An invitation to present at the next Insight Innovation eXchange
  • An interview to be posted on the GreenBook Blog, viewed by 36,000+ industry professionals per month
  • An opportunity to work with successful senior leaders within the market research space

So what happens now?

The companies that will go on to the Judging Round and their chance to win $20k and all of the other benefits of making it to the finals are:

Entrant Votes
Innovative Library Management Using Holographic Projection Technology 252
PROMPT: Predictive Test Marketing 243
delvv.io 208
EyeSee Research – Neuromarketing in cloud 113
CAVII-Retail, powered by OSG ASEMAP and IBM WATSON 63
WILDCARD: weseethrough 28

 

On February 20, 2017, as part of Insight Innovation eXchange Europe 2017, the finalists will present their concepts in a live event to a panel of judges comprised of the competition’s sponsors. Each presentation will last 10 minutes: 5 minutes to pitch and 5 minutes for Q&A from the judges. The panel will be moderated by Gregg Archibald, Senior Partner at Gen2 Advisors, and the judges include:

  • Vijay Raj (Unilever)
  • Jeff Krentz (Kantar)
  • Reineke Reitsma (Forrester)
  • Melanie Courtright (Research Now)
  • Dan Foreman (Hatted)

Using a 10-point scale for each category, judges rate each presentation on:

  • Originality of concept
  • Presentation quality
  • Market potential
  • Scalability
  • Ease of Implementation

On February 21st, 2017, we’ll reveal the scores. The highest final score wins. The winner takes home the pot and chooses which of the judges they would like to engage with afterwards as a mentor.

A BIG thanks needs to go out to our IIC sponsors who fund the prize:

The judges have their work cut out for them. Each of these entrants has immense potential, and any of them could easily take home the prize! But there can be only one who will take their spot alongside past winners of the IIC:

The other five will be in good company as well, joining the 50 other finalists who have gone on to great success even though they did not win the competition:

 

The other companies that entered won’t get a chance to present to the judges and win the prize. But we believe everyone deserves as much attention as possible and should still have a chance to network with the potential clients, partners, and investors at the event. So we are working on creating some additional session space in the agenda for a few of the runners-up, giving them an opportunity to make short presentations on their capabilities and business use cases to attendees.

It’s not too late to grab your ticket to IIeX so you can experience these (and many more!) great innovative companies firsthand. Don’t miss out on meeting the companies that will be driving the future of the industry and exploring how they can work with your organization to deliver insight innovation and impact!


The 3 Generations of Social Listening & Analytics Tools

What differentiates the more than 1,000 social media monitoring tools that are currently available?

By Michalis Michael

I am writing this blog post as I travel at 300 km/h on the Eurostar from London to Brussels on a Sunday afternoon. I am heading to my second consecutive appearance as a speaker at LT-Accelerate, a conference about language technologies, not one of the usual market research conferences I attend.

Last year at LT-Accelerate I spoke about rich analytics for social listening and stressed the importance of semantic analysis and accuracy; this year I will describe what differentiates the more than 1,000 social media monitoring tools currently available.

Looking at this from a market research and customer insight perspective, we categorised the social listening tools into three generations:

  • GEN 1: sentiment accuracy below 60%, search-based topic analysis, limited attention to noise elimination, and automated sentiment analysis in usually only one or two languages
  • GEN 2: sentiment and semantic accuracy over 75% in any language, an inductive approach to reporting topics of conversation, and significantly reduced noise (less than 5% irrelevant posts)
  • GEN 3: everything Gen 2 tools can do, plus emotion detection, automated image analysis for brands (theme and possibly sentiment), guided integration with consumer tracking surveys and other data sources, and user profiling

If you want to know which generation your current social media monitoring tool belongs to, all you need to do is ask your vendor what their sentiment and semantic accuracy is, and whether they can detect emotions and analyse images for insights.
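
That vendor triage reduces to a few rules, as in this sketch. The thresholds follow the generation criteria listed above; the attribute names and the simplifications (for example, ignoring the noise criterion and the 60–75% accuracy gap) are ours.

```python
# Rough triage of a social listening tool into Gen 1/2/3, based on the
# vendor answers suggested above. Simplified: noise levels are ignored.
def classify_tool(accuracy_pct: float, detects_emotions: bool,
                  analyses_images: bool, survey_integration: bool) -> str:
    if (accuracy_pct >= 75 and detects_emotions
            and analyses_images and survey_integration):
        return "Gen 3"
    if accuracy_pct >= 75:
        return "Gen 2"
    return "Gen 1"

print(classify_tool(80, True, True, True))     # Gen 3
print(classify_tool(78, False, False, False))  # Gen 2
print(classify_tool(55, False, False, False))  # Gen 1
```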

The main reason I go to conferences such as this one is to demonstrate thought leadership in the field of market research and customer insights, with the hope that prospective clients, partners, and vendors will come forward and initiate conversations that could develop into mutually beneficial deals.

Last year only half of the conference delegates showed up, because of the terrorist attack that had happened in Paris. Brussels went on high terrorist alert the Sunday before the conference; the prudent thing to do was to stay at home and switch to a Skype presentation, as some speakers did. My take was that a city is at its safest when it is on high alert, so I decided not to change my plans. Indeed, when I arrived at the train station last year, the streets on the way to my hotel were deserted apart from armed soldiers. It was eerie, but funnily enough it felt quite safe.

So here I am again this year, on my way to Brussels Central station, and in the absence of a red alert due to terrorist threats I somehow feel less safe. I am making a mental note to remain vigilant and pay attention to what is going on around me; to look out for any suspicious behaviour, in other words.

Enough reminiscing; back to the essence of this post. I am sure there are other meaningful ways to categorise social listening tools, and I would be very interested to hear how other people classify them. One plausible way is by each tool’s use case. Another is by the target customer or department the tool was created for, such as:

  • PR
  • Communications
  • Operations
  • Customer Service
  • New Product Development
  • Customer Insights

In my opinion, around 98% of the tools currently on the market belong to Gen 1, around 1% to Gen 2, and only a handful to Gen 3; I would not be in the least surprised if even fewer meet all the Gen 3 criteria. Clearly, only Gen 2 and Gen 3 tools are suitable for market research and customer insights. Gen 1 tools are disqualified from the get-go, if nothing else because of the noise (irrelevant posts) that gets analysed and reported to the user as relevant.

How do you classify social listening tools? Please feel free to share your approach with me on Twitter @DigitalMR_CEO.

Growing the Industry by Funding More Research – Part Five

Collaborata is the first platform that crowd-funds research, saving clients upwards of 90% on each project. We’ve asked Collaborata to feature projects they are currently funding on a biweekly basis.

By Peter Zollo

Editor’s Note: Welcome to our next post featuring two insights projects currently offered on Collaborata, the market-research marketplace. GreenBook is happy to support a platform whose mission is to fund more research. We believe in the idea of connecting clients and research providers to co-sponsor projects. We invite you to Collaborate!

Collaborata Featured Project #1:

“What Really Happens in the Produce Aisle: Mobile Shop-alongs”

Purpose: This study will uncover true “in-the-moment” purchase decisions made in the produce aisle. You’ll learn the degree to which price and promotions, organic, non-GMO, appearance, displays or other variables impact the actual produce purchase.

The Pitch: This quantitative mobile shop-along, leveraging geo-fencing technology, will capture what consumers are really doing in the produce aisle. You’ll learn if they stick to what they claim is important to them, while gaining rich insight into the purchase-decision tree, barriers, and drivers. Because of this study’s unprecedented scale, you’ll learn if behavior differs by type of store and the generation of the consumer. Become a co-sponsor and get input into the project now! Check out this video to learn more.

Who’s this for: Produce boards, brands, and retailers

Who’s behind this: Cooper Roberts Research, a full-service market research firm that has conducted close to 100 produce studies.

For details on becoming a co-sponsor: Click here or email info@collaborata.com

Collaborata Featured Project #2:

“What’s Hot & Not in the Digital Economy”

Purpose: The first wave of this new qualitative research series, which will include 8-10 mini-groups based in San Francisco, will distinguish between what’s truly exciting consumers in the digital economy and what they see as simply hype.

The Pitch: Tap into tech-forward consumers on a quarterly basis to see why some new ideas are being adopted and others are not. Your company will participate via a live streaming link and receive a Rapid Output Report and consultation from the researchers. This first wave will provide new insights on how consumers value and prioritize digital wallets, payment tools, and share-economy apps.

Who’s this for: Tech companies developing new consumer products and tools, and all brands needing to better understand what consumers want when it comes to digital payment.

Who’s Behind This: Scoot Insights is a quick turnaround qualitative consultancy offering a streamlined methodology to deliver insights faster without compromising on quality.

For details on becoming a co-sponsor: Click here or email info@collaborata.com

Data: It is…ALIVE!

It’s not data that should drive marketing; it’s the needs of your product.

By Dr. Stephen Needel

So help me God, I thought we had killed it. The idea that Big Data, in and of itself, is something to embrace, that is. For two years now, the discussion has been shifting away from why you MUST have Big Data (and invest millions to have it) and hire a fleet of data scientists to analyze it (because it’s too complicated for the average researcher). Instead, we started talking about small data, which might be a piece of big data, and how to use it. In many forums, including GreenBook and ESOMAR, I argued that it’s not data that should drive marketing; it’s the needs of your product.

And then today, I pick up the November issue of Quirk’s (sorry Steve, I’m a month behind), and there’s the lead article telling us we have to be data-driven or we’re doomed to failure. The key points of Lawrence Cowan’s missive (http://www.quirks.com/articles/2016/20161105.aspx) are:

  • Data is one of the most important aspects of achieving a competitive advantage.
  • The ultimate goal is to create a business where data is leveraged to create real value (as opposed to fake value, I guess).
  • Data is a basic requirement for business, not a cost item.
  • You need a culture of “data-driven-ness” where you have to promote, train, and enforce the use of data (I’m picturing the corporate data police state when I read this).

I beg to differ. A lot. With all due respect to my friends who sell data for a living, you all mostly do a great job and provide a useful product. But data is data and data is not going to save a bad product or a bad company. Such a focus on data strategy and a culture of “data-driven-ness” across the company, as the author suggests, diverts attention from what is really important for businesses to thrive.

What’s really important is understanding where your product fits in the universe. Okay – maybe not in the universe, but in the store where people buy it, as part of a category of similar products. In these days of dwindling research budgets, data acquisition needs to be a focused activity. Otherwise, you are left with mounds of unused or unusable data that is not getting you the information you need.

We get to this focus by having a “theory” about your brand. I put “theory” in quotes because it does not have to have all the formal aspects of a scientific theory; its one formal requirement is that it has to be true. This theory will tell you:

  • Why shoppers buy your brand.
  • Why shoppers don’t buy your brand.
  • How sensitive your brand is to various marketing activities.

It is that simple. Once you know the answers to these questions, your marketing is dramatically simplified and your energies can be focused elsewhere. You might let this product go on autopilot. Or you might focus on improvements targeted to non-buyers. Or you might try and come up with some creative marketing that hasn’t been tried before. But most important – you don’t have to focus on data every day. Your knowledge gaps will tell you what types of data you need – and in a multi-brand company, that’s likely to be different for each brand. Your research needs will be focused on testing ideas generated by what you know – and what you don’t know.

We are not advocating that companies should ignore data. But they should not be data-driven either. The data comes from research needs which come from information needs which come from the brand theory – not the other way around. Let’s kill this idea before it once again stalks the countryside.