Jeffrey Henning’s #MRX Top 10: AI, EQ, and Data Sets Visualized, Breached, and Perfected

Of the 5,822 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted...

By Jeffrey Henning

Of the 5,822 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…

  1. GfK Strengthens Its Membership with ESOMAR – GfK has licensed ESOMAR content and services for all its…
  2. ESOMAR Webinars – ESOMAR has updated its free webinars with events in March for the Young ESOMAR Society, market research in Russia, and identifying fraudulent participants in online research.
  3. Concern about the NHS Jumps to the Highest Level since 2003 – 27% of Britons surveyed by Ipsos MORI report that Brexit is the most important issue facing the United Kingdom, with 17% saying the NHS is the most important issue.
  4. Artificial Intelligence in Market Research – InSites Consulting presents the results of a study that used predictive analytics to anticipate disengagement from, or negative behavior in, an online community.
  5. The Rise of AI Makes Emotional Intelligence More Important – Writing for Harvard Business Review, Megan Beck and Barry Libert argue that AI will displace workers whose jobs currently involve gathering, analyzing and interpreting data and recommending a course of action – whether those workers are doctors, financial advisors or something else. To thrive, professionals in affected industries must cultivate their emotional intelligence and how they work with others.
  6. How Interest-Based Segmentation Gets to the Heart of Consumers – Hannah Chapple, writing for RW Connect, argues that a better way to understand social-media users is to study who they follow, not what they say. Who they follow shows their interests, but what they say often must first make it past self-imposed filters.
  7. Behavioral Economics: Three Tips To Better Questionnaires – Chuck Chakrapani of Leger Marketing, writing for the Market Research Institute International’s blog, offers three quick tips for applying lessons from behavioral economics to questionnaires: beware the subliminal influence of numbers on subsequent questions, consider the issue of framing when wording questions, and ask for preferences before…
  8. One Dataset, Visualized 25 Ways – Nathan Yau visualized life expectancy data by country in 25 different ways, to demonstrate that there is no one way to visualize a dataset.
  9. How a Data Breach Battered Yahoo!’s Reputation – Emma Woollacott, writing for Raconteur, discusses how Yahoo! failed to take even minimal steps to notify and support its users after it became aware of its data breach.
  10. Big or Small: No Data Set is Perfect – Kathryn Korostoff of Research Rockstar argues that big data, survey research, ethnographic studies, focus groups, and customer analytics require business users to better understand the strengths and weaknesses of the resultant data sets.

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. Only links with a research angle are considered.

Edward Appleton’s Impressions of #IIeX EU 2017

Edward Appleton offers a review of last week’s IIeX Europe.

By Edward Appleton

February isn’t the best time to visit Amsterdam, but it’s IIeX Europe time – so: grey skies, wind, rain… here we come!

This was the fourth time I attended – and I was impressed. There were apparently well over 500 attendees – a huge rise over 2016, which I believe was under 400 – and 30% were client-side researchers. Wow.

It was energizing as ever, a great place to network, and with multiple parallel tracks and competitions going on, it requires careful planning.

What made this IIeX different? Two things stood out:

  1. The voice of qual was well represented, with presentations from Acacia Avenue, Ipsos MORI, Northstar and, yes, our good selves at Happy Thinking People. The AQR was present, very ably represented by chairperson Simon Patterson, and people from the QRCA were there too. It was great to juxtapose qual and “tech” – suggesting a complementary, rather than a competitive, relationship. It reminded me of the Big Data/qual juxtaposition at the ESOMAR conferences in Berlin in 2016.
  2. New Speakers Track. In this track, led by Annie Pettit, people who had never been on stage before presented their own market research innovations. The talks I saw were impressive and made me wonder if my own presentation skills needed an urgent refresh! I applaud this – it’s something we should do more of: giving a platform to unheard voices.

Thematically, what stood out? I by no means have a complete overview – it’s impossible to “do” all of IIeX, there’s simply too much – but here’s what stuck:

  • “Crowd wisdom” – whether you’re looking for a freelance creative team to move your idea along, or want a first, direct understanding of real-life behaviours in unknown markets, there were a number of companies (e.g. Streetbees and Mole in a Minute) linking up different sorts of crowds directly with budget owners. Automated, real-time, fast and, I imagine, relatively cost-efficient.
  • Automation – ZappiStore continues to be a major presence at IIeX, propagating the benefits of full automation at a fraction of the cost of traditional methods. A company to continue to watch, it seems.
  • Stakeholder Engagement – a demo of the smart video software from TouchCast stunned me. Paul Field showed live how a presentation could be whizzed up fast and made to look as if a professional TV studio had created it – amazing.
  • Non-conscious/implicit methods – Sentient Decision Science was a familiar and welcome presence, and other, newer faces also suggested different ways (strength of attitudinal response, courtesy of Neuro HM) of accessing more authentic – dare I say System 1 – responses with higher predictive validity.
  • Artificial Intelligence was a strong theme – allowing companies to mine and access knowledge in their reports much more easily, for example, or eliminating low-value, time-consuming tasks, e.g. by automatically identifying potentially relevant audiences during recruitment.

I was interested to see the likes of big-hitting conjoint experts Sawtooth Software there, represented by Aaron Hill – IIeX is getting noticed far and wide, it seems.

Overall, IIeX shows the humble visitor that “Market Research” (whatever you call it) is vibrant – but it’s already very different from what it was just a few years ago.

Major client-side companies are already showcasing their new MR approaches – CPG giant Unilever was the stand-out company doing that at IIeX, but Heineken and fragrance and flavor company IFF also hosted a showcase track.

The human aspect is still central – tech can help us concentrate on that, automating and removing repetitive, low-interest, non-value-added tasks.

If you do visit in future (which I would recommend), I suggest you come with a mind-set that looks to join the dots rather than be overwhelmed by “breakthrough” or “step-change” developments.

For more of my thoughts, as well as those of many of my colleagues, here is a video blog we did last week:

Impressions from IIEX Europe 2017 from Happy Thinking People on Vimeo.

Tech can enable and disrupt, but it’s up to us to link things up, be imaginative, and find the sweet application spot in whatever part of the MR area we play in.

Curious, as ever, as to others’ views.

John Kearon Unveils System1 Group

John Kearon, CEO of BrainJuicer, unveils their new brand and explains their thinking behind the rebrand, what it means for the company and their clients, and his view on the industry over the next few years.

 

This morning BrainJuicer announced to the investment community their decision to rebrand as System1 Group, an integrated insights and creative agency that incorporates System1 Research (formerly BrainJuicer) and System1 Agency, their already established creative agency.

Shareholders are being asked to approve the Company’s proposed change of name from BrainJuicer Group PLC to System1 Group PLC.

Here is a summary from the release:

Over the last 16 years BrainJuicer has built an international business by applying Behavioural Science to predicting profitable marketing. At the heart of Behavioural Science is the notion that people use instinct, intuition and emotion to make most decisions. This is known as “System 1” thinking. Having adopted the System 1 approach to market research and successfully launched our System1 advertising agency (‘System1 Agency’), we believe the company’s growth will be better served by adopting the System1 name across the group. Within the System1 Group, we will have System1 Agency to produce profitable marketing and System1 Research to predict it. As the ‘System1’ name becomes synonymous with ‘profitable growth’, the business will be in a great position to help clients move towards 5-star marketing and the exponential growth that comes with it.

Why is this worth covering on the blog? Because BrainJuicer has been recognized as the “Most Innovative Supplier” in the GRIT 50 list for 5 straight years; they arguably have more established brand equity than almost any other research company, and certainly more than any of the “next gen” companies that have emerged in the past decade. They are masterful marketers who practice what they preach and have had an extraordinarily successful history in a short period of time. They have also been a primary driver within the industry in bringing attention to behavioral science in all its many forms, taking the ideas of behavioral economics and applied neuroscience from a niche to a very mainstream topic.

For a company with all those claims to fame to make a shift in its branding and double down on a very specific direction is newsworthy indeed, perhaps even inspirational.

I had the opportunity to sit down with John Kearon, the Chief Juicer himself (I forgot to ask if his new title will simply be #1 Guy), to dig deeper into the thinking behind the rebrand, what it means for the company and their clients, and his view on the industry over the next few years.

As always, John is a joy to chat with; he’s funny, smart, and provocative with that innate British coolness we Americans are secretly deeply jealous of. I hope you enjoy listening to our conversation as much as I enjoyed having it.

Neuroscience and Marketing

Marketing scientist Kevin Gray asks Professor Nicole Lazar to give us a brief overview of neuroscience.

By Kevin Gray and Nicole Lazar

KG: Marketers often use the word neuroscience very loosely and probably technically incorrectly in some instances.  For example, I’ve heard it used when “psychology” would have sufficed. Can you tell us what it means to you, in layperson’s words?

NL: Neuroscience, to me, refers loosely to the study of the brain.  This can be accomplished in a variety of ways, ranging from “single neuron recordings” in which electrodes are placed into the neurons of simple organisms all the way up to cognitive neuroimaging of humans via methods such as functional magnetic resonance imaging (fMRI), positron emission tomography (PET), electroencephalography (EEG), and others.  With single neuron recording of the brains of simple organisms, we get a very direct measure of activity – you can actually measure the neuronal activity over time (in response to presentation of some stimulus, for instance).  Obviously we can’t typically do this type of recording on human beings; fMRI, PET, etc. give indirect measures of brain activation and activity.  These are maybe the two extremes of the neuroscience spectrum.  There is overlap between “neuroscience” and “psychology” but not all people involved in what I think of as neuroscience are psychologists – there are also engineers, physicists, applied mathematicians, and, of course, statisticians.

KG: Many marketers believe that unconscious processes or emotion typically dominate consumer decision-making. Marketing will, therefore, be less effective if it assumes that consumers are purely rational economic actors. System 1 (“fast thinking”), as popularized by Daniel Kahneman, Dan Ariely and others, is commonly used by marketers to describe this non-rational aspect of decision-making. “System 1 thinking is intuitive, unconscious, effortless, fast and emotional. In contrast, decisions driven by System 2 are deliberate, conscious reasoning, slow and effortful”, to quote Wikipedia. “Implicit” is also sometimes used synonymously with System 1 thinking.  Is there now consensus among neuroscientists that this is, in fact, how people actually make decisions?  Or is it still controversial?

NL: I don’t think there is consensus among neuroscientists about anything relating to how people process information, make decisions, and the like!  “Controversial” is perhaps too strong a word, but these things are very complex.  Kahneman’s framework is appealing, and it provides a lens for understanding many phenomena that are observable in the world.  I don’t think it’s the whole story, though.  And, although I’ve not kept up with it, I believe that more recently there have been some studies that also disrupt the System 1/System 2 dichotomy.  Clearly we have some way to go before we will reach deeper understanding.

KG: Neuromarketing can mean different things to different people but, broadly-defined, it attempts to measure unconscious responses to marketing stimuli, i.e., fast thinking/implicit response. fMRI, EEG, MEG, monitoring changes in heart and respiration rates, facial coding, Galvanic Skin Response, collages and the Implicit Association Test are perhaps the most common tools used. Based on your expertise in neuroscience, are any of these tools out of place, or are they all, in one way or another, effective methods for measuring implicit/fast thinking?

NL: First, I don’t consider all of these measures to be “neuroscientific” in nature, at least not as I understand the term.  Changes in heartbeat and respiration, galvanic skin response – these are physiological responses, for sure, but even more indirect measures of brain activation than are EEG and fMRI.  That’s not to say that they are unconnected altogether to how individuals are reacting to specific marketing stimuli.  But, I think one should be careful in drawing sweeping conclusions based on these tools, which are imperfect, imprecise, and indirect.  Second, as for fMRI, EEG, and other neuroimaging techniques, these are obviously closer to the source.  I am skeptical, however, of the ability of some of these to capture “fast thinking.”  Functional MRI for example has low temporal resolution: images are acquired on the order of seconds, whereas neuronal activity, including our responses to provocative stimuli such as advertisements, happens much quicker – on the order of milliseconds.  EEG has better time resolution, but its spatial resolution is poor.  Reaching specific conclusions about where, when, and how our brains respond to marketing stimuli requires both temporal resolution and spatial resolution to be high.

KG: Some large marketing research companies and advertising agencies have invested heavily in neuromarketing. fMRI and EEG, in particular, have grabbed a lot of marketers’ attention in recent years.  First, beginning with fMRI, what do you feel are the pros and cons of these two methods as neuromarketing techniques?

NL: I’ve mentioned some of this already: resolution is the key.  fMRI has good spatial resolution, which means that we can locate, with millimeter precision, which areas of the brain are activating in response to a stimulus.  With very basic statistical analysis, we can localize activation.  That’s an advantage and a big part of the popularity of fMRI as an imaging technique.  It’s harder from a statistical modeling perspective to understand, say, the order in which brain regions activate, or if activation in one region is leading to activation in another, which are often the real questions of interest to scientists (and, presumably, to those involved in neuromarketing as well).  Many statisticians, applied mathematicians, and computer scientists are working on developing methods to answer these more sophisticated questions, but we’re not really there yet.

The major disadvantage of fMRI as a neuroimaging research tool is, again, its relatively poor temporal resolution.  It sounds impressive when we tell people that we can get a scan of the entire three-dimensional brain in two or three seconds – and if you think about it, it’s actually quite amazing – but compared to the speed at which the brain processes information, that is too slow to permit researchers to answer many of the questions that interest them.

Another disadvantage of fMRI for neuromarketing is, I think, the imaging environment itself.  What I mean by this is that you need to bring subjects to a location that has an MRI machine, which is this big very powerful magnet.  They are expensive to acquire, install, and run, which is a limitation even for many research institutions.  You have to put your test subjects into the scanner.  Some people have claustrophobia, and can’t endure the environment.  If you’ve ever had an MR scan, you know that the machine is very noisy, and that can bother and distract as well.  It also means that the research (marketing research in this case, but the same holds true for any fMRI study) is carried out under highly artificial conditions; we don’t usually watch commercials while inside a magnetic resonance imaging scanner.
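
(A concrete aside on the “very basic statistical analysis” mentioned above: activation is typically localized by fitting a regression at every voxel and thresholding the resulting test statistics – the mass-univariate approach. The sketch below uses simulated data and omits essentials such as HRF convolution, motion correction and multiple-comparison adjustment, so it is illustrative only.)

```python
# Toy voxel-wise "activation map": regress each simulated voxel's time series
# on a block-design stimulus regressor and flag voxels with large t-statistics.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 1000                 # 200 volumes, 1,000 voxels (toy size)

# Block design: alternating 20-scan rest/task blocks, coded 0/1.
stimulus = np.tile(np.r_[np.zeros(20), np.ones(20)], n_scans // 40)

# Simulated BOLD data: noise everywhere, plus signal in the first 50 voxels.
bold = rng.normal(size=(n_scans, n_voxels))
bold[:, :50] += 0.8 * stimulus[:, None]

# Ordinary least squares at every voxel: intercept + stimulus regressor.
X = np.column_stack([np.ones(n_scans), stimulus])
betas, *_ = np.linalg.lstsq(X, bold, rcond=None)        # shape (2, n_voxels)
residuals = bold - X @ betas
se = residuals.std(axis=0, ddof=2) * np.sqrt(np.linalg.inv(X.T @ X)[1, 1])
t_stats = betas[1] / se

active = np.flatnonzero(t_stats > 3.0)                  # crude, uncorrected threshold
print(f"{active.size} voxels flagged as 'active'")
```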

KG: How about EEG?

NL: The resolution issues for EEG and fMRI are the opposite of each other.  EEG has very good temporal resolution, so it is potentially able to record changes in neuronal activity more in real-time.  For those who are attempting to pinpoint subtle temporal shifts, that can be an advantage.  In terms of the imaging environment, EEG is much friendlier and easier than fMRI in general.  Individuals just need to wear a cap with the electrodes, which is not that cumbersome or unnatural.  The caps themselves are not expensive, which is a benefit for researchers as well.

On the other hand, the spatial resolution of EEG is poor for two reasons.  One is that the number of electrodes on the cap is not typically large – several hundred spaced over the surface of the scalp.  That may seem like a lot at first glance, but when you think about the area that each one covers, especially compared to the millimeter-level precision of fMRI, localization of activation is very amorphous.  In addition, the electrodes are on the scalp, which is far removed from the brain in terms of the generated signal.  All of this means that with EEG we have a very imprecise notion of where in the brain the activation is occurring.

KG: As a statistician working in neuroscience, what do you see as the biggest measurement challenges neuroscience faces?

NL: The data are notoriously noisy, and furthermore tend to go through many stages of preprocessing before the statisticians even get to see them.  This means that an already indirect measure undergoes uncertain amounts of data manipulation prior to analysis.  That’s a huge challenge that many of us have been grappling with for years.  Regarding noise, there are many sources, some coming from the technology and some from the subjects.  To make it even more complex, the subject-driven noise can be related to the experimental stimuli of interest.  For example, in fMRI studies of eye motions, the subject might be tempted to slightly shift his or her entire head while looking in the direction of a stimulus, which corrupts the data.  Similarly, in EEG there is some evidence that the measured signal can be confounded with facial expressions.  Both of these would have implications on the use of imaging for neuromarketing and other trendy applications.  Furthermore, the data are large; not “gigantic” in the scale of many modern applications, but certainly big enough to cause challenges of storage and analysis.  Finally, of course, the fact that we are not able to get direct measurements of brain activity and activation, and possibly will never be able to do so, is the largest measurement challenge we face.  It’s hard to draw solid conclusions when the measured data are somewhat remote from the source signal, noisy, and highly processed.

KG: Thinking ahead 10-15 years, do you anticipate that, by then, we’ll have finally cracked the code and will fully understand the human mind and what makes us tick, or is that going to take longer?

NL: I’ll admit to being skeptical that within 10-15 years we will fully understand the human mind.  That’s a short time horizon and the workings of our mind are very complex.  Also, what is meant by “cracking the code”?  At the level of neurons and their interconnections I find it hard to believe that we will get to that point soon (if ever).  That is a very fine scale; with billions of neurons in the human brain, there are too many connections to model.  Even if we could do that, it’s not evident to me that the exercise would give us true insight into what makes us human, what makes us tick.  So, I don’t think we will be building the artificial intelligence or computer that exactly mimics the human brain – and I’m not sure why we would want to, what we would learn from that specifically.  Perhaps if we think instead of collections of neurons – what we call “regions of interest” (ROIs) and the connections between those, progress can be made.  For instance, how do the various ROIs involved in language processing interact with each other to allow us to understand and generate speech?  Those types of questions we might be closer to answering, although I’m still not sure that 10-15 years is the right frame.  But then, statisticians are inherently skeptical!

KG: Thank you, Nicole!

______________________

Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy.

Nicole Lazar is Professor of Statistics at the University of Georgia and author of The Statistical Analysis of Functional MRI Data. She is an elected fellow of the American Statistical Association and editor-in-chief of The American Statistician.

4 Reasons Survey Organizations Choose On-Site Hosting

Why is a portion of the industry sticking with in-house data hosting?

By Tim Gorham

Most organizations across the market research industry have chosen cloud hosting for their survey data storage. For them, it’s easier to manage, easier to budget for, and secure enough for their needs.

But a core group is not prepared to jump to the cloud. They choose instead to physically control survey data in data centers located on company property. And we’re very familiar with their rationale, since Voxco offers one of the few professional survey software platforms available on-premise.

So why is this portion of the industry sticking with in-house data hosting? Here are the four reasons we hear over and over again:

1. Complete Data Control

Market research organizations manage the kind of sensitive data that is commonly protected by strict privacy regulations. That means they want to know exactly where their data is at all times, so they choose to keep total control of storage and avoid third-party suppliers.

Many of our healthcare and financial services clients need to prove conclusively that their data is protected to the letter of the law. In some situations, Canadian and European clients need to prove that data is stored within their own borders. It’s not always clear how offsite data is being stored, who maintains ownership, and who else might have access to it. That can be a real worry.

On-premise set-ups often make it easier to prove total compliance. Even when cloud companies can guarantee compliance, some IT managers feel more comfortable absorbing the risk and controlling the data storage themselves. 

2. Infrastructure Costs

Cost is always a deciding factor. It often boils down to prioritizing fixed capital expenditures over monthly operational expenditures. Monthly hosting fees can be significant for large organizations with huge data requirements and numerous users. At some point, the economics tip in favor of a fixed capital investment.

This is especially true for organizations with existing infrastructure in place for data storage in-house. It’s a very easy decision for them to select on-premise hosting for their survey software.
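
The numbers below are hypothetical, not Voxco pricing, but a simple break-even calculation shows how “the economics tip”: once the monthly saving from running your own infrastructure has repaid the upfront investment, on-premise wins on cost.

```python
# Illustrative break-even: fixed on-premise capex vs. recurring cloud fees.
# All figures are made up for the sake of the arithmetic.
cloud_monthly_fee = 4_000        # hypothetical hosting + support per month
onprem_capex = 60_000            # hypothetical servers, licences, installation
onprem_monthly_opex = 1_500      # hypothetical power, maintenance, admin time

monthly_saving = cloud_monthly_fee - onprem_monthly_opex
breakeven_months = onprem_capex / monthly_saving
print(f"On-premise pays for itself after ~{breakeven_months:.0f} months")
# With these made-up figures, roughly 24 months; existing in-house
# infrastructure lowers the capex and shortens the payback further.
```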

3. Physical Server Customization

Cloud hosting providers have existing server structures. In-house hardware, however, can be custom-tailored to an organization’s specific needs. This offers levels of local control, visibility, and auditability that are unattainable from cloud providers.

Retaining infrastructure control internally also allows instant fixes and improvements to how data storage is structured. The larger the cloud provider, the harder it is to request fixes or customization.

4. Internet/Bandwidth Restrictions

We’re spoiled in most of the western world with high bandwidth and uninterrupted internet connectivity. But many parts of the world are still catching up; internet and bandwidth can be slow or spotty. For these situations, hardwired internal databases are often the most productive and efficient solution available.

Sound familiar?

Do you choose to host your survey data in-house? Let us know why YOU have made that choice in the comments section below.

The Most In Demand Suppliers At IIeX Europe ’17

An analysis of the 154 private Corporate Partner meetings that took place at IIeX Europe this week and what that tells us about the commercial interests of research buyers.

 

IIeX Europe 2017 is happening right now, and it’s been an amazing event. With 560 attendees it has grown massively from previous years, and this year client-side attendance has been especially strong, with 30% of registrants being research buyers. It seems the event has fully become part of the European MR event calendar, and word has spread that IIeX is THE event at which to find new partners, be inspired and challenged, and embrace innovation. Considering that has always been our mission, it’s incredibly gratifying to see our message being embraced.

Although registration metrics can tell us a lot about how we are doing, what we pride ourselves on across all GreenBook and Gen2 Advisors initiatives is how we create impact by connecting buyers and suppliers and one of the best ways we have to measure that is via our Corporate Partner program. The premise is simple: research buyers (and in a few cases, investors) tell us which attending companies they are interested in meeting with and we coordinate those meetings for them in private meeting rooms during the event.  It doesn’t cost any extra for either party to participate; it’s a value add for all stakeholders at the event.

In addition to being a great benefit to IIeX attendees, it also gives us great data on what clients are looking for so we can continue to refine our events to meet their needs and give the industry some useful perspective!

This year at IIeX EU we scheduled 154 unique meetings for 26 different client groups from 19 brands (some brands sent multiple teams interested in different things – P&G, for instance, has separate teams for CMK and PR), with 84 different suppliers asked to meet (many received multiple requests). That is A LOT of meetings!

The brands that joined as Corporate Partners at this event were:

City Football Group
Alpro
E.ON Energy
Facebook
Heineken
HERE
IFF
Instagram
Inter IKEA Systems
Kantar
McKinsey & Co
Mintel
Northumbrian Water Group
P&G
Reckitt Benckiser
Red Bull
Strauss Water
Test-Aankoop
Unilever

Now, I’m not going to divulge the names of the suppliers who were asked to participate, but I did a quick analysis of them by assigning each to either a Service or a Tech segment and then categorizing them by their core offering.

First, 62% of all meetings requested were with Tech companies. The definition of Tech that I used was that the primary offering was a technology solution that was either DIY or offered limited service options beyond basic project support.

38% of the meetings requested were with companies that fall into the Service category, meaning that they provide full service, although it may be confined to a specific area of focus such as nonconscious measurement or brand strategy rather than a more traditional quant/qual full-service agency. In fact, only 11 companies fall into the traditional “Full Service” bucket, with the rest positioned more as niche consultancies focused on specific business issues or methods.

It’s instructive to look at the types of companies that clients were interested in, so I assigned each to a “specialty” category based on their positioning. A few notes on my thinking here:

  1. Nonconscious is any company that is focused on using methods related to nonconscious measurement as their primary approach. This includes facial coding, implicit, EEG, fMRI, etc… and includes those just offering technology and those who have built full consulting organizations around these approaches.
  2. Mobile includes anything that is “mobile first”, regardless of use case. If a company has built its offering around mobile devices as the primary means of collecting data – whether it’s qual, quant, crowd-sourcing, behavioral data, etc. – it fits into this bucket.
  3. Full Service MR are companies that fit the traditional definition: they offer a range of methods and focus areas across the methodological spectrum and engage with clients with a full complement of service solutions.
  4. Data Collection is only those companies who license data collection platforms as their core business.

I think the rest of the categories are self-explanatory, and while some companies could fit into multiple ones, I tried to capture the claim to fame of each as their primary selling point.
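
To make the tallying concrete, here is a minimal sketch of the kind of bookkeeping described above, using pandas and entirely made-up meeting records – the real client and supplier names are confidential.

```python
# Tag each Corporate Partner meeting with the supplier's segment and specialty
# category, then compute shares of meetings and distinct suppliers per group.
import pandas as pd

meetings = pd.DataFrame([
    # client,    supplier,     segment,   category            (all hypothetical)
    ("Brand A", "Supplier 1", "Tech",    "Nonconscious"),
    ("Brand B", "Supplier 1", "Tech",    "Nonconscious"),
    ("Brand A", "Supplier 2", "Tech",    "Mobile"),
    ("Brand C", "Supplier 3", "Service", "Full Service MR"),
    ("Brand B", "Supplier 4", "Service", "Video Analytics"),
], columns=["client", "supplier", "segment", "category"])

segment_share = meetings["segment"].value_counts(normalize=True).mul(100).round(1)
category_counts = meetings.groupby("category")["supplier"].agg(
    meetings="count", suppliers="nunique")

print(segment_share)       # % of meetings that were with Tech vs. Service suppliers
print(category_counts)     # meetings and distinct suppliers per specialty category
```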

As you can see in the chart below, anything related to Nonconscious Measurement continues to be of high interest. This is a phenomenon we have seen at every IIeX event since the beginning. Although we are not yet seeing major adoption by share of projects, client-side interest in anything related to understanding the motivations of consumers outside of cognitive processes is intense. My belief is that when a validated, scalable, mobile-friendly and inexpensive solution hits critical mass, we will see the share of projects for these approaches skyrocket, just as we have seen with DIY quant and now with automation.

Surprisingly, mobile-only solutions were almost as hot, which tells me that, yes indeed, clients reached the tipping point a while ago and are now aggressively looking for new mobile-centric capabilities to augment or replace traditional approaches.

Another surprise was the number of Video Analytics meetings: I suspect a symbiotic relationship between this and the other top approaches that also use video or social media data. With so much video being produced by consumers, both in their daily lives and in response to research projects, there is a pressing need for solutions that make the analysis and curation of that video efficient and affordable. Look for this to continue to be a priority.

Finally, let’s look at the number of companies in each category that were asked to meet with clients. Remember that some suppliers were asked to meet with multiple clients, so of course there is a high correlation with the previous chart, which showed meetings by category. This serves as a nice snapshot of the types of offerings clients are looking for, as well as the types of companies that do well at IIeX in terms of business development.

IIeX has always been a bellwether for the rest of the industry, indicating what clients are looking for today to build the insights organizations of tomorrow. We’re privileged to have a first-hand view of what that looks like via our Corporate Partner program and are glad we can share it with the industry as a whole in this way.

The 5 Second Rule and the Need to Create Instantly Recognizable Visualizations

To create visualizations that engage, we must create images that are instinctively recognized as images capable of portraying meaning.

By Tim Bock

Most people are busy. Many are bored. Designers take the view that they have a small amount of time, perhaps 5 seconds, to engage the viewer. They believe that if they fail, the viewer will just move on, and the communication will fail.

This raises an important question: how can we create visualizations that engage – visualizations that are instantly recognizable? The obvious technique is to use chartjunk or create something beautiful. However, there is a simpler solution: create images that are instinctively recognized as images capable of portraying meaning (rather than noise).

The original visualizations: art

Let us start with art. Fine arts scholars deride The Creation of Adam as being cartoonish. However, millions queue every year in the Sistine Chapel to view it.

Contrast the painting above with the one below. The art cognoscenti love this one. But, many people when viewing it can be heard to mutter that it compares, unfavorably, with the work of children.

There is a simple explanation for why people view these works of art so differently. When we look at The Creation of Adam, we immediately recognize the images. And, if we are from a Judaeo-Christian background, there is a good chance that we understand the context and subtext of the picture. When we look at Mondrian’s work, on the other hand, the best sense that our brain can make of it is that it is an unusual brickwork or perhaps bathroom tiles. As we are not in the habit of searching for meaning in brickwork or bathroom tiles, our brains lack a useful frame of reference to guide interpretation of the painting. Most people just move on.

The difference between these paintings highlights the great challenge when designing visualizations. We need to design something that attracts the viewer’s attention. If all they see is a mess, they will often not take the time to decode it, and the meaning will never be recognized. We have perhaps 5 seconds in which to attract the attention of the viewer before they move on.

When heatmaps and treemaps go abstract

It is no accident I have shown you Piet Mondrian’s work. It is strikingly reminiscent of one of the more fashionable visualizations, the treemap with heatmap shading. It is from the Harvard Business Review. No doubt the people that use this visualization have been trained to use it. Rest assured, though, that most people will look at it, see nothing that attracts their brain, nothing instantly recognizable, and move on. (I will discuss this in another blog on coloring in heatmaps.)

The same idea, but with a great execution

Don’t get me wrong here. Treemaps and heatmaps can be great. The problem is just the execution of this one. Contrast it to the one below, which shows much more data.

Bill Gates loves this visualization, as it “shows that while the number of people dying from communicable diseases is still far too high, those numbers continue to come down”.

This visualization is great because it uses intensity, color, and proportionality in the ways that they are used in nature, and so taps into our instincts, making the meaning recognizable. (I expand on this idea in my forthcoming blog posts on pie charts versus bar charts, and another on using color in charts.)
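
To make that point concrete, here is a minimal treemap sketch in which area and colour encode the same quantity, so intensity and proportionality reinforce one another. It assumes the third-party squarify package alongside matplotlib, and the numbers are invented rather than taken from the chart discussed above.

```python
# Toy treemap: each rectangle's area and colour both scale with its value,
# so the biggest categories are also the most intense.
# Requires: pip install matplotlib squarify     (values below are invented)
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize
import squarify

labels = ["Cardiovascular", "Infectious", "Cancer", "Injuries", "Other"]
values = [17.6, 12.4, 8.9, 4.9, 11.2]

norm = Normalize(vmin=min(values), vmax=max(values))
colors = [plt.cm.Reds(norm(v)) for v in values]   # darker red = larger value

squarify.plot(sizes=values, label=labels, color=colors, alpha=0.9)
plt.axis("off")
plt.title("Toy treemap: area and colour carry the same message")
plt.show()
```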

Visualization expert Stephen Few disagrees. He thinks that Causes of Untimely Death is a poor visualization. He has created an alternative visualization, which he suggests is vastly superior; it is shown below. In some ways it is a better visualization: it is a lot easier to compare and contrast the numbers. Nevertheless, in a very important way, it is a much worse visualization. It is no longer recognizable as an image. It does not tap into our instinctive skills at finding patterns. It is hard to imagine many people engaging with this beyond 5 seconds.

I think it is a monster!

The next visualization verges on being art. It looks great. However, for all its beauty, it is only summarizing 12 numbers, which makes it a poor investment of time-to-create. Furthermore, it requires 8 text boxes to explain its interpretation, which is a bad sign. Why does it struggle to work as a visualization? It is thoroughly alien. There is little in our human experience to guide us in working out what it means. As we are not used to interpreting things like this, we find it very hard to interpret, and the meaning is not instantly recognizable.

After a lot of thought I did end up realizing I had seen something similar before: the Sarlacc from Return of the Jedi. Unfortunately, recognition of this passing similarity failed to help me interpret the visualization above.

Visualization guru Edward Tufte suggests that Charles Joseph Minard’s 1869 Sankey diagram of the march of Napoleon may well be the greatest statistical graph ever created. It is a great visualization. However, it fails the 5 second rule. What can you see when you look at it? Perhaps it is a branch? But what does a branch have to do with Napoleon? I do discuss this in another post, but despite my love for it, ultimately it is only a visualization for the cognoscenti, who are few and far between in the normal audience for a viz.

Observe that with each of these examples, the issue is not complexity. The issue is familiarity. When we create visualizations that tap into images we are used to reading, it makes a big difference in making the meaning instantly recognizable.

This next visualization shows a tweet network created using NodeXL. It has been clustered to show groups of people, and labels have been added to explain the clusters. The outcome for me is that my brain just gets confused. I see a Ferris wheel on the left, with streamers coming out from it. Again, while I can make some sense of it, it does not help me see a pattern.

As is often the case, the visualization can be greatly improved by taking things away. Here is the same visualization, but with the commentary, color coding, and icons removed. It is instantly more interpretable. Why? I see it as dandelions. I can see that there is one big dandelion on the left, which tells me that one person sent out a whole lot of tweets. There is only one more dandelion visible, and a much smaller one at that. No tweet storm occurred; just a couple of people told a lot of other people.

To summarize the thesis of this post, I am trying to make two related points:

  1. We have about 5 seconds to persuade a viewer that a visualization is worth their focus.
  2. One way of engaging the viewer is to create images that use graphical elements in a way that is in some way familiar, where consistency with nature is a ready test of this.

I will finish off with perhaps my favorite interactive visualization, the OECD’s Create Your Better Life Index.  A snapshot is below, but do check it out here. Why does it work so well? It works well because it taps into our ability to instinctively understand the height and shape of leaves on a tree.

Super Bowl 2017 Ad Effectiveness

How well do Super Bowl ads drive customers to spend money or do some kind of proactive and positive behavior towards the advertised brand?

By Michael Wolfe

According to Advertising Age, a 30-second Super Bowl ad in 2017 cost about $5 million. That puts total Super Bowl 51 ad spend near $385 million. Many Super Bowl ads seem to focus on winning likability with viewers, and the whole exercise often appears to be a popularity contest. The key question, however, is how well these ads drive customers to spend money or take some kind of proactive, positive action towards the advertised brand. That is the issue we wish to explore here.

Using Advertising Benchmark’s ABX copy test scores, the overall results for the 2017 Super Bowl commercials were nothing to brag about. In fact, by standard ad effectiveness criteria, the 2017 ads were a disappointment at best. Overall scores for the last five Super Bowls also generally fall short of ad norms.

The chart below summarizes ABX ad effectiveness scores for 65 ads. Overall, 58% of these ads scored at or above normative levels. As shown, unfortunately, there were also some very low-scoring ads. Ads with a political message, such as the 84 Lumber ad, did not fare that well.

The ABX ad testing system is based on a survey of a nationally representative panel. The major components making up the ABX index are listed below (a toy composite calculation is sketched after the list):

  1. Awareness/brand linkage. Was the brand advertised correctly recalled?
  2. Message clarity. Was the primary commercial message understood?
  3. Brand reputation shift. Did viewing the ad change brand reputation perceptions?
  4. Message relevance. Was the ad message deemed relevant to the customer?
  5. Call to action. Did the ad elicit any positive actions or intentions, including purchase intent?
  6. Likability. Although not part of the ABX score algorithm, was the ad liked or disliked?
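
ABX’s actual scoring algorithm is proprietary and is not described here. Purely as an illustration of how component scores like these can be rolled up against a norm, a toy equal-weight composite might look like this (every number below is invented):

```python
# Purely hypothetical composite - not ABX's actual algorithm. Each invented
# component score is indexed against an invented norm (100 = at norm) and the
# results are averaged with equal weights.
component_scores = {
    "awareness_brand_linkage": 62,
    "message_clarity": 71,
    "reputation_shift": 48,
    "message_relevance": 55,
    "call_to_action": 41,
}
norms = {
    "awareness_brand_linkage": 58,
    "message_clarity": 65,
    "reputation_shift": 52,
    "message_relevance": 57,
    "call_to_action": 50,
}

indexed = [100 * component_scores[k] / norms[k] for k in component_scores]
ad_index = sum(indexed) / len(indexed)
print(f"Toy ad-effectiveness index: {ad_index:.0f} (100 = normative level)")
```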

If we look at the key criteria or drivers of ad effectiveness, the chart below shows that, while Super Bowl ads do very well on “likeability” and generating “buzz”, they do not fare so well on key action points, particularly on such critical measures as “purchase intent”.

In sum, the key insight here is that popularity and likeability do not always translate into effective actions from the customer. Clearly, funny, cool and emotional ads can be good ads, but focusing on winning a popularity contest does not always translate into effective marketing, which stimulates customers to take some positive action towards a brand.

McKinsey Makes Their MR Play: What is Periscope by McKinsey & What Does It Mean For the Future?

Periscope By McKinsey and their new Insights Solutions practice area are moving firmly into the research industry with a next gen offering that merges the best of high-end strategic consulting and comprehensive data-driven solutions.

 

Historically, consulting firms such as PwC, Deloitte, Accenture and McKinsey have fallen more into the “research client” bucket than the “supplier” category; like most ad agencies, they might conduct research on behalf of their clients, as industry publications or even as branded syndicated offerings, but the research was generally sourced to external partners and wasn’t a “tip of the spear” defined product offering. As the insights industry has fragmented and evolved over the past decade, we’ve seen a gradual blurring of the lines between agencies and consultancies as they relate to insights, especially in the use of social and other data to help measure and predict on behalf of clients.

Concurrently, many research providers have struggled to re-position themselves as data-driven strategy consultancies (and even a couple have made the leap to creative agencies) with varying degrees of success, but overall it’s been a challenge for traditional research suppliers to move up market. Meanwhile, many new data consultancies have emerged that challenge all the existing players by focusing on high-end analytics, “Big Data”, and various aspects of business intelligence.

Of course, during all of these changes, insights technologies – especially DIY and automation across the data collection and analytical spectrum – have emerged, many powered by the big tech companies or social platforms, and gone from strength to strength, further accelerating and empowering disruption.

Due to these shifts, I (and others) have long predicted that the industry might end up looking something like this:

Recently I had the privilege to get to know the folks at Periscope By McKinsey, and I am now more firmly convinced than ever that the general outline above is not just where we are headed, but where we already are.

All of that sets the context for today’s post: an in-depth interview with the senior leadership at Periscope By McKinsey in which we explore what I consider to be a major sign that the industry has fundamentally shifted and is falling firmly into a new structure.

McKinsey has always conducted research, but with Periscope by McKinsey and their new Insights Solutions practice area they are moving firmly into the research industry with a next gen offering that merges the best of high-end strategic consulting and comprehensive data-driven solutions.

In this interview with Brian Elliott, Ph.D., CEO and Managing Partner at Periscope By McKinsey, and Oliver Ehrlich, Partner at McKinsey & Company and a global leader of the Insights Solutions suite, we dive deep into what they are doing, how they view their role in the insights ecosystem, and what the future holds.

Rather than my usual video interview, we approached this as a podcast type interview, and then added a few slides as background to help illustrate the points we cover in the discussion. Think of this as a bit of a private webinar, where Brian, Oliver and I go back and forth to set the context of the Periscope By McKinsey offering within the broader industry. It’s surprising and revealing in many ways and falls into the “must listen” category.

Here is an embed of the interview:

This isn’t shoehorning research into strategy consulting; this is a bottom up, highly productized and fully baked research offering. From the most basic of data collection needs to the most advanced integrated strategic analytics and everything in between, Periscope By McKinsey has it. Perhaps most surprising is their Insights Solutions suite, which embraces automation and agile approaches for primary research to deliver cost and speed efficiencies one wouldn’t normally associate with a high end strategy consultancy.

To be clear, this isn’t a developing solution. This group is already a large player, with 400+ Marketing and Sales analytics specialists, 35 distinct research techniques, 100+ sector-specific capabilities and 50+ unique data sources available. And based on their investment in rolling out into the research industry, they are absolutely committed to building on their success and becoming a major force in the industry.

Here is a bit from their website to show the breadth of their Insights Solutions suite offering:

 

Agile Insights consists of a leading edge research design and execution capability that is paired with subject matter expertise to ensure that research is truly fit-for-purpose to address key business needs. Our 4 distinctive services for companies across consumer and B2B sectors include:

Survey (quantitative research): Get the facts about individuals’ beliefs and behaviors 

  • Concept testing
  • Flash surveys
  • Growth potential surveys
  • Customer Decision Journey Score Card
  • Other modular surveys

Speak (qualitative research): Talk to and observe individuals in real time 

  • Video interviews
  • Digital diaries
  • Digital UX testing
  • Focus groups
  • in-home interviews
  • Mock-shops

Scrape: See what people are doing and saying online

  • Social brand equity
  • Social media customer engagement
  • Online intelligence
    • Purchase structures
    • Pricing & assortment insights
    • Consumer insights
    • Patient conversations

Scan: Scan data on a specific market, segment, or category to get rapid, targeted insights

  • Digital Opportunity Scan
  • Insights Factory

 

And of course, all of this fits into a larger system as you would expect from a company like McKinsey. Again, their website describes it best:

 

…Our services break down into four offerings:

  1. Agile Insights: Our team conducts interviews, creates surveys, and analyzes social and online data to better understand the consumer.

  2. Applied Insights: Our proprietary tools and services help companies identify potential market growth areas, assess product pricing and assortment, and uncover sales and marketing improvement opportunities.

  3. Intelligence Flow: We use embedded insights such as data feeds, industry surveys, and managed analytics to provide ongoing support that helps companies stay ahead.

  4. Insights Boost: We create a custom program to transform the way your organization develops insights, conducts market research, and uncovers revenue growth opportunities.

 

Many traditional MR suppliers may be reading this with trepidation, and perhaps rightfully so for some, but as Brian and Oliver point out, they are also very happy to partner when that is in the best interests of the client or the business as a whole. That is a difference from some of the other large MR firms, and not only is it likely a competitive advantage for Periscope By McKinsey, it is also a real opportunity for a variety of research supplier companies to develop a new network of partnerships around them.

The Periscope by McKinsey team will be at IIeX Europe next week in Amsterdam and likely at many other GreenBook events this year so anyone who is interested will have a chance to engage with them directly. In the meantime, stay tuned to see what they and other disruptive players are up to; this ride is just beginning!

Twitter Network Analysis: Nordstrom at the Center of Resistance?

Visualization of social networks is now coming online to make sense of Big Data and convey the results of analyses through emerging, open-source programs.

By Michael Lieberman

Mathematical analysis can tell us a lot about what is happening right now. A great example is a Social Network Analysis map of “@Nordstrom” that I ran last Friday, February 10.

The graph represents a network of 5,293 Twitter users whose recent tweets contained “@Nordstrom”, or who were replied to or mentioned in those tweets, taken from a data set limited to a maximum of 5,000 tweets. The network was obtained from Twitter on Friday, 10 February 2017.

There are six major types of Twitter maps produced by Social Network Analysis. One type is called a Polarized Network. This pattern emerges when two groups are very split in their opinion on an issue. Two or more dense clusters form with little interconnection. Generally one sees polarized Twitter maps for divisive issues such as women’s rights in the Arab world or a hotly contested gubernatorial race in, say, Texas.

The most effective method of reading the map is to observe how hashtags cluster within the software. With our map we see three major groups forming. Below is a summary of how the dominant hashtags in the three largest groups cluster.

It is evident that G1 is mildly anti-Trump – perhaps media and those not thrilled with the new administration. G2 is strongly anti-Trump. The hashtags in G3 evidently belong to supporters of the President and of Ivanka’s clothing line. The software captures every individual tweet in an Excel file, so it is possible to drill down with text analytics if the client asks.
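
The grouping step itself is reproducible with open-source tools. The sketch below uses invented tweet records and networkx’s greedy modularity clustering as a stand-in for whatever algorithm the original mapping software applies: build a mention network, detect communities, then tally the dominant hashtags within each group.

```python
# Build a mention network from tweet records, detect communities, and report
# the top hashtags per group. Tweet rows are invented; networkx's modularity
# clustering stands in for the clustering used by the original software.
from collections import Counter
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

tweets = [  # (author, mentioned_user, hashtags) - hypothetical records
    ("alice", "Nordstrom", ["#GrabYourWallet"]),
    ("bob",   "alice",     ["#GrabYourWallet", "#boycott"]),
    ("carol", "Nordstrom", ["#MAGA"]),
    ("dave",  "carol",     ["#MAGA", "#BuyIvanka"]),
    ("erin",  "Nordstrom", ["#retail"]),
]

G = nx.Graph()
hashtags_by_user = {}
for author, mentioned, tags in tweets:
    G.add_edge(author, mentioned)
    hashtags_by_user.setdefault(author, []).extend(tags)

for i, group in enumerate(greedy_modularity_communities(G), start=1):
    tags = Counter(t for user in group for t in hashtags_by_user.get(user, []))
    print(f"G{i}: {sorted(group)} top hashtags: {tags.most_common(3)}")
```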

The interesting finding is this: Why has Nordstrom, a chain of luxury department stores usually found in upscale malls, now become a symbol of resistance against the new administration? We think we know the answer. What is interesting is that the results show up clearly when we run a Twitter map. Is upside down the new normal?

Visualization of social networks is now coming online to make sense of Big Data and to convey the results of analyses through emerging, open-source programs. This kind of analysis is not limited to Twitter; it can also be applied to other social media data, to very large data sets, to consumer sales data from, say, a major supermarket or Walmart, or to survey data. It is a great new tool that, together with our analytic skills, we can deploy to give our clients the story.

A good use of this tool is when a major brand launches a new advertising campaign. By running a Twitter network map every day for 30 days, we can gauge the penetration of the message: which hashtags are going viral over time, how they are clustering, and what the trending message is.