
Why You Should Never Sample On Auto-Pilot

How do you decide the right sample variables to control on? The sample supplier needs to understand the objectives of the research as well as the analytic plan in order to make solid recommendations.


By Susan Frede, VP of Research Methods and Best Practices, Lightspeed GMI

Sampling often seems to be an afterthought with clients as many simply state they want a ‘nationally representative sample.’ The question is what does the client mean by a nationally representative sample? One client might think it means representation on age and gender only, while another might expect it to include controls on additional variables like region, income, education, etc.

How do you decide the right variables to control on? The sample supplier needs to understand the objectives of the research as well as the analytic plan in order to make solid recommendations. Without this understanding it is difficult to build an appropriate sample. This understanding should include a discussion of the category and how different groups react to the category. Clients may not always know every group that is important, but most will have a general understanding of how various groups might respond.

Research-Live (May 2016) recently reported an excellent example of the importance of understanding the objectives and the category. Voters in the UK will soon vote in a referendum on whether or not to remain in the European Union. Results of polls have varied greatly, and originally people thought the difference was driven by online versus phone methodology. With further digging, however, it was discovered that the decision to remain or not is highly correlated with education. Many of the polls are not controlling on education, and that can lead to skews in the results. Those online are also more likely to have higher education levels, so that exacerbates the difference between online and phone.

Sampling differences may also account for some of the large differences in political polling in the U.S. for the next presidential race. It is important to look at the types of people who support each candidate and ensure those groups are appropriately represented in the sample. In some cases it may go beyond demographic variables; certainly in U.S. politics, political party is key, as many people vote along party lines.

Some might be saying ‘but you have just given us two political examples and this doesn’t apply in the marketing research world’. But it does! Say a client is testing a new idea for a high-end product with an expensive price tag. Logic suggests that those with higher incomes will be more likely to afford and purchase the product. If the income of your sample skews low, then the product may appear not to be viable. Income becomes even more important if you are comparing several product ideas and trying to pick a winner. If one of the samples skews high on income and the other low, it could look as if the idea tested with the higher-income sample is the winner when in fact it is the sample that is driving the difference.

Generally age and gender are the most common quota variables, but below are a number of examples of what might be important to control on depending on the category. For any category, the key is to think about what demographics might impact respondents’ behaviors and answers.

  • Banking and finance – Income impacts the types of financial products people may own and use.
  • Product consumption – Household size is key because larger households have higher consumption levels.
  • Shopper study – Stores can vary by region.
  • Entertainment/music – Tastes may vary by race/ethnic group.
  • Insurance – Insurance needs change as life stage changes so controlling on things like marital status or presence of children is important.
  • Toys – Age and gender of children can drive toy preference.
  • Hispanics/Canadians – Language is important because it can drive product choice.

Even when sampling is carefully done there can still be unexpected results. This is why the first thing to check when receiving a data file should be the demographics. Do the demographics look like what is expected of the target group? Next, brand usage and category habits should be examined. Balancing on demographics reduces the chance of brand usage and habit skews, but differences can still occur. For example, having significantly more users of the brand can greatly impact key measures. When differences in demographics, brand usage, and category habits are discovered, the data can be weighted to bring the differences in line with expectations.
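To make the weighting step concrete, here is a minimal sketch of raking (iterative proportional fitting), one standard way to pull a skewed sample back toward known population margins. The target shares and sample data below are invented for illustration; the article does not prescribe any particular tool.

```python
# Minimal raking (iterative proportional fitting) sketch: adjust respondent
# weights until the weighted sample matches known population margins.
# Targets and data are illustrative, not from the article.
import pandas as pd

def rake(df, targets, iterations=20):
    """df: one row per respondent; targets: {column: {category: population share}}."""
    df = df.copy()
    df["weight"] = 1.0
    for _ in range(iterations):
        for col, shares in targets.items():
            # Current weighted share of each category for this variable.
            observed = df.groupby(col)["weight"].sum() / df["weight"].sum()
            # Scale each category's weights toward its population share.
            factors = {cat: share / observed[cat] for cat, share in shares.items()}
            df["weight"] *= df[col].map(factors)
    return df

sample = pd.DataFrame({
    "gender": ["F", "F", "M", "F", "M", "F"],
    "income": ["high", "high", "low", "high", "low", "low"],
})
targets = {
    "gender": {"F": 0.5, "M": 0.5},
    "income": {"high": 0.4, "low": 0.6},  # this sample skews high on income
}
weighted = rake(sample, targets)
print(weighted.groupby("income")["weight"].sum() / weighted["weight"].sum())
```

Each pass rescales the weights within one variable at a time, so after a few iterations the weighted sample matches all of the margins at once.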

Bottom line, sampling needs the same consideration as the rest of the research design and should never be done on auto-pilot.


References

Bainbridge, J. (2016, May). Education not taken into account sufficiently in polls. Research Live. Retrieved from https://www.research-live.com/article/news/education-not-taken-into-account-sufficiently-by-polls/id/5007442


The Analytics of Language, Behavior, and Personality

Computational linguists and computer scientists, among them University of Texas professor Jason Baldridge, have been working for over fifty years toward algorithmic understanding of human language. They’re not there yet. They are, however, doing a pretty good job with important tasks such as entity recognition, relation extraction, topic modeling, and summarization.


By Seth Grimes

Computational linguists and computer scientists, among them University of Texas professor Jason Baldridge, have been working for over fifty years toward algorithmic understanding of human language. They’re not there yet. They are, however, doing a pretty good job with important tasks such as entity recognition, relation extraction, topic modeling, and summarization. These tasks are accomplished via natural language processing (NLP) technologies, implementing linguistic, statistical, and machine learning methods.

Computational linguist Jason Baldridge, co-founder and chief scientist of start-up People Pattern

NLP touches our daily lives, in many ways. Voice response and personal assistants — Siri, Google Now, Microsoft Cortana, Amazon Alexa — rely on NLP to interpret requests and formulate appropriate responses. Search and recommendation engines apply NLP, as do applications ranging from pharmaceutical drug discovery to national security counter-terrorism systems.

NLP, part of text and speech analytics solutions, is widely applied for market research, consumer insights, and customer experience management. The more consumer-facing systems know about people — individuals and groups — their profiles, preferences, habits, and needs — the more accurate, personalized, and timely their responses. That form of understanding — pulling clues from social postings, behaviors, and connections — is the business Jason’s company, People Pattern, is in.

I think all this is cool stuff, so I asked two favors of Jason. #1 was to speak at a conference I organize, the upcoming Sentiment Analysis Symposium. He agreed. #2 was to respond to a series of questions — responses relayed in this article — exploring approaches to —

The Analytics of Language, Behavior, and Personality

Seth Grimes> People Pattern seeks to infer human characteristics via language and behavioral analyses, generating profiles that can be used to predict consumer responses. What are the most telling, the most revealing sorts of things people say or do that, for business purposes, tell you who they are?

Jason Baldridge> People explicitly declare a portion of their interests in topics like sports, music, and politics in their bios and posts. This is part of their outward presentation of themselves: how they wish to be perceived by others and which content they believe will be of greatest interest to their audience. Other aspects are less immediately obvious, such as interests revealed through the social graph. This includes not just which accounts they follow, but the interests of the people they are most highly connected to (which may have been expressed in their posts and their own graph connections).

A person’s social activity can also reveal many other aspects, including demographics (e.g. gender, age, racial identity, location, and income) and psychographics (e.g. personality and status). Demographics are a core set of attributes used by most marketers. The ability to predict these (rather than using explicit declarations or surveys) enables many standard market research questions to be answered quickly and at a scale previously unattainable.

Seth> And what can one learn from these analyses?

People Pattern Portrait Search: personas and associated language use.

As a whole, this kind of analysis allows us to standardize large populations (e.g. millions of people) on a common set of demographic variables and interests (possibly derived from people speaking different languages), and then support exploratory data analysis via unsupervised learning algorithms. For example, we use sparse factor analysis to find the correlated interests in an audience and furthermore group the individuals who are best fits for those factors. We call these discovered personas because they reveal clusters of individuals with related interests that distinguish them from other groups in the audience, and they have associated aggregate demographics—the usual things that go into building a persona segment by hand.
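People Pattern's pipeline is proprietary, so the following is only a generic sketch of the idea described above: sparse factors over a person-by-interest matrix pick out groups of correlated interests, and each person is assigned to the factor they load on most strongly. The interest labels and data are random stand-ins.

```python
# Illustrative persona discovery: sparse factors over a person-by-interest
# matrix, then assign each person to their best-fitting factor.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
interests = ["sports", "music", "politics", "religion", "tech", "fashion"]
X = rng.poisson(1.0, size=(500, len(interests))).astype(float)  # interest intensities

model = SparsePCA(n_components=3, alpha=1.0, random_state=0)
person_loadings = model.fit_transform(X)   # people x factors
factor_weights = model.components_         # factors x interests

for k, row in enumerate(factor_weights):
    top = [interests[i] for i in np.argsort(-np.abs(row))[:2]]
    print(f"persona {k}: driven by {top}")

personas = person_loadings.argmax(axis=1)  # best-fit persona per person
print(np.bincount(personas, minlength=3))  # persona sizes
```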

We can then show the words, phrases, entities, and accounts that the individuals in each persona discuss with respect to each of the interests. For example, one segment might discuss Christian themes with respect to religion, while others might discuss Muslim or New Age ones. Marketers can then use these to create tailored content for ads that are delivered directly to the individuals in a given persona, using our audience dashboard. There are of course other uses, such as social science questions. I’ve personally used it to look into audiences related to Black Lives Matter and understand how different groups of people talk about politics.

Our audience dashboard is backed by Elasticsearch, so you can also use search terms to find segments via self-declared allegiances for such polarizing topics.

A shout-out —

Personality and status are generally revealed through subtle linguistic indicators that my University of Texas at Austin colleague James Pennebaker has studied for the past three decades and is now commercializing with his start-up company Receptiviti. These include detecting and counting different types of words, such as function words (e.g. determiners and prepositions) or cognitive terms (such as “because” and “therefore”), and seeing how a given individual’s rates of use of those word classes compare to known profiles of the different personality types.
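Receptiviti's models are far richer, but a toy version of the counting approach Pennebaker pioneered fits in a few lines: tally the rate of each word class per text, then compare those rates to known profiles. The word lists below are tiny illustrative stand-ins for LIWC-style categories.

```python
# Toy word-class rates: share of tokens falling in each (illustrative) class.
import re

WORD_CLASSES = {
    "function": {"the", "a", "an", "of", "in", "on", "to", "with"},
    "cognitive": {"because", "therefore", "think", "know", "reason"},
}

def class_rates(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return {name: sum(t in words for t in tokens) / max(len(tokens), 1)
            for name, words in WORD_CLASSES.items()}

print(class_rates("I think we lost because of the rain, therefore we train in the rain."))
```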

So personas, language use, topics. How do behavioral analyses contribute to overall understanding?

Many behaviors reveal important aspects about an account that a human would struggle to infer. For example, the times at which an account regularly posts are a strong indicator of whether it is a person, organization, or spam account. Organization accounts often automate their sharing, and they tend to post at regular intervals or common times, usually on the hour or half hour. Spam accounts often post at a regular frequency — perhaps every 8 minutes, plus or minus one minute. An actual person posts in accordance with sleep, work, and play activities, with greater variance — including sporadic bursts of activity and long periods of inactivity.
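That timing heuristic is easy to sketch: the coefficient of variation of the gaps between posts is near zero for a metronomic automated account and large for a bursty human. The data and threshold below are illustrative, not People Pattern's.

```python
# Flag accounts whose posting intervals are suspiciously regular.
import statistics

def interval_cv(timestamps):
    """Coefficient of variation of gaps between posts (timestamps in seconds, sorted)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

bot = [i * 480 for i in range(50)]                      # every 8 minutes
human = [0, 300, 420, 9000, 9300, 40000, 41000, 90000]  # bursts and long silences

for name, ts in [("bot", bot), ("human", human)]:
    cv = interval_cv(ts)
    print(name, round(cv, 2), "likely automated" if cv < 0.1 else "likely human")
```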

Any other elements?

Graph connections are especially useful for bespoke, super-specific interests and questions. For example, we used graph connections to build a pro-life/pro-choice classifier for one client to rank over 200,000 individuals in greater Texas on a scale from most likely to be pro-life to most likely to be pro-choice. By using known pro-life and pro-choice accounts, it was straightforward to gather examples of individuals with a strong affiliation to one side or the other and learn a classifier based on their graph connections that was then applied to the graph connections of individuals who follow none of those accounts.
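A minimal version of that approach: treat followed accounts as binary features, train on people with a clear affiliation, then score everyone else by who they follow. The account names, data, and choice of logistic regression are all invented for illustration.

```python
# Follow-graph classifier sketch: seed labels come from followers of known
# partisan accounts; the model then ranks people who follow neither side.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

train = [
    ({"@prolife_org": 1, "@church_net": 1}, "pro-life"),
    ({"@prochoice_org": 1, "@repro_rights": 1}, "pro-choice"),
    ({"@prolife_org": 1, "@local_news": 1}, "pro-life"),
    ({"@repro_rights": 1, "@local_news": 1}, "pro-choice"),
]
vec = DictVectorizer()
X = vec.fit_transform([follows for follows, _ in train])
clf = LogisticRegression().fit(X, [label for _, label in train])

# Score someone who follows none of the seed accounts directly.
unknown = vec.transform([{"@church_net": 1, "@local_news": 1}])
print(clf.classes_, clf.predict_proba(unknown))  # position on the scale
```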

Could you say a bit about how People Pattern identifies salient data and makes sense of it, the algorithms?

The starting point is to identify an audience. Often this is simply the people who follow a brand and/or its competitors, or who comment on their products or use certain hashtags. We can also connect the individuals in a CRM to their corresponding social accounts. This process, which we refer to as stitching, uses identity resolution algorithms that make predictions based on names, locations, and email addresses and how well they match corresponding fields in the social profiles. After identifying high-confidence matches, we can then append their profile analysis to their CRM data. This can inform an email campaign, or be the start of lead generation, and more.
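The stitching step might be approximated with a scoring rule like the one below: compare a CRM record to a social profile on name, city, and email handle, and keep only high-confidence matches. The similarity measure, weights, and threshold are invented; People Pattern's identity resolution is certainly more sophisticated.

```python
# Toy identity-resolution score between a CRM record and a social profile.
from difflib import SequenceMatcher

def match_score(crm, profile):
    name_sim = SequenceMatcher(None, crm["name"].lower(), profile["name"].lower()).ratio()
    loc_match = 1.0 if crm["city"].lower() == profile["city"].lower() else 0.0
    handle_sim = SequenceMatcher(None, crm["email"].split("@")[0].lower(),
                                 profile["handle"].lower()).ratio()
    return 0.5 * name_sim + 0.2 * loc_match + 0.3 * handle_sim

crm = {"name": "Maria Gonzalez", "city": "Austin", "email": "mgonzalez@example.com"}
profile = {"name": "Maria G.", "city": "Austin", "handle": "mgonzalez"}
score = match_score(crm, profile)
print(round(score, 2), "stitch" if score > 0.75 else "skip")
```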

Making sense of data — let’s look at three aspects — demographics, interests, and location —

Our demographics classifiers are based on supervised training from millions of annotated examples. We use logistic regression for attributes like gender, race, and account type. For age, we use linear regression techniques that allow us to characterize the model’s confidence in its predictions — this allows us to provide more accurate aggregate estimates for arbitrary sets of social profiles. This is especially important for alcohol brands that need to ensure they are engaging with age-appropriate audiences. All of these classifiers are backed by rules that detect self-declared information when it is available (e.g. many people state their age in their bio).
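The rule layer for self-declared attributes is simple to illustrate. A sketch of the age case, with invented patterns: try to read an age straight out of the bio, and fall back to the statistical model only when nothing is declared.

```python
# Self-declared age detection; None means "defer to the regression model".
import re

AGE_PATTERNS = [
    re.compile(r"\b(\d{2})\s*(?:yo|y/o|years old)\b", re.I),
    re.compile(r"\bage[:\s]+(\d{2})\b", re.I),
]

def declared_age(bio):
    for pattern in AGE_PATTERNS:
        m = pattern.search(bio)
        if m:
            return int(m.group(1))
    return None

print(declared_age("Austin native, 27 yo, taco enthusiast"))  # 27
print(declared_age("Dog photos and bad puns"))                # None -> use the model
```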

We capture explicit interests with text classifiers. We use a proprietary semi-supervised algorithm for building classifiers from small amounts of human supervision and large amounts of unlabeled text. Importantly, this allows us to support new languages quickly and at lower cost, compared to fully supervised models. We can also use classifiers built this way to generate features for other tasks. For example, we are able to learn classifiers that identify language associated with people of different age groups, and this produces an array of features used by our age classifiers. They are also great inputs for deep learning for NLP, and they differ from the unsupervised word vectors people commonly use.
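The semi-supervised algorithm itself is proprietary, but classic self-training conveys the general idea: seed a classifier with a handful of labeled texts, then repeatedly fold in its most confident predictions on unlabeled text. All data below is invented.

```python
# Generic self-training loop (not People Pattern's algorithm).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = [("love this team and the game", "sports"),
           ("new album drops friday", "music")]
unlabeled = ["what a match last night", "that concert was loud",
             "the playoffs start soon", "her new single is great"]

vec = TfidfVectorizer()
vec.fit([t for t, _ in labeled] + unlabeled)
texts = [t for t, _ in labeled]
labels = [l for _, l in labeled]

for _ in range(4):  # a few self-training rounds
    clf = LogisticRegression().fit(vec.transform(texts), labels)
    if not unlabeled:
        break
    probs = clf.predict_proba(vec.transform(unlabeled))
    best = int(np.argmax(probs.max(axis=1)))  # most confident unlabeled example
    labels.append(clf.classes_[probs[best].argmax()])
    texts.append(unlabeled.pop(best))

print(list(zip(texts, labels)))
```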

For location, we use our internally developed adaptation of spatial label propagation. With this technique, you start with a set of accounts that have explicitly declared their location (in their bio or through geo tags), and then these locations are spread through graph connections to infer locations for accounts that have not stated their location explicitly. This method can resolve over half of individuals to within 10 kilometers of their true location. Determining this information is important for many marketing questions (e.g. how does my audience in Dallas differ from my audience in Seattle?). It obviously also brings up privacy concerns. We use these determinations for aggregate analyses but don’t show them at the individual profile level. However, people should be aware that variations of these algorithms are published and there are open source implementations, so leaving the location field blank is by no means sufficient to ensure one’s home location isn’t discoverable by others.
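Spatial label propagation is published, as noted, and the core loop is short: users with declared coordinates seed the graph, and everyone else inherits a median of their contacts' known locations, hop by hop. This sketch uses a coordinate-wise median on invented data; real implementations use a true geometric median over much larger graphs.

```python
# Spread declared locations through the social graph, a few hops at a time.
import statistics

known = {"ann": (30.27, -97.74), "bo": (30.25, -97.75), "cy": (47.61, -122.33)}
friends = {"dee": ["ann", "bo"], "eli": ["dee", "cy"], "fay": ["eli"]}

for _ in range(3):  # each pass pushes locations one hop further
    for user, contacts in friends.items():
        points = [known[c] for c in contacts if c in known]
        if points and user not in known:
            known[user] = (statistics.median(p[0] for p in points),
                           statistics.median(p[1] for p in points))

print(known["dee"], known["fay"])  # dee lands near Austin; fay inherits via eli
```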

My impression is that People Pattern, with an interplay of multiple algorithms and data types and multi-stage analysis processes, is a level more complex than most new-to-the-market systems. How do you excel while avoiding over-engineering that leads to a brittle solution?

It’s an ongoing process, with plenty of bumps and bruises along the way. I’m very fortunate that my co-founder, Ken Cho, has deep experience in enterprise social media applications. Ken co-founded Spredfast [an enterprise social media marketing platform]. He has strong intuitions on what kind of data will be useful to marketers, and we work together to figure out whether it is possible to extract and/or predict the data.

We’ve struck on a number of things that work really well, such as predicting core demographics and interests and doing clustering based on those. Other things have worked well, but didn’t provide enough value or were too confusing to users. For example, we used to support both interest-level keyword analysis (which words does this audience use with respect to “music”) and topic modeling, which produces clusters of semantically related words given all the posts by people in the audience, in (almost) real-time. The topics were interesting because they showed groupings of interests that weren’t captured by our interest hierarchy (such as music events), but it was expensive to support topic model analysis given our RESTful architecture and we chose to deprecate that capability. We have since reworked our infrastructure so that we can support some of those analyses in batch (rather than streaming) mode for deeper audience analyses. This is also important for supporting multiple influence scores computed with respect to a fixed audience rather than generic overall influence scores.

Ultimately, I’ve learned to approach a new kind of analysis not just with respect to the modeling but, just as importantly, to consider whether we can get the data needed at the time the user wants the analysis, how costly the infrastructure to support it will be, and how valuable it is likely to be. We’ve done some post-hoc reconsideration along these lines, which has led to streamlined capabilities.

Other factors?

Another key part of this is having the right engineering team to plan and implement the necessary infrastructure. Steve Blackmon joined us a year ago, and his deep experience in big data and machine learning problems has allowed us to build our people database in a scalable, repeatable manner. This means we now have 200+ million profiles that have demographics, interests and more already pre-computed. More importantly, we now have recipes and infrastructure for developing further classifiers and analyses. This allows us to get them into our product more quickly. Another important recent hire was our product manager Omid Sedaghatian. Omid is doing a fantastic job of figuring out what aspects of our application are excelling, which aren’t delivering expected value, and how we can streamline and simplify everything we do.

Excuse the flattery, but it’s clear your enthusiasm and your willingness to share your knowledge are huge assets for People Pattern. Not coincidentally, your other job is teaching. Regarding teaching, to conclude this interview: you’ll speak at the Sentiment Analysis Symposium in New York, and pre-conference you’ll present a tutorial, Computing Sentiment, Emotion, and Personality. [Use the registration code GREENBOOK for a 10% discount.] Could you give us the gist of the material you’ll be covering?

Actually, I just did. Well, almost.

I’ll start the tutorial with a natural language processing overview and then cover sentiment analysis basics — rules, annotation, machine learning, and evaluation. Then I’ll get into author modeling, which seeks to understand demographic and psychographic attributes based on what someone says and how they say it. This is in the tutorial description: We’ll look at additional information that might be determined from non-explicit components of linguistic expression, as well as non-textual aspects of the input, such as geography, social networks, and images, things I’ve described in this interview. But with an extended, live session you get depth and interaction, and an opportunity to explore.

Thanks Jason. I’m looking forward to your session.


GRIT Says Panel Woes Are Jeopardizing MR’s Future. There’s an Answer.

State-of-the-art mobile research is the innovation our industry needs to embrace. But before that can happen, we have to overcome a common misconception about what mobile research really is, and what it can accomplish.


By Michael Smith

A running theme through the 82 pages of the most recent Greenbook Research Industry Trends Report (GRIT) is that the quality of survey sample has eroded to the point of crisis for market research.

GRIT sounds a repeated alarm over what its authors call “a known problem…with no solution in sight.” But there is a solution — all-mobile panels — which I’ll explore in a bit.

First, some facts and quotes from the report that lay out the dimensions of the panel crisis:

  • 38% of GRIT’s more than 2,000 industry respondents expect sample quality to get worse over the coming three years; fewer than 28% believe it will improve – and among clients of market research providers, optimism sank to 23%.
  • “Clients and suppliers agree that sample quality is getting worse, and there is little alignment on what to do about it. This is a perennial topic; when will the industry do something about it?”
  • “The smartphone revolution and declining participation are indeed problems that need to be addressed. Few disagree with this belief, but there is far less consensus around the extent of the problem, its implications and the range of solutions.”
  • “The difficulty of accessing truly representative sample sources… could be viewed as the single largest area of concern for the industry… We are running out of online panelists…”
  • “There are few legitimate excuses one can muster for not confronting the sample problems that plague the industry. There’s no doubt that the solutions are hard, but…far too many people…are dragging their feet.”
  • “The real existential threat to our industry is…the future of research participation. The real question therefore is when will people catch on? When will responses to these questions drive change?”
  • “We believe that the death spiral is accelerating for those researchers who fail to act. The poor experiences they create are starting to contrast markedly against the unique and engaging experiences by new entrants as well as the small number of innovators who’ve been unafraid to embrace change.”

The last sentence points the only way forward. Innovate. Embrace change.

A Formula for Successful Mobile Research

My argument is that state-of-the-art mobile research is the innovation our industry needs to embrace. But before that can happen, we have to overcome a common misconception about what mobile research really is, and what it can accomplish.

The misconception shows up in one of GRIT’s most telling findings: 74% of respondents think they’re already doing mobile research, more than for any other “emerging method.” An additional 17% are considering trying mobile for the first time.

MFour has long struggled to make the industry realize that not all mobile research is created equal. There’s good mobile and bad mobile, mobile that’s artless and mobile that’s state-of-the-art. There’s pure mobile that’s solely geared to smartphones, and diluted mobile that ties smartphone respondents to fading online survey technology. There’s mobile that fails and mobile that works.

MFour Mobile Research, Inc.’s aim since 2011 has been to define what mobile research can and should be, then create the new software and new approaches to panel-building that alone can make mobile work. Success means solving both ends of the equation: developing the right technology and recruiting and cultivating the right panel.

Developing The Right Mobile Technology

We’ve broken with all trappings of online research. Instead, we deploy technology that’s new to market research, the native app. Our proprietary app, Surveys on the Go®, instantly loads an entire survey into the respondent’s phone – including any pictures or multimedia content needed to enhance questions and answers. Embedding the survey into the phone is what makes it “native.”

Why does it matter? Because it frees respondents to complete surveys at their convenience. They don’t have to interrupt what they’re doing. They don’t need to be connected to the internet. Consequently, there’s no risk that the survey will become intolerably slow because of poor connections that lead to snail’s-pace downloads and data transfers. The survey can’t be dropped because of a lost signal.

At the opposite end of the spectrum are hybrid approaches that tether mobile devices to online survey software. A separate, back-and-forth exchange must take place for each and every question and answer. It’s a method that puts the respondent’s experience and the survey’s success at the mercy of internet connections that, as we all know, can bog down or disappear.

Essentially, mobile surveys embedded through a native app don’t have to be short and simplistic. Immune to smartphone signal issues, they can be long and sophisticated, and exploit special smartphone capabilities such as multimedia and geolocation, which allows inviting panelists to surveys while they are still shopping or have just left a store. In our experience, app-based interviews run smoothly, regardless of location, even with interview lengths exceeding 20 minutes. Even at that LOI, we’ve experienced just a 6% drop-off rate. So much for the five-minute survey limit that’s commonly but wrongly posited for mobile research.

As for building a reliable, representative sample, good technology that begets a good respondent experience goes a long way toward drastically improving participation.

Curating A Winning Mobile Panel

With the right mobile technology, it’s possible to recruit the right kind of mobile panel. Ours numbers more than a million active respondents who take surveys solely on their smartphones and other mobile devices. They seem to like it, as reflected in strong ratings and comments at the App Store and Google Play. The mere fact that respondents can give us direct, unsolicited and very public feedback on their survey experiences makes app-based mobile a superior tool for becoming aware of panel problems as they arise – and taking quick action to solve them. It makes us accountable – as any firm that’s serious about its responsibilities and confident in its capabilities ought to be.

There’s much more to tell about the all-mobile approach – not least its ability to reach Millennials, Hispanics and African-Americans who, as GRIT notes, are vital to research but increasingly inaccessible to online surveys.

The Solution to Successful Mobile Research

But my main point is that the industry needs to understand that the available mobile technologies differ drastically. Then firms can make the natural comparisons, try different mobile providers, and see which can deliver a good panel and fast, reliable, representative data.

I think the most important sentence in the new edition of GRIT comes near the end, in a section called “Opportunities for the Market Research Industry” that examines ways forward from the current dead end.

“Mobile research has been seen as an opportunity for many years, but there is a sense that now we are at the stage where we can really start to exploit mobile data gathering techniques.”

Before you can exploit mobile techniques you must get to know one technique from the next. You have to stop stereotyping all iterations of mobile research as prone to the same limitations and drawbacks.

GRIT has done our field a great service by refusing to sugarcoat the sample problem and by sounding a clear alarm that something has to be done about it. There’s just one point I dispute.

I wouldn’t say that market research’s pervasive sample woes are “a known problem…with no solution in sight.” There is a solution, but until now it has been overlooked.

That appears to be changing. MFour’s year-by-year growth since we debuted our native app in 2011 suggests that an increasing number of researchers are starting to make the kinds of distinctions about mobile research that need to be made.

Market researchers need to do some research in their own backyards, to gain insights into their own most crucial interests – especially, as GRIT makes clear, when the industry’s health depends on it. Getting a more sophisticated understanding of mobile, market research’s most widely-adopted but least understood “emerging method,” would be a good start.


The Top 40 Most In Demand Research Suppliers at IIeX North America

An analysis of the 221 private meetings between research buyers and suppliers at IIeX North America.


Whew! It’s been a breakneck past month as we raced towards the biggest and best IIeX event yet: IIeX North America 2016. Last week all the hard work of many folks paid off handsomely, and we had a fantastic event here in Atlanta. With over 800 attendees from all around the world representing 453 unique organizations (including 115 different client-side companies) and almost 200 speakers over two and a half days, it was jam-packed with the extraordinary at every level.

If you missed it, or want to be reminded of what a great time you had, check out this sizzle reel our video team at Smilesstyles Media put together:

[Video: IIeX North America 2016 sizzle reel]

Of course, the lifeblood of IIeX is connecting buyers with suppliers and supporting business, so I thought it would be interesting to take a look at our Corporate Partner program meetings and see what we can glean about what clients were looking for at the conference. If you’re not familiar with the CP program, it’s pretty simple: we work with buyers of research and investors by giving them the opportunity to meet with any potential suppliers/partners they pick during the conference. They can choose from any company attending, and there is no charge for anyone to participate. It’s one of the most popular aspects of IIeX events for all involved, and we’re thrilled to be able to help the industry grow in this way.

In Atlanta, 15 partners met with 112 different supplier companies for a total of 221 private meetings! That’s an awful lot of business value generated! In many cases the Corporate Partners sent teams of people to cover both the meetings and to absorb the content from the agenda or take advantage of the informal networking and exhibitor discussions, ensuring that no stone was left unturned in getting everything they could from the conference.

The participating partners in Atlanta were:

Burke
Clorox
CVS
Facebook
General Mills
Harley Davidson
Keurig
Lowe’s
Merck
Nestle Purina
P&G
Panera
Transamerica
Twitter
WPP

Now here is the interesting part: what were they looking for?

We categorized every supplier invited to the meetings and added up the number of meetings for each group. Here is what we found:

 

[Chart: IIeX CP Meetings]

A few notes:

  • Before anyone asks in the comments, no, we will not divulge supplier names here. You are welcome to make guesses based on public information on attending companies and exhibitors, but we will neither confirm nor deny.
  • Neuromarketing includes any method that is based on neuroscience or cognitive science, not just EEG-based research. It does not include Facial Scanning or the application of Behavioral Economics models; they are counted separately.
  • Online Qual includes both traditional approaches and emerging “agile” or automation solutions, as well as hybrids that combine qual and quant in an online setting. It does not include Ethnography or Communities; the companies in this cluster use either a group or IDI model in an online environment.
  • Innovation Consultancies are firms that either focus on NPD or innovation as their primary offering, or offer tools focused on those fronts. Many here are also Full Service firms, but I’m using a bit of insider knowledge from working with the Corporate Partners to give context to why they were selected, and in all cases it was due to a need to find new product/service innovation models. Other companies in other categories, such as Prediction Markets, could also fit here, but since they offer a very specific approach I kept them separate.

So what’s the big picture here? Here’s my take on the highlights.

Since the inception of the CP Program, Neuromarketing has been at the top of the list. Despite only showing tepid growth in GRIT adoption rankings, interest in the value that nonconscious techniques can deliver for insights remains very high and the client-side continues to explore what suppliers can offer here.

Online Qual has been around for many years, but, as with online surveys, not much speed or cost efficiency was gained by what amounted to a simple form-factor shift. A few years ago that began to change in quant with the advent of DIY platforms, micro-surveys and, more recently, automation, and the same thing is now happening with qual. Advances in recruiting tech (sample APIs, for instance), as well as the integration of new tools such as video analysis, text analytics, and AI-driven probing, are making qual much more efficient, cheaper, and closer to real-time. That is driving interest in the next generation of online qualitative tools.

We are also witnessing the emergence of the next generation of social media analytics offerings, with new players entering the market more focused on using social data to drive segmentation, nonconscious measurement, data synthesis, or advanced analytics. Clients have gone through the hype cycle as well; they understand the value and use cases for social data (or text in general) and are now eager to explore how the new class of tools can deliver more value than their predecessors.

Everything else on the list shows continued interest in both established and emerging approaches on a more focused level. Obviously Analytics, Gamification, Research Automation, Sample, Image-based Data Collection, Shopper Insights, Video-based Research, Behavioral Economics Research, Facial Scanning, and Mobile Research continue to be of high interest across the board.

Just as the GRIT Emerging Technology adoption rankings can be used to help gauge where investment dollars should be spent, I think this analysis of where client interest lies at IIeX is another vital data point to consider during the strategic planning process. Of course clients come to IIeX looking for “new stuff”, and we also tend to attract suppliers that fit that mold, so there is a certain amount of confirmation bias here, and we should not assume that traditional suppliers or modes are not in demand; of course they are. However, there is no denying that at the very least this list points to where the industry is going, and it’s well worth exploring what this means for your business.


What Automation Means for Healthcare Market Researchers

What exactly does automation mean for healthcare market researchers and what kinds of automation are already here or on the horizon that you should be watching?


By Tom Lancaster, Chief Technology Officer, InCrowd

The latest GRIT report on 2016 market research industry trends is out, and I’m really excited to see a special new section on “Adoption of Automation.” I’ve been thinking a lot lately about automation and how it can and should be applied to healthcare-related market research. (See my guest post on marketresearch.com.)

Automation and the prospect of machines doing what humans do is of course nothing new. For businesses, analytics and reporting have long been automated across sectors and industries (think about the advent and promise of big data!). For busy professionals, word processors instead of typewriters and meeting-scheduling applications have been a boon for productivity and efficiency.

For market researchers in particular, the new GRIT report notes that charts, infographics, analysis of text, survey, and social media data are already popular automated features.

The report goes on to say, “what can be automated will ultimately be automated as well.” I couldn’t agree more. But what exactly does this mean for healthcare market researchers and what kinds of automation are already here or on the horizon that you should be watching?

Here are four important market research features to think about when considering automated solutions:

  1. Sampling—How do you fill a survey with the right respondents, and how do you do it quickly? And, with physicians busier than ever, getting them to provide high-quality answers while on the go is also part of the puzzle. The GRIT report notes that “while a third of respondents already use automated sampling, many [are] concerned about the impact of this automation on data quality.” Sophisticated sampling algorithms, however, now let you optimize for speed and response quality. The better ones do this through a “trickle” sample methodology that goes out to smaller subsets of respondents at a time in order to reduce the number clicking through only to discover that the survey has closed. Such sampling automation means a better survey experience for physicians, which in turn leads to faster responses and increased rates of participation, all in the service of higher data quality. (A sketch of this batching logic, combined with the validation in item 2, follows this list.)
  2. Survey Data—One reason traditional surveys take weeks to compile and report out on is the enormous amount of time and human resources spent on filtering and cleaning data sets. New survey technology applications deal with this problem head on by validating responses as they come in. This is also known as real-time data quality assurance, and it uses software to clean data sets during fielding or as soon as surveys close in order to provide high-quality, real-time survey data. Survey technology providers are mastering the art and science of survey data analysis, and market researchers are adopting this approach in large numbers, with nearly 42 percent using some form of survey data automation, according to the GRIT report.
  3. Tracking Studies—The social media timeline—with its algorithms to automate shares, tags, and tweets—is a great analogy for the tracking study. These are your micro-moments captured over time so you can focus on finding joy and meaning in those streams of thoughts and pictures. Market research applications are applying similar timeline algorithms to the tracking study. Automation is able to take the onerous work out of repetitive fielding of tracking surveys, aggregating responses, and providing visual comparisons of multiple waves. In other words, machines do the large quantity of tedious work so market research teams can spend their time defining and analyzing KPIs—and making smart decisions with that data.
  4. Translation & Transcription—In the last 10 years, there has been an explosion of “application program interfaces,” or APIs, that allow different software programs to connect with each other. Think of looking up that new restaurant in Yelp using Google Maps—this is all done through automated APIs for a seamless user experience that allows you to stay in Yelp the entire time. The same is true for quantitative surveys and qualitative interviews that require translation and/or transcription. The world of APIs allows us to create a unique “network of services” that collects survey feedback, uploads the data files to a third-party transcription or translation service provider, and receives the translated materials—all through a single user interface.
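To make items 1 and 2 concrete, here is a minimal sketch of trickle sampling with in-flight validation: invite small batches, validate responses as they arrive, and stop inviting once the quota is met. The quota, batch size, validity checks, and the `fake_send` stand-in for a survey platform API are all invented; this is not InCrowd's implementation.

```python
# Trickle sampling with real-time response validation (illustrative only).
import random

QUOTA = 100       # completes needed
BATCH_SIZE = 10   # invitations per trickle

def is_valid(response):
    # Real-time quality checks: too fast, or straight-lined answers.
    return response["seconds"] >= 60 and len(set(response["answers"])) > 1

def field_survey(panel, send_invitations):
    completes = []
    candidates = list(panel)
    random.shuffle(candidates)
    while len(completes) < QUOTA and candidates:
        batch = [candidates.pop() for _ in range(min(BATCH_SIZE, len(candidates)))]
        for response in send_invitations(batch):
            if is_valid(response):
                completes.append(response)
    return completes

def fake_send(batch):  # stand-in for a real survey platform API
    return [{"id": r, "seconds": random.randint(20, 300),
             "answers": [random.randint(1, 5) for _ in range(8)]} for r in batch]

print(len(field_survey(range(500), fake_send)))
```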

Such automation is really about bringing the human factor into your work. It’s how innovation and technology are in fact stimulating a demand for skills only humans have: creative thinking, critical decision-making, complex human-centered analysis.

An example from the past tells us this can happen: When automatic teller machines (ATMs) came out, bank employees were freed from conducting basic transactions behind a counter to engaging in higher-value responsibilities like sales and financial advising—activities that helped build customer loyalty and the company’s brand.

Imagine what automation in market research would allow you to do.

 


It’s all in the Process: 6 Steps for Successful Market Research

It is vital to have a unique, collaborative, client-centric approach to market research which will enable better, actionable results with higher ROI for clients.


By Tim Glowa, Bug Insights

What’s wrong with what we do now?

Typical market research projects follow a predictable process – the client tells the consultant what they want to learn (most often via a written brief), and the consultant designs a survey to share with the client for review. Through a series of meetings and survey iterations with the research client, the consultant develops a final survey, then fields it, collects the data, analyzes it, and presents the results (or, more often, emails the results) in a report. Job done.

But wait a minute. We think this process is flawed; indeed, we think it often results in poor research. In our view, it is vital to have a unique, collaborative, client-centric approach to market research which will enable better, actionable results with higher ROI for clients.

If you want to get the most out of the market research consultants you hire, you should follow the process we share in this article: just six steps that market research clients should take in order to get the best results.

Step One

When a client hires us, the first thing we do is have a kickoff call to make sure that we really understand the client’s problems, what they have tried in the past, what hasn’t worked, and what has. We want to learn as much as possible about the company, so we can address their greatest concerns. We call it “being smart” about a company, their customers, and their competitors. In this kickoff call, it is important that we also learn hard metrics like defection levels, market share, penetration, average tenure rates, etc. After we gather all of this information, we ask what actions they plan to take based on the data we provide for them. This is important, so that the client has a clear understanding of their ideal outcome, and so do we.

Step Two

After the kickoff call, our team constructs a straw man study based on the metrics the client shared with us. The straw man study is a great way to establish a starting point with the client. Typically, the straw man is a conjoint study that encompasses levels, attributes, demographics, and attitudinal programming. We have found that creating a foundational study is significantly more useful than entering the next meeting with a blank sheet of paper. The straw man gives the client something to react to. During step three, we want them to scrutinize the proposed study, keeping what they like, getting rid of what they don’t, and adding what they need.

Step Three

Our third step in the process is a survey design workshop with the client. We encourage them to bring anyone into the meeting who may be administering the survey, or people who will be implementing the results, often called the “end users”. Typically, a person from finance will join as well. This portion of the process is unique, but critical for positive market research outcomes. Since the client knows their issues from the inside, we need their input into what will work and what will not. Collaboration between the client and market researchers is imperative during this step for a successful and fulfilling study.

The main goal of this workshop is to have 95% of the survey finished. We want to test something that is as broad as possible, but still attainable.

Step Four

Now that the survey is finalized, we work with the client, or somebody in finance, to make gross profit estimates. During this step, we put all the features we are testing into the conjoint study. This leads into optimization, which allows us to estimate gross revenue and gross profit for the changes we may suggest.
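As an illustration of how a conjoint study feeds those estimates, here is a minimal multinomial-logit share simulator: part-worth utilities turn feature bundles into preference shares, which can then be multiplied out to revenue. All part-worths, products, and market figures below are invented.

```python
# Share-of-preference simulation from (invented) conjoint part-worths.
import math

partworths = {"premium_support": 0.8, "basic_support": 0.0,
              "annual_billing": -0.3, "monthly_billing": 0.0,
              "price_49": 0.0, "price_59": -0.6}

def utility(product):
    return sum(partworths[feature] for feature in product)

def shares(products):
    expu = {name: math.exp(utility(p)) for name, p in products.items()}
    total = sum(expu.values())
    return {name: v / total for name, v in expu.items()}

offer = {
    "current":  ["basic_support", "monthly_billing", "price_49"],
    "proposed": ["premium_support", "annual_billing", "price_59"],
}
s = shares(offer)
print(s)
# Gross revenue estimate: share x market size x price (all figures invented).
print({"current": s["current"] * 10000 * 49, "proposed": s["proposed"] * 10000 * 59})
```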

Step Five

During this step, the survey is programmed; we gather and analyze the data, and create the choice modeling. Ironically, with most research programs this is where most of the time and effort is spent, with many of the previous steps skipped. But we cannot create the choice modeling our clients need without having gone through the process to this point.

Step Six

At the end of the project, we share the process and findings with the whole group that participated in the survey design workshop. At this point, the client has been part of the whole process, so we are not presenting new research to a new audience. They are already invested in the process, and are excited to see the results. Here, we share findings and make suggestions, and the client builds plans.

Satisfaction through a better process

Each one of these steps contributes to overall market research satisfaction and success. Too often, clients are left with stacks of data without any actionable steps to take. Those piles of data eventually collect dust, and little (if any) organizational change is implemented. This is a core issue for the market research industry, but also for clients. If you don’t want to see your research dollars wasted, you need a better process.


How Would Silicon Valley Reinvent Market Research?

Kristof De Wulf shares his key takeaways on the future of the market research industry through a Silicon Valley lens.


By Kristof De Wulf

The future has never been more fascinating. Ten years from now, we expect to be able to manipulate DNA the same way as a Word doc, to buy the power of a human brain for just $1,000, and to see a robot journalist win a Pulitzer. That’s why I was extremely excited to join a group of 20 fellow entrepreneurs on a 5-day trip to Silicon Valley at the end of last year. We had the unique opportunity to discover the secret fabric of the Silicon Valley ecosystem and reflect on what it means for our very own future. Here are my key takeaways on the future of the market research industry through a Silicon Valley lens.

Robo researchers

Back in 1930, the economist Keynes predicted that by 2030 we would only work 3 hours a day because machines would be doing our jobs for us. While robots and artificial intelligence are still in the early stages of development, automation will undoubtedly take over from humans. We are about to witness a world where having robots around us will be as familiar as having a coffee machine in our kitchen. Oxford University estimates about half the existing jobs will be gone in less than 20 years from now. With intelligent machines getting smarter not just by learning from humans but also from their machine peers, technology will destroy more jobs than it creates, which is referred to as the ‘great decoupling’. What’s more, machines never get tired or sick, don’t waste time on Facebook, don’t choke under pressure, and don’t join a union.

So expect robo researchers to come for your job soon. They will be able to analyze data in a more sophisticated and context-dependent way using machine learning (think about the way Google’s AI won a game of Go by defying millennia of basic human instinct); they will replace human moderators, having passed the Turing test multiple times already (think about the near-human cognitive capabilities of Amelia, an AI system learning how to perform the work of call center employees and possibly taking out 250 million jobs in no time); and they will auto-code and interpret speech and text as well as visual information (think about Google’s PlaNet computer that can tell where nearly any photo was taken). While most of us are still convinced the research future will be about blending robotized automation with humanized interpretation, I expect it to be heavily skewed towards smart machines rather than balanced.

Platform-driven

During our visit to Andreessen Horowitz, Benedict Evans talked about the power of mobile, describing it as “the first universal tech product”, a fundamental change compared to previous tech innovations. By 2020, 80% of adults are expected to have a smartphone. Mobile is the new scale, accelerating the creation of many more components, smaller and cheaper. Smartphone components are becoming the Lego bricks for new technologies such as virtual and augmented reality, wearables, drones, etc. As such, mobile is a new platform, not a device. Cars, for example, are being transformed into ‘smartphones with wheels’, with the future being electric, on-demand, and autonomous. I expect a similar evolution for the market research industry, with the traditional and labor-intensive market research value chain most likely being replaced by platforms bundling mobile, peer-to-peer sharing, open data, and ‘always-on’ in a unique new mix. ZappiStore is already a good example of the direction we will be moving in.

Native

Consumers have cultivated a power never seen before. Consumer emancipation is reaching new heights, with consumers capable of raising their voices through social media, of knowing more about themselves and the products they use through wearable and sensor technology, and even of creating the products they want themselves with increasing access to maker tools and ‘how to’ information. It implies that research will need to revolve around consumers and their lives. I anticipate that native research will grow really big, just the way native advertising is gradually displacing traditional advertising. Our visit to Indiegogo is a good example; founded in 2008, Indiegogo is now one of the global leaders in crowdfunding. By democratizing funding (400,000+ campaigns per year), Indiegogo helps fill a massive gap in providing access to capital, helping small entrepreneurs as well as larger corporations like GE, Sony, or Hasbro to source, improve, sponsor, and/or validate innovations. Indiegogo applies native research as, in just 30 days, companies can get critical feedback on features, price, messaging, and so on, while at the same time building awareness for the new product. It taps into the logic of behavioral economics to test and optimize the performance and go-to-market strategy of new products. In addition to asking what people want, our industry needs to learn from companies such as Indiegogo to tap into real rather than claimed or intended behavior.

On demand

The on-demand society in which we live will hugely impact our professions. This was well demonstrated during our visit to Quid, a company focused on solving the problem of dealing with huge amounts of data, 80% of which is unstructured. With most existing tools unsuited to complex questions and providing no context, Quid reads and organizes information on a massive scale and helps you see connections. Augmented intelligence is what they believe in: a blend of computer intelligence with human intelligence. An algorithm reads texts from public economic data (patents, private investments, newspapers, etc.), public consumer data (social media), or data dragged in from specific owned data sources; removes meaningless words; creates a mathematical fingerprint of each document; and develops connections between different fingerprints. The result is an overview of different networks containing massive amounts of documents. Really cool is that, depending on the similarity of the content that is scraped, Quid gives a color code to content which is similar in nature, thus creating natural ‘segments’ in the network (e.g. interpretation of facts, objective facts, discussions related to impact on politics, etc.). Following the Quid example, the future of the market research industry will be much more visual, slicing and dicing information based on different tags, in line with how our human brains work. In the future, I expect to see a big shift in the way we consume data, moving from static, one-off, and textual information to dynamic, continuous, and visual information.
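A rough approximation of that pipeline fits in a short script: TF-IDF vectors play the role of the mathematical fingerprints, cosine similarity supplies the connections, and a clustering step stands in for the color-coded segments. The documents and threshold are invented, and Quid's actual system is of course far more elaborate.

```python
# Fingerprint documents, link similar ones, and cluster into "segments".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import AgglomerativeClustering

docs = [
    "patent filed for solid state battery cathode",
    "startup raises series A for battery recycling",
    "senate debates tariff impact on manufacturers",
    "new tariff rules spark political backlash",
]
fingerprints = TfidfVectorizer(stop_words="english").fit_transform(docs)
similarity = cosine_similarity(fingerprints)

edges = [(i, j, round(similarity[i, j], 2))
         for i in range(len(docs)) for j in range(i + 1, len(docs))
         if similarity[i, j] > 0.05]  # connect sufficiently similar fingerprints
print(edges)

labels = AgglomerativeClustering(n_clusters=2).fit_predict(fingerprints.toarray())
print(labels)  # the "color codes": one cluster label per document
```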

The future is unfolding quicker than we all think, with exponential technologies driving the massive change. But, as one of the speakers said during our Silicon Valley trip: “It is never too late to be early”. Make sure crazy ideas have a place in your company and apply a ‘just do it’ philosophy: sometimes just trying is better than thinking too hard about how to do things. Good luck in embracing the Silicon Valley DNA!

 


The Return on Investment from Insights Part 1 – Why You Need to Care

What would happen to YOU if your CFO / your clients’ CFO’s demanded to know, right here, right now, the return on investment from money spent on customer insight and market research last year?


Work on the client-side and want to grow your budget? 

Work on the agency-side and want to win more business? 

The answer to both your dreams might well be an ROI AUDIT.

I know you’re probably pressed for time, and it will take you 5 to 10 minutes to read this blog, so please answer these two questions to evaluate whether this post is worth your time:

  1. Do you believe it is important for market research and customer insight to deliver a strong return on investment (ROI)?
    1. If yes; answer Q2
    2. If no; go read something else
  2. Do you currently measure the return on investment you deliver?
    1. If yes; congratulations! Go read something else
    2. If no; invest 5 to 10 minutes of your time. It might just be worth it.

On The Way Up Or On The Way Down?

A cautionary tale* illustrating that how you deal with the ROI question today might well affect your future quite dramatically

One month earlier at the national association’s CEO networking evening…

Bob, CEO at Great Tools Research, a mid-sized research company, was chatting happily with Susan, his counterpart at Big Impact Research, explaining what a great year his company had had last year, growing its margin from 12% to 15%. Little did he know how things were about to change…

Read below to follow Bob’s path

∞∞∞∞∞∞

Sharon, V.P. Customer Insights at Tight Ship Inc, was sitting alone in her office, head in hands. Four weeks ago she had learnt that her budget proposal had not gone through and that instead her budget was to be cut in half. As a result, she had just had to fire two of her four team members, and she had just got off the phone with Bob, CEO at her primary research vendor, Great Tools Research, to tell him that he needed to come up with a proposal for delivering as much as they could on a 50% reduced budget, and that she had no choice but to put her account out to tender.

∞∞∞∞∞∞

Two months ago everything had been so different. She thought back to her appraisal meeting with her boss, the Marketing President, and the nice bonus she had received for meeting her key targets of getting 10% more projects out of her budget and creating a fantastic new customer insights portal. She recalled the meeting with the Account Director at Great Tools Research, where they had talked about setting up a customer panel and introducing a new interactive reporting tool this year. She sighed knowing that both projects would now have to be shelved.

∞∞∞∞∞∞

Bob was sitting alone in his office, head in hands. He had just had to fire ten of his staff. Two weeks ago he had learnt that his two biggest clients were both cutting their research budgets in half, leaving him with a 15% revenue gap versus budget, and his board had agreed that serious cost-cutting was needed to maintain margins.

Four weeks ago everything had been so different. Generous bonuses had been given to the two account directors for increasing the margin on these established “cash cow” clients, and Bob had announced to his staff the plans to invest in new technology, as well as in the hiring of a new Vice President of Business Development.

∞∞∞∞∞∞

Bob was still wondering what the heck had happened at Tight Ship Inc, such a long-standing and reliable client, and Sharon was still in disbelief that half of her budget had been given to the Digital Marketing team, kicking herself for not seeing this coming.

∞∞∞∞∞∞

Six months ago Tracy had joined Tight Ship Inc as the new CFO. Her first job was to ask all of the Presidents in the company to provide a report on the return on investment from each of their budget lines. Unfortunately, the Marketing President was not able to provide any hard numbers for the customer insight budget line, since they had always viewed this as a cost item and not an investment and therefore had no ROI metrics in place.

Upon analyzing the ROI reports, Tracy could clearly see the positive impact of digital marketing on the top and bottom lines, but could not see the impact of their investment in customer insight in the same way. Therefore, cutting the customer insight budget in half and reallocating the spend to activities with a demonstrable return on investment was a “no-brainer”, which would immediately demonstrate to her boss her effectiveness as a CFO who drives profitable growth. Tracy set the wheels in motion, which would lead to such a negative impact on Great Tools Research and the people working there.

Read below to follow Susan’s path

∞∞∞∞∞∞

Mark, V.P. Customer Insights at Ahead of the Comp Inc, was having lunch with his team. The team were delighted to hear about their department’s budget increase and Mark’s plans to hire a new Insights Manager. Earlier that day he had spoken with Susan, CEO at his primary research partner, Big Impact Research, to share the good news with her and discuss how to use the extra budget to deliver even greater value to the business.

∞∞∞∞∞∞

Mark reflected that only two months earlier he had proudly presented his latest ROI Audit to his boss, the Marketing President, and received a bonus for meeting his key targets of increasing Customer Insight ROI by 20% and creating a new customer insights portal. He recalled the meeting he had had with the Account Director at Big Impact Research, where they had gone through the agency’s own ROI audit report and agreed on how to adjust the spend for the year ahead in order to focus even more resources on the high-ROI-delivering activities.

∞∞∞∞∞∞

Susan was sitting in the conference room with her team enjoying a glass of champagne. She had just hired four new researchers onto her key account team. One month ago she had learnt that her two biggest clients were both increasing their research budgets by 20%, and the board had agreed to further investments in order to deliver even more value to their key clients. The key account teams had just presented their plans for improving the return on investment delivered to their clients, and everyone had agreed on the internal changes needed to deliver on those plans.

∞∞∞∞∞∞

Susan reflected on how fortunate her company was to have clients who understood the importance of a partnership based on trust and transparency. Mark reflected on how fortunate he was to have a partner agency that shared his goal of increasing Customer Insight ROI and worked openly with him to audit the return on investment it delivered.

∞∞∞∞∞∞

Six months ago Simon had joined Ahead of the Comp Inc as the new CFO. His first job was to ask all of the Presidents in the company to provide a report on the return on investment from each of their budget lines. The Marketing President presented the data from their recent Customer Insights ROI Audit, which showed that last year Customer Insights had contributed towards both sales growth and cost savings, delivering an overall 27X return on investment.

Upon analyzing the ROI reports, Simon could clearly see the positive impact of Customer Insight on both the top and bottom lines, and therefore had no problem recommending that the customer insight budget be increased by 20%. Increasing spend on an activity with such a demonstrable return on investment was a “no-brainer”, which would immediately demonstrate to his boss his effectiveness as a CFO who drives profitable growth. Simon set the wheels in motion, which would lead to such a positive impact on Big Impact Research.

∞∞∞∞∞∞

One month later at the national association’s CEO networking evening…

Susan looked across the room at Bob, feeling slightly sorry for him, but at the same time wondering which of his clients she would be calling in the morning. THE END.

* This is a work of fiction. Names, characters, businesses, places, events and incidents are either the products of the author’s imagination or used in a fictitious manner. Any resemblance to actual persons, living or dead, or actual events is purely coincidental.

∞∞∞∞∞∞

Fact or fiction?

Definitely a bit of both, no doubt. But the question is which of these two scenarios best reflects what would happen to YOU if your CFO, or your clients’ CFOs, demanded to know, right here, right now, the return on investment from money spent on customer insight and market research last year?

Put simply:

If you work client-side, what return on investment did your customer insights budget deliver to the business?

If you work agency-side, what return on investment were you able to help each of your clients deliver from the money they spent with you?

In either case, assuming that the 80/20 rule also applies to research budgets, do you know which 20% of the budget delivered 80% of the return on that investment? Do you also know why?
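
To make the 80/20 question concrete, here is a minimal, hypothetical sketch in Python. The budget lines, spend and return figures are all invented for illustration; the point is the ranking logic, not the numbers.

```python
# Hypothetical sketch: given estimated spend and return per budget line
# (invented numbers, not real data), find how small a share of spend
# produced 80% of the total return.
projects = {  # project: (spend in $, estimated return in $)
    "concept tests":  (150_000, 4_000_000),
    "customer panel": ( 80_000, 2_000_000),
    "brand tracker":  (200_000, 1_500_000),
    "segmentation":   (120_000,   300_000),
    "ad-hoc surveys": (100_000,   200_000),
}

total_spend = sum(s for s, _ in projects.values())
total_return = sum(r for _, r in projects.values())

# Rank budget lines by return per dollar, then accumulate until 80% of
# the total return is accounted for.
ranked = sorted(projects.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)

cum_spend = cum_return = 0.0
for name, (spend, ret) in ranked:
    cum_spend += spend
    cum_return += ret
    print(f"{name:14s} ROI {ret / spend:5.1f}x | "
          f"{cum_spend / total_spend:4.0%} of spend -> "
          f"{cum_return / total_return:4.0%} of return")
    if cum_return / total_return >= 0.8:
        break
```

If the first line or two of that ranking account for most of the return, you have found your 20%, and a strong argument for where next year’s budget should go.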

Are you a Customer Insights budget holder?

If so, then I recommend you run an ROI audit, if you do not already do so. Even if your CFO isn’t currently asking you for an ROI report, I believe it is only a matter of time before she/he does, and when that time comes, unlike Sharon in the story above, you need to be prepared.

If your company currently considers customer insights a cost item rather than an investment, then preparing and presenting an ROI audit report to senior management will help change that viewpoint and help protect, if not grow, your budget.

If you happen to work agency-side…

…with client budget responsibility, then my request to you is not just to wait passively and see if your clients come to you with an ROI audit request, but to proactively seek to work with your clients to understand and improve the return on investment you are helping them create.

Forming a win-win ROI partnership to co-create a brighter future

Over the next few years, I believe that more and more companies will make customer-centricity an even more central part of their company’s strategy, and more and more companies will see the effective use of market and customer data and insights as a key driver of competitive advantage.

Customer Insight budget-holders therefore have an even more critical role to play in guiding business decisions, but they will need to prove their worth and demonstrate the return on investment they are delivering now and can deliver in the future. The onus is on those working agency-side to help give them the ammunition they need to do this.

I ask you to consider making an ROI audit the foundation of a strong client-agency partnership, one which unites everyone behind a common objective and helps ensure that everyone’s budgets grow.

Part 2 of this post shares advice on how to set up an ROI Audit and will be released after IIeX North America. If you are going to be in Atlanta for IIeX and want further insight into this topic, please come listen to the panel discussion in track 2 on Tuesday from 10.40 to 11.20, during which Andrew Cannon (GRBN), Simon Chadwick (Cambiar), Kathy Cochran (Boston Consulting Group), Lisa Courtade (Merck) and Alex Hunt (Brainjuicer) will share their thoughts on what insights teams and research companies alike should do to add value and deliver a stronger return on investment. Welcome!

 


5 Reasons Online Studies Fail

Based on his experience over the last 12 years as an online qual researcher, Ray Fischer shares five reasons why most online qual studies do not deliver on expectations.


By Ray Fischer, CEO, Aha! Online Research Platform

Let’s admit it: many market researchers are either uncomfortable with online qual or they don’t get the results they expect, and therefore shy away from it.  That’s too bad, because the new wave of online qual tools and techniques is producing incredible insights for clients who have discovered the benefits of both the technology and best practices built on years of learning.  Based on my experience over the last 12 years as an online qual researcher, I have seen five reasons why most online qual studies do not deliver on expectations.  All of them are fixable…with the right amount of experience and skill.  Here is my take:

1. Not Enough Experience

Online qual can be a black box if you have not used the method before.  It is a bit like skydiving – you might want a guide attached to you on your first jump or two, but after that you’ll feel like an old pro.  In those early studies, make sure your platform provider is committed to your success; they should offer study design consulting and a dedicated project manager to share best practices along the way.  The same is true if you are new to online qual or are simply trying a different platform.  All platforms are not the same, nor are the services and support they offer.

2. A Boring Activity/Discussion Guide

The discussion (or activity) guide needs to be clear, concise, and dynamic.  Go beyond a battery of open-ended questions and use the variety of projective techniques that modern platforms offer.  A study that goes beyond open-ends and mixes in respondent video, collages, perceptual maps, social activities, and storytelling will make things SO much more interesting to your respondents and your clients.

3. Committee Approach to Study Design

Avoid the committee approach where everyone gets to add in everything they could ever want to learn in one study.  Don’t let it become a free-for-all.  I’ve seen it more than a few times: you create a mountain of unstructured data loaded with redundancy and irrelevance, ultimately detracting from your objectives.  Not only will the data haul yield insufficient results, it will also bore your respondents.  A key sign that the committee has left its mark is when you see respondent comments like “I just answered that question…3 times!”  Stick to your guns and assure clients that the insights will come out if the questions and projective exercises are well thought out and diverse.

4. Lack of Communication

I firmly believe that communication with respondents, from the beginning of the recruiting process through the completion of the study, is key.  Be clear with respondents in the screener, sharing exactly what the study is about, why they are important to the research, how much time is expected of them, how many days the study will take, and which activities they are required to complete in what time frame.  Moderators – send a morning note to everyone each day of the study, giving them group encouragement and letting them know what they are doing on that particular day.  Send at least one probe to all respondents on day 1 telling them, personally, how much you appreciate their contributions.

5. Insufficient Incentives

Nothing will discourage a respondent more than doing a lot more work than they anticipated when recruited.  Typically, a multi-day study should require a respondent to commit at least 30 minutes per day.  If the study is interesting and well-designed, respondents will often spend a lot more time sharing because they want to, not because they have to.  Typical incentives for online qualitative tend to be $100 for 3 days, $125 for 4, and so on.  Store trips including video and/or pictures with added open- and closed-ends should add an additional $25 or more.  Of course the numbers can vary, but these are pretty tried and true guidelines.  I have also heard of a few clients who use $0.50 to $1 per minute of expected respondent engagement.  I encourage higher incentives if the budget will allow.
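
As a rough illustration of the arithmetic above, here is a small Python sketch. The default rate and store-trip bonus are assumptions taken from the guidelines in this post, not industry standards; adjust them for your market and audience.

```python
# Rough sketch of the incentive guidelines above; the rates are assumptions
# from this post ($0.50-$1.00 per expected minute, ~$25 extra for store
# trips), not a standard.
def suggested_incentive(minutes_per_day: int, days: int,
                        rate_per_minute: float = 0.75,
                        store_trip: bool = False) -> int:
    """Estimate a per-respondent incentive for a multi-day online qual study."""
    base = minutes_per_day * days * rate_per_minute
    if store_trip:
        base += 25  # extra for store trips with video/photos and questions
    return round(base)

# 30 minutes/day for 3 days at $1/min lands near the $100-for-3-days guideline.
print(suggested_incentive(30, 3, rate_per_minute=1.0))                   # 90
print(suggested_incentive(30, 4, rate_per_minute=1.0, store_trip=True))  # 145
```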

After reading this you may think a few of these points are a blinding glimpse of the obvious.  And they are.  The lessons learned are pretty straightforward:  Keep it simple. Pay attention to the basics of good research. Work with an online qual platform that is intuitive and user-friendly, and most importantly, is supported by seasoned consumer researchers.  With a skilled team guiding you through the process, you should EXPECT better results with your next online study.

 


Online Panels: The Black Sheep of Market Research – Part 2

Panel companies are the ones with the background, knowledge, tools and technologies to help online market research be great again.


By Adriana Rocha

A few months ago, I wrote the first part of this article, “Online Panels – The Black Sheep of Market Research?”, and the positive reaction and feedback were inspiring and overwhelming. So, in this second part, I’d like to explore why I believe panel companies have the background, knowledge, tools and technologies that can help online market research be great again, as well as how they can “turn the table” and lead innovation in this industry.

Having built and managed online panels for so many years, I have felt firsthand the pains and frustrations of panel companies with the bad surveys they are asked to field. In our case, for example, we have tried to be creative and have tested several ways to improve UX, from developing research games, to building 3D worlds where respondents can participate with their avatars, to designing beautiful survey templates, and, more recently, applying gamification techniques.  Regardless, the years have proven that all of those methods still depend on one vital element to improve the user experience: how well the questionnaire is written and designed.

It has been a difficult mission to make our clients write user-friendly surveys, or to take UX as a priority; however, we have learned a lot by listening to and collecting feedback from our users.  We then realized we could go far beyond delivering consumer data based on surveys alone, drawing increasingly on the data spontaneously shared by users, including their experiences with products and brands, as well as their mobile and social data.  The more data our panelists share, the greater the potential to apply that data to extract consumer intelligence, as well as to improve user experiences with market research.  That is where Machine Learning, Deep Learning and AI can play a key role in understanding consumers, and also in helping improve surveys. See a few examples below:

  • Based on users’ profiles, interests and behavior, panels don’t need to keep sending users repeated questions and surveys. They can use existing data and ML algorithms to answer known questions and ask only the ones that are genuinely needed (see the sketch after this list);
  • Using data shared by users about their experiences with products and brands (e.g., product reviews and customer service satisfaction), panels can apply ML methods and deliver brand KPIs, product preferences, etc.;
  • Panel companies have extensive profiling data about their members, from socio-demographics to thematic profiles on health, travel, technology, etc. By combining such data with users’ own social media data (e.g., Facebook or Twitter), panels can provide market researchers with insights they would never get by analyzing public social media alone;
  • Using historic survey data and input from both researchers and panelists, panels can create “smart surveys”, that is, surveys that improve over time, recommending the right questions to the right audiences.
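
As a concrete illustration of the first idea, here is a minimal, hypothetical sketch in Python using scikit-learn. The data, model choice and confidence threshold are all invented for illustration; a real panel would train on its actual profile store and tune the cut-off per question.

```python
# Hypothetical sketch: predict a known profile question from existing
# panel data and only re-ask panelists whose prediction is uncertain.
# Synthetic data stands in for a real panel profile store.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Rows = panelists, columns = encoded profile attributes (age band,
# income band, interests, etc.).
X = rng.integers(0, 5, size=(1000, 8))

# Panelists who have already answered the question of interest.
answered = rng.random(1000) < 0.6
y = (X[:, 0] + X[:, 3] > 4).astype(int)  # synthetic "true" answers

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[answered], y[answered])

# Predict for everyone who has not answered, with a confidence score.
proba = clf.predict_proba(X[~answered])
confidence = proba.max(axis=1)

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; tune per question
ask_again = confidence < CONFIDENCE_THRESHOLD

print(f"{(~ask_again).sum()} answers imputed from existing data, "
      f"{ask_again.sum()} panelists still need to be asked")
```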

Well, the application of machine learning to multi-source panel data is one example of how old and new technologies together can make a big impact and create new opportunities in the Market Research industry. As I said before, I still believe in a bright future for online research and panels, at least for the ones taking the right steps now. Am I dreaming too big?

(BTW, I’ll be at IIeX in two weeks presenting “The Next Generation of Online Research: When Machine Learning Empowers Surveys”, along with a platform designed to implement it. I would love to continue the conversation with those of you attending the conference.)

 
