
Participate In The Q1-Q2 2015 GRIT Survey!

We'd like to invite you to share your experiences and perspective with us in the Q1-Q2 2015 GreenBook Research Industry Trends (GRIT) Survey.


Yep, it’s that time of year again folks: time to participate in the newest wave of the GreenBook Research Industry Trends (GRIT) survey.

We’d like to invite you to share your experiences and perspective with us in the Q1-Q2 2015 GreenBook Research Industry Trends (GRIT) Survey.

As our industry changes rapidly, it’s more important than ever to truly understand what is happening and what the implications are for the business and profession of market research.

Only with the support of marketing and insights professionals like you can GRIT continue to yield insights into how research buyers and providers are adapting to the rapidly evolving research landscape.

Join thousands of global researchers and help our community better understand where we are and where we are headed.

Participate in the Survey

We’ve redesigned the survey to make it more engaging, more device agnostic, and most importantly, SHORTER! The survey takes less than 15 minutes to complete.

What’s new:

  • GRIT 50 Most Innovative Suppliers: The survey features a familiar, but updated, approach to determining the 50 most innovative companies in market research.
  • GRIT 50 Most Innovative Client-Side Companies: Why should Suppliers get all the credit? Clients drive innovation with their budgets, and it’s time to understand who they are and why they should get kudos.
  • Research Time and Tools: A new set of questions has been added to better understand how you allocate your time across different research tasks and which software you use.
  • Interactive probing of many verbatim questions and enhanced usage of text analytics in the analysis.
  • A redesigned report format and structure with more data visualizations, more expert commentaries, and more strategic insight.

Tracking questions:

  • Biggest Threats and Opportunities in the industry
  • Adoption of new methods and technology
  • Profiling the researcher of the future: what skills and qualities are necessary for success
  • Budget/revenue projections for 2015
  • Emerging titles and the way researchers describe themselves

Don't miss this chance to give back and support your profession. All who complete the survey will receive:

  • Full version of the GRIT report detailing the results of this survey
  • Exclusive access to an interactive online dashboard with the complete dataset for your own analysis
  • Priority registration to webinars featuring industry experts and thought leaders who will discuss GRIT results and implications

Who Should Participate

  • Market research suppliers, technology providers, and consultants
  • Client-side marketing and insights professionals
  • Academics

Special Thanks to All GRIT Partners:

RESEARCH PARTNERS: Dapresy, Decooda, Gen2 Advisors, GMI, Keen as Mustard, NewMR, Q Research Software, Quester, Researchscape, Vision Critical

SAMPLE PARTNERS: ACEI, AIM, AIP, AMAI, AMSRS, APRC, ARIA, AVIA, BAQMAR, Blauw, BVA, CASRO, CEIM, Datos Claros, ESTIME, FeedBACK, Gen2 Advisors, GMI, Insight Innovation, LYNX Research, Michigan State University, MRIA, MRII, MROC Japan, MRS, New MR, Next Gen MR, NMSBA, NYAMA, OdinText, PROVOKERS, QRCA, Researchscape, SAIMO, Sands Research, The Research Club, University of Georgia, University of Texas, University of Wisconsin, Vision Critical

Thank you in advance for sharing your time and experience!


3 Tips Weather Forecasting Can Teach Us About Conjoint Analysis

Posted by TRC Market Research Friday, January 30, 2015, 7:04 am
Even with massive historical data and a wide variety of data points at their disposal, weather forecasters can still be surprised. Here are some keys to avoiding the same kinds of mistakes the weather forecasters recently made.

Courtesy of cbsnews.com

 

By Rich Raquet

Here in Philly we are recovering from the blizzard that wasn’t. For days we’d been warned of snow falling multiple inches per hour, winds causing massive drifts and the likelihood of it taking days to clear out. The warnings continued right up until we were just hours away from this weather Armageddon. In the end, only New England really got the brunt of the storm. We ended up with a few inches. So how could the weather forecasters have been this wrong?

The simple answer is of course that weather forecasting is complicated. There are so many factors that impact the weather…in this case an "inverted trough" caused the storm to develop differently than expected. So even with the massive historical data available and the variety of data points at their disposal, the weather forecasters can be surprised.

At TRC we do an awful lot of conjoint research…a sort of product forecast, if you will. It got me thinking about some keys to avoiding the same kinds of mistakes the weather forecasters made with this storm:

  1. Understand the limitations of your data. A conjoint or discrete choice conjoint can obviously only inform on things included in the model. It should be obvious that you can't model features or levels you didn't test (such as, say, a price that falls outside the range tested). Beyond that, however, you might be tempted to infer things that are not true. For example, if you were using the conjoint to test a CPG package and one feature was "health benefits" with levels such as "Low in fat", "Low in carbs" and so on, you might be tempted to assume that the two levels with the highest utilities should both be included on the package, since logically both benefits were positive. The trouble is that you don't know whether some respondents prefer high fat and low carbs and others the complete opposite. You can only determine the impact of combinations of a single level of each feature, so you must make sure that anything you want to combine is in separate features. This might lead to a lot of "present/not present" features, which could overcomplicate the respondent's choices. In the end you may have to compromise, but it's best to make those compromises in a thoughtful and informed way.
  2. Understand that the data were collected in an artificial framework. The respondents are fully versed on the features and product choices…in the market that may or may not be the case. The store I go to may not offer one or more of the products modeled, or I may not be aware of the unique benefits one product offers because advertising and promotion failed to get the message to me. Conjoint can tell you what will succeed and why, but the hard work of actually delivering on those recommendations still has to be done. Failing to recognize that is no better than failing to recognize the possibility of an inverted trough.
  3. Understand that you don't have all the information. Consumer decisions are complex. In a conjoint analysis you might test 7 or 8 product features, but in reality there are dozens more that consumers will take into account in their decision making. As noted in number 1, the model can't account for what is not tested. I may choose a car based on it having adaptive cruise control, but if you didn't test that feature my choices will only reflect other factors in my decision. Often we test a hold out card (a choice respondents made that is not used in calculating the utilities, but rather to see how well our predictions do), and in a good result we find we are right about 60% of the time (this is good because if a respondent has four choices, random chance would dictate being right just 25% of the time); a quick sketch of that check appears below. Weather forecasters are now pointing out that they probably should have explained their level of certainty about the storm (specifically that they knew there was a decent chance they would be wrong).
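
To make that hold-out comparison concrete, here is a minimal sketch in Python of how a hit rate like the ~60% figure above can be computed and set against the 25% chance baseline for a four-alternative task. The respondent data and variable names are invented for illustration; this is not TRC's actual code.

```python
# Minimal sketch: hold-out hit rate for a choice-based conjoint model.
# predicted_choice = alternative the estimated utilities say each respondent
#                    should pick on the hold-out task; actual_choice = what
#                    they really picked. Both lists are made-up example data.

predicted_choice = [2, 0, 1, 3, 2, 2, 0, 1, 3, 0]
actual_choice    = [2, 0, 1, 3, 2, 2, 1, 3, 0, 2]

hits = sum(p == a for p, a in zip(predicted_choice, actual_choice))
hit_rate = hits / len(actual_choice)

n_alternatives = 4
chance_rate = 1 / n_alternatives  # random guessing among four alternatives = 25%

print(f"Hold-out hit rate: {hit_rate:.0%} (chance would be {chance_rate:.0%})")
```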

So, with all these limitations is conjoint worth it? Well, I would suggest that even though the weather forecasters can be spectacularly wrong, I doubt many of us ignore them. Who sets out for work when snow is falling without checking to see if things will improve? Who heads off on a winter business trip without checking to see what clothes to pack? The same is true for conjoint. With all the limitations it has, a well executed model (and executing well takes knowledge, experience and skill) will provide clear guidance on marketing decisions.


Consumer Collaboration: Get it Right, Get it Going, Make it Sustainable

Based on a recent InSites Consulting study among 735 participants of 11 ongoing Consumer Consulting Boards (also known as online research communities), we identified 3 steps to make your consumer collaboration a real success in the long run.

Editor's Note: Occasionally our friends produce content for their own blogs that is designed to promote their own expertise, but has great information for everyone (including their competitors). If it seems to me that the balance of the content is focused on knowledge sharing vs. promotion then I'll post it as well. Today's post is a great example of this. The InSites Consulting team is very good at striking that balance. They know how to own a topic that is important to their business while making it useful to the industry as a whole, and without a doubt creating engaging and impactful communities (or other collaboration methodologies) is an important topic for the industry. So, enjoy Anouk's post detailing the top tips for driving good results from your community: whether you work with InSites or not, it's good information filled with practical examples of success.

Managers know that collaborating with customers makes good business sense. Over the past few years, consumers have turned into contributors and volunteers, forming a world full of problem solvers who are creating billions of dollars' worth of value without even being paid for it. So, companies continuously search for ways to tap into this Consumer-Innovator, to benefit from key insights and ideas for better decision-making. As a result, many collaboration initiatives arise but few are sustainable. Most initiatives are designed to give an instant boost of ideas but die a quick death. Only 1 out of 10 companies succeeds in managing consumer collaborations that tackle more objectives than merely idea generation, such as developing concepts or launching new products. This shows there is a lot of unused potential in the collaboration space.

Tweetaway: Many #collaboration initiatives arise but few are sustainable insit.es/1w90zsA by @AnoukW1 via @InSites #mrx #newmr

But how can you develop these initiatives into sustainable collaborations? Based on a recent InSites Consulting study among 735 participants of 11 ongoing Consumer Consulting Boards (also known as online research communities), we identified 3 steps to make your consumer collaboration a real success in the long run.

When consumers become your part-time employees

A famous example of a sustainable collaboration is the GiffGaff community, where consumers are running parts of the business, such as answering questions in the community or helping to promote the company. Companies like GiffGaff do one thing fundamentally differently: they treat consumers as if they were their part-time employees. They take consumers behind the scenes, so they understand the questions and challenges for the brand. And this is the tricky part: how do you treat consumers as if they were part-time employees of yours, so that your collaboration becomes sustainable and grows over time?

Tweetaway: 3 steps to make your #consumercollaboration a real success; purpose, engage, impact insit.es/1w90zsA by @AnoukW1 via @InSites #mrx #newmr

1. Get it Right #purpose

Employees who believe in the company purpose work harder, act smarter and add more value. Just like employees, consumers are more engaged when they feel the purpose of the collaboration and know what they are fighting for. Our study shows that 90% of consumer collaborators are motivated by the community purpose, making this the number 1 driver to keep on collaborating. Unfortunately, too many initiatives lack that big purpose and are set up for unclear reasons or for the wrong ones. When the underlying motivation to start collaborating is just the opportunistic desire to make more money or to look better, you are bound to fail simply because the reason to collaborate is not mutually shared between all participating actors.

Tweetaway: N°1 driver to collaborate? The community purpose insit.es/1w90zsA by @AnoukW1 via @InSites #mrx #consumercollaboration

A company that found a shared purpose for its collaboration is De Lijn, a bus company in Belgium. Its purpose is to make public transport future-proof for the younger generations. Such a bigger purpose not only motivates consumers, it also guides internal stakeholders on how to use the collaboration to answer their daily business questions. There are plenty of techniques to align on the bigger goal and identify the right business questions. For example, for Telefonica we invited the global team to an online platform to complete exercises such as a SWOT analysis. This gives you the right input to align everybody on the overall purpose of the CCB and to detect the right business questions.

After collecting the answers to these questions, the trick is to match them with the right collaboration activity. For example, in our Danone CCB, we launch Deep Dive weeks to collaborate on bigger strategic questions, such as identifying new insights around eating yoghurt. Alongside these, 48h challenges are shorter activities to find quick inspiration and answer the internal stakeholders' urgent questions. The third stream in the community generates continuous bottom-up feedback through activities initiated by members. By mixing these different streams and communicating the goals, big or small, the collaboration is used to the rhythm of the company, which enables us to keep the sense of purpose in everything we do.

Community engagement

2. Get it Going #engage

Once in a while, employees need to be surprised in order for them to stay motivated. For example, a friend of mine works at Google; he is used to choosing his lunch from 5 types of pasta and 10 types of salad, and he can even make his own (non-fat) ice cream! While this is nothing new to him anymore, I think it is awesome – I guess anything can become the new normal. The same goes for our collaboration initiatives: we can gamify all we want, but sooner or later it will become the new normal. How do we deal with this?

Tweetaway: Keep the #collaboration going by mixing activities, intensity & techniques insit.es/1w90zsA by @AnoukW1 via @InSites #mrx

Our study shows that 3 out of 4 consumers want to keep on collaborating because it’s a fun and exciting activity. So, in order to keep the collaboration going, we mix different activities with the right level of intensity to keep the participants’ attention. We do this, for example, by mixing top-down activities initiated by the moderator (e.g. discussions, diaries, idea storms, games, battles, collages and video testimonials) with bottom-up inspirations initiated by consumers.

Besides the type of activity, the intensity varies from challenging to more relaxing weeks, because you can't expect everyone to play along all the time. Our study shows that, on average, members prefer 69% challenging and 31% rather relaxed weeks of collaboration.

Activities are also mixed with gamified techniques, such as badges and the secret room. This is a special room which opens when a new milestone is reached, such as completing a Deep-Dive week challenge. The more experienced a member becomes, the more important this intrinsic motivator will get. New members prefer a monetary reward that is 33% higher than what experienced members expect. Therefore, to make your collaboration sustainable, prevent such a gold rush effect by keeping the monetary reward within a reasonable range and keep stimulating curiosity with new play elements.

Engagement mix

Engagement and stimulating curiosity are at least as important for a company’s internal stakeholders as they are for consumers. Report back with findings not only in standard presentations, but through surprising techniques as well. For example, invite them to play a quiz game to test their knowledge on the consumer or organize a speed date event to introduce managers to their consumers. By using engaging techniques to involve the right stakeholders, we can better trigger meaningful actions and engage them in the long run.

3. Make it Sustainable #impact

Employees believe that good decisions can originate from anywhere. Just like employees, consumers also embrace this principle and want to have a voice in and a meaningful impact on the company. Our study shows that 84% of consumers are motivated by the idea of having an impact on the brands they love, making it the second most important motivator in collaborative settings. However, many initiatives fail to effectively communicate the results of collaboration efforts. This causes a collaboration hangover and keeps the collaboration from becoming sustainable. To prevent such a powerless feeling, we need to trigger all relevant company stakeholders to make decisions, by playing on their hearts, minds and actions. For example, for eBay Classifieds' 2dehands, the community collaborated on the renewal of the classifieds platform. In addition to sharing weekly bite-size reports, we organized an Advisory Day with members from the community, to bring the results to life and define actions together. Afterwards, the community members were the first ones to get access to the beta platform and experience their impact firsthand.

When these results are communicated to consumers in a tangible way, members see the impact of their own efforts. A great way to share this form of feedback is by giving them their own Wall of Fame, visualizing the achievements of the community. This form of feedback gives members a proud feeling and motivates them to keep collaborating in the long run.

Get your party started (and ditch the hangover)

Everybody loves a good party but hates a hangover. To make your collaboration sustainable and avoid feeling powerless, we must treat members as our part-time employees. Give them the right purpose aligned with the business, keep them engaged and empower them to impact decision making. This is how you set the right conditions for sustainable growth. What are your best practices to grow collaboration initiatives over time?

Tweetaway: 10 tips to avoid a #collaboration hangover insit.es/1w90zsA by @kristofdewulf via @InSites #mrx #newmr


The Unconscious Processes that Produce Brand Recognition: The Physics Behind the Brand

Understanding how basic sensory information is transformed by the brain is critically important for structuring a marketing campaign in today’s technologically embedded markets.


 

By Matthew M Gerhold

From a purely physical perspective one can reduce a brand to physical energy in the surrounding environment of the consumer: Physical energy is the fundamental building block of a brand. In visual communication strategies, the brand is primarily composed of light, electromagnetic radiation at varying visible wavelengths. In the instance of multimedia platforms, light may be accompanied by sound—traveling air pressure waves. In below-the-line communications, chemoreception becomes more important—chemical information encoded by the nervous system that we experience as taste and smell. Touch or somatosensory information will also play a major role. Thus, physical energy and its interactions with sensory mechanisms on the periphery of the human nervous system form the basis for brand perception and recognition. Understanding how basic sensory information is transformed by the brain is critically important for structuring a marketing campaign in today’s technologically embedded markets.

This short article covers the fundamentals of brand recognition and perception from a neuroscience perspective. Running in parallel to the processes of perception and recognition discussed herein are the processes that generate the emotional and motivational aspects involved in interacting with a brand—these are the components of marketing and advertising interactions that drive and motivate the individual towards a purchase. I hope to provide an overview of motivational processes in the near future. For now, the focus will be on perception and recognition. Let us look at a basic example of communications in an above-the-line context. Let us think, for example, of a television advertisement.

Unconscious Processing in Above-the-line Communications 

The peripheral nervous system (PNS), specifically the eyes and the inner ear, encodes the physical attributes underpinning the brand, product, and creative content of the advertisement: light and sound. This information is delivered to the thalamus, a structure nestled deep within the brain that receives much of the sensory information encoded by the physical senses.

 


Figure 1. An MR image showing a side-on view of the central nervous system (CNS) from a sagittal perspective. The thalamus is encircled in red.

 

Through various anatomical way-stations, the encoded physical energy is directed from the thalamus to the auditory cortex, a region responsible for processing sound, as well as to the visual cortex, a region responsible for processing light.

 


Figure 2. (Left) Side-on view of the CNS with the auditory region encircled in red. (Right) Side-on view of the CNS with the visual region encircled in red.

 

This low-level processing creates a primitive representation of the brand within the human brain—at this juncture the processes and information being handled by the central nervous system (CNS) would not yet have availed themselves to the conscious mind. Based on electrophysiological data, these processes occur within a window of 0-150 milliseconds after encountering the marketing material.

Feature Integration within the Brain

As the visual and auditory features of the brand, sitting within the visual and auditory regions, are situated some distance from each other, the brain has to integrate the separate pieces of information. Feature integration, known to neuroscientists as binding, is the process by which this is achieved. This process is mediated by fibre tracts within the brain—fibres of different lengths connecting different regions within the brain.

 


Figure 3. Different regions of the brain integrate in order to provide a high-level representation of the brand.

 

Feature integration, or binding, will lead to a new, more sophisticated representation of the physically encoded attributes underlying the brand, product, and creative content. This new high-level representation, spanning different regions of the brain, will elicit memories, or cellular imprints, that have encoded/memorised previous encounters with the brand and product.

The Conscious Experience of the Brand

Through brain regions sharing information and the elicitation of previous experiences/memories, the encoded physical information activates, modulates, and influences the frontal lobes of our brains. The frontal regions are key to conscious experience. The recruitment of such areas into a dynamically evolving, integrated system of distinct brain regions gives rise to the conscious experience of the brand and product.

Emotional and Motivational Processes

Parallel to the perception and recognition of the brand are the emotional and motivational components. These aspects of brand interaction rely on deep brain structures driving and interacting with the rest of the body via neuro-electric and chemical systems. In my next article, I will provide a sketch of the processes involved in generating the motivational states that are the key drivers of human behaviour within the marketplace.


Congratulations to the 6 Finalists of the Insight Innovation Competition for IIeX EU!

The votes are in and the six finalists for the latest round of the Insight Innovation Competition have been picked.


 

The votes are in, and after a slower than normal start and then a hard fought last two weeks with 12 entrants vying for votes, the six finalists for the latest round of the Insight Innovation Competition have been picked. Here are the finalists:

 

Idea Name | Author | Views | Votes
Smartsight – The smarter way to engage consumer segments across… | Dushyant Gupta | 10,829 | 1,505
Bakamo.Social – Driving the conversation | Daniel Fazekas | 4,916 | 1,039
+iSee Reality | Isaac Martin Rojas Mora | 6,288 | 897
LivingLens | Carl Wong | 4,048 | 686
Dalia Research | Nico Jaspers | 6,726 | 437
Meta4 Insight | Anders Bengtsson, Ph.D. | 3,393 | 342

 

These six companies will now go on to the judging round at IIeX Europe in Amsterdam next month.

In this eighth round, we're giving new innovators the opportunity to submit their concepts for a crowdsourced round of voting. Past participating companies have seen their businesses accelerate due to their involvement in the Insight Innovation Competition, resulting in funding, partnerships, new clients, and global brand exposure. However, there is a defined prize for the single winner that comes out of the Judging round:

  • $15,000 cash award.
  • Opportunity to be evaluated for inclusion in the Lowe’s Innovation Lab Accelerator located at the Singularity University Campus at NASA Ames Research Center in San Francisco.
  • Exposure to a large international audience of potential prospects, funding partners and investors, including the European Commission's Horizon 2020 funding framework (available fund of $75B), Lowe's Innovation Lab, and independent venture capitalists and angel investors.
  • A free consultation provided by Gage-Cannon Venture Consulting Ltd. for technology developers that want to explore European Commission $75B Horizon 2020 funding opportunities or expand into the rapidly growing Brazilian market. The consultation also applies to developers that seek equity financing or are looking to stabilize revenue streams.
  • An invitation to present at the next Insight Innovation eXchange
  • An interview to be posted on the GreenBook Blog, viewed by 36,000+ industry professionals per month
  • An opportunity to work with successful senior leaders within the market research space

The Insight Innovation Competition is collaborating with the Lowe’s Innovation Lab to help competition entrants gain awareness within a broad consortium of global brands.

All participating companies will be vetted for inclusion in the Lowe’s Innovation Lab program. Selected participants will gain guaranteed organic funding through pilot programs with program partner companies, as well as access to acceleration resources for marketing, strategy, finance, and business development. In addition, all companies will be vetted for inclusion in the Ricoh Innovation Accelerator, an investment and business accelerator program focused on developing new technology solutions that can be applied to a host of business issues, including marketing insights.

On February 18, 2015, as part of Insight Innovation eXchange Europe 2015, the finalists will present their concepts to a panel of judges comprised of sponsors of the competition in a live event. Each presentation will last 10 minutes: 5 minutes to pitch and 5 minutes for Q&A from the judges.

Using a 10-point scale for each category, judges rate each presentation on:

  • Originality of concept
  • Presentation quality
  • Market potential
  • Scalability
  • Ease of Implementation

On February 19, 2015 we’ll reveal the scores. The highest final score wins. The winner takes home the pot and chooses which of the judges they would like to engage with afterwards as a mentor.

Past winners of the Competition have all gone on to greatness, becoming significant players in the industry.


This round also included some amazing entrants, and the finalists are very strong indeed. Check out a bit more about them.

 

Borderless Access Panels Pvt Ltd. – India
Smartsight is a progressive consumer insights solution that leverages machine learning algorithms to segment & predict consumer behaviour. Smartsight uses over 150 behavioural data points across online, mobile & social media usage to generate real time insights around product consumption, ad effectiveness, message relevance and branding.

 

Bakamo.Social – Hungary

Bakamo.Social solves the social media paradox: understanding cannot be automated, and the sheer amount of content puts understanding beyond the scope of the human mind. Our solution empowers interpretative, creative human cognition by intelligently utilizing machine text analysis, weaving it into a meaning-driven discovery engine.

 

+iSee Reality – Denmark

Unlock the hidden content from printed materials, images, objects and the real world where you see the +iSee logo. Be amazed when hidden exclusive digital content appears in front of your eyes enabling you to interact with the world around you like never before. Engage with +iSee reality hidden content and watch it come to life on magazines, books and the printed page. Buy items easily and quickly with direct mobile shopping links. Connect with additional useful web content and like and share on social media. Browse offers from restaurants, shops and historical landmarks in the real world around you. Claim and redeem vouchers from shops in your location and enjoy and experience exclusive offers. Enjoy these amazing offers and experiences where you are, by using the +iSee augmented reality app, changing perceptions of how you see the real world forever.

 

Living Lens Enterprise Ltd. – United Kingdom
Brands want more insight and inspiration, and simply don’t maximise their video assets.  LivingLens exists to capture & pinpoint specific video content. Search video by the spoken word in any language and make better decisions, faster.

 

Dalia Research – Germany
The rapid spread of smartphones and tablets around the world revolutionizes our ability to understand what people think and feel. Dalia Research provides real-time access to mobile respondents in 76 countries around the world for market and opinion researchers to generate efficient, high-quality and instant insights.

 

Protobrand – United States of America

 

Meta4 Insight is an online survey platform that enables exploration of people’s subconscious thoughts and feelings through visual metaphors. Unique in its kind, Meta4 Insight yields powerful emotional insights for positioning, segmentation, and product innovation.

 

 

Good luck to all six!

One of the great things about the Competition though is that there are no “losers”. All of the past finalists have benefited from the experience via the brand exposure, learnings gathered during the process, direct business and (in many cases) investment from IIeX participants.   Whoever wins the Judging Round, the other five companies will be in very good company indeed.


To meet these finalists and see what they are all about, get to IIeX in Amsterdam and join the hundreds of research professionals converging there in just 3 weeks to help shape the future of the industry together!


Doing Your Homework: Some Tips on Telecommuting From A MR Pro

Innovation can come in an assortment of shapes and sizes, including new forms of working arrangements. 


Editor’s Note: I often take telecommuting for granted. I started occasionally telecommuting in 1997 when I was a Regional Manager for a financial services company, but since 2002 I have worked exclusively from a home office.  During that time I have built 6 companies (all virtual), hired over 50 employees, worked with literally hundreds of clients, and amassed a global network of many thousands of  colleagues. All from my home, and only rarely actually meeting people face-to-face. I have been blessed to convert some of those colleagues to dear friends, and in some cases we have never physically met. It’s pretty amazing, and I forget that for many people this style of work is a foreign concept.

That is why I was thrilled when Kevin Gray sent me this post. I believe that technology will only continue to make telecommuting easier and more effective and will eventually become the norm for just about anyone who works in a professional service industry. That being the case, some basic tips, tricks, and wisdom are in order and this is a fantastic primer for anyone, but especially for those who work in the insights field.

So kick back in your bathrobe and enjoy!

 

By Kevin Gray 

When I established my consultancy in 2008, I chose to keep my operation to minimal size – hopefully, too small to fail – and to work from my home office.  Besides keeping overhead down, my main motivations for flying completely solo were to maximize efficiency and minimize hassle so I’d be able to concentrate on the things I like to do most and think I do best.  I am my company’s sole employee and my business partners include a few end clients but are mostly marketing research agencies scattered throughout the world.  Only part of my business is local and face-to-face.

People were working from home long before "telecommuting" was coined by Jack Nilles in 1973, so I cannot call myself a pioneer.  However, most of us physically commute to offices, as I did for the bulk of my career, and some may find the very idea of working remotely from home hard to grasp.  There are plusses and minuses to anything, to be sure, and before I made the leap I sought the advice of contacts who had worked this way at some point in their careers, some in marketing research and others in unrelated fields.  Self-discipline and the need to structure one's workday were mentioned by several as crucial, as was being able to work autonomously without becoming a hermit.  It's not for everyone.

I should stress that this would not have been a realistic option if I hadn't already been an experienced marketing researcher when I set up my company.  Before I set out on my own I'd worked for Nielsen's Customized division, Kantar (Research International) and McCann's Marplan division for more than 15 years, in addition to having been on the client side early in my career.  At Nielsen and Kantar, in particular, a significant component of my responsibilities was international and I'd worked with colleagues and clients located in dozens of countries for many years.  When one is establishing a consultancy I think it's quite normal to work through existing contacts, at least initially, and therefore working remotely was really the only option for me at the time.

How can you work as a consultant without regularly meeting with your clients face-to-face?  With the right experience and know-how, it is actually more efficient that way.  Though there are times when face-to-face meetings are truly essential, over the years in my role as a marketing science consultant and statistician, I have found that these are rare exceptions.  In fact, taking part in meetings too early in the process in some instances can cause the conversation to stray towards technical details before the basic issues have been sorted out.

At this phase in my career if I must be physically present explaining methodological details to a client, it usually means I haven’t done my job well enough, to be honest.  Of course, I do join meetings or presentations remotely – sometimes at odd hours and sometimes with the assistance of an interpreter – and now and then meet with clients face-to-face as well.  However, considering the cost and downtime that comes with travel, hopping on a plane at a moment’s notice for a two-hour meeting is usually not sensible, at least for what I do.  Phone, Skype and email are all I need, even when involved from the early stages, as I typically am.

Do your homework.  Whatever your specialization within marketing research and regardless of your own working arrangement, if you will not be the user of the research I would urge you to do your homework in another sense.  Marketing researchers are researchers and part of our job is to unearth important business questions and help clarify the objectives of the research.  I do a lot of on-the-job coaching about how to tackle this with business associates who are new to our profession and, even when working with veterans, I often ask lots and lots of questions.  Put simply, we need to find out what the end client really needs, which may be very different from what they say they want.  As a former client, I know this all too well!

When designing research, it’s important to consider who will be using the results, how the results will be used and when they will be used, and then work backward into the methodology.  Though I have read more than one-hundred textbooks on research methods and statistics, I am not preoccupied with methodology (though I will admit to a strong interest in it).  Marketing researchers need to be prepared to respond to diverse requests very quickly and to be able to do this requires a large multipurpose toolkit.  On the other hand, we shouldn’t let the tools be our boss or sell statistical techniques, in my opinion.  Instead, it’s better to concentrate on what you need to do to help your client make decisions more effectively.  Though complex solutions sometimes work better, aim for simplicity whenever possible.  Avoid jargon but also be wary of oversimplifying…this is like a tightrope walk at times!

Be especially cautious about making assumptions when dealing with a client for the first time.  Try to learn about their corporate culture.  The client’s website and web searches will tell you a lot about the company and their industry, and also give you hints about what goes on within their walls.  That is often the best place to begin and takes little time.  Having some sense of corporate culture and marketing research expertise comes in handy because these things can have a big impact on how your proposal is received.  Suggesting an innovative solution to a conservative organization in which marketing research isn’t well-established or is viewed skeptically can backfire; to paraphrase Voltaire, the “best” may be the enemy of the good enough.  In some situations it may be appropriate to propose more than one option, for example, a basic option and an advanced option with different costs.

Learn about market trends in the client’s category and, more fundamentally, how the client defines the market and competition in the first place and why.  Within any organization there can be strong tendencies towards groupthink and on some brand teams almost a religious fervor that can blind them to facts and issues that are truly important.  Habits are hard to break…but that doesn’t make them good habits!  Develop hypotheses, even rough ones, to help clarify your thinking when designing research.  These can be formally tested against the evidence when data become available.

Long before the recent clamor about "big data," clients have been incorporating internal company data, such as customer transactions, and external data, such as economic trends, into their decision making.  It is very helpful to learn what data your client is using now to make decisions.  Also consider what other information they might have on hand or would be able to acquire that would enhance the research.  When appropriate, ask them directly.  And, even if you're a hard core quant, don't write off qualitative approaches.  Qual can help bring numbers to life and your client may have past reports they'd be happy to share.

Think ahead.  Multivariate analysis is generally more useful when planned in advance and designed into the research and, with methods such as conjoint, this is imperative.  Once again, though, for most projects it’s best to avoid selling statistical methods and a good idea to keep several options in mind when designing research since the data you obtain may not behave as you’d expected.

It’s important that marketing researchers do their homework, and even more so when they are working from home because communication can break down more easily under that arrangement.  In my experience, you have to be more proactive and more cautious about making assumptions if you’re telecommuting.  Either way, though, given today’s understandable fascination with information technology, we all must sometimes remind ourselves that marketing research is more than math and programming.  The human side is a great deal bigger and we need to put ourselves in the shoes of the decision-makers who will be using our research and in their customers’ shoes.  This means doing a lot of homework, wherever you’re doing it from.


CASRO Transformation Series: 20/20 Is Writing the Lyrics of Qualitative Transformation

This month’s Transform Blog takes us to Nashville, Tennessee – home of the Grand Ole Opry and the Country Music Hall of Fame. It is also the home of 20|20 Research and its CEO, Jim Bryson.

 

 

By Jeff Resnick of Stakeholder Advisory Services

 

This month's Transform Blog takes us to Nashville, Tennessee – home of the Grand Ole Opry and the Country Music Hall of Fame.  It is also the home of 20|20 Research and its CEO, Jim Bryson.  By many accounts, qualitative research will continue to flourish as an integral element of a researcher's tool kit.  However, like so many aspects of our industry, the discipline of qualitative research has undergone significant change with more dramatic change to come in the future.  Jim and his team have been at the forefront of this evolution, having grown from a 1986 start-up in a 10 x 10 room with two filing cabinets to a company that now services clients from over 100 countries.  Jim shared his thoughts about transformation based on more than 25 years of learning to perfect it.

Transform ahead of the crowd.  Jim's view is that innovation today has a lot to do with structural trends.  The ability to transform ahead of the crowd is dependent on being able to see a vision for the application of technology before it becomes obvious to everyone else. This doesn't diminish the value of client input but reflects the reality that clients do not always know what they need until they see it.   For example, 20|20 invested heavily in online bulletin board technology back in the early 2000s – far ahead of when the technology became used in mainstream research.  Recently, 20|20 began to discuss its upcoming virtual reality offering.  Jim firmly believes virtual reality will be the next "big thing" in qualitative research.

Pivot without losing control.  Pivoting is all about moving to where your skills meet market needs. As Jim said, "You have to pivot in a way that doesn't put your company into a spin or create a lot of disruption."  Often this results from taking a realistic look at your company's strengths and realigning them with existing or emerging market needs. Radical departures from the historic core of a business are rarely a good idea.

Stay true to who you are.  One of the most important decisions made by Jim Bryson was NOT to become a software company.  20|20’s roots and strengths are in the area of qualitative research, not software development.  This led to a strategy of providing innovation coupled with a very strong service offering rather than a company focusing on selling software.

Know the strengths and weaknesses of your target audience.  Transformation needs to recognize potential limitations of your clients.  20|20’s focus is the development of technology-based services to support qualitative research.  However, many qualitative researchers are not technologists.   20|20’s staff provides training for researchers in new methodologies plus a full complement of project support services to ensure a positive experience.   While the development and introduction of technology-based qualitative research products is the transformation, service and support are critical enablers to the adoption of the technology.

Find ways to ensure the old co-exists with the new.  Melding the new and the old is a difficult task, but it has to be done effectively.  It is often the traditional business that is funding the development of the new business.  It is imperative to find ways to bridge the skills of employees working on the new side of the business with those from the traditional side of the business.  For example, proactively finding ways to bring new technology into the management of the traditional side of the business can be an effective way of introducing new skill sets across the organization and making certain everyone feels part of the transformation.

Expect surprises.  Every day.  It is simply part of the business of transformation.

Do good.  Social responsibility is a core value of 20|20.   They refer to it as simply “doing good.”  While not part of transformation per se, it is a strong statement about the core culture and value of the company – an important stabilizing element during transformation.  It is something that everyone can be proud of and reminds all that they are not only part of the business success but can also bring change to areas that are, perhaps, even more important.

20|20 has been transforming since the 1990’s.  While Jim Bryson and his team do not claim perfection in this area, they do understand the “art” of business transformation.  It has clearly driven 20|20 to be one of the leaders in the evolution of qualitative research.  It has been a true team effort that could not have been accomplished without the dedication and commitment of 20|20 employees.  I have no doubt the spirit and action of business transformation will continue to be a trademark of 20|20.

 

20|20 is a global leader in online qualitative research software and services, aiding research firms worldwide in over 30 languages.  Leading innovation, easy-to-use software and unmatched service are how 20|20 is committed to helping you do better research.


Is Online Sample Quality A Pure Oxymoron?

Why is nobody here addressing the elephant in the room? It’s not just sample quality. It’s survey quality.

 

Editor's Note: It must be something in the air, because the topics of panels, online sample, and the interface of technology and quality have been hot lately. So far this year alone I have engaged in four different advisory conversations with investors on this topic, which has never happened before.  It's no surprise though: online sampling is now the backbone of market research globally. Whether we are engaging respondents on mobile devices or PCs, the same principles apply: personal online access is ubiquitous globally, and programmatic buying for ad delivery, predictive analytics, and online panels/sampling are BIG business. REALLY BIG business, and it's only going to get bigger.

 

That being the case, issues around quality, and how we ensure it while the industry continues to maximize the mix of speed, cost, and value, will only grow in importance over the next few years. And that brings us to Scott Weinberg's call-to-action post today. Scott doesn't pull any punches, and his concerns harken back to Ron Sellers' post a few years ago on the "Dirty Little Secrets" of online panels.  I believe we have made progress in this area and that some suppliers remain clear leaders in the quality arena, but this is an issue we shouldn't take our eyes off of, and Scott reminds us of why.

By Scott Weinberg

 

I attended a CASRO conference in New Orleans back in late ’08 or early ’09. The topic was ‘Online Panel Quality.’ I’ve often thought about that conference: the speakers, the various sessions I attended. I recall attending the session about ‘satisficing’ which at the time was being newly introduced into the MR space (the word itself goes back decades); I thought that was an interesting expression for a routine occurrence. Mostly however I remember the hand wringing over recruitment techniques, removing duplicates, digital fingerprinting measures and related topics du jour. And I remember thinking to myself, for 2 days non-stop: ‘are you kidding me?’ Why is nobody here addressing the elephant in the room? It’s not just sample quality. It’s survey quality.

Allow me to explain where I'm coming from. My academic training is in I/O Psychology. Part of that training involves deep dives into survey design. Taking a 700-level testing & measurements course for a semester is a soupçon more rigorous than hearing 'write good questions.' For example, we spent weeks examining predictive validity, both as a measurement construct, and also how it has held up in courtrooms. More to the point, when you're administering written IQ tests, or psych evals, or (in particular) any written test used for employment selection, you are skating on thin ice, legally speaking. You open yourself up to all kinds of discrimination claims. Compare writing a selection instrument that will withstand a courtroom challenge with writing a csat or 'loyalty' survey. Different animals, perhaps, but both are Q & A formats. A question is presented, and a reply is requested. However, the gulf in education in constructing MR-type surveys is visible to anyone viewing the forest in addition to the trees.

An MR leader in a huge tech company said something interesting on a call I remember vividly. He asked: 'when is the last time you washed your rental car?' The context here pertained to online sample. And he was one of the few, very few really, that I've encountered in the last 12 years I've been in that space, who openly expressed the problem. The problem is this: why would you ever wash your rental car? Why change the oil? Why care for it at all? You use it for a day, or a week, and you return it. Online respondents are no different. You use them for 5 minutes, or 20, and return them. If we actually cared about them, the surveys we offer them wouldn't be so stupefyingly poorly written. I've seen literally hundreds of surveys that have been presented to online panelists. I've been a member of numerous panels as well. Half of these surveys are flat out laughable. Filled with errors. Missing a 'none of the above' option. Requiring one to evaluate a hotel or a restaurant they've never been to. Around a quarter consist of nothing but pages of matrices. Matrices are the laziest type of survey writing. Sure, we can run data reductions on them and get our eigenvalues to the decimal point. Good for us. And the remaining quarter? If you're an online panelist, they're simply boring. Do I really want to answer 30 questions about my laundry detergent? For a dollar? Ever think about who is really taking these surveys? Sidebar: do you know who writes good surveys? Marketing people using DIY survey software. Short & to-the-point surveys. 3 minutes. MR practitioners hate to hear it, or even think about it, but that's reality. I've seen plenty of these surveys by 'non-experts.' They're not only fine, but they get good & useful data from their quick hit surveys.

Since you’ve made it this far, time to bring up the bad news. I’ve been accumulating a lot of stories the last 12 years. I’ll share a few. These all happened, and I’m not identifying any person or firm so please don’t ask.

  • Having admin rights to a live commercial panel, I found a person with 37 accounts (there was a $1 'tell a friend' recruitment carrot). I also found people with multiple accounts and a staggering number of points, to the point of impossibility.
  • The sales rep who claimed to be able to offer a ‘bi-polar panel’ and sold a project requiring thousands of completes of respondents with a bi-polar or schizophrenic diagnosis.
  • The other sales reps I know personally (at least 5) who make $20,000-$30,000 per month selling sample projects. Hey, Godspeed, right? Thing is, not a one could tell you what a standard deviation is, let alone the rudimentary aspects of sampling theory. Don't believe me? Ask them. Clearly, knowing these things is not a barrier to success in this space. Just a pet peeve of mine.
  • Basically, this entire system works via highly paid deli counter employees. ‘We can offer you 2 lbs of sliced turkey, a pound and a half of potato salad, and an augment of coleslaw, for this CPI.’ Slinging sample by the pound, and let the overworked and underappreciated sample managers handle the cleanup and backroom topoffs.
  • The top 10 global MR firm who finally realized their years-long giant tracker was being filled largely with river sample, which was strictly prohibited.
  • Chinese hacker farms have infiltrated several major panels. I know this for a fact (as do many others). You can digital fingerprint and whatnot all day long, they get around it. They get around encrypted URLs. Identity corroboration. You name it, they get around it.
  • The needle in a haystack b2b project that was magically filled overnight, the day before it was due.
  • Biting my tongue when senior MR execs explained to me their research team insists on 60 minute online surveys, and they’re powerless to flush out their headgear.
  • Biting my tongue when receiving 64-cell sampling plans. The myopic obsession with filling demographic cells at the exclusion of any other attributes, such as: who are these respondents? You’re projecting them out to non-panelists as if they’re one and the same?
  • A team of interns inside every major panel, taking the surveys, guessing the end client, and sharing that with the sales team in a weekly update.
  • Watching two big global panels merge and scrutinize for overlap/duplicates, stretching across 12 countries. USA had 18% overlap, the rest (mostly Europe) had 10%. Is this bad? No idea. Maybe it’s normal.
  • Most online studies are being at least partially filled with river sample (is anyone surprised by this?).
  • Infiltration of physician panels by non-physicians.
  • The origin of the original ‘Survey Police’ service
  • Visiting the big end client for the annual supplier review and watching them (literally) high-five each other as to who wrote the longest online survey. The 'winner's' was 84 questions. We had performed a drop-off analysis, which fell on deaf ears.

Lastly, and for me the saddest of my observations, are the new mechanics of sample purchasing. The heat & light on sample quality that peaked about 4 years ago has been in steady decline. In the last couple years, sample quality is simply assumed. End client project sponsors assume their suppliers have it covered. The MR firms assume their suppliers have it covered. And the sad part? The sample buyers at MR firms, and I’ve seen this countless times, do not receive trickle-down executive support for paying a bit more for the sample supplier who actually is making an effort and investment to boost their sample quality, via validation measures for example. There are exceptions to this, or were, in the form of CPI premiums, but no widespread market acceptance to pay a buck or three more. In fact, the buying mechanics are simple, get 3-4 bids, line them up, and go with the cheapest CPI, assuming the feasibility is there. This happens daily, and has for years. And by cheaper, I’m talking 25 cents cheaper. Or 3 cents. That’s what this comes down to. So chew on this: why would a sample supplier pour money down the quality rabbit hole? Quality is not winning them orders. Margin is. Anyone working behind the scenes has also seen this movie, many times. Incidentally, there’s nothing wrong with buying on price, we all do this in our daily lives. The point is this: if you’re going to enforce or even expect rigorous sample quality protocols from your suppliers, then give your in-house sample buyers the latitude to reduce your project margins. I won’t hold my breath on this, but that’s what it takes.

I could go on, but more is not necessarily better. This is the monster we've created: $2 and $3 CPIs have a ripple effect. How can a firm possibly invest in decent security architecture with prices like this? How can we expect them to? If you're buying $2 sample, why not go to the source and spend 50 cents?

Now that I've thoroughly depressed you, one may wonder, is there any good news? I remember telling my colleague 5 years ago 'if a firm with a bunch of legitimate web traffic, like Google, ever got in this racket, they would upend this space.'  I didn't think that would actually happen, but there you go (that one may still be depressing to some). I also believe that 'invite-only' panels give the best shot at good, clean sample. When you open your front door to anyone with a web connection, and tell them there's money to be made, well, see above. More recently I've become a convert to smartphone-powered research. Many problems are removed. It has its own peculiarities, but from a data integrity perspective, it's hard to beat. Lastly, and I could do a whole other riff on this: when we design surveys with no open-end comment capture, we're hoisting an 'open for business' sign to fraudulent activity. Yes, you can add the 'please indicate the 4th option in this question' but both bots and human pros spot red herrings like that. It's much more difficult to fake good, in-context open-ended verbiage. Yes, it takes a bit more work on the back end, and there are many solutions that can assist with this, one in particular. And the insights you can now share via this qual(ish) add-on are a nice change of pace relative to the presentation of trendlines and decimal points.
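
For readers who want to see what such a screen might look like in practice, here is a minimal sketch in Python of the two checks discussed above: a red-herring trap question (which, as noted, savvy bots and pros can evade) combined with a crude open-end verbatim check. The column names, example data, and thresholds are all hypothetical, not drawn from any panel's actual system.

```python
# Rough illustration (not any panel's production system) of a respondent-quality
# screen: a trap question plus a simple open-end verbatim heuristic.
import pandas as pd

# Hypothetical responses: "trap_q" is a question where respondents were
# instructed to pick option 4; "open_end" is a free-text verbatim.
responses = pd.DataFrame({
    "resp_id":  [101, 102, 103],
    "trap_q":   [4, 4, 2],
    "open_end": ["Liked the citrus scent, but the cap leaks.",
                 "good",
                 "asdf asdf asdf"],
})

TRAP_CORRECT = 4          # the instructed answer
MIN_OPEN_END_WORDS = 4    # arbitrary minimum length for a usable verbatim

def flag_suspect(row):
    """Flag respondents who fail the trap question or give a throwaway open end."""
    failed_trap = row["trap_q"] != TRAP_CORRECT
    words = row["open_end"].split()
    thin_verbatim = len(words) < MIN_OPEN_END_WORDS or len(set(words)) == 1
    return failed_trap or thin_verbatim

responses["suspect"] = responses.apply(flag_suspect, axis=1)
print(responses[["resp_id", "suspect"]])
```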

That’s all for now. Thank you for reading.


Learning From IBM’s Mistakes: How To Win in Fast Changing Markets

What can the market research industry learn from IBM's poor performance in adapting to a changing marketplace? Quite a bit, it seems.


 

Editor’s Note: IT is a great barometer for MR; both are data-centric offerings that are vital to business success at a foundational level. They are also increasingly and deeply interconnected, from both an infrastructure and an offerings perspective. And finally, both are facing massive disruption from new solutions that deliver cost, speed, and scale efficiencies while establishing frameworks for new value creation via integration.

With that in mind, Larry Gorkin’s deconstruction of the ills IBM is experiencing is instructive for MR companies, and his prescription for success is a great primer for industry leaders to align with.

 

By Larry Gorkin

IBM’s earnings announcement was full of bad news. Profits were far below expectations, revenues continued their ongoing decline, and the company said it would not meet its widely promoted profit target for 2015. Wall Street sent IBM shares down 10%.

These results reflect IBM’s below-par response to major shifts impacting the overall IT market. The story highlights the challenges of sustaining growth in the face of rapid market change. There are important lessons for everyone, making it the subject of this Winning Ways.

Leaders facing rapidly changing markets must aggressively manage their business from both defensive and offensive perspectives to ensure ongoing progress. That includes setting internal milestones for action, investing decisively in new opportunities, and developing back-up strategies. That’s the lesson from IBM’s recent business setbacks in the fast-changing IT market.

As context, the IT market is undergoing big shifts driven by rapid customer adoption of cloud technology. This change has hurt IBM and other established players whose profits have been driven by sales of on-site hardware, software, and services. Cloud prices and margins are much lower than those of old IT products, and IBM’s new initiatives aren’t growing enough to offset legacy declines.

IBM’s challenge has been compounded by its own widely promoted goal to achieve $20 EPS by 2015. The company has aggressively cut costs and spent billions on share buy-backs to meet that target.

Challenges aside, even IBM’s CEO described the company’s most recent performance as disappointing. Revenue declined for the tenth consecutive period, earnings missed Wall Street expectations by 15%, and growth in emerging markets was lackluster. Moreover, IBM conceded it wouldn’t make the $20 EPS target for 2015, a conclusion many outsiders had reached long ago.

IBM’s results reflect several key missteps. To begin, the company seriously misjudged how quickly customers would move to the cloud. IBM’s own cloud initiative was late and under-resourced, as was its effort behind other growth initiatives like big data and Watson.

Moreover, IBM invested billions of dollars in share buy-backs to meet its 2015 EPS goal, money that could have been invested in growth initiatives. At the same time, the company ignored ongoing revenue declines and other signals that the target was unachievable. Most importantly here, IBM has yet to find a way to offset legacy business declines with equal revenue and profits from new offerings.

Fortunately for IBM, it is a big company with lots of talent and deep financial resources. IBM is far better positioned than many companies would be to recover from these kinds of mistakes. For many companies and in many markets, missing important industry shifts would do permanent damage.

So, IBM’s story is a good reminder that “stay the course” doesn’t work in periods of rapid market change. More than at other times, leaders must stay especially close to customers and be prepared for the unexpected. Plans need to be regularly updated to reflect changing business results and market conditions. Objectivity and milestone metrics are fundamental requirements.

Lest you be unprepared yourself, here are five steps leaders can take to ensure their own success when faced with a rapidly changing market.

1. Monitor Customers Closely–Establish on-going tracking research and regular live dialogues to know what customers are thinking and buying. Look for trends/changes and conduct sensitivity analysis; determine the possible business and strategy implications.

2. Establish Milestone Metrics–Define threshold measurements that will trigger you to re-evaluate or change plans. Examples are changes in market size, revenue, and pricing. Decide what actions you will take for each trigger. Put a supporting management system in place.

3. Respond Decisively–Develop strategies that aggressively defend against the threat and/or exploit the opportunity presented. Be sure the budget, people, and skills are in place to successfully execute the plan. This is not a time for indecision or saving money.

4. Prepare a Back-up Plan–Have “Plan B” ready in case the original response doesn’t work; you probably won’t have tested it in advance. Per the above, decide in advance what milestone metrics will trigger a shift to the alternate plan.

5. Establish Objective Oversight–Create a team of senior leaders to regularly monitor progress. Have the team constructively engage with the business owners on performance and strategies to ensure overall success.

Rapid market change can challenge any business. Decisive action by strong leaders will win.

Is your business facing a potential market shift? What changes could impact your business? Are you prepared?


Lessons from Automating Social Media Monitoring

The myth of social media is that the data is free and therefore the analysis is as well. The data is free, but the analysis can be time-consuming and tricky. If social media is fragile, your monitoring must be robust.


 

By Jeffrey Henning

Social media monitoring can be fragile.

I’ve been recapping the top 5 research links of the week since I coined the #MRX hashtag back in July of 2010. Originally I did it by hand, and then I had one of my sons automate the process for me in June of 2011. He had to make minor tweaks to it whenever there were changes to the Twitter API (its Application Programming Interface, the way other programs interact with Twitter directly rather than by fetching and parsing its web pages).
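For readers curious what that kind of automation looks like, here is a minimal sketch of pulling recent #MRX tweets through Twitter’s v1.1 search API in Python. It is an illustration only, not the system described above; the placeholder credentials, the paging logic, and the field handling are my assumptions.

```python
# Minimal sketch: fetch recent #MRX tweets via Twitter's v1.1 search API.
# Placeholder credentials and paging logic are assumptions for illustration,
# not the author's actual system.
import requests
from requests_oauthlib import OAuth1

SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")  # placeholders

def fetch_mrx_tweets(max_tweets=500):
    """Page through recent #MRX tweets, returning (screen_name, text, urls) tuples."""
    tweets, max_id = [], None
    while len(tweets) < max_tweets:
        params = {"q": "#MRX", "count": 100, "result_type": "recent"}
        if max_id is not None:
            params["max_id"] = max_id
        resp = requests.get(SEARCH_URL, params=params, auth=auth)
        resp.raise_for_status()
        statuses = resp.json().get("statuses", [])
        if not statuses:
            break
        for s in statuses:
            urls = [u["expanded_url"] for u in s.get("entities", {}).get("urls", [])]
            tweets.append((s["user"]["screen_name"], s["text"], urls))
        max_id = statuses[-1]["id"] - 1  # page backwards past the oldest tweet seen
    return tweets
```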

As people use social media differently, as programs interact with it differently, as APIs change, social media monitoring programs can behave strangely. For instance:

  • At some point along the way, we stopped giving stories from The New York Times the clout they deserved, as their paywall interfered with how we looked up the URL. (Sadly, this is still a problem.)
  • When we wrote the system, Twitter didn’t have embedded images. People used third-party tools for that. Now that Twitter supports embedded images (e.g., https://pbs.twimg.com/media/B1SpTxTCYAAHKin.png:large from this tweet), the URL of an embedded image would sometimes confuse our system.
  • When the system was originally written, emoji was not as prevalent. The initial implementation could not handle emoji and a variety of other obscure characters, so it ignored tweets with these symbols. Once emoji was added to the default keyboards of both iPhone and Android in 2011 and 2013 respectively, these symbols got used more often and our system would ignore more and more tweets.
  • The worst problem: at some point our system started producing bad data, and I didn’t notice; it occasionally highlighted a top story that wasn’t a top story at all. This wasn’t because of an explicit change to the Twitter API, but a change to how data was returned. Sometimes, for no reason that we can tell, the URL would be returned as “t.co/…” and would inflate the count of another story. (Twitter uses its own URL shortener, t.co, even if you’ve already used a third-party URL shortener to get around the 140-character limit.)

Because of the shift to tweets with rare Unicode characters such as emoji, my son ended up rewriting our system from scratch. And the system now outputs additional diagnostics so I can verify its accuracy.
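To make that failure mode concrete, here is a minimal sketch contrasting a filter that silently drops any tweet containing characters it can’t handle (the old behavior described above) with one that keeps emoji, strips only control characters, and reports what it touched. The function names and filtering rules are assumptions for illustration, not the rewritten system itself.

```python
# Sketch of the old, brittle filtering versus a more tolerant rewrite with
# diagnostics. Function names and rules are illustrative assumptions only.
import unicodedata

def old_filter(tweets):
    """Brittle: silently drops any tweet containing a non-Latin-1 character (so, all emoji)."""
    return [t for t in tweets if all(ord(ch) < 256 for ch in t)]

def new_filter(tweets):
    """Tolerant: keep emoji, strip only control characters, and report what was touched."""
    kept, touched = [], 0
    for t in tweets:
        cleaned = "".join(ch for ch in t
                          if unicodedata.category(ch) != "Cc" or ch in "\n\t")
        if cleaned != t:
            touched += 1
        kept.append(cleaned)
    print(f"diagnostics: {len(kept)} tweets kept, {touched} had control characters stripped")
    return kept
```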

The algorithm seems to be working well now. Of course, ‘algorithm’ is just a formal word for automating a sometimes arbitrary process. We’ve implemented certain heuristics – another formal word for rules of thumb! Really, a program is just an embodiment of judgment calls. Some of ours:

  • Tracking influence – We give everyone who tweets an influence score, based on some factors. There’s been a lot of research into measuring influence, and we have our own method for estimating it. If @lennyism retweets a link, it counts for more than if a new Twitter user retweets that same link.
  • Handling spam – If you retweet the same link three times to #MRX over a week, we’ve always counted it only once. We’ve implemented some new rules to better handle bots and other spammy activity that we’ve seen, including closely-related accounts retweeting a link. Beating spam is a constant battle.
  • Determining which link is canonical – We resolve shortened URLs so that we are counting the underlying link, not the different representations of it. For instance, we treat it/1D5E09h, bit.ly/1vpik1H, and lnkd.in/d8EVzD3 all as http://www.greenbookblog.org/2014/12/30/embracing-change-in-mr-a-year-end-perspective/. And we’ve added a few special rules to account for some different versions of hyperlinks to the same pages. Rediscovering how hard it is simply to track URLs makes me realize how error-prone tracking brand names must be! (A minimal sketch of this canonicalization-and-counting step follows this list.)
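Here is that sketch: resolve each shortened URL to its underlying link, count each canonical link once per user, and weight the count by the sharer’s influence. The redirect-following approach, the one-count-per-user rule, and the toy influence weights are illustrative assumptions, not the system’s actual heuristics.

```python
# Sketch of canonicalizing shortened URLs and producing a weighted link count.
# The specific rules below are illustrative assumptions, not the real system.
import requests
from collections import defaultdict
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Follow t.co / bit.ly style redirects to the underlying link, then drop query and fragment."""
    try:
        final = requests.head(url, allow_redirects=True, timeout=10).url
    except requests.RequestException:
        final = url  # fall back to the original if the resolve fails (e.g., a paywall)
    parts = urlsplit(final)
    return urlunsplit((parts.scheme, parts.netloc.lower(), parts.path, "", ""))

def score_links(shares, influence):
    """shares: (screen_name, short_url) pairs; influence: screen_name -> weight.
    Count each canonical link once per user, weighted by that user's influence."""
    seen, scores = set(), defaultdict(float)
    for user, short_url in shares:
        link = canonicalize(short_url)
        if (user, link) in seen:  # the same user sharing the same link again
            continue
        seen.add((user, link))
        scores[link] += influence.get(user, 1.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

For example, score_links([("@lennyism", "http://bit.ly/1vpik1H")], {"@lennyism": 3.0}) would resolve the short link and credit the underlying GreenBook post with a weight of 3.0.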

Fortunately, in our case, our program produces a report for a human to read and analyze, rather than simply spitting its results out to Twitter. So a human can catch the things that the automation didn’t. For instance, I manually exclude references not related to market research (I am so not looking forward to the February release of the Bollywood film Mr. X!). I skip over any expired links – invitations to webinars now passed, for instance. And I curse any spammers who get by the system.

The lesson? Social media monitoring automation requires vigilance and updates, even for a hobbyist project like tracking the top 5 research stories of the week. Implementing custom brand trackers requires even more diligence – you should schedule regular audits to double-check the results. The myth of social media is that the data is free and therefore the analysis is as well. The data is free, but the analysis can be time-consuming and tricky. If social media is fragile, your monitoring must be robust.
