Personal Data: The Ultimate Commodity?

Personal data has been described as the “new oil” that will drive the economy of tomorrow, but it’s currently being treated as a commodity rather than a precious resource. We need to start developing models that both incentivize and reward individuals for contributing to the data economy — and here’s how. 



Recently, John Thornhill wrote an interesting article in the Financial Times on the role platforms like Facebook can play in establishing a universal basic income via data. Here is the crux of his argument:

“The most valuable asset that Facebook possesses is the data that its users, often unwittingly, hand over for free before they are in effect sold to advertisers. It seems only fair that Facebook makes a bigger social contribution for profiting from this massively valuable, collectively generated resource.

His shareholders would hate the idea. But from Facebook’s earliest years, Mr. Zuckerberg has said his purpose has been to make an impact rather than build a company. Besides, such a philanthropic gesture might even prove to be the marketing coup of the century. Facebook users could continue to swap cat pictures knowing that every click was contributing to a greater social good.”

In 2011, the World Economic Forum classified personal data as a new asset class alongside property, investments, and cash. This laid the foundation for rethinking how data can be utilized to deliver value to its owners and originators, not just its users. In the original report, issued by the WEF and Bain, they call out both the technical and philosophical challenges:

“At its core, personal data represents a post-industrial opportunity. It has unprecedented complexity, velocity and global reach. Utilizing a ubiquitous communications infrastructure, the personal data opportunity will emerge in a world where nearly everyone and everything are connected in real time. That will require a highly reliable, secure and available infrastructure at its core and robust innovation at the edge. Stakeholders will need to embrace the uncertainty, ambiguity and risk of an emerging ecosystem. In many ways, this opportunity will resemble a living entity and will require new ways of adapting and responding. Most importantly, it will demand a new way of thinking about individuals. Indeed, rethinking the central importance of the individual is fundamental to the transformational nature of this opportunity because that will spur solutions and insights.”

We have come far in developing the technologies that can enable the management of personal data in a trading environment. Blockchain shows great promise as the underlying architecture for trading data as a type of cryptocurrency and is rapidly evolving as a transformative model for transactions. AI and “big data” models have largely delivered the analytical frameworks needed to combine data sources. And the marketing world’s advances in attribution, single-source measurement, and programmatic advertising have proven we have the systems to use personal data to deliver highly targeted content (in the form of ads and recommendations) in near real time, based on the digital treasure trove of data available.
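To make the blockchain idea concrete, here is a toy sketch of the hash-chaining mechanism that underpins such a ledger of data-access transactions. The field names and parties are purely illustrative assumptions, not any real protocol: each block records who accessed whose data and for what reward, and each block is bound to the previous one by its hash.

```python
# Toy hash-chained ledger for data-access transactions.
# Field names ("consumer", "buyer", "reward") are illustrative only.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transaction):
    """Link a new transaction block to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})
    return chain

def verify(chain):
    """Any tampering with an earlier block breaks every later link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, {"consumer": "alice", "buyer": "brandco", "reward": 0.25})
append_block(ledger, {"consumer": "bob", "buyer": "adnet", "reward": 0.10})
print(verify(ledger))   # True: the chain is intact

ledger[0]["tx"]["reward"] = 99  # tamper with the first transaction
print(verify(ledger))   # False: the tampering is detectable
```

The point of the design is that no single party can quietly rewrite the record of who was owed what, which is exactly the trust property a consumer-facing data marketplace would need.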

However, as John Thornhill pointed out, today consumers’ data is simply used without direct reward to the consumer. It’s a barter system: “let us use your data so we can try to sell you more stuff, and in return you get access to these nifty technology platforms.” That arrangement has worked, but it’s a far cry from treating data as an asset class that can generate real financial gains for consumers, not just value.

What is missing is not just a shift in thinking, but also a fundamental reshaping of the value exchange. In short, we need to stop treating data as an easily accessible commodity and start paying for it as a precious resource. We need incentives and rewards to help kick-start a new system.

Fundamentally, people do things because they get something out of it: we take action because it fulfills a need, whether conscious or unconscious. This core motivation is central to every school and application of behavioral science. Game theory and behavioral economics specifically have taught us that a system of incentives and rewards is necessary to engage humans. In general, this system can be boiled down to a few key categories:

  1. Social: Connects us to others for fun and social interactions. Think of all the games on Facebook or online game networks.
  2. Financial: Delivers a direct financial reward such as research incentives, discount or deal networks, personal data lockers or recommendation systems.
  3. Values: Altruism, charitable causes, political or social campaigns and anything else that is aligned to our values.

The ideal system combines all of these. The market research industry has actually pioneered quite a few examples in action via the advent of online communities, and there is much from that model that could be applied throughout the research industry in support of a personal data economy. This system looks something like the example below:

  1. A brand wants to engage in a long-term dialogue with a subset of consumers to explore new product ideas.
  2. Participants are identified via profiles established in panels or in social media.
  3. Targeted consumers are recruited and paid a financial reward for engaging.
  4. Exercises are “gamified” and participants are sent on missions, earn badges/rewards and discuss their ideas and the ideas of others in forums within the community.
  5. Results are shared back so participants can see how their contributions help create new solutions to issues that impact their lives.

This approach to creating a real, engaging motivational framework for consumers to share their data is a good example of how we can rethink the value of personal data and how people can gain more than just access to apps for its use. That model of value-exchange is proven and has created value for all parties involved — but it has limitations.

A multi-dimensional system with real incentives and rewards that pays consumers for their participation in an accretive way is not only more fair; it also drives the shift in thinking necessary to support the emergence of the personal data economy. Whether it’s a “data access annuity” from Facebook or Google, direct compensation for participating in research or data analysis initiatives, or goods and/or services received as a “lease” on access to consumer data, each model has incentives at its core.

No longer a tactical afterthought, consumer rewards are the tip of the spear in leading a transformation in how consumers use their data for their own benefit vs. others using it for personal gain. Direct reciprocity simply changes the game.

Incentives solution providers today are inherently “fintech” companies. My friends at Virtual Incentives, for example, use a robust enterprise API to deliver real-time, customized reward options from scores of partners across multiple consumer channels. They are a combination of e-tailer, bank, and stock exchange, processing millions of transactions at scale, all driven by consumer demand. That kind of technology is a key component of building the personal data economy, and the thought leaders behind it need to be part of the debate on what the future should look like.

The debate around personal data ownership and value, alternative and universal income schemes and the role of technology in making it all happen will continue to be important topics over the next few decades. However, incentives and rewards won’t just be a big part of that dialogue; they have already gone far in solving many practical issues.

Building on their firm foundations, the future of the personal data economy looks bright indeed.

Purpose Built Innovation: Aligning Product to Need

Most innovation efforts start with trying to solve a problem and tend to focus on increasing efficiency or effectiveness, but forget to stay focused on the purpose: addressing a need.


By Laura Livers

In our last post we explored the drivers of innovation and the differences between disruptive and incremental innovation. We dove into the idea of pragmatic innovation, its application, and its impact on efficiency and effectiveness. The point was to differentiate between the “shiny objects” that generate much hype (and often deservedly so) and the less heralded but no less important work of taking an existing process or product and making it better. However, we barely touched on the beginning of the innovation journey – aligning the product to a need. In other words, purpose built innovation.

Most innovation efforts start with trying to solve a problem and tend to focus on increasing efficiency or effectiveness. But it’s easy to get sidetracked with the “coolness factor” of some ideas and forget to stay focused on the purpose: addressing a need.

As researchers we are familiar with the process of creating, testing, and optimizing concepts, but how do we develop a process within our organization to make purpose built innovation part of the DNA of the company? A process is required as well as a cultural commitment and the allocation of resources. Let’s explore …

The Process:

The innovation funnel is a visual representation that people are familiar with when discussing how companies move from ideas to products. And it accurately reflects the narrowing of possibilities that occurs over time: Every crazy idea you can imagine makes it into the top of the funnel; as the ideas are analyzed, fewer and fewer move down the funnel. And only a select few exit the funnel and become products.

Tom Fishburne has a wonderful Marketoon that represents how many feel about this process:



But is the funnel the right shape for 21st-century innovation? David Nichols actually suggests a rocket shape to innovation, where a team starts with a very sharp vision and strategy that informs which ideas make the cut in the development process. The idea of a rocket is quite inspiring! A rocket is powerful, fast, and requires a lot of collaboration across different teams to make it work well. Shouldn’t we all be aiming for rocket-propelled ideas and products?!



The basic pieces of the original funnel still hold true in terms of the key inputs that need to take place.

To extend this analogy one step further, each phase of innovation corresponds to one component of rocket propulsion.

  • Ignition = Opportunity Assessment – What is the spark of inspiration that ignites the innovation process? Typically it is the recognition of a new market opportunity. Understanding a specific opportunity in the context of your firm’s overall vision and strategy gets everything in motion and sets the trajectory for the types of ideas you’re going to consider.
  • Nozzle = Insights-Based Ideation – In a rocket, fuel flows through the nozzle, which is flared out to propel the vehicle off the ground. Ideas – and a lot of them – are the precious fuel of the innovation process. Ideally, they will be drawn from viewpoints across and outside the organization and grounded in knowledge of the marketplace and your customers. These ideas are then narrowed down and explored in the conceptualization phase.
  • Combustion Chamber = Conceptualization – This is where the classic funnel misrepresents the process. Actually turning a select set of ideas into product concepts requires a new set of considerations: How should the offering be packaged? Do we need a new logo? How will we message it? Does the color palette make a difference? What should the price point be? This expands the innovation space again: each concept has a particular set of attributes or characteristics that need to be explored. In a rocket, the combustion chamber is where hot, highly pressurized gases expand to create the energy that leads to lift-off. A poorly executed expansion won’t provide enough power to get you where you want to go.
  • Guidance System = Evaluation and Benchmarking – If you don’t have a way to efficiently make informed choices on all these attributes, the innovation process becomes an exercise rooted in gut instinct rather than in data. Measuring the appeal of one concept chosen by gut isn’t measurement; you need to identify and measure the best opportunities out of all the options. Marketers need clear maps outlining which attributes will matter most to a target audience and how the best concepts perform against the competition.
  • Payload System = Go/No Go – The very top of the rocket is the payload, what’s actually carried into space. This is your precious cargo, and it depends on a well-tuned system to get it off the ground.

Since ideas are the rocket’s fuel, “open innovation” is an attractive destination. Many companies today struggle to bring multiple constituents together within the confines of the firm, and the move to a more open innovation cycle relies on a combination of technology and company-wide culture change.

Enter a new era of openness, where collaboration is key.

  • Idea management systems: As companies go global and business becomes digital, more formalized idea management systems allow people from the far reaches of the organization to submit their suggestions.
  • Innovation management: The company uses technology to methodically connect the best ideas to company goals and incorporate them into the innovation process.
  • Collaborative innovation: At this stage, the firm begins to seek out broader employee input throughout the innovation and development process, not only at the ideation stage.
  • Open innovation: What if the best ideas aren’t inside the organization? Now, customers, suppliers, and other constituents outside of the company are tapped for input and ideas in the never-ending search for competitive advantage.

But the process is only the framework for managing the work; you also need to fill the funnel and sustain it through cultural adoption within the organization.

Filling the rocket tank 

The stereotype of the lone innovator and the anecdotes about sudden inspiration are clearly not true. Edison was not a lone inventor: he built the first modern-day industrial lab. Steve Jobs was not a lone genius: his alchemy was instigating and circulating ideas through his people.

To reliably innovate, companies must adopt collaboration platforms that empower workforces to interact. Platforms that enable teams to better disseminate ideas, share information, and (most importantly) work together.

With that in mind, let’s consider two forms of innovation idea generation that share similar trajectories:

  1. Product Hacking – This new form of hacking is done not by expert computer users or cybercriminals but by everyday people. The term refers to the act of modifying or customizing everyday products to improve their functionality or repurpose them.
  2. Customer Co-Creation – Co-creation is when two or more people come together as a collaborative team, with a strong desire to create something beyond their individual capabilities. In contrast to product hacking, people here work in teams, typically starting with ideas of their own and then collaborating towards common ground.

What Product Hacking & Customer Co-Creation have in common is that anyone can be a contributor – employees, customers, suppliers, and other constituents outside of the company.

It’s at this raw, emotive level that customers are able to design rough constructs to solve their pain points. The themes that emerge from hacking and/or co-creating can serve as the creative spark for a talented developer.

Steve Jobs didn’t simply come down from the mountaintop with a fully functioning iPhone. He correctly “observed” what the customer was searching for in a social/communication device and developed a product to answer that. Of course, geniuses like Steve Jobs come around only once a century, so the key to filling the innovation rocket at your company is product hacking and collaborative co-creation.

Building an Innovation Culture

Staying focused on aligning the product to the need requires single-minded focus and discipline; too often companies jump for the low-hanging fruit at the expense of the big-picture opportunity. That said, nimbleness in a rapidly changing market, especially one as technology-driven as the market research industry, can be important as well.

A hybrid system for product development that mixes elements from both imperatives (Product Hacking & Customer Co-Creation) is a proven model.

First, always start with a goal in mind – solve for x. Then structure a project plan based on its critical parts, identifying interdependencies and target dates. Importantly, within the plan, build in time for additional learnings and the potential for risk or surprises. For example, if part of the plan includes building new features into your online research tool, set aside time for prototyping and user feedback. At the extreme, if you think part of the plan is high risk, develop multiple, simultaneous approaches to ensure certainty. Within that system, you have flexibility to reprioritize items or projects.

Last but not least, while each project can be more planned or more agile, what should never change is the focus on the power of teams and collaboration. Make project teams cross-functional with representation from customer stakeholders, R&D, operations and finance. Also, as projects progress, let the teams smartly self-organize – individual team members can be added or switched out based on the needs at that juncture.

Collaboration is an absolute requirement for successful innovation. Don’t be afraid to look outside of your organization to collaborate externally – it can be invaluable in building a culture of collaborative entrepreneurship. By partnering with specialists that are willing to learn together, every member of the team can be enriched by the experience.

Be mindful that what you get from the open innovation process depends on who you include. Your core users already see value in what you offer. They are an important element, but focusing only on their input might yield a tweaked version of what you’ve already got and won’t move the needle on competitive advantage. Lead users can push the product in new directions, potentially influencing the next generation of your product. Last, non-customers have needs that you don’t currently fulfill, and their input may very well be the future of your company, opening up new spheres of innovation.

Finally, learn to be accepting of failure. Through the innovation process we just outlined you will purposefully go through many iterations of your original vision – many of which may not be successful. As you iterate quickly, learning critical requirements and using the learnings to make further developments, the success of the product is built on every failure that comes before it. As an organization this will strengthen your commitment to new ideas, understanding that the ideal solution will often be much changed from the original concept.

Now, go forth and innovate with purpose!

Want to learn more? Visit Focus Pointe Global.

Machine Learning Bolsters Market Research

From big data to predictive answers and targeting, machine learning technology is changing the market research industry.

By Frank Smadja, PhD.

Technology continues to empower marketers to improve the efficacy and personalization of messages. Data informs targeting capabilities and insight supplements understanding. This insight has become more critical as marketing teams need to understand the ‘why’ behind the ‘what,’ making survey research ever more important. To address this need, market researchers are looking for ever more precisely targeted individuals to participate in survey research.

Take, for example, an insights professional planning a survey of 1,000 people who drive a specific make and model of motorbike and live in the London area. A traditional approach would be to send the survey to the 200 people pre-identified as motorbike owners on a research panel and then screen many more panelists with ownership questions to reach the 1,000 target. While this is a longstanding practice, it runs contrary to achieving speed to insight, and it does nothing to improve the survey experience.

The good news for market researchers who face this problem is that help is on the way.

Machine learning technology is emerging that will help online survey tools predict the answers to questions like “Do you own a motorbike?” The technology teaches computers to learn from experience – learning from data without relying on a predetermined equation as a model. Machine learning algorithms adaptively improve their performance as the number of samples available for learning increases.

Machine learning has an expanding presence in marketing, such as in online advertising as a tool for making digital campaigns more targeted and personalized. Another way marketers leverage machine learning is in customer experience applications to identify patterns among customer interactions to increase revenue opportunities. Market research is a natural extension of this technology trend.

With machine learning, survey platforms can learn and predict properties of users based on their answers to other questions and on similarities in demographic and profile data to other panelists. The technology enables insights professionals to rely less on qualifying questions, which means less wasted time and fewer wasted resources. It also aligns with respondent expectations: researchers should already have basic knowledge about them and shouldn’t need to ask information questions that can turn respondents off.

Back to our motorbike example: machine learning enables surveys to primarily target people who are likely to own a motorbike even though they never specifically answered the question. Based on this learned intelligence, the survey simply asks respondents to verify the fact as they enter, reducing the number of people rejected and boosting panelist satisfaction because the survey is attuned to their profile.
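The workflow described above can be sketched in a few lines. This is a minimal illustration using scikit-learn and synthetic panel data; the features, coefficients, and cutoffs are assumptions for demonstration, not any vendor’s actual model.

```python
# Minimal sketch: predict an unasked profile attribute ("owns a motorbike")
# from other panelist data, then invite the most likely owners.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical profile features: age, urban residence, auto-survey history
n = 5000
X = np.column_stack([
    rng.integers(18, 70, n),   # age
    rng.integers(0, 2, n),     # lives in urban area (0/1)
    rng.poisson(2, n),         # auto-related surveys completed
])
# Synthetic ground truth: younger, urban, auto-engaged panelists more likely
logits = -4 + 0.03 * (45 - X[:, 0]) + 1.2 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Rank panelists who were never asked the question by predicted probability
# of ownership; invite the top slice and simply verify on survey entry.
probs = model.predict_proba(X_te)[:, 1]
invite = np.argsort(probs)[::-1][:1000]
print(f"Mean predicted ownership among invited: {probs[invite].mean():.2f}")
```

The key move is that qualification shifts from asking every panelist a screening question to a single verification step for a pre-ranked subset.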

Machine learning addresses a challenge that traditional heuristic-based approaches cannot handle; such methods would not perform well and would be impossible to scale. Survey panels can run into the millions of users, each panelist carrying thousands of data points – potentially billions of data points in total. Machine learning techniques handle this scale, learning about panelists from their activities and then predicting their answers to questions.

In examining ways machine learning can advance market research, Toluna has used an open source library developed by Google to investigate and compare a number of learning algorithms. Our research shows that this technology can significantly improve targeting. Survey design will continue to first look for people who match the target and then, in a second step, look for people whose predicted answers match the target. We expect this new process will significantly reduce panel fatigue. It will also help drive more completes per survey panel and require less capital expenditure per survey.

Machine learning technology shows great promise in market research. It can learn from huge amounts of data to generate insights and predict answers without asking irrelevant questions to panelists – improving respondent experience and survey results.

The Next Sea Change in Marketing is Coming Fast. Are You Ready?

Spearheaded by the growth of Amazon, retailers are becoming a mix of retailer, publisher, and ad network, effectively changing the future of marketing.

By Joel Rubinson

Retailers are becoming publishers and ad networks, offering reach, unified IDs, and, most importantly, a way to target active shoppers that can dramatically improve marketing ROI as massive amounts of ad spending shift to this new “channel”. Amazon could turn the big two into the big three.

Marketers and researchers…ARE YOU READY?

In April 2017, I published a white paper in partnership with Viant and NCS that proved you could double return on ad spend (ROAS) simply by targeting those modeled to be close to an upcoming purchase, i.e. active shoppers.

On August 2nd, I blogged about the importance of focusing on active vs. dormant consumers.

On Monday August 21st, I wrote in a LinkedIn blog that Amazon would turn the big two into the big three as it gets serious about ad revenues. I said that Amazon was like Google plus Walmart combined. I said that Walmart and other retailers would soon follow and there would be a massive shift of ad dollars as retailers become retailer/publisher/ad networks, following the Amazon model.

Then two days later, on Wednesday August 23rd, Google and Walmart announced an alliance, reportedly to blunt Amazon.

Now, on August 29th, it is reported that Amazon has a programmatic offering for any off-site web property in its network. The same article quotes Morgan Stanley as predicting Amazon’s ad revenues will grow to $7 Billion by 2020.  Although in the right direction (big growth), I think this estimate is way LOW! They could easily get into the $20-30 BILLION range because marketers will WANT to invest their ad dollars this way…activation and brand building, all in one platform.  No one else has this yet, although Walmart COULD.

Marketers…it’s on! Are you ready for the sea change as marketing shifts tens of billions of BOTH brand-building and performance advertising spending towards active shoppers?

A new “segment in the Lumascape” is about to develop. This new ecosystem will include:

  • Retailers as publishers and ad networks
  • Intender data aggregators
    • Frequent shopper data for targeting
    • Intender targeting: mine intention and interest signals on the web to target intenders both programmatically and via traditional media
    • Location data
  • Analytics providers who can help you measure and adjust marketing spending towards “Actives”.
  • Media agencies who develop expertise in the systems that the new class of retailer/publisher/ad network offer for targeting ad messages.

What do marketers and researchers need to do to prepare for this sea change?

  1. Segment creation and lookalike modeling. Develop methods for identifying actives at scale so they can be targeted. Depending on your business, this will involve frequent shopper data, CRM data, digital profiles from 1st, 2nd, and 3rd party data, location data, and lookalike modeling that also includes demographic factors and profiling factors (e.g. life change events like getting married, buying a home, having a child, etc.)
  2. Media effectiveness insights. You need to prove or disprove this hypothesis: can you do brand-building by targeting active consumers, and is it more effective than changing brand consideration and preference levels among those who are dormant?
  3. Separate measurement of actives and dormants. They do not offer the same opportunities for either short- or long-term growth. To continue to conflate these as we currently do in brand tracking would be a tragic mistake. Conflating reach across these groups for media planning might be even worse.
  4. Set aggressive targets. What is your marketing return on ad spend today? You probably do NOT currently know or measure this. You need to. Then you need to set an aggressive target to grow it by, say, 20% for each of three years. Why would you not do this in an age when you can target active shoppers and demonstrate TWICE the ROAS?
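Point 1 above, lookalike modeling, can be illustrated with a simple similarity-based sketch. The features and thresholds here are hypothetical stand-ins for the frequent-shopper, CRM, and location signals the text describes: score prospects by how closely their profile resembles a known seed set of active shoppers.

```python
# Illustrative lookalike scoring: rank prospects by cosine similarity to
# the centroid of a known "active shopper" seed set. Data is synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Profile vectors, e.g. [recency, frequency, spend, life-event flag]
seeds = rng.normal(loc=[1.0, 2.0, 3.0, 0.5], scale=0.3, size=(200, 4))
prospects = rng.normal(loc=0.0, scale=1.0, size=(10000, 4))

centroid = seeds.mean(axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors (epsilon avoids div by zero)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

scores = np.array([cosine(p, centroid) for p in prospects])

# Target the top 5% most seed-like prospects as "lookalike actives"
cutoff = np.quantile(scores, 0.95)
lookalikes = prospects[scores >= cutoff]
print(f"{len(lookalikes)} lookalike prospects selected")
```

Production systems would use richer models (gradient boosting, embeddings) and far more features, but the core idea, expanding a small seed of known actives into a targetable audience at scale, is the same.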

Some naysayers will say this is the death of marketing and branding as we move to performance marketing. They are as wrong about this as they were that search would kill branding. This is the rise of “Relevance marketing” because consumers respond to advertising (both brand-building and performance oriented) that is about what they care about at the moment. “Right message, right time, right consumer, right way”…not the death of marketing but its future.

Top 8 Video Content Analysis Tools

Get up to speed with the eight most common video content analysis tools available today.


With over 500 million hours of video watched daily on YouTube, it’s no wonder video analysis is all the rage these days. Many marketers talk about or dabble with video analysis tools like facial coding and the use of AI for high-frequency video analysis. The reality is that facial coding is only the tip of the iceberg when it comes to the video analysis tools available today.

Now widely available, video analysis is commonly used in health care, automotive, retail, home automation, and security. By and large, video analytics has advanced greatly in capability over the past few years, allowing users to capture, index, and search faces, colors, and objects such as license plates. For my own curiosity, I decided to investigate the landscape of video analysis tools and their common applications.


Top 8 video content analysis tools:

Motion Detection – In video analysis, motion detection is used to identify relevant motion in an observed scene.

Shape Recognition – Analyzes shapes in the input video. This feature is often used in advanced functionality like object detection.

Object Detection – Object detection is another key feature of video analysis, used to determine the presence of an object such as a tree or car.

Intelligent Recognition – Used to automatically recognize and possibly identify human faces (face recognition) or number plates on cars (automatic number plate recognition). Intelligent video analysis uses deep learning for facial recognition.

Smoke and Flame Detection – Cameras with intelligent video surveillance technology can detect smoke and flame in under 15 seconds. Using a microchip built into the camera, the video can be analyzed for smoke/flame chrominance, color, shape, flicker, pattern, and direction.

Egomotion Estimation – Used to determine the position of a camera by the analysis of its output signal.

Video Tracking – This feature is used to determine the location or position of persons or objects in a video relative to an external reference grid.

Tamper Detection – This feature detects whether the camera or its output signal has been tampered with.
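The first tool in the list, motion detection, is simple enough to sketch end to end. This toy frame-differencing detector (a deliberately minimal stand-in for the commercial systems described above) flags a frame as containing motion when enough pixels change between consecutive frames; the threshold and frame sizes are arbitrary illustrative choices.

```python
# Toy frame-differencing motion detector on 8-bit grayscale frames.
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Boolean mask of pixels whose intensity changed significantly.

    Frames are cast to int16 before subtraction to avoid uint8 wraparound.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def has_motion(prev_frame, frame, min_fraction=0.01, threshold=25):
    """Report motion when more than min_fraction of pixels changed."""
    mask = motion_mask(prev_frame, frame, threshold)
    return mask.mean() > min_fraction

# Two synthetic frames: a static background and one with a bright object
prev_frame = np.zeros((120, 160), dtype=np.uint8)
frame = prev_frame.copy()
frame[40:60, 70:90] = 200  # a 20x20 bright object appears

print(has_motion(prev_frame, frame))  # True: the object trips the detector
```

Real systems add background modeling, noise suppression, and region filtering on top of this idea, but frame differencing is the conceptual core of "identifying relevant motion in an observed scene."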


Future Applications of Video Analysis

As video and the tools used to analyze it continue to evolve, I see interesting new opportunities to glean more insights from video assets. Among these opportunities, I’m most bullish on environmental and spatial analysis tools being adopted by qualitative researchers and product developers in R&D. One technology currently used for video analysis is suspect-tracking technology, which tracks all of a subject’s movements: their origin, destination, and exactly when, where, and how they move. Another is indexing technology, which can locate people with closely matching features who were within a camera’s viewpoint during a specific period of time.

Looking back, I can identify various qualitative projects and R&D product launches I’ve been a part of where environmental and spatial video analysis tools would’ve provided valuable insights.

According to reports, the global video analytics market was worth millions in 2016 and is expected to see tremendous growth over the 2016-2022 forecast period. Market demand for video analysis will continue to rise, especially as video analysis is smartly combined with artificial intelligence.


BiS Research. (2017). Global VSAAS Market Report. Retrieved July 24, 2017, from

IFSEC Global. (2017). Video analytics: analytics as a sensor. Retrieved July 24, 2017, from

Video Analytics. (n.d.). Retrieved June 24, 2017, from

Wiki. (n.d.). Retrieved July 24, 2017, from

SSI Embraces DIY: Interview with Chris Fanning, CEO of SSI

An in-depth interview with Chris Fanning, CEO of SSI, on their new suite of products, the future of the insights industry and what lies ahead for the company as they celebrate their 40th anniversary.


The market research industry, like many industries, is going through dramatic changes. But the main goal remains the same – gaining insights from consumers and business professionals. How fast those insights can be turned into money-making endeavors is what many organizations are striving for every day.

This imperative has disrupted, and continues to disrupt, the market research industry and every company in it. From CATI to online surveys and panels, to DIY, and now to AI-driven automation, the pace of change has only increased. Few companies have weathered that change unscathed and fewer still have thrived.

One of the exceptions is Survey Sampling International; through it all SSI has adapted and grown. They are a textbook example of how to adapt to a changing market.

Now, the venerable sample company is entering a new phase with this week’s launch of a suite of do-it-yourself products that will make it a lot easier and less expensive to conduct professional research 24/7. They join the ranks of the disruptors that are leading the industry by breaking down the barriers in the value chain and creating an integrated platform that makes online research easier, faster and, if not better, at least good enough for the needs of much of the industry. Their competitive set as an integrated platform is fairly limited: Research Now, Toluna, SurveyMonkey, AYTM and Google Surveys are the major examples, but SSI is carving its own path of differentiation by being smart, well funded, and opportunistic.

If you’ve been around the industry as long as I have, you’ll remember SSI as one of the “go to” sources for telephone sample and an early player in online sample, though they largely served the research supplier community. As the industry has shifted to make research available to a wider market, SSI has expanded its customer base to include media agencies, consulting firms and corporations. Beyond its core market research agency customers, SSI is scratching the surface of new vertical industries for additional sources of revenue, for example financial services, health care, transportation and technology.

Since CEO Chris Fanning joined after HGGC, a middle-market private equity firm, acquired the company from Providence Equity Partners and Sterling Investment Partners, he has been focused on building the business through internal investments in the team and technology, and through a series of acquisitions to expand its footprint and capabilities. Having doubled in size during the last five years, SSI’s new initiatives and product expansions put it on a path to become a billion-dollar revenue company. It has set its sights on other panel companies, similar to the MyOpinions acquisition in 2015 that further expanded its business and made it the market leader in Australia and New Zealand.

SSI’s MRops acquisition in 2015 gave it additional technology, expertise and scale to grow SSI’s consulting customer base.  The 2016 acquisition of Instantly gave SSI additional sample it didn’t have in mobile and access to technology for the DIY part of the market.

Today, the company is one of the top 50 largest research companies in the world and close in size to the giant of DIY, SurveyMonkey. With the launch of their new suite of sample and data collection tools, I would expect them to give that darling of Silicon Valley a real run for their money.

I sat down to talk to Chris about the new offering, where the company has been, and where it’s going in the future. It’s a frank and funny talk that I think will be instructive for everyone in the industry. And if you don’t think the discussion is fun, you can at least laugh at the debut of my new “Breaking Bad” look!



Since Tom Danbury started the company in his garage in 1977, SSI has changed quite a bit; this year marks the 40th anniversary for them and I expect they are just getting started.

Below is the press release on the SSI Suite.


 SSI today announced a new way to conduct self-service market research, from authoring surveys and selecting target audiences to viewing results. The SSI Suite provides fingertip access to integrated market research tools and technology, giving the expert market researcher or novice complete control to conduct research when and where it is needed.

The SSI Suite features a number of tools including:

  • SSI Survey Builder – SSI’s free and easy-to-use survey authoring tool enables users to create their own online or mobile surveys. It allows the user to control questions asked, who answers them and how to report results. Survey Builder enables a user to create unlimited surveys, with unlimited questions, sent to unlimited respondents. This means no risk, no budget and no approvals, just quality consumer insights using a professional platform.
  • SSI Self Serve Sample – Based on the platform behind its high-touch sample service, SSI Self Serve Sample features the same powerful targeting capabilities – gender, age, marital status, education, income, job title and more – 24/7, with the click of a button. It works with Survey Builder and most major survey platforms to provide seamless integration with SSI’s global online and mobile sample, providing instant access to a wide range of target audiences. In addition, the user will receive price and feasibility feedback on any project, and be able to track its progress in field.
  • SSI Survey Score – A free survey diagnostic tool that tests questionnaires before they go into the field to detect problems that may affect survey outcomes and success. This tool assesses elements such as question types and length of interview to spot issues related to completion rates.
  • SSI Sample API  – An automated application program interface that maximizes the benefits of the SSI Suite and allows complete control and access to SSI sample audiences using a company’s existing in-house systems.

“SSI Suite represents the natural evolution of SSI’s industry-leading technology platform and capabilities,” said Chris Fanning, SSI president and CEO. “SSI Suite provides researchers control, speed to market and the power of quality consumer insights 24/7 using a technology platform that has delivered more than 100 million surveys in recent years. The SSI Suite gives you the speed and flexibility needed to compete successfully in a data-driven world.”

For example, SSI Survey Builder creates effective and compelling surveys without needing complex coding or years of experience. It is the self-service solution for business professionals that is easy and powerful. It connects the survey to SSI’s online and mobile audiences, or, if a user maintains her own audience, SSI Survey Builder makes it easy to promote the survey by email, social network or even embedding it on the company’s website.

“Market researchers are demanding faster access to insights,” said Simon Chadwick, managing partner, Cambiar Consulting. “The SSI Suite positions SSI at the forefront of the industry movement to use technology to reach and develop insights at a moment’s notice. This suite of tools is the desired advancement for direct access to technology for market research. It provides solutions that embrace working smarter, not harder.”

SSI Self Serve Sample gives users immediate access to a target audience for customer insights. It provides a way to send any survey to SSI’s global panel of respondents 24/7 without the need for external coordination. With millions of qualified market research participants in multiple countries, your survey will be connected with one of the largest consumer panels in the industry. Its intuitive interface allows the user to further refine his audience by additional segments and assign nested quotas for a more accurate sample representation.  Each research project is traced and displayed in a dynamic dashboard, giving the user visibility into important KPIs that help evaluate the project’s performance every step of the way.

“Sample needs arise anywhere and anytime,” said Bob Fawson, SSI chief product officer. “Response times are critical to our customers’ success, and by introducing the SSI Self Serve Sample, we address a critical market need. The Self Serve Sample tool was created to give more business professionals the opportunity to access SSI’s global panel audience. It acts as the cornerstone in the SSI Suite of tools to help you better understand your target market and achieve rich customer insights.”

For additional information on the SSI Suite, please visit

Celebrating 40 years in business, SSI is the premier global provider of data solutions and technology for consumer and business-to-business survey research. SSI reaches participants in 90+ sample countries via internet, telephone, mobile/wireless and mixed-access offerings. SSI operates from 40 offices, plus remote staff, in more than 20 countries, offering sample, data collection, CATI, questionnaire design consultation, programming and hosting, online custom reporting and data processing. SSI’s employees serve more than 3,500 customers worldwide. Visit SSI at

Correspondence Analysis of Brand Switching and Other Square Tables

Posted by Jake Hoare Thursday, September 7, 2017, 7:00 am
Correspondence analysis can be used to visualize a complex table of results as a much simpler chart. This post describes how correspondence analysis works and how to interpret the results.

By Jake Hoare, Displayr

Correspondence analysis is a powerful technique that enables you to visualize a complex table of results as a much simpler chart. In this post I discuss the special case of square tables, which often arise in the context of brand switching, using examples of cereal brand-switching, and switching between professions.

As background, this earlier post describes what correspondence analysis is. This post describes how correspondence analysis works and how to interpret the results.

Correspondence Analysis of Square Tables

A typical table used for correspondence analysis shows the responses to one question along the rows and responses to another question along the columns. Each question has its own set of mutually exclusive categorical responses. The cell at the intersection of any row and column contains the count of cases with that combination of row and column responses. I say “typical table” above because there are other use cases, such as tables of time series, raw data, and means, all of which are described in this post which describes when to use correspondence analysis.
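To make the mechanics concrete, here is a minimal sketch of classical correspondence analysis in Python with numpy (the toy table and variable names are illustrative, not taken from this post): compute the cell proportions, the row and column masses, standardize the residuals from independence, and take a singular value decomposition.

```python
import numpy as np

# Toy contingency table of counts (rows = one question, cols = another).
N = np.array([[20.0,  5.0,  2.0],
              [ 4.0, 18.0,  6.0],
              [ 3.0,  7.0, 15.0]])

P = N / N.sum()            # correspondence matrix (cell proportions)
r = P.sum(axis=1)          # row masses
c = P.sum(axis=0)          # column masses

# Standardized residuals: deviations from independence, scaled by the masses.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# The SVD supplies the plotting axes; the squared singular values are the
# inertias (the "variance explained" by each dimension).
U, sv, Vt = np.linalg.svd(S)
row_coords = (U * sv) / np.sqrt(r)[:, None]   # principal row coordinates
explained = sv**2 / (sv**2).sum()             # share of inertia per dimension
```

The first two columns of `row_coords` are what a two-dimensional correspondence analysis chart plots, and `explained` gives the percentages usually printed on the axis labels.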

In general, the sets of responses labeling the rows and columns are different. For example, the rows may be labeled by each respondent’s favorite color and the columns by their favorite sport. If, instead, we labeled the columns by their partner’s favorite color, then we would have an example of a square table.

A square table, in this case, does not just have the same number of rows as columns. The rows and columns have identical labels, and they are presented in the same order. Such tables may also be called switching matrices, transition tables or confusion matrices.

Below I show an example as a heatmap for easier visualization. The data relate to brand switching between breakfast cereals. The rows contain the first cereal purchased, the columns contain the next cereal purchased.


Data symmetry

Before delving into the correspondence analysis, let’s take a look at the data above. One of the first observations that we can make about it is the strong diagonal. On the whole, people tend to buy the same cereal repeatedly.

Looking away from the diagonal, there is also high symmetry. For example, the number switching from Cornflakes to Rice Krispies (80) is almost the same as the number switching in the other direction (81). Both of these observations are quite typical of square tables from consumer data.
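Both observations are easy to quantify. The sketch below uses a small hypothetical switching matrix (the numbers are made up for illustration) to compute the share of repeat purchases and an overall asymmetry score:

```python
import numpy as np

# Hypothetical 3-brand switching matrix (rows = first purchase, cols = next).
M = np.array([[120.0, 15.0,  8.0],
              [ 14.0, 90.0, 11.0],
              [  9.0, 12.0, 70.0]])

# Strong diagonal: the share of purchases that repeat the same brand.
loyalty = np.trace(M) / M.sum()

# Symmetry: total absolute difference between each flow and its mirror
# image, as a fraction of all purchases (0 = perfectly symmetric switching).
asymmetry = np.abs(M - M.T).sum() / (2 * M.sum())
```

For consumer data like the cereal table, `loyalty` is typically high and `asymmetry` close to zero.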

Now let’s perform the correspondence analysis. The scatterplot below shows the first 2 output dimensions.


Interpretation of Square Correspondence Analysis

It’s tempting to draw immediate conclusions from the plot above. Before we do so, we need to take note of a few things.

First, any square matrix can be broken down into symmetric and skew-symmetric components. The correspondence analysis of those two components is driven by different aspects of the data, and they are best analyzed separately.

  • The symmetric component shows us how much 2-way exchange occurs between categories. Points that are close together have a relatively high rate of exchange between them.
  • The skew-symmetric component determines the net flow into or out of a category. Points that are close together have similar net flows with the other categories.

We can tell which dimensions are symmetric and which are skew-symmetric by inspecting how much variance each dimension explains. The symmetric component produces dimensions that each explain a different amount of the variation in the table. In more technical language, the eigenvalues, inertias or canonical correlations are unique. Correspondence analysis of the skew-symmetric component produces dimensions that occur in pairs. Both dimensions within a pair explain the same amount of variation.
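The decomposition itself is a one-liner in matrix notation. A minimal sketch, using a hypothetical square matrix:

```python
import numpy as np

# Hypothetical 3x3 switching matrix (rows = from, cols = to).
A = np.array([[50.0, 10.0,  4.0],
              [12.0, 40.0,  6.0],
              [ 3.0,  5.0, 30.0]])

sym  = (A + A.T) / 2.0   # symmetric part: two-way exchange between categories
skew = (A - A.T) / 2.0   # skew-symmetric part: net flows between categories
```

The two parts add back to the original table, and the skew part flips sign under transposition, so its diagonal is always zero: repeat purchases contribute nothing to net flows.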

Second, always take note of the amount of variance explained by each dimension. When the total explained by both dimensions of a chart is much less than 100%, the unseen dimensions contain a significant amount of information.

Third, points further from the origin are more strongly differentiated. Conversely, points that are close to the origin are less distinct and may not be similar (other than their mutual lack of distinction!).

Finally, this post covers interpretation of correspondence analysis in much more detail.

Cereal interpretation

To understand the cereal correspondence analysis, let’s look at the variance explained by each dimension.

We see that dimensions 1 to 6 have unique amounts of variance explained, so they result from the symmetric component. Taking a closer look at the raw output below, dimensions 7 to 12 occur in pairs, so they are a result of the skew-symmetric component.

Since the earlier scatterplot showed the first two dimensions, we can now say that they are symmetric, which means distances on that chart reflect rates of two-way exchange. Shredded Wheat sits apart from the other brands, so there is relatively little switching to or from Shredded Wheat. Frosties and Crunchy Nut Cornflakes form a pair, indicating a relatively high level of switching between those brands. The other 4 brands also form a loose group of mutual interchange. However, these two dimensions only account for 62% of the variance, so they do not tell us everything about the data.

The fact that the first 6 dimensions result from the symmetric component confirms our earlier observation about the symmetry of the data. In fact 99.5% of the variance is due to symmetry. It is not unusual that the symmetric component is dominant. In this case it would be unwise to plot the skew-symmetric dimensions since they represent such a tiny part of the information. I would also never plot a symmetric and skew-symmetric dimension on the same chart.

Less Symmetric Data

As a second example, I am using data about how people transition between jobs. This is a somewhat mature data set, referring to German politicians in the 1840s. You will not find software engineer listed. The rows of the table tell us the professions held by the politicians prior to their terms, and the columns tell us the professions that they held after they left office.

The plot below shows the first two symmetric dimensions.


From this we conclude that there is relatively high exchange between Justice, Administration and Lawyer. There is also high exchange between Education and Self-employed.

The skew-symmetric component is 15% of the variance, which is much more than for the cereal data but still a small part of the whole. On the chart below we see that Lawyer and Justice are at the extremities. This means that those professions experience a relatively high net inflow and outflow.


We cannot say from the chart which has the inflow and which has the outflow. The only way to tell is to look at the raw data. To clarify this point, we can compute the net inflow for each profession as the difference between its column total and its row total in the original table. The final column chart shows us that Lawyer has the inflow, and Justice has the outflow.
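Computed on a small hypothetical transition table (the numbers are invented for illustration), the net-flow calculation looks like this:

```python
import numpy as np

# Hypothetical transition table (rows = profession before, cols = after).
T = np.array([[30.0,  8.0,  2.0],
              [ 4.0, 25.0,  6.0],
              [10.0,  3.0, 20.0]])

outflow = T.sum(axis=1)        # row totals: people leaving each category
inflow = T.sum(axis=0)         # column totals: people entering each category
net_inflow = inflow - outflow  # positive = net gain, negative = net loss
```

Because every person who leaves one category arrives in another, the net flows always balance to zero across all categories.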


One key advantage of using correspondence analysis specifically for a square table is that we do not need to plot row and column labels separately. This means that we can interpret the closeness of points on the same scale. However, as with all correspondence analysis, we need to take care to draw correct conclusions. In particular, the symmetric and skew-symmetric components should be analyzed independently. The symmetric part tells us about the exchange between different categories, while the skew-symmetric part tells us about net flows into or out of categories.


The Statistics and Statistical Tools Used in Market Research in 2017

In August NewMR ran a survey about the sorts of statistics and statistical tools used by market researchers and gathered the views of over 300 people. Ray Poynter has created a short summary which shares the top four findings and the two key charts.


By Ray Poynter, NewMR

In this post I am sharing the summary and two key charts. The eight-page version of the results can be downloaded.

The top four things that I want to share about the use of statistics and statistical tools are:

  1. Most statistical tests/approaches are not widely used. Only Correlation, Regression, z- or t-tests, and Cluster Analysis have been used by more than 50% of the participants in this research, during the first half of 2017 – and this sample probably over-represents people using statistics, and under-represents those using statistics less often.
  2. SPSS is the dominant software package amongst people using statistical packages. Given that SPSS is approaching 50 years old, that may not be the sign of a dynamic industry. But there are many people using tools such as Q, Sawtooth Software and SAS – and beyond them, programs such as Latent Gold, Tableau and XLSTAT.
  3. One of the growth areas is the use of integrated data collection/analysis solutions, for example Confirmit, Askia, Vision Critical and Qualtrics. The use of these tools requires the researcher to make fewer decisions. For example, survey monitoring flows into the analysis without any extra steps, and the packages have a default way of testing differences (for example, t-tests) – making it less likely that the researcher will consider less convenient options, such as Chi-squared tests.
  4. The most widely adopted complex solution is R, an open-source programming language with a large ecosystem of libraries for advanced analytics, data science and data visualisation. People have been highlighting the growing role of R for a few years, and it seems to be gaining a stronger share of market research and insight analyses.



Download more details here.

Layered Data Visualizations Using R, Plotly, and Displayr

Posted by Tim Bock Thursday, August 31, 2017, 7:00 am
Learn how to create layered data visualizations using R in this week's blog with Tim Bock.


By Tim Bock, founder of Displayr

If you have tried to communicate research results and data visualizations using R, there is a good chance you will have come across one of its great limitations: R is painful when you need to build a visualization by layering multiple visual elements on top of each other – that is, when you want to assemble many elements, such as charts, images, headings, and backgrounds, into one visualization.

The good: R can create awesome charts

R is great for creating charts. It gives you a lot of control and makes it easy to update charts with revised data. As an example, the chart below was created in R using the plotly package. It has quite a few nifty features that cannot be achieved in, say, Excel or Tableau.

The data visualization below measures blood sugar, exercise intensity, and diet. Each dot represents a blood glucose (BG) measurement for a patient over the course of a day. Note that the blood sugar measurements are not collected at regular intervals so there are gaps between some of the dots. In addition, the y-axis label spacings are irregular because this chart needs to emphasize the critical point of a BG of 8.9. The dots also get larger the further they are from a BG of 6 and color is used to emphasize extreme values. Finally, green shading is used to indicate the intensity of the patient’s physical activity, and readings from a food diary have been automatically added to this chart.

While this R visualization is awesome, it can be made even more interesting by overlaying visual elements such as images and headings.

You can look at this R visualization live, and you can hover your mouse over points to see the dates and times of individual readings.



The bad: It is very painful to create visual confections in R

In his book, Visual Explanations, Edward Tufte coins the term visual confections to describe visualizations that are created by overlaying multiple visual elements (e.g., combining charts with images or joining multiple visualizations into one). The document below is an example of a visual confection.

The chart created in R above has been incorporated into the visualization below, along with another chart, images, background colors, headings and more – this is a visual confection.

In addition to all information contained in the original chart, the patient’s insulin dose for each day is shown in a syringe and images of meals have also been added. The background has been colored, and headings and sub-headings included. While all of this can be done in R, it cannot be done easily.

Even if you know all the relevant functions to programmatically insert images, resize them, deal with transparency, and control their order, you still have to go through a painful trial and error process of guesstimating the coordinates where things need to appear. That is, R is not WYSIWYG, and you really feel this when creating visual confections. Whenever I have done such things, I end up having to print the images, use a ruler, and create a simple model to estimate the coordinates!


Good-looking complex dashboard


The solution: How to assemble many visual layers into one data visualization

The standard way that most people create visual confections is using PowerPoint. However, PowerPoint and R are not great friends, as resizing R charts in PowerPoint causes problems, and PowerPoint cannot support any of the cool hover effects or interactivity in HTMLwidgets like plotly.

My solution was to build Displayr, which is a bit like a PowerPoint for the modern age, except that charts can be created in the app using R. The app is also online and can have its data updated automatically.

Click here to create your own layered visualization (just sign into Displayr first). Here you can access and edit the document that I used to create the visual confection example used in this post. This document contains all the raw data and the R code (as a function) used to automatically create the charts in this post. You can see the published layered visualization as a web page here.

Monthly Dose of Design: Introducing Design Fundamentals For Researchers

New to design? We're bringing back the Monthly Dose of Design series. Get caught up with the fundamentals for market researchers and make your work stand out.


By Emma Galvin & Nicholas Lee

The visual design of your proposals, discussion guides, questionnaires and reports is probably one of the later aspects you consider after methodology and content.

However, here at Northstar, our philosophy is that design has the power to positively disrupt, inspire and elevate research within organisations, and therefore should in fact be a primary consideration. We believe in ‘interpretative visualisation’, meaning the look and feel of our deliverables start to tell the story before any word is read. From ensuring proposals resonate with clients, to communicating discussion guides clearly to international moderators, to engaging participants with questionnaires and exciting clients with insight presentations, design can enable a better research experience for everyone involved.

As a result, we’d like to share 10 design fundamentals that all researchers can use to enhance their work.

Planning Fundamentals

Do Your Visual Research
Create a moodboard embodying the look and feel of your upcoming document that includes colour palettes, visual styling, and font styles. Most importantly keep it consistent with your research’s theme.

Purposeful Design
Make sure you can explain why you have done what you have done visually, and that there is a reason behind every visual or graphic element you use in your document. A simple layout is more visually appealing and easier to navigate than a chaotic one. White space is fine when well placed – there is no need to pack a page with unnecessary design elements. From there, you can replicate your inspiration through the layout, alignment, image styling, graphic elements and typography.

Build a Content Hierarchy
Once all your content is finalised, the first thing to do is establish what are the primary and secondary points of information you are trying to communicate for each section. The primary points will become the most visually engaging features in your document through clever and considered use of contrasting colour palettes, scale and different font weights.

Setting up a mock content page with all titles, subtitles and key points within each section helps in planning the design. This allows you to easily check details such as consistency of title naming, the number of pages needed and the space allocated to content.

Layout Fundamentals

Stretch Your Content Out
There is nothing worse than looking at a document that is overloaded with text. It overpowers your audience and makes your content difficult to understand. Spread your content out across multiple pages or slides so that it’s easier for your audience to understand, and so it can speak for itself when you aren’t there.

Less is More
Minimalism is your best friend and will help you communicate your content clearly, concisely and quickly. Keep your layout simple, utilise white space, and make sure your elements have enough room to breathe and are making the intended impact.

The benefits of minimalism also apply to your copy. A single word or short phrase on a page is sometimes much more effective in delivering a message.

Stay Aligned
To make sure your content and imagery doesn’t look like it has gone wild on the page, give your document alignment rules and margins that encourage structure throughout. Margins are some of the best rules to abide by – enforcing a “no-text zone” within an allocated distance from the edge of the page. This keeps your document’s contents neatly placed in the centre and prevents text from sitting too close to the edge of the page, where it is hard to read.

Font Fundamentals

Limit Font Usage
Choose fonts that are clear and legible. What is the point in delivering your finished content in 5 different fonts that no one can read? Using 2 or fewer fonts also helps simplify the design and makes the document easy on the eyes to follow.

Use Fonts With Variety
When choosing a font, choose one that has a lot of variations – for example thin, light, semi-bold, condensed and italic. This supports your hierarchy of information and lets you distinguish importance at a glance. Having a large font family helps you create new sub-categories without the need for another typeface – helping you keep your design consistent.

Colour Fundamentals

Use Contrasting Colours
Colour can be the S.O.S that your document needs. Build a complementary colour palette that creates contrast in your document and helps you establish hierarchy by pulling out key points of information. Use different tones of your colour palette throughout your document, adjusting the brightness to keep it consistent yet still visually appealing.

Managing Colour and Text
With text, imagery and innovative colour palettes, a rule to remember is that light text belongs on a dark background and dark text belongs on a light background. Also, thinner fonts generally need stronger contrast, as they can become hard to read. If you want to use a thinner font in a lighter colour, try making it a slightly thicker variation – e.g. from light to regular.

What’s next…
Our next post will show you the rules of layout and copy presentation in discussion guide design, transforming it from a jumble sale of timings, objectives, and questions into a clean, crisp document that any moderator or client can make sense of.