Why Capability Trumps Character for Supporters of the US President

What do supporters of Donald Trump value in a President? Tim Bock breaks down the data.


By Tim Bock

American supporters of Donald Trump believe that financial skills are more important in a president than decency and ethics, a new survey shows.

Data science app Displayr and survey company Research Now questioned 1,015 adult Americans in July 2017 on their preferences among 16 different characteristics and capabilities relevant to judging the performance of a president. Supporters of Mr. Trump consider an understanding of economics, success in business, and Christianity to be important. People who do not approve of Mr. Trump place much greater store by decency, ethics, concern for the poor, and concern about global warming.

 


What type of President do most Americans want?

For most people, the most important characteristic in a president is being decent and ethical. This is closely followed by crisis management. An understanding of economics comes in at a distant third place, only half as important as decency and ethics. These three characteristics are collectively as important as the other 13 included in the study (shown in the visualization below).

Capabilities Trump Character

The survey found that people who approve of Trump as president place greater value on different traits than most people do. This is illustrated in the visualization below, which compares preferences broken down by whether people Approve, have No Opinion of, or Disapprove of President Trump’s performance as President.

While most people regard decency and ethics as the most important trait in a president, this characteristic falls into third place for Trump approvers, who instead regard having an understanding of economics and crisis management as more important. For supporters, capabilities trump character.

The largest difference relates to being successful in business. This is the 4th most important characteristic among people who approve of President Trump, but only the 11th most important among the disapprovers. In absolute terms, success in business is 11 times more important to Trump approvers than to disapprovers.

The data shows the reverse pattern for experience in government, concern with poverty, concern for minorities, and concern about global warming. All are characteristics that are moderately important to most people but unimportant to those who approve of President Trump.

Finally, there is evidence for the view that those who support President Trump prefer a Traditional American (which was a dog whistle for white), male, Christian, and entertaining president. However, these differences are all at the margin relative to the other differences.

 

Explore the data

The findings from this study can be explored in this Displayr Document.

 


Methodology

Displayr, the data science app, conducted this study. Data collection took place from 30 June to 5 July 2017 among a cross-section of 1,015 adult Americans. Research Now conducted the data collection.

The state-of-the-art max-diff technique was used to measure preferences. This technique asks people to choose the best and worst of five of the characteristics, as shown below. Each person completed 10 such questions. Each of the questions used a different subset of the 16 characteristics. The data was analyzed using a mixed rank-ordered logit model with ties.

 

 

The percentages shown in the visualizations are importance scores. They add to 100%. Higher values indicate characteristics that are more important.
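For readers who want a feel for how such scores can be derived, the sketch below computes rough, count-based importance scores from hypothetical best/worst tallies and rescales them to sum to 100%. It is only an illustration: the counts are invented, and the study itself used a mixed rank-ordered logit model rather than this simple counting approach.

```python
import numpy as np

# Hypothetical max-diff tallies (invented numbers, not the study's data):
# how many times each characteristic was picked "best" and "worst".
characteristics = ["Decent/ethical", "Good in a crisis", "Understands economics"]
best_counts = np.array([420, 390, 210])
worst_counts = np.array([35, 60, 150])

# Simple best-minus-worst scores rescaled to sum to 100%. This assumes every
# characteristic is chosen "best" more often than "worst"; a mixed rank-ordered
# logit model (as used in the study) produces better-calibrated scores.
bw = best_counts - worst_counts
importance = 100 * bw / bw.sum()

for name, score in zip(characteristics, importance):
    print(f"{name}: {score:.1f}%")
```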

All the differences between the approvers and the rest of the sample are statistically significant, other than for “Good in a Crisis” and “Multilingual”.

The table below shows the wordings of the characteristics used in the questionnaire. The visualizations use abbreviations.

Decent/ethical; Good in a crisis; Concerned about global warming; Entertaining
Plain-speaking; Experienced in government; Concerned about poverty; Male
Healthy; Focuses on minorities; Has served in the military; From a traditional American background
Successful in business; Understands economics; Multilingual; Christian

 

Jeffrey Henning’s #MRX Top 10: Best Practices for Information Security, Digital Marketing, Incentives, and Predictions

Posted by Leonard Murphy Wednesday, July 19, 2017, 7:19 am
All the news fit to tweet, compiled by Jeffrey Henning and curated by the research community itself.


Of the 2,831 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…

 

  1. The Future Consumer: Households in 2030 – Euromonitor expects 120 million new single-person households to be added over the next 14 years, driven by delayed relationships and the elderly outliving their spouses. Couples with children will be the slowest-growing segment.
  2. Beyond Cyber Security: How to Create an Information Security Culture – Louisa Thistlethwaite of FlexMR offers five tips for market researchers to create an “information security culture”: 1) have senior execs take the lead; 2) include security in corporate objectives; 3) provide creative training; 4) discuss security frequently; and 5) promote transparency not fear.
  3. Best Practices for Digital Marketing in the Market Research Space – Nicole Burford of GutCheck discusses the importance of segmenting your audience to provide the right content for the right people at the right time in their path to purchase.
  4. Top 5 Best Practices for Market Research Incentives – Writing for RWConnect, Kristin Luck discusses best practices for incentives, including tailoring them to the audience being surveyed and delivering them instantly.
  5. 5 Ways B2B Research Can Benefit from Mobile Ethnography – Writing for ESOMAR, Daniel Mullins of B2B International discusses five benefits of mobile ethnography: 1) provide accurate, in-the-moment insights; 2) capture contextual data; 3) develop real-life stories; 4) capture survey data, photos, and videos; and 5) more efficiently conduct ethnographies.
  6. MRS Reissues Sugging Advice in Wake of Tory Probe – The MRS encourages the British public to report “traders and organizations using the guise of research as a means of generating sales (sugging) or fundraising (frugging).”
  7. The Future of Retail Depends on the Millennial Consumer – Writing for TMRE, Jailene Peralta summarizes research showing that increasing student debt and rising unemployment for Millennials are reducing expendable income and decreasing retail shopping by this generational cohort.
  8. Prediction Report Launched by MRS Delphi Group – The Market Research Society has issued a new prediction report, “Prediction and Planning in an Uncertain World,” containing expert takes on the issue and case studies on integrating research into forecasting.
  9. 6 Keys for Conveying What Participants Want to Communicate – Mike Brown of Brainzooming untangles the complexity of reporting employee feedback “comments EXACTLY as they stated them.”
  10. Sampling: A Primer – Kevin Gray interviews Stas Kolenikov of Abt Associates about keys to sampling. On sampling social media, Stas says, “[It’s] a strange world, at least as far as trying to do research goes. Twitter is full of bots and business accounts. Some people have multiple accounts, and may behave differently on them, while other people may only post sporadically. One needs to distinguish the population of tweets, the population of accounts, its subpopulation of accounts that are active, and the population of humans behind these accounts.”

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. The following links are excluded: links promoting RTs for prizes, links promoting events in the next week, pages not in English, and links outside of the research industry (sorry, Bollywood).

 

Jeffrey Henning is the president of Researchscape International, providing custom surveys at standard prices. He volunteers with the Market Research Institute International.

WIRe 2017 Gender Diversity Study

Take the WIRe 2017 Gender Diversity Study!

 

It’s been three years since WIRe released the results of our first global survey on gender and diversity in the MR industry. In order to track against our baseline data, and measure progress in our industry, we need your help once again.

Please help us out by taking 10-15 minutes to participate in our 2017 Gender Diversity Study. Your feedback is truly invaluable to WIRe and our industry! This survey is mobile optimized and can also be stopped and restarted if more time is needed to submit your feedback. 

Take The Survey

In this update to the 2014 study, we’ll once again be digging into understanding the diversity of work and people in our field—with the ability to measure against the baseline data we previously collected. We will also look to illuminate what progress has been made on improving and providing diverse and supportive work environments in our industry.

Many, many thanks to Lieberman Research Worldwide for their survey design and analytical support and to FocusVision for their programming prowess, as well as to our corporate sponsors (Confirmit, Fieldwork, LinkedIn, Facebook, Hypothesis, Lightspeed, FocusVision, Research Now, and Kantar) for their support of this research.

We’ll be sharing the results of the survey in the Fall, including a presentation at ESOMAR Congress.

Please forward! Sharing this survey with others in the industry (both women AND men) will ensure we collect diverse points of view.

Thank you!

Kristin Luck
Founder, WIRe

Causation: The Why Beneath The What

Can market research predict what consumers will do next? Find out in this interview with Kevin Gray and Tyler VanderWeele on causal analysis.

By Kevin Gray and Tyler VanderWeele

 

A lot of marketing research is aimed at uncovering why consumers do what they do and not just predicting what they’ll do next. Marketing scientist Kevin Gray asks Harvard Professor Tyler VanderWeele about causal analysis, arguably the next frontier in analytics.

Kevin Gray: If we think about it, most of our daily conversations invoke causation, at least informally. We often say things like “I dropped by this store instead of my usual place because I needed to go to the laundry and it was on the way” or “I always buy chocolate ice cream because that’s what my kids like.” First, to get started, can you give us nontechnical definitions of causation and causal analysis?

Tyler VanderWeele: Well, it turns out that there are a number of different contexts in which words like “cause” and “because” are used. Aristotle, in his Physics and again in his Metaphysics, distinguished between what he viewed as four different types of causes: material causes, formal causes, efficient causes, and final causes. Aristotle described the material cause as that out of which the object is made; the formal cause as that into which the object is made; the efficient cause as that which makes the object; and the final cause as that for which the object is made. Each of Aristotle’s “causes” offers some form of explanation or answers a specific question: Out of what? Into what? By whom or what? For what purpose?

The causal inference literature in statistics and in the biomedical and social sciences focuses on what Aristotle called “efficient causes.” Science in general focuses on efficient causes and perhaps, to a certain extent, material and formal causes. We only really use “cause” today to refer to efficient causes and perhaps sometimes final causes. However, when we give explanations like “I always buy chocolate ice cream because that’s what my kids like,” we are talking about human actions and intentions, and these Aristotle referred to as final causes. We can try to predict actions, and possibly even reasons, but again the recent developments in the causal inference literature in statistics and the biomedical and social sciences focus more on “efficient causes.” Even such efficient causes are difficult to define precisely. The philosophical literature is full of attempts at a complete characterization, and we arguably still are not there yet (e.g., a necessary and sufficient set of conditions for something to be considered “a cause”).

However, what there is relative consensus on is that there are certain sufficient conditions for something to be “a cause.” These are often tied to counterfactuals, so that if there are settings in which an outcome would have occurred if a particular event took place, but would not have occurred if that event hadn’t taken place, then this would be a sufficient condition for that event to be a cause. Most of the work in the biomedical and social sciences on causal inference has focused on this sufficient condition of counterfactual dependence in thinking about causes. This has essentially been the focus of most “causal analysis”: an analysis of counterfactuals.

KG: Could you give us a very brief history of causal analysis and how our thinking about causation has developed over the years?

TV: In addition to Aristotle, mentioned above, another major turning point was Hume’s writing on causation, which fairly explicitly tied causation to counterfactuals. Hume also questioned whether causation was anything except the properties of spatial and temporal proximity, plus the constant conjunction of that which we called the cause and that which we called the effect, plus some idea in our minds that the cause and effect should occur together. In more contemporary times, within the philosophical literature, David Lewis’s work on counterfactuals provided a more explicit tie between causation and counterfactuals, and similar ideas began to appear in the statistics literature with what we now call the potential outcomes framework, ideas and formal notation suggested by Neyman and further developed by Rubin, Robins, Pearl, and others. Most, but not all, contemporary work in the biomedical and social sciences uses this approach and effectively tries to ask whether some outcome would be different if the cause of interest itself had been different.

KG: “Correlation is not causation” has become a buzz phrase in the business world recently, though some seem to misinterpret this as implying that any correlation is meaningless. Certainly, however, trying to untangle a complex web of cause-and-effect relationships is usually not easy – unless a machine we’ve designed and built ourselves breaks down, or some analogous situation. What are the key challenges in causal analysis? Can you suggest simple guidelines marketing researchers and data scientists should bear in mind?

TV: One of the central challenges in causal inference is confounding: the possibility that some third factor, prior to both the supposed cause and the supposed effect, is in fact what is responsible for both. Ice cream consumption and murder are correlated, but ice cream probably does not itself increase murder rates. Rather, both go up during summer months. When analyzing data, we try to control for such common causes of the exposure (or treatment, or cause of interest) and the outcome of interest. We often try to statistically control for any variable that precedes, and might be related to, the supposed cause or the outcome we are studying, to try to rule this possibility out.
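To make the ice cream example concrete, here is a minimal simulation (an editorial illustration, not part of the interview) in which temperature drives both ice cream sales and murders: the raw correlation between the two is sizeable, but it largely disappears once temperature is regressed out of both variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Temperature is the common cause (the confounder).
temperature = rng.normal(25, 8, n)

# Both outcomes depend on temperature, not on each other.
ice_cream = 2.0 * temperature + rng.normal(0, 10, n)
murders = 0.5 * temperature + rng.normal(0, 5, n)

print("raw correlation:", np.corrcoef(ice_cream, murders)[0, 1])

# "Control" for temperature by regressing it out of both variables
# and correlating the residuals (a partial correlation).
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_ice = residuals(ice_cream, temperature)
r_mur = residuals(murders, temperature)
print("after controlling for temperature:", np.corrcoef(r_ice, r_mur)[0, 1])
```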

However, we generally do not want to control for anything that might be affected by the exposure or cause of interest because these might be on the pathway from cause to effect and could explain the mechanisms for the effect. If that is so, then the cause may still lead to the effect but we simply know more about the mechanisms. I have in fact written a whole book on this topic. But if we are just trying to control for confounding, so as to provide evidence for a cause-effect relationship then we generally only want to control for things preceding both the cause and the effect.

Of course, in practice we can never be certain we have controlled for everything possible that precedes and might explain them both. We are never certain that we have controlled for all confounding. It is thus important to carry out sensitivity analysis to assess how strongly an unmeasured confounder would have to be related to both the cause and the effect to explain away a relationship. A colleague and I recently proposed a very simple way to carry this out. We call it the E-value, and we hope it will supplement, in causal analysis, the traditional p-value, which is a measure of evidence that two things are associated, not that they are causally related. I think this sort of sensitivity analysis for unmeasured or uncontrolled confounding is very important in trying to establish causation. It should be used with much greater frequency.
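As a rough illustration of the E-value idea, the sketch below uses the commonly cited formula for an observed risk ratio, E = RR + sqrt(RR × (RR − 1)). Treat the formula and the example number as an assumption drawn from the published E-value literature rather than something stated in this interview.

```python
import math

def e_value(rr: float) -> float:
    """Rough sketch of the published E-value formula for a risk ratio.

    The E-value is the minimum strength of association an unmeasured
    confounder would need with both the exposure and the outcome to fully
    explain away the observed risk ratio. For RR < 1, take the reciprocal.
    """
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

print(e_value(1.8))  # an observed RR of 1.8 gives an E-value of about 3.0
```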

KG: Many scholars in medical research, economics, psychology and other fields have been actively developing methodologies for analyzing causation. Are there differences in the ways causal analysis is approached in different fields?

TV: I previously noted the importance of trying to control for common causes of the supposed cause and the outcome of interest. This is often the approach taken in observational studies in much of the biomedical and social science literature. Sometimes it is possible to randomize the exposure or treatment of interest and this can be a much more powerful way to try to establish causation. This is often considered the gold standard for establishing causation. Many randomized clinical trials in medicine have used this approach and it is also being used with increasing frequency in social science disciplines like psychology and economics.

Sometimes economists, especially, try to use what is sometimes called a natural experiment, where it seems as though something is almost randomized by nature. Some of the more popular such techniques are instrumental variables and regression discontinuity designs. There are a variety of such techniques, and they require different types of data, assumptions, and analysis approaches. In general, the approach used is going to depend on the type of data that is available and whether it is possible to randomize, and this will of course vary by discipline.

KG: In your opinion, what are the most promising developments in causal analysis, i.e., what’s big on the horizon?

TV: Some areas that might have exciting developments in the future include causal inference with network data, causal inference with spatial data, causal inference in the context of strategy and game theory, and the bringing together of causal inference and machine learning.

KG: Do Big Data and Artificial Intelligence (AI) have roles in causal analysis?

TV: Certainly. In general, the more data we have, the better off we are in our ability to make inferences. Of course, the amount of data is not the only thing that is relevant. We also care about the quality of the data and the design of the study that was used to generate it. We also must not forget the basic lessons on confounding in the context of big data. I fear many of the principles of causal inference we have learned over the years are sometimes being neglected in the big-data age. Big data is helpful, but the same interpretative principles concerning causation still apply. We do not just want lots of data; rather, the ideal data for causal inference will still include as many possible confounding variables as possible, quality measurements, and longitudinal data collected over time. In all of the discussions about big data we really should be focused on the quantity-quality trade-off.

Machine learning techniques also have an important role in trying to help us understand which variables, of the many possible, are most important to control for in our efforts to rule out confounding. I think this is, and will continue to be, an important application and area of research for machine learning techniques. Hopefully our capacity to draw causal inferences will continue to improve.

KG: Thank you, Tyler!

 

______________________________________________________________________________

Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy.

Tyler VanderWeele is Professor of Epidemiology at Harvard University. He is the author of Explanation in Causal Inference: Methods for Mediation and Interaction and numerous papers on causal analysis.

 

6 “Back to Basics” Steps Researchers Should Practice

With all the current buzz topics in MR, it's also important to focus on strong fundamentals including sample and data quality.

 

By Brian Lamar

I’ve been fortunate to work in research for over 20 years and have seen the industry evolve in so many ways. I started at a small market research company in Lexington, Kentucky, as a telephone interviewer while I was an undergrad at the University of Kentucky. Each day I came in to work, my manager would brief me on the studies that I was going to work on that day, ensuring I knew the questionnaire as well as possible. We’d go through the questionnaire thoroughly, we’d role play, and she’d point out areas where the client wanted additional focus. The amount of preparation seemed like overkill to me, but I played along and occasionally would have my feedback incorporated into the questionnaire. These meetings went on for a couple of years – every single day. Every single study. And not with just me – all of us interviewers had to go through this process. I think I still have most of a utility questionnaire memorized.

Later, I managed a telephone tracking study at a different company in New York as a project manager. Like most project managers, I found most days very hectic with all of the different tasks you do to support clients. About once a month I would go over to the phone center and monitor the telephone interviews. I would sit in a briefing similar to the ones I went through in my initial role as a telephone interviewer. This briefing was at an entirely different level, though. The supervisor would have an entire room full of interviewers, and they’d review the questionnaire(s) much as I had, only these interviewers were much more detail-oriented and critical and did a very thorough QC of every study before it launched. They’d not just offer suggestions; they’d campaign for changes and talk about how important it would be to make them. Each month I would receive a lengthy list of changes, and it would be frustrating to go through them and determine which suggestions were important enough to bring to the client’s attention. Typically 1-2 changes would be made: making the language more consumer-friendly (rather than researcher-friendly), improving the logical flow, and other improvements. Looking back on it, the process that was created long before me added a lot of value to the research.

In 2001, like most clients, this client decided to transition their research from telephone to online, including the tracking study I managed. When we initially moved the work online, we had an entire team of people review the questionnaire and offer insights on its design, keeping new technologies in mind as well as improving the respondent experience. We had internal experts discuss the advantages and disadvantages of online research, and we implemented their recommendations. We did a side-by-side test for over a year and were in constant communication with the client from a questionnaire/design standpoint. Rest assured, we made a lot of mistakes back then and were far from perfect. Sweepstakes as an incentive seem ridiculous nowadays. We transitioned phone surveys to online without pushing back on interview length and didn’t think long-term as much as we should have. But we had a large, diverse group of people who focused on the quality of the research and advocated for the respondents. Nearly all companies did back then, and the client was very involved in these discussions and decisions. They had transparency throughout the entire process, which made the research more successful.

From 2001 to 2013 (when online research moved from infancy to maturity) I held a variety of positions, almost exclusively in online research, from project management to sales to analysis, and was somewhat removed from the quality assurance processes. I know the processes still existed and were important, but I wasn’t as involved. One of my current roles is to assist clients in data quality review. I review data; I review questionnaires; I review research designs at a much broader level than I did early in my career. Instead of managing or seeing 5-10 studies per week, I have the opportunity to review many more than that, across a wide range of objectives and topics. Perhaps it’s the nature of this role, but I feel like the systems we, and lots of other companies, put in place back in my telephone and early online research days are now non-existent. I also take a lot of surveys from non-clients, as I’m a member of numerous different panels, just to see what new types of innovations and research are in the marketplace. According to a recent GRIT report, about half of all surveys are not designed for mobile devices, which is completely unacceptable. I can personally testify to how frustrating these surveys are. Online research has made a lot of technology investments in the past few years, and many of these innovations have made improvements. But we certainly haven’t figured out how to best use this technology to improve survey design and the respondent experience – at least not yet.

Unfortunately, I see a lot of bad research, both in my day job evaluating data quality and when I take surveys in my spare time. I see screeners that make it obvious to any respondent how to qualify for the survey, and even surveys without any screeners at all. I’m not sure all researchers understand the importance of a “none of these” option any longer. Respondents routinely answer the same question over and over as they’re routed from sample provider to sample provider. And this bad research isn’t just from companies you would expect – it comes from names all of you have heard of, big brands and big market research companies, along with small businesses and individuals using DIY tools.

At some point along the way, I feel like we’ve lost scrutiny over questionnaire and research design, and that is the point of my writing this. A lot of other people have written similar blogs, and while nothing I say may be unique, it needs to be said over and over again until things improve. I recently heard someone say that “the market has spoken” when discussing sample and data quality, meaning that clients, market research firms, and researchers have accepted lower quality in so many areas. Perhaps as an industry we have, but I feel like a lot of the driving principles of research I described above are now non-existent. Do companies still have a thorough QC process? Do clients review online surveys? How many people are involved in questionnaire design? Just last week I led a round-table discussion on data quality, and multiple brands admitted to not reviewing surveys and not looking at respondent-level data. Honestly, it makes me sad, and if you’ve read this far you should be sad or angry as well.

Perhaps these data quality controls exist at some companies – I bet at the successful ones they do. I’d love to hear from you, as data quality, and ultimately clients making better business decisions because of survey research, is a goal of mine.

Having said all of this, I can’t discuss all of these challenges without a few words of advice for researchers:

  1. I urge you to take your online surveys. All of them. Have your team take them as well. Have someone not associated with the study or even market research test it as well. I think you’ll be amazed at what you find.
  2. Use technology to assist. Programming companies have done a great job of implementing techniques to help with the data quality process. They can identify and flag speeders. They can summarize data quality questions. They can provide respondent scores based upon open-ended questions. Become familiar with them and utilize them! (A minimal example of one such check, flagging speeders, is sketched after this list.)
  3. Everyone else in the process should take your survey too. The client shouldn’t be the only person expected to take the survey: the market research firm, the sample team, the analyst, a QC team, everyone involved should take it. Believe me, you’ll make a lot of recommendations around LOI and mobile design if you do this. Join a panel and take a few surveys each week and, odds are, you’ll want to write a blog like this as well.
  4. Know where your sample is coming from and demand transparency. Most sample providers are transparent and will answer any question, but you have to ask the question. How do they recruit? How does the survey get to the respondent? Are they ever routed? Do you prescreen? These are just a few questions you should understand about respondents to your survey.
  5. Ask for respondent satisfaction and feedback. Are you getting feedback from respondents about the survey design? Insights can be obtained this way as well.
  6. Don’t remove yourself from the quality assurance role like I did for so many years. Regardless of where you are in the market research process, make sure you understand the quality steps throughout the entire process and ensure there are no gaps.
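As promised in tip 2, here is a minimal sketch of one automated data quality check: flagging “speeders” whose interview length falls far below the median. The one-third-of-median cutoff and the example durations are arbitrary illustrations, not an industry standard.

```python
import pandas as pd

def flag_speeders(durations_seconds, fraction_of_median=1/3):
    """Flag respondents whose interview length is below a fraction of the median.

    The one-third cutoff is an illustrative rule of thumb, not a standard.
    """
    durations = pd.Series(durations_seconds)
    cutoff = durations.median() * fraction_of_median
    return durations < cutoff

# Example: interview lengths in seconds for six respondents.
durations = [610, 540, 95, 720, 660, 130]
print(flag_speeders(durations).tolist())  # [False, False, True, False, False, True]
```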

 

A New View On The MR Landscape: RFL 2017 “Global Top 50 Research Organizations”

RFL Communications, Inc. has released its third annual “Global Top 50 Research Organizations” (GT50) ranking based on 2016 revenue results.

For many years the AMA Gold Top 50 Report (formerly the Honomichl Report) and the variation of it used in the ESOMAR Global Market Research Report have been the default view of the size of the research industry. These reports have evolved over the years to encompass an ever-expanding definition of what constitutes market research, but they have left some critical gaps by not including sample companies, technology platforms, and organizations such as Google, Facebook, Equifax, etc. – companies that fit within other categories yet have active research divisions that are players in the market. So although they are incredibly useful and important, I think they are incomplete views of the industry.

Over the past few years a few alternative views have circulated, mostly through the work of organizations such as the MRS, Cambiar Consulting, and even here at GreenBook. However, industry legend Bob Lederer may have gotten closer to what such a list should look like via his own “Top 50 Research Organizations” report. While still somewhat incomplete in my view (for instance, Research Now, SSI, and Google are not listed and should be), it is a more comprehensive ranking and presents an interesting alternative structure.

The new RFL Report is out, and below is the press release with a link to download the report. You can download the AMA Gold report here and the ESOMAR report here.

RFL Communications, Inc. has released its third annual “Global Top 50 Research Organizations” (GT50) ranking based on 2016 revenue results. This unique tabulation first broke industry norms for such a list in 2015 by extending inclusion beyond only dedicated research agencies, and giving consideration to many of the most important research suppliers, plus dynamic and unorthodox research businesses.

Key examples on the 2016 list include Acxiom, Dunnhumby, Tableau Software, Experian, Harte-Hanks and Twitter, among others.

Revenues reported in the “GT50” are sourced from financial filings by public companies, published and confidential sources, and RFL’s own internal estimates.

The third annual edition of the “GT50” contains a surprising shuffling at the top of the industry hierarchy. Optum, a subsidiary of UnitedHealth Group, is listed as the industry’s number one player, displacing the Nielsen Company, the longtime standard-bearer on every research industry ranking.

Click Here for Your PDF Copy

“We were surprised to find Optum’s $7.3 billion revenues from its Data and Analytics division in 2016 were $1 billion larger than Nielsen in the same period of time,” says Bob Lederer, RFL Communications’ President and Publisher. “Optum’s business activities, notably its operations’ research work, validated its presence on the GT50 and its revenues consequently led to their supplanting Nielsen as the largest company in the research industry today.” The Nielsen Company comes in at number two on this year’s list.

There are eight new research organizations in the GT50 this year, led by Optum, Rocket Fuel, and Simon Kucher & Partners. Six research organizations on last year’s GT50 are not on this year’s list. Some of those are due to mergers with other GT50 companies, notably Quintiles and AlphaImpactRx, both of which merged with IMS Health in 2016.

One conspicuous company missing from this year’s ranking is comScore, whose final 2016 financial figures are part of a multi-year audit and were not available as this report went to press. Two other companies, IBM and Omnicom, were dropped from the 2016 list. Despite being public companies, a lack of transparency made it impossible to calculate their research revenues.

RFL’s “Global Top 50 Research Organizations” is available now on the company’s official website, RFLOnline.com. Existing RFL newsletter subscribers should have already received their copies by mail.

Hats off to Bob for continuing to expand our understanding of the market!


Old-School Crosstabs: Obsolete Since 1990, but Still a Great Way to Waste Time and Reduce Quality

Crosstabs have historically been a part of DIY analysis tools; however, researchers can now use statistical tests to automatically screen tables.

 

By Tim Bock

Difficult to read old-school crosstab

The table above is what I call an old-school crosstab. If you squint, and have seen one of these before, then you can probably read it. The basic design of these has been around since the 1960s.

Matrix printer paper

Originally, these old-school crosstabs were printed on funny green and white paper with a landscape orientation, shown to the right. The printers were surprisingly slow. The ink and paper were expensive. The data processing experts responsible for creating them tended to be very busy. So, these crosstabs were designed with the goal of fitting as much information on each sheet as possible, with multiple questions shown across the top.

Advances in computing have led to a change in work practices. Some researchers still create these tables, but have them in Excel rather than print them. Other researchers have taken advantage of advances in computing and stopped using old-school crosstabs altogether. This post is for researchers who are still using old-school crosstabs. It reviews how three key innovations made these old-school crosstabs obsolete:

  • Improvements in printing and monitors, which permit you to show statistical tests via formatting.
  • Automatic screening of tables based on statistical tests.
  • DIY crosstab software.

At the end of the post I discuss the automation of such analyses using Q, SPSS, and R.


Improvements in printers and monitors

When the old-school crosstabs were invented, printers and computer screens were very limited in their capabilities. Formatting was in black and white – not even grey was possible. The only characters that could be printed in less than a few minutes were those you could find on a typewriter. With such constraints, using letters to highlight significant differences between cells, as done in old-school crosstabs, made a lot of sense.

However, these constraints no longer exist. Sure, an experienced researcher becomes faster at reading tables like the one above, but the process never becomes instinctive. You do not have to take my word on this. Can you remember the key result shown on the table above? My guess is you cannot. Nothing in the table attracts your eye. Rather, the table is something that requires concentration and expertise to digest. For example, to learn that the 18 to 24 year olds are much less likely than older people to feel they are “Extremely close to God”, you need to know that, rather than look in the column for 18 to 24s, you instead need to scan along the row and look for the cells in other columns that display the 18-to-24 column’s comparison letter.

Now, contrast this to the table below. Even if you have never seen a table quite like this before, you can quickly deduce that the 18 to 24 year olds are less likely to be “Extremely close to God” than the other age groups. And, you will also quickly work out that we are more likely to feel close to God the older we get, and that the tipping point is around 45. You will also quickly work out that females and poorer people are more likely to think themselves close to God.

 

Crosstab with sig testing displayed using colors and arrows

The difference between the two tables above is not merely about formatting. The first table is bigger because it includes a whole lot of needless information (I return to this below). The second table is easier to read because it contains less data. It also uses a different style of statistical testing – standardized residuals – which lends itself better to representation via formatting than the traditional statistical tests (the length of the arrows indicates the degree of statistical significance).
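For readers who want to reproduce this style of testing, the sketch below computes Pearson (standardized) residuals for a small, made-up crosstab; cells with large positive or negative residuals are the ones that would be shaded or given long arrows in the tables above. The exact residual formula used by any given package may differ (some use adjusted residuals).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Made-up crosstab: age group (rows) by "closeness to God" (columns).
observed = np.array([
    [20, 45, 35],   # 18 to 24
    [55, 60, 85],   # 25 to 44
    [60, 50, 120],  # 45 or more
])

chi2, p, dof, expected = chi2_contingency(observed)

# Pearson residuals: (observed - expected) / sqrt(expected).
# Large positive values mean a cell is over-represented; large negative, under.
residuals = (observed - expected) / np.sqrt(expected)
print(np.round(residuals, 2))
```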

We can improve this further still by using a chart instead of a table. The chart below is identical in structure to the table above, except that it uses bars, with these bars shaded according to significance. The key patterns from the previous tables are still easy to see, but they are now easier to spot as they are represented by additional information (i.e., the numbers, the arrow lengths, the bar lengths, and the coloring). We can also now readily see a pattern that was less obvious before: with the exception of people aged 65 or more, all the other demographic groups are more likely to be “Somewhat close” than “Extremely close” to God.

Cross tab with bars


 

Using statistical tests to automatically screen tables

When people create old-school crosstabs, they never create just one. Instead, they create a deck. Typically, they will crosstab every variable in the study, by a number of key variables (e.g. demographics). Many studies have 1,000 or more variables, and usually 5 or so key variables, which means that it is not unusual for 5,000 or more tables to be created. Old-school crosstabs actually consist of multiple tables pushed together, so these 5,000 tables may only equate to 1,000 actual crosstabs. Nevertheless, 1,000 is a lot of crosstabs to read. To appreciate the point look at the crosstab below. What do we learn from such a table? Unless we went into the study with specific hypotheses about the variables shown in the table below, it tells us precisely nothing. Why, then, should we even read it? Even glancing at it and turning a page is a waste of time.

Difficult to read old-school crosstab.

 

However, it is tables like the one below that I suspect are most problematic. How easy would it be to skim this table and fail to see that people with incomes of less than $10,000 are more likely to have no confidence in organized religion? You wouldn’t make that mistake? Imagine you are skim-reading 1,000 such tables late at night, with a debrief the next morning.

If you instead use automatic statistical tests to scan through the tables and identify tables that contain significant results, you will never experience this problem. Instead, you can show the user a list of tables that contain significant results. For example, the viewer could be told that “Age” by “Organized Religion” and “Household income” by “Organized religion” are significant, and given hyperlinks to these tables.

Hard-to-read old-school crosstab
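A minimal sketch of such screening is below: it runs a chi-square test on every outcome-by-banner crosstab and reports only the significant ones. The variable names and data are hypothetical, and in practice you would also want to correct for the large number of tests being run.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def significant_crosstabs(df, outcome_vars, banner_vars, alpha=0.05):
    """Return (outcome, banner, p-value) for crosstabs with a significant chi-square."""
    hits = []
    for outcome in outcome_vars:
        for banner in banner_vars:
            table = pd.crosstab(df[outcome], df[banner])
            if table.shape[0] > 1 and table.shape[1] > 1:
                _, p, _, _ = chi2_contingency(table)
                if p < alpha:
                    hits.append((outcome, banner, p))
    return sorted(hits, key=lambda hit: hit[2])

# Hypothetical respondent-level data; with purely random answers this usually
# prints nothing, which is exactly the point of screening.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age_group": rng.choice(["18-24", "25-44", "45+"], size=500),
    "income": rng.choice(["<$10k", "$10k-$50k", ">$50k"], size=500),
    "organized_religion": rng.choice(["None", "Some", "A lot"], size=500),
})

for outcome, banner, p in significant_crosstabs(df, ["organized_religion"],
                                                 ["age_group", "income"]):
    print(f"'{outcome}' by '{banner}' looks significant (p = {p:.4f})")
```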


 

DIY crosstab software

In addition to cramming together too much data and using painful-to-read statistical tests, the old-school crosstabs also show too many statistics. Look at the table above. It shows row percentages, column percentages, counts (labeled as n), averages, column names, and the column comparisons as well.

The reason that such tables are so busy is that in the olden days there was a bottleneck in the creation of tables. There were typically lots of researchers wanting tables, and only a few data processing folk servicing all of them. This meant that when we created our table specs we tended to create them with all possible analyses in mind. While most of the time we wanted only column percentages, we knew that from time-to-time it was useful to have row percentages, so we had them put on all our tables. Similarly, having counts on the table was useful if we wanted to compute percentages after merging tables. And, averages were useful “just in case” as well.

Creating tables in this way comes at a cost. First, when people ask for tables because they might need them, somebody still spends time creating them: time that is often wasted. And, because the tables contain unnecessary information, they require more work to read. Then, there is the risk that key results are missed and quality declines. There are two much more productive workflows. One is to give the person who is doing the interpretation DIY software, leaving them to their own devices. This is increasingly popular, and tends to be how most small companies and consulting-oriented organizations work today. Alternatively, if the company is still keen to have a clear distinction between the people who create the tables and those who interpret them, then the table-creators can create two sets of tables:

  1. Tables that contain the key results needed to address the central research question.
  2. Tables that contain significant results that may be interesting to the researcher.

If the user of such tables still wants more data, they can create it themselves using the aforementioned DIY analysis tools.


Conclusion

Nothing in this post is new. (Sorry.) Using formatting to show statistical tests has been around since the late 1980s. The first DIY crosstab tool that I know of was Quantime, launched in 1978. And laser printers have been widely available since the mid-1990s.

Stopping using old-school crosstabs is just a case of breaking a habit. A good analogy is smoking. It is a hard habit to kick, but life gets better when you do. I am mindful, though, that it is more than 100 years since the father of time and motion studies, Frank Bunker Gilbreth Sr., worked out that bricklayers could double their productivity with a few simple modifications (e.g., putting the bricks on rolling platforms at waist height), and many of these practices are still not in common use. Of course, most laborers get paid by the hour, so they do not need to improve their productivity.

3 Best Practices For Digital Marketing In The Market Research Space

Posted by admin Monday, July 10, 2017, 6:53 am
Posted in category General Information
Nicole Burford is Digital Marketing Manager at GutCheck and has been responsible for their industry-leading digital marketing campaigns. At our request, in this post she details some of her hard-won knowledge on how to build an effective digital strategy in the research space.

By Nicole Burford 

Digital marketing is an ever-changing puzzle that marketers are constantly trying to solve. Following the latest trends, trying to figure out the newest apps and social platforms, and keeping up with their evolution can be a full time job in itself. In my years in digital marketing, I’ve seen my share of both successful campaigns and ones I’ve at least learned from, and I’ve developed some tried and true best practices that stretch across multiple areas of digital marketing. My recent shift to marketing within the market research space has allowed me to apply techniques I’ve learned in the past as well as test numerous new strategies. Here are some learnings and tips I’ve uncovered for using digital marketing in the market research space.

General Best Practices

Know your audience!

In this case, we are talking about market researchers and marketers: understanding an audience is their job, and they will see right through a generic message that you post all over the internet. So keep these practices in mind:

  • Segment your audience: Whether it’s email, display ads, or social media, most advertising channels allow for some sort of audience segmentation. For example, segment by job title and industry. By doing this you’ll be able to send the right content to the right people at the right time.
  • Always tailor your message: Be as specific to your audience as possible in your messaging. If you are talking to a consumer insights person in consumer packaged goods (CPG), make sure that your ad is something relevant to them, rather than a general message or something meant for, say, technology or finance.

TIP: Segmentation helps with this!

  • Be honest about your brand awareness: Consider if this is a brand new audience or one that may have already been exposed to your business. This will drive how high level or in-depth you should go with your messaging.

Test your messaging and learn from it.

Whether it’s keywords, length, or benefit statements, try different approaches and see what sticks.

  • Set objectives going into the tests:
    • What do you want to learn?
    • What do you want to test?
    • How do you measure success?
  • Start with split tests: Test different messages about the same subject with the same audiences to see which message resonates the most. You can also test the same message with different audiences to see how each group responds.
  • Test across all your channels: People engage differently with different channels, so test a variety of assets, calls to action, and messaging within each channel. Then, use the results to inform more content creation relevant to that channel.

Track conversions.

Whenever possible, take the time to set up conversion tracking to get a better understanding of what drives your audience to convert (click, submit a form, etc.), allowing you to optimize your campaigns based on conversion and maximize your budget by excluding those who have already converted. Conversion tracking allows you to track what a user does after they click on/engage with your ad, so you can see if they actually purchased a product, signed up for your newsletter, or downloaded an asset, etc.

Most of these best practices hold true for a wide array of digital marketing channels. But tweaking these to cater to each channel is key to increasing success. Here are some key examples of how these tips can be applied to specific marketing channels and tactics.

Best Practices by Marketing Channel/Tactic

Remarketing

  • Segment your audience based on their behavior on your website: Whether it is via URL (ex: com/solutions), event, or number of pages visited, take account of what actions individuals are taking on your site and present them with creative that complements their behavior.
  • Set up conversion tracking to exclude those who have already converted: Conversion tracking allows you to upload lists of people who have already completed your call to action—i.e., purchased, subscribed, signed up for events, etc.—so you aren’t wasting resources getting someone to convert on the same action twice.

Display & Search Ads

  • Leverage keywords to reach the right audience: Narrow your keywords to choose those that are specific to your business, and include keywords that reflect buyer intent.

TIP: If your budget allows, use a blend of search and display ads, staying true to the goal of each: display ads are for folks who have searched market research keywords, and search ads should show up when people actively search for market research keywords.

Social Advertising

  • Segment with social media: Social media channels know a lot about people. Use this to your advantage by layering on targeting to find the audience your message will appeal to most. Test with different audiences to see who is most engaged with your content, then use those learnings to position that content in front of this audience outside of social media.
  • Get specific with your messaging: We visit different social channels for different reasons, so we expect the content on each to be different as well. Overcome a cluttered social feed with appealing copy that is straightforward, relevant, and appropriate for that channel. And don’t forget to align your messaging with your segments.

TIP: You can also take advantage of the cost-per-click model to test different messaging with the same audience.

There are a lot of different ways to reach your audience and this will continue to evolve, but if you are looking for ways to just get started don’t forget to segment, test, and track.

Jeffrey Henning’s MRX Top 10: Automating Multiple Projects, Career Networking, and British Polls

Posted by Jeffrey Henning Friday, July 7, 2017, 7:30 am
Posted in category General Information
Of the 3,039 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted...


By Jeffrey Henning

Of the 3,039 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…

  1. Using MR Automation to Keep Up with the Speed of Business – Roddy Knowles of Research Now argues that automation must be applied to the front end of research. “It’s common to approach every research project as its own unique creation, essentially building it from scratch … researchers will redesign whole questionnaires as standard practice … will re-create new audiences with custom screeners. There has been … resistance to automating the front end of the research project – specifically design – as to some it feels like it is infringing on the domain of the researcher.” He argues that the real front end requires stepping back and looking across multiple projects to identify their standard elements and create a customized template for future work.
  2. New Political Identities formed by Brexit – 73% of Brits now consider themselves either Leavers or Remainers, “cutting across the traditional two-party British system [for] greater political uncertainty.”
  3. MRS Launches Innovative Online Training with Kogan Page – The Market Research Society has partnered to develop an online course on Statistics for Research, with video content and interactive activities, designed for use on PCs, tablets, and mobile phones.
  4. 5 Networking Strategies for Young Professionals – Writing in Quirks, Jill Johnson offers key pointers for networking: “1) Build your network before you need it. 2) Build relationships in small increments. 3) Be specific in asking for what you want. 4) Face time is critical. 5) Use your expertise to help others.”
  5. When Virtual is Better than Reality – Amanda Ciccatelli, writing for TMRE, finds VR intersects with MRX in three key ways: speed to market, providing shopper-based insights in weeks rather than months; using real-time metrics rather than historic data; and aligning concepts better with the market.
  6. Consumers Open to Fintech but Data Privacy Concerns Remain – Writing for Research Live, Jane Bainbridge recaps a Strive Insight survey of 1,000 Brits about financial technology: 69% are open to trying a new digital product from a new vendor, but 65% agree that banks rather than digital firms are most likely to offer high quality financial services.
  7. The MRX Future Lens – IIeX North America 2017 – Daniel Evans of ZappiStore recaps a number of IIeX NA presentations. “Rather than doing more with less, client-side researchers are being asked to do more with existing data sets. But in order to achieve this, and to work with the boardroom in a more strategic way, they need time and breathing space.”
  8. Top 5 Trends in the Electronics Industry – The Business Research Company’s newest report on electrical and electronic manufacturing sees five major trends: product design outsourcing, virtual reality, robotics/automation, IoT in household appliances, and increased demand for Smart TVs.
  9. Defining Your Next Generation Customer Experience Strategy – Writing in Brand Quarterly, Charlene Li of Altimeter outlines three steps to a CX strategy: 1) understand your customers’ objectives and how they are changing; 2) look beyond touchpoints and journeys to the overall customer relationship, in order to link CX strategy to the brand and business strategies; and 3) prioritize initiatives that make things easier and simpler for customers.
  10. What Do US Consumers See As Their Core Drivers of Satisfaction With Companies? – An Accenture Strategy survey of 2,003 U.S. consumers found they had switched retailers (27%), TV service providers (13%), and banks (10%) due to poor service, and 8 in 10 said the previous provider could have done nothing to keep them.

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. The following links are excluded: links promoting RTs for prizes, links promoting events in the next week, pages not in English, and links outside of the research industry (sorry, Bollywood).

 

Jeffrey Henning is the president of Researchscape International, providing custom surveys at standard prices. He volunteers with the Market Research Institute International.

Sampling: A Primer

Learn the fundamentals of sampling with this interview of Stas Kolenikov by Kevin Gray.

By Kevin Gray and Stas Kolenikov

Though it doesn’t get a lot of buzz, sampling is fundamental to any field of science. Marketing scientist Kevin Gray asks Dr. Stas Kolenikov, Senior Scientist at Abt Associates, what marketing researchers and data scientists most need to know about it.

 

Kevin Gray: Sampling theory and methods are part of any introductory statistics or marketing research course. However, few study it in depth except those majoring in statistics and a handful of other fields. Just as a review, can you give us a layperson’s definition of sampling and tell us what it’s used for?

 

Stas Kolenikov: Sampling is used when you cannot reach every member of your target population. It’s used in virtually all marketing research, as well as most social, behavioral and biomedical research. Research projects have limited budgets but, by sampling, you can obtain the information you need with maybe 200 or 1,000 or 20,000 people – just a fraction of the target population.

So, sampling is about producing affordable research, and good sampling is about being smart in selecting the group of people to interview or observe. It turns out that the best method is random sampling, in which the survey participants are selected by a random procedure rather than chosen by the researcher.

The exceptions – where sampling is avoided – are censuses, where each person in a country needs to be counted, and Big Data problems that use the entire population in a data base (though we must keep in mind that the behavior and composition of a population is rarely static).

 

KG: Sampling didn’t just appear out of thin air. Can you give us a very brief history of sampling theory and methods?

 

SK: Sampling, like many other statistical methods, originated out of necessity. The Indian Agricultural Service in the 1930s–1940s worked on improving methods to assess the acreage and total agricultural output for the country, and statisticians such as Prasanta Chandra Mahalanobis invented what is now known as random sampling. The Indian Agricultural Service switched from complete enumeration to sampling, which was 500 times cheaper while producing a more accurate figure. Random sampling came to the United States in the 1940s and is associated with names such as Morris Hansen, Harold Hotelling and W. Edwards Deming.

At about the same time, the 1936 U.S. Presidential election marked the infamous failure of the very skewed sample used in the Literary Digest magazine poll, and the rise to fame of George Gallup, a pollster who first attempted to make his sample closer to the population by employing quota sampling. Quota sampling today is considered inferior to random sampling, and Gallup later failed in the 1948 election (“Dewey Defeats Truman”).

 

KG: What are the sampling methods marketing researchers and data scientists most need to understand?

 

SK: There are four major components in a sample survey. These “Big Four” are: 1) potentially unequal probabilities of selection; 2) stratification; 3) clustering; and 4) post-survey weight adjustments.

Unequal probabilities of selection may arise by design. For example, you may want to oversample certain ethnic, religious or linguistic minorities. In some surveys, unequal probabilities of selection are unavoidable – for instance, in a phone survey, people who have both a landline and a cell phone have a higher probability of selection than those who use only a landline or only a cell phone. Unequal probabilities of selection are typically associated with a reduction in precision compared to surveys that use an equal probability of selection method (EPSEM). At the analysis stage, ignoring unequal probabilities of selection results in biased summaries of the data.
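As a rough illustration of why selection probabilities matter at the analysis stage, base (design) weights are usually formed as inverse probabilities of selection. The sketch below uses invented probabilities for a dual-frame phone survey, not figures from any real study:

# Hypothetical per-person selection probabilities: people with both a landline
# and a cell phone can be reached through either frame, so their chance is higher.
p_select = {"cell_only": 0.001, "landline_only": 0.001, "dual": 0.002}

# The base weight is the inverse of the selection probability, so dual users
# carry half the weight of single-frame users at the analysis stage.
base_weight = {group: 1.0 / p for group, p in p_select.items()}
print(base_weight)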

Cluster, or multi-stage, sampling involves taking samples of units that are larger than the ultimate observation units – e.g., hospitals and then patients within hospitals, or geographic areas and then housing units and individuals within them. Clustering increases standard errors, but it is often unavoidable, or simply more economical, when a full list of the ultimate units is unavailable or expensive to assemble while a list for some higher level of the hierarchy is relatively easy to come by.
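A minimal two-stage sketch in Python, using an invented hospital/patient frame, shows the mechanics: primary units are sampled first, then observation units within them.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical frame: 40 hospitals, each with a patient list that is easy to obtain.
hospitals = {f"hospital_{h}": [f"patient_{h}_{i}" for i in range(int(rng.integers(50, 200)))]
             for h in range(40)}

# Stage 1: simple random sample of hospitals (the primary sampling units).
sampled_hospitals = rng.choice(list(hospitals), size=8, replace=False)

# Stage 2: simple random sample of patients within each sampled hospital.
sample = [patient
          for h in sampled_hospitals
          for patient in rng.choice(hospitals[h], size=10, replace=False)]
print(len(sample))   # 8 hospitals x 10 patients = 80 observations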

Stratification involves breaking down your target population into segments, or strata, before sampling, and then taking independent samples within strata. It is typically used to provide some degree of balance and to avoid outlier samples. For instance, in a simple random sample of U.S. residents, by chance, you might wind up with only Vermont residents in your sample. This is very unlikely, but it could happen. By stratifying the sample by geography, and allocating the sample proportionally to state populations, the sample designer rules out these odd samples.
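For instance, a proportional allocation can be computed directly from population counts. The sketch below uses a few made-up state figures; a real design would cover all states:

# Hypothetical population counts for a handful of states.
populations = {"California": 39_500_000, "Texas": 29_000_000,
               "New York": 19_500_000, "Vermont": 625_000}
n = 1_000
total = sum(populations.values())

# Allocate interviews to each stratum in proportion to its population,
# which rules out a sample made up entirely of one small state.
allocation = {state: round(n * pop / total) for state, pop in populations.items()}
print(allocation)   # Vermont ends up with only a handful of the 1,000 interviews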

Stratification is also used when information in the sampling frame allows you to identify target subpopulations. Many U.S. surveys oversample minorities, such as African Americans or Hispanics, to obtain higher precision for these subgroups than would be achieved under an EPSEM design. While there is no list of people by race/ethnicity to sample from, these designs exploit the geographic concentration of these groups, applying higher sampling rates in areas where they make up a larger share of the population. Stratification typically decreases standard errors; the size of the effect depends on how strong the correlation is between the stratification variable(s) and the outcomes (survey questions) of interest.

Post-survey weight adjustments (including adjustments for nonresponse and noncoverage) are aimed at making the actual sample represent the target population more closely. For example, if a survey ended up with 60% females and 40% males, while the population is split 50-50, the sample would be adjusted so that the weighted summaries of respondents' attitudes reflect the true population figures more closely.
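A minimal sketch of that 60/40 example: each group's adjustment factor is its population share divided by its sample share (the figures are the illustrative ones above, not from a real survey):

sample_share = {"female": 0.60, "male": 0.40}       # what the survey ended up with
population_share = {"female": 0.50, "male": 0.50}   # known population split

# Multiply each respondent's weight by population share / sample share,
# down-weighting the over-represented group and up-weighting the other.
adjustment = {g: population_share[g] / sample_share[g] for g in sample_share}
print(adjustment)   # {'female': 0.833..., 'male': 1.25}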

 

KG: What are some common sampling mistakes you see made in marketing research and data science?

 

SK:  The most common mistake, I think, is to ignore the source of your data entirely. It would be unrealistic to use a sample of undergrads in your psychology class to represent all of humankind!

Another common mistake I often see is that researchers ignore complex survey design features and analyze the data as if they came from a simple random sample. Earlier, I outlined the impact the Big Four components have on point estimates and standard errors; in most re-analyses I have done or seen, the conclusions drawn, and the actions survey stakeholders take based on those conclusions, are drastically different if we mistakenly assume simple random sampling.
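The direction of the problem is easy to demonstrate with a small simulation; this is an illustrative sketch with made-up numbers, not one of the re-analyses mentioned above:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: a minority group (20% of people) with a higher outcome rate.
outcome_rate = {"minority": 0.70, "majority": 0.40}
pop_share = {"minority": 0.20, "majority": 0.80}

# The design oversamples the minority: 500 of the 1,000 interviews go to each group.
n_per_group = {"minority": 500, "majority": 500}
responses, weights = [], []
for g, n in n_per_group.items():
    responses.append(rng.binomial(1, outcome_rate[g], size=n))
    weights.append(np.full(n, pop_share[g] / (n / 1_000)))  # population share / sample share
responses = np.concatenate(responses)
weights = np.concatenate(weights)

print(responses.mean())                        # naive "simple random sample" estimate: about 0.55
print(np.average(responses, weights=weights))  # design-weighted estimate: about 0.46, the true rate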

In the past 10 years or so, survey methodologists have solidified their thinking about sources of error and developed the total survey error (TSE) framework. I would encourage marketing researchers to familiarize themselves with the main concepts of TSE and start applying them in their work as well.

 

KG: Has big data had an impact on sampling in any way?

 

SK: Survey projects can now draw their samples from a much greater variety of data sources. Some projects use satellite imagery, or even drone overflights, to create maps of the places from which samples will be drawn and field interviewers deployed, in order to optimize workload.

On the other hand, whether a particular big data project would benefit from sampling often depends on the type of analytical questions asked. Deep learning typically requires as large a sample as you can get. Critical threat detection must process every incoming record. However, many other big data projects that are primarily descriptive in nature may benefit from sampling. I have seen a number of projects where big data were used only to determine the direction of change, when a small sample would have sufficed.
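To illustrate that last point with simulated data (not a real project), a modest random sample per period is enough to recover the direction of change without scanning every record:

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "big data": a million records per period, with the underlying
# rate shifting from 31% last period to 33% this period.
last_period = rng.binomial(1, 0.31, size=1_000_000)
this_period = rng.binomial(1, 0.33, size=1_000_000)

# A random sample of 20,000 records per period already shows which way the metric moved.
sample_last = rng.choice(last_period, size=20_000, replace=False)
sample_this = rng.choice(this_period, size=20_000, replace=False)
print(sample_this.mean() - sample_last.mean())   # positive: the rate went up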

 

KG: What about social media research – are there particular sampling issues researchers need to think about?

 

SK:  Social media is a strange world, at least as far as trying to do research goes. Twitter is full of bots and business accounts. Some people have multiple accounts, and may behave differently on them, while other people may only post sporadically. One needs to distinguish the population of tweets, the population of accounts, its subpopulation of accounts that are active, and the population of humans behind these accounts. It is the last of these we researchers are usually after when we talk about sentiment or favorability. Still, this last population is not the population of consumers since many people don’t have social media accounts, and their attitudes and sentiments cannot be analyzed via social media.

 

KG: Are there recent developments in sampling theory and practice that marketing researchers and data scientists should know about?

 

SK: The mathematics of sampling is still the same as outlined by Mahalanobis, Hansen and others in the 1940s. While theoretical developments continue to sprout, most of the new developments seem to be on the technological and database side. For instance, we now have effective ways to determine whether a given cell-phone number is active before you start dialing it. Newly developed commercial databases allow us to get a list of valid mailing addresses in a given area, and then find out more about the people who live at those addresses based on their commercial or social-network activity. Sampling statisticians need to know far more than just the mathematical aspects of sampling these days; they also need to understand how to interact with the data sources from which they will draw their samples.

 

KG: Thank you, Stas!

 

SK: My pleasure.

 

________________

 

Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy. 

Stas Kolenikov, Senior Scientist at Abt Associates, received his PhD in Statistics from the University of North Carolina at Chapel Hill. His primary responsibilities at the company involve sampling designs, weighting of survey data, and other statistical tasks related to surveys. He has taught statistics at the University of Missouri and at conferences sponsored by the American Association for Public Opinion Research (AAPOR), the Statistical Society of Canada, the American Statistical Association, and StataCorp.