
Advertorial: Quality fit for purpose: Finding the quality “sweet spot” for today’s online research

Introduction

Data quality dominates conversations across the online research industry, yet the industry has never clearly defined what quality is or where it starts. That lack of definition contributes to poor results just as much as bad data itself; after all, how can anyone know whether data is quality data without agreement on what defines it? This question, and related considerations, are at the heart of the data quality debate.

Yet while there are many definitions, the only absolutes are that the data must be usable and the participants real. Ensuring access to the right audience, with the right mix of characteristics and profiles for what you want to know, helps assure quality. This means picking the right sample and sourcing partner: one you can trust to consult with you on tradeoffs and help you make the right decision for your project.

With those absolutes in place, everything else is a tradeoff, asking you to choose what is most essential to the project and what you are willing to discard to get it done. Speed vs. cost. Time in field or depth of targeting vs. population coverage … each of these is a tradeoff. Tradeoffs, however, can be just as hazily defined as data quality, with no hard-and-fast rules for when, how or why they should be made.

Because critical business decisions rely on accurate insights, which in turn depend on the quality of the data inputs, it's important to understand these tradeoffs at every step of the research process, from questionnaire design to quotas, screeners, sampling, data cleaning and analysis. It's also important to know when to be cautious about tradeoffs: when the accuracy of insights is paramount to making the right critical business decision.

Quality Lenses and the concept of tradeoffs

For many, quality is about how participants behave in a survey: speeding, straightlining and so on. Others consider broader within-panel behavior: How many surveys do members take? How often? Others ask: Are the survey results what I expect? Do they match a benchmark?

This implies there's no standard measurement of quality, and any definition is complex. ISO 9000, a set of international standards on quality management and assurance, uses the concept of fitness for purpose, defining quality as the "degree to which a set of inherent characteristics fulfills requirements."

This definition allows for different quality frameworks: Those who focus on in-survey behavior need that behavior to be measurably above a standard; those who require data points to match benchmarks look for those matches. The implication, given perfection isn’t achievable, is that something must be traded off.

For example, researchers whose quality standard is "no straightlining" in a 30-minute survey make two tradeoffs: excluding those for whom a straight line is their truth and accepting only those who will work diligently through a 30-minute survey. How representative are those people? More nuanced approaches to measuring inattentive behavior might mitigate these tradeoffs, but they take time and effort: yet another tradeoff!

The problem arises when the tradeoffs are not recognized for what they are, leading a researcher to wrongly think they have achieved perfection simply by eliminating straightlining.
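
To make the "more nuanced approaches" point concrete, here is a minimal sketch, in Python, of an inattention check that combines answer variance with time spent on a grid before flagging anyone. The thresholds and field names are hypothetical illustrations, not any panel's actual rules.

  from statistics import pstdev

  def inattention_flags(grid_answers, seconds_on_grid, min_seconds=15.0):
      """Return the reasons one respondent's grid block looks inattentive.

      grid_answers    -- numeric ratings for a single grid (e.g., a 1-5 scale)
      seconds_on_grid -- time the respondent spent on that grid, in seconds
      """
      flags = []
      if len(set(grid_answers)) == 1:
          flags.append("identical answers (possible straightlining)")
      elif pstdev(grid_answers) < 0.5:  # hypothetical low-variance cutoff
          flags.append("very low answer variance")
      if seconds_on_grid < min_seconds:  # hypothetical speeding cutoff
          flags.append("faster than plausible reading time")
      return flags

  # A respondent who genuinely rates everything a 4, but takes a realistic
  # amount of time, raises one signal rather than two; requiring corroborating
  # evidence before removal is what reduces false positives.
  print(inattention_flags([4, 4, 4, 4, 4], seconds_on_grid=42.0))
  print(inattention_flags([4, 4, 4, 4, 4], seconds_on_grid=6.0))

The cost, of course, is that someone has to design, validate and maintain rules like these: the time-and-effort tradeoff noted above.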

Common research tradeoffs

In reality, we make quality tradeoffs all the time in research. Sampling itself is a compromise: absolute data accuracy would require a perfect census.

We regularly accept these tradeoffs:

  • Sample size, coverage or time in field vs. cost
  • Online vs. in-person or phone methodology
  • Qualitative responses vs. quantitative data
  • Weighting vs. achieving quotas

And these are some more nebulous tradeoffs we make:

  • Asking all the questions we want vs. more engagement in a shorter survey
  • Specific quotas vs. time available to finish
  • Source breadth vs. engagement rates
  • Breadth of frame vs. cost
  • Time spent in QC vs. time for analysis

Sample selection tradeoffs

Today we have the option to make even more potential tradeoffs, especially in sample selection. Some sample sources are more expensive to recruit, retain and reward, and we can make a cost tradeoff to omit those people. We can usually meet demographic quotas without them but we miss the breadth of psychographic and lifestyle diversity that would improve frame coverage.

Re-examining the “Holy Laws” of research

Using a tradeoff framework to think about quality means rethinking some of the "holy laws" of sampling and research. Some of these let us "tick a box" and feel comfortable without delivering true quality benefits.

  1. N=1000. The accuracy loss from stopping at 950 is minimal (see the margin-of-error sketch after this list).
  2. In-survey confirmation of demographics. Here we risk loss of engagement due to annoyance and fatigue.
  3. Category lockouts. Dynata's research shows almost no data inconsistencies without lockouts, and none after 12 weeks.
  4. Removing all straightliners. There are false positives in even the most sophisticated methods.
  5. Demographic quotas. Depending on the topic of the research, strict quotas on age, gender, income and education may be unnecessary.
  6. Question wording precision. Wordiness decreases engagement.
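
On the first point, a quick back-of-the-envelope calculation shows how small that accuracy loss is. The sketch below assumes simple random sampling, a worst-case 50% proportion and a 95% confidence level; it is an illustration of the statistics, not a claim about any specific study.

  from math import sqrt

  def margin_of_error(n, p=0.5, z=1.96):
      """Approximate margin of error for a proportion, as a fraction."""
      return z * sqrt(p * (1 - p) / n)

  for n in (1000, 950):
      print(f"n={n}: +/- {margin_of_error(n) * 100:.2f} percentage points")
  # n=1000: +/- 3.10 percentage points
  # n=950:  +/- 3.18 percentage points

A difference of less than a tenth of a percentage point is usually dwarfed by other sources of error, so chasing the last 50 completes is rarely worth extra days in field or extra cost.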

Tradeoffs and Quality Fit for Purpose

The limit to this quality tradeoff is defined by "fit for purpose," so the two concepts must go hand in hand.

To take an extreme example, a senior politician is caught up in a scandal and a journalist demands an interview. It’s 9 a.m. and the politician has four hours to decide how to respond. As a researcher you know what you can do in that time – and it’s not optimal.

  • It takes an hour to write, program and test the survey – no pilots or second opinions
  • With only 30 minutes to prepare the data, the questions must be few and simple
  • Only people available between 10 a.m. and noon will be included
  • Online is the only practical methodology and you’ll pay almost any price for sample
  • You have to weight the data because you're short of some demographics (a simple weighting sketch follows this list)
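
On that last bullet, here is a minimal sketch, with hypothetical completes and target shares, of the simplest form of weighting: each demographic cell gets a weight equal to its target share divided by its achieved share. Real projects typically weight on several variables at once, but the principle is the same.

  def cell_weights(sample_counts, population_shares):
      """Weight per cell = target population share / achieved sample share."""
      total = sum(sample_counts.values())
      return {
          cell: population_shares[cell] / (count / total)
          for cell, count in sample_counts.items()
      }

  # The rushed 10 a.m.-to-noon poll skews older, so younger completes are
  # weighted up and older ones down (all figures are hypothetical).
  achieved = {"18-34": 120, "35-54": 200, "55+": 180}    # completes by age
  targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # population shares
  print(cell_weights(achieved, targets))
  # {'18-34': 1.25, '35-54': 0.875, '55+': 0.97...}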

You make many tradeoffs but deliver usable data on time – fulfilling the “fit for purpose” requirement.

Understanding tradeoffs gives confidence that your project’s data quality will be sufficient to meet the research purpose. Choose a research and sample partner who will help you identify these tradeoffs and make the right decisions to drive quality outcomes.

The new ESOMAR questions are also a good framework to guide your thinking about what quality fit for purpose means for your research.

For more information on Dynata and quality data, visit the Dynata Data page.
