By JD Deitch, P2Sample
In part one of this series, I presented an overview of automation and the “Good” effects on the sampling industry. Several key operational processes are now automated, including transmission of survey specs and the redirection of participants to surveys. Companies that have embraced automation are enjoying faster execution, lower costs, and reduced error rates. Their clients are enjoying these benefits, too. Indeed, automation has become a competitive advantage. That said, implementation remains uneven and plagued by poor design. Let’s look at the impact of these shortcomings and discuss the “bad” and “ugly” effects.
The Bad
First and foremost, bad things are not caused by automation in the sense that, say, the consumption of fossil fuels leads to greenhouse gas emissions. Rather, the negative effects arise from insufficient planning, or from insufficient will to address automation's implications.
There is no greater bad—and no greater self-inflicted wound—than the superficial and plodding approach many companies are taking despite the clear benefits that await. Beyond the significant efficiency gains and lower costs, automation lets people focus on value for the client instead of repetitive tasks. In fact, automation done right can also dramatically improve participant experience and data quality.
Conversely, the costs of not automating or taking shortcuts are mounting daily. They include reduced feasibility, increased fraud, and ever-greater participant disengagement. In short, laggard suppliers are creating long-term problems for themselves and lousy data for their clients. Moreover, these firms lack the agility that clients desire.
Research agencies have never been known for their technological savvy, nor for their willingness to march their core business into areas of risk and change. Limited implementations, insufficient optimization and maintenance, and long development cycles are thus unsurprising, but they do not make sense given the stakes.
Automation is forcing staffing changes, too. It requires experts who understand how and why sampling should work, as well as people who can speak to the research process and data collection. This means rethinking professional development, organizational boundaries, and even succession.
The Ugly
As noted above, the things that remain ugly about sampling automation are also not intrinsically caused by automation. They, too, arise from insufficient or downright poor implementation.
Top of the list is the modern-day participant experience. One must conclude that few researchers ever subject themselves to it. Were they to do so, they would be justifiably appalled. The experience is shameful. From endless bouncing around in routers to broken links, from avoidable overquotas to being asked the same age, sex, and other common demographics again and again, from improperly set expectations about interview length to derisory incentives, an awful user experience is now a feature of many suppliers' tech stacks!
Sample companies have typically not taken it upon themselves to address these issues. For years they have been sneering at survey writers and tut-tutting about non-mobile-optimized studies. Researchers still need this reminder, yet there is plenty of blame for sample providers to shoulder.
Part of the problem is the rush to build, launch, and then iterate quickly toward a more stable version. The trouble is that writing the code is the easy part; making automation work well is harder and far more time-consuming. And because the implications are buried in algorithmic black boxes, the effects are not always evident.
As suppliers press participants into more fatiguing experiences, dropout, overquotas, and disengagement all increase. The latest GRIT study confirms that fewer than one in four research participants are satisfied with their experiences. Often these problems arise because the supplier has not properly and fully mapped data fields and reconciled inconsistencies in coding. But the Ugly does not stop here.
As suppliers automate basic execution, related processes like feasibility and monitoring remain largely manual, causing shortages and delays in field. Human project managers, overworked from cost-cutting, cannot react as quickly as machines to troubleshoot and correct problems.
Worse still, industry standards for detecting fraud are now largely inadequate and only getting worse. The advertising industry has seen “bot” impressions skyrocket. Social media is riddled with automated troll accounts. Our industry is not immune. There is reason for concern.
In effect, the real ugliness from automation arises because companies are seduced by the efficiencies and take shortcuts (out of ignorance or deliberate neglect) around the hard work of getting things to work correctly.
Tune in on Friday to read the steps suppliers and clients can take to ensure automation is successful.