Making tech-powered research both fast and fun!
The adoption of digital tools has continued apace over the past 10 years. Mobile ethnographies and online communities (MROCs), for example, are now part of the proven mainstream in many mature markets, much in the same way that social media analysis is.
The benefits of using a digital/face-to-face interplay in multi-phase project designs are well documented:
- Integrating a mobile quick poll into a focus group helps researchers save time, allows them to grasp individual reactions and to focus more clearly on relevant deep-dives
- Digital pre-tasking works well for co-creation tasks: inviting participants to take photos as part of an MROC pre-task and upload them to a portal prior to attending a F2F workshop is inspiring and helps generate richer, more focused outputs
The list could be extended at will.
Many agencies with qualitative roots are now genuinely hybrid, offering quant as much as qual, with their own IT and analytics departments. All good.
Integrating AI and Machine Learning is a Different Challenge
The task of integrating AI tools poses many challenges.
The sifting, screening and testing process is tricky. The market is awash with new tech-driven options, many of which are outdated virtually the moment they hit the market.
Price tags are often high; SME and startup budgets modest.
Scalability is often difficult: MR markets across the globe are still quite different, at least in consumer research.
On top of that, IT vendors can (and do) over-claim on validation in the rush to market, often backed by comparatively large marketing budgets. It’s easy to get over-excited. Or, at the other end of the spectrum, to do little and lull oneself into a false sense of security, reminding oneself of the high failure rates for most types of innovation. Adopting a multi-pronged approach, as outlined below, can help.
Organizationally: Having an R&D capability to screen the software environment on an ongoing basis, with a view to task-specific validation, shouldn’t be the prerogative of larger agencies. Even if resources don’t allow a fully dedicated person, identifying individuals within existing teams and allocating a percentage of their hours to R&D is an option.
Managing such teams over the business cycle is tricky: when client work is plentiful, it’s easy for innovation work to be neglected.
A ring-fenced approach supported by management is one way of helping to ensure that the clarion call of “clients first” doesn’t lead to innovation neglect and operational myopia.
Engaging mixed-disciplinary task forces: R&D needs to be plugged into operational realities to ensure that upstream work such as AI remains relevant to the mainstream business.
Having a mixture of project managers and R&D staff ensures focus: improvement efforts fall squarely on the operational pain points that project teams experience in their daily business.
Execute your own first-stage validation: As mentioned above, the rush to market means that tech companies often over-promise on what their software can do; narrow validation is glossed over as being of universal relevance. Engaging in validation specific to your business is therefore key.
Using historical, non-sensitive data is one easy way to ensure this isn’t a drain on resources. An example: a UK-based company recently approached H/T/P, promising to automatically transcribe, tag and collate data from group discussions and IDIs, significantly shortening the time to first-stage analysis and a rough top-line.
They claimed the software worked with German language outputs.
We took data from a historical project with a large proportion of regional German dialects and ran it through the vendor’s system.
The transcripts produced were only 50% readable: the machine struggled to accurately identify regional phrases and intonation. The Ruhrgebiet in particular (Germany’s largest conurbation, with over 5 million inhabitants) beat the machine.
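A first-stage validation like this can be scored rather than eyeballed. As a purely hypothetical illustration (the function, the metric choice and the sample sentences below are our own, not the vendor’s), word error rate between the machine transcript and a human reference reduces each test project to a single comparable number:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance, normalised by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Invented example: human reference of Ruhrgebiet dialect speech
# vs. a made-up machine transcript that misses dialect words
reference = "dat is mir wurscht sag ich dir"
hypothesis = "das ist mir was ich dir"
print(round(word_error_rate(reference, hypothesis), 2))  # ≈ 0.57
```

Running a batch of historical recordings through such a check makes “only 50% readable” a defensible, repeatable finding rather than a subjective impression.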
Systematic intelligence gathering is an oft-neglected (ironically, given our status as insights specialists) but highly useful area when resources are tight.
One approach is to organize a team debrief after attending an industry conference.
Keeping tabs on award-winning companies is also useful; the annual IIEX Innovation Competition winners are worth tracking, for example.
The same is true for the annual UK MRS Conference and AURA awards, notably understanding what has made the winners stand out. This can give an indication of how client preferences are evolving in the broader market.
Mining published sources is also invaluable as keyword searches become more refined and useful; the ESOMAR online library ANA, for example, can deliver best-practice insights on the human-machine axis.
The experiments published recently by HYVE AG/Beiersdorf (ESOMAR Fusion 2018) and SKIM (ESOMAR Global Qualitative 2017) have shown where an AI based approach can help – and specifically what areas “humans” should focus on.
Consider sponsoring projects with experimental designs
Change in MR is often driven by client needs or competitive movements – we tend to take our participants for granted. Adapting to societal changes and shifts in communication preferences, however, is equally important.
One example: social media usage is increasingly visual. Instagram’s growth continues to explode, Twitter timelines are peppered with animated GIFs and memes, and Facebook Stories already has over 500 million users.
How should research adapt its approach to design and participant engagement?
Taking an experimental approach to understand these shifts can help. H/T/P recently conducted a two-phase study designed using only visuals, with no words whatsoever after the set-up phase.
Phase 1 involved asking teenagers to share their fears, hopes and dreams for the upcoming 12 months, and to express their “answers” with #nofilter digital pictures. The outputs were difficult to interpret: as researchers, we found them lacking in contextual detail, with no words to help us interpret them. They also didn’t really reflect how social communication often occurs, with filters, emojis, etc. This led us to the next stage.
Phase 2 involved young women on three continents (USA, India, Germany) sharing their feelings on “Bad Hair Days” over a five-day period, using a test-and-control approach. The test group was explicitly asked to apply filters before uploading their purely visual answers, whereas the control group undertook #nofilter pictorial tasks.
Augmenting the pictures worked well: the outcomes were much more expressive and revealing of the participants’ emotions, suggesting a heightened degree of authenticity.
The results have been useful for a broad range of our mainstream client work.
One Millimeter Behind Your Clients
One way of innovating that helps minimize risk is following client needs very closely, one millimeter behind their innovation pushes, learning with them, not for them. One project we conducted recently was for a major household appliance manufacturer that asked us to support an innovation process: evaluating first-stage designs for an expensive piece of high-end kitchen equipment.
The stimulus material came in the form of 3D renders – the cost being significantly lower than investing in prototypes.
Our design involved Virtual Reality. The approach worked well, delivering on all the required insight areas; both client and agency were aware in advance that there were areas, such as spatial aspects, that a VR approach could not deliver.
In Summary
Whilst the world of tech can seem bewildering and sometimes far removed from recognizable human needs and people-centricity, its potentially disruptive power is too great to ignore.
Dealing with digital disruption is an ongoing task – one that involves a continuous alertness to new developments across the industry.
Multi-pronged approaches to innovation can help: dedicating resources to renewal and renovation, whilst not taking one’s eye off current business flows.