The tumultuous events of the past several months have re-ignited public debates on topics vital to our society’s well-being – public health, racial and gender equality, social justice, surveillance and digital rights, to name but a few. Organisations, too, have been far from immune to these debates, often being ‘called out’ by a public that deserves and expects more ethical corporate practices.
So as I began my research in preparation for an upcoming event on navigating ethics, I dived into some of the latest literature on the topic and was surprised to come across the concept of ‘Ethics Owners’ – a term coined in a recent report by the organisation Data & Society. The report is based on interviews and ethnographic work inside Silicon Valley tech companies, focusing particularly on ethics professionals.
As a term, ‘Ethics Owners’ seems to cover professionals working across AI ethics, privacy and rights, research ethics, organisational ethics, and even data science research. Whilst they might officially sit within a legal or policy department, their work spans the entire organisation, with regular involvement in product, marketing, and data, insights and research teams. It emerges that Ethics Owners operate in a rather grey area, characterised by informal standards and an ongoing tension between the company’s need to generate profits and its ethical responsibilities towards society.
Although Silicon Valley tech companies may face uncertainty in this grey area, the market research and insights professions have a long history of abiding by ethical principles that can inform our work going forwards, particularly with the support of the ICC/ESOMAR Code of Conduct. But is there anything we can learn from Silicon Valley’s ‘Ethics Owners’ when it comes to operationalising these principles and truly embedding them in our daily workflows? And if so, where do we start?
The importance of routine
One of the (many) things that stood out to me in this report is the attempt by Ethics Owners to ‘routinise behaviours’. Much as in our personal lives, routines play an extremely important role in our organisations, adding a layer of predictability and accountability to daily operations.
Thinking back to the work done on implementing the GDPR and other data protection laws, you might realise that it has now become routine to check whether you need to update your privacy policy in light of a new project, or whether you need to conclude a data processing agreement before taking on work for a new client.
So, too, we must now move to include ethical considerations in our business and organisational routines. In practice, this may mean setting aside additional time in your meeting agendas to begin asking the right questions.
Asking the right questions
Let’s zoom in on data quality as an example – a principle that goes to the heart of the research sector. Data quality is not only a foundational principle of the ICC/ESOMAR Code; it is also of great practical importance, contributing to a healthier dataset overall and, ultimately, more accurate insights.
But we can also use data quality as a route to a deeper ethical question: can the highest standards of data quality reduce the chances of reproducing the biases and inequalities that exist in our societies? And if so, how can we ensure data quality across the datasets we work with?
Towards an Ethics Risk Assessment?
The principle of data quality also goes to the heart of three key ethical principles: fairness, respect and transparency. In the data-driven economy we find ourselves in, and particularly within a sector that depends so heavily on data, these principles can help guide our thinking around ethics when integrated into an Ethics Risk Assessment.
Here, again, we can draw on the requirement to carry out a Data Protection Impact Assessment under the GDPR, adapting the model slightly to include ethical considerations and embedding some of the questions above:
- If data is not neutral and has the potential to exacerbate existing inequalities, can we ensure the highest standards of data quality in this instance? If we cannot guarantee that, what risks arise, given what the dataset will be used for?
- Can we ensure transparency around our data processing and use in a way which shows respect for the people behind the data?
The complexity of the data ecosystem and the large number of parties operating in this space make full transparency difficult to ensure, but the principle has nonetheless pushed organisations to think innovatively and explore different, often visual, ways of explaining how the processing works.
Ethical considerations are not easy, particularly as they involve looking at the wider picture and often anticipating or imagining what a dataset may end up being used to achieve. But the insights profession has a solid history in this regard, and is closer than most to having internationally agreed standards, codified in the ICC/ESOMAR Code – and that’s a great place to start.