Particles of truth. Our key takeaways from Conference on Privacy, Data Protection and Artificial Intelligence 2020

As the 13th edition of the Conference on Privacy, Data Protection and Artificial Intelligence (CPDP) began in Brussels, delegates from all backgrounds – regulators, privacy advocates, industry, and academics – came together to discuss the future. In light of the increasingly connected digital age we find ourselves in, one question remained at the heart of the conference: what does a fair and inclusive digital world look like and how can we come together to achieve it?

Photo by Bianca Marcu

To begin to address this question, three days of fascinating panels and lively discussion unfolded, with some key messages emerging:

Ethics

There may not be one standardised ethical framework, but ethics must be embedded in risk-based approaches.

When developing new technologies that inevitably involve the processing of individuals’ personal data, you must consider at least the following risks: the potential impact of the new technology on individuals’ rights and freedoms; whether you can ensure an adequate level of security for those individuals’ personal data; and whether those individuals may face harm as a result of the processing.

Whilst this is the classic approach to a Data Protection Impact Assessment (DPIA) as specified in the EU GDPR, there is also an increasing role for standard-setting in helping organisations put often complex legal principles into practice. In the panel ‘Can Ethics be Standardised?’, hosted by the Institute of Electrical and Electronics Engineers (IEEE), suggestions for standard-setting in this space included designing concrete steps to implement the principle of accountability. Taking a step even further back, human rights principles also prove useful in the DPIA exercise – whether for new technologies, AI, or machine learning.

The ethics theme continued throughout the three days, and whilst no single direction is yet certain, we can make the process more robust by incorporating ethics considerations alongside the DPIA.

Regulating Artificial Intelligence – do we know where we’re headed?

The short answer: well, not really.

But some indications did arise in the panel ‘What does good AI legislation look like? And how to get there?’ hosted by Facebook. When it comes to AI Governance, the process must begin internally by adopting an applied ethics perspective and a suitable review process involving project managers and engineering teams. Care must be taken with the data used to train the AI, and with designing a fairness assessment phase which takes into account checking algorithms for bias and sufficient diversity.

On the regulators’ side, clear rules must be developed and implemented that are both risk-based and evidence-based, with minimal gaps between the law and the technology itself. The European Commission’s approach to developing an ethical and legal framework for AI starts from European values enshrined in the Treaties and the Charter of Fundamental Rights. Whilst the EU task force on AI was given a deadline to develop an AI framework within 100 days, it must be noted that AI is not entirely unregulated – the GDPR already provides a legal framework and starting point. At the same time, AI’s potential has not yet been fully realised, and the field is evolving rapidly, which means regulation must come soon or risk playing a game of catch-up.

Photo by Bianca Marcu

Surveillance and Sustainability

One message from the European Data Protection Supervisor (EDPS) was clear: processing activities that are against the law must be prohibited. In his closing remarks, Wojciech Wiewiorowski stated that “digital sovereignty is pointless if we adopt the toxic system of surveillance we have used so far.”

One key priority of the EDPS is to stop dealing in abstractions and instead address precise types of processing, zooming in on data-centric sectors and taking adequate action. Following on from this, the ongoing discussion around ad-tech is not expected to end any time soon, and we will certainly see further action this year.

And, as it turns out, surveillance is not sustainable for society

In times of climate emergency, we must also consider the impact of the data economy on the environment.

In 2008, the internet was already responsible for 2% of global CO2 emissions, exceeding those of the entire aviation industry. Since then, global energy demand related to internet-connected devices has increased by 20% each year, potentially hampering any attempts to meet climate change targets. There is certainly an environmental impact behind the data economy, and more attention will be given to this issue in the coming years, as well as to developing innovative solutions to tackle it.

On a final note…

…participating in such multi-disciplinary conversations around AI, ethics, data protection and privacy reminded me of one of the reasons I love this field so much – in the end, we are talking about our future, our society, and the kind of world we want to live in.

Want to hear a more in-depth analysis of the CPDP? Write to us at plus@esomar.org
