On this Data Privacy Day, as a community, we have much to celebrate in raising privacy awareness and harmonizing best practices. The day also invites reflection on where we can advance the conversation and whether our policies are moving us in the right direction.
Despite good intentions and lively discussion, in my opinion, we’ve lost the plot on data privacy. Certainly, we’ve successfully raised consumer awareness about how data is used, particularly across the tech landscape. Especially in the last decade, many laws have come to prioritize consumer privacy, increase transparency, and hold companies accountable for bad data practices.
Nevertheless, and now with some history to our advantage, are we meeting these goals? The objectives of the GDPR include protecting natural persons with regard to the processing of personal data and laying down rules relating to the free movement of that data (which is neither restricted nor prohibited). The California legislature declared its adoption of the CCPA as a way to give consumers “an effective way to control their personal information,” following the voters’ amendment adding the right of privacy as an inalienable right under the California Constitution.

These responsibilities, appropriate as they may be, have placed greater attention on disclosures, particularly by creating the expectation of end-to-end supply chain disclosure obligations. Subsequent enforcement, too, has framed the proximate injury to consumers as one of inaccurate disclosures, without fully crediting work that protects personal data through other means: architectural segregation as a security measure, technology strategies that further privacy goals, and, most importantly, declining to ingest data in the first place that could otherwise easily be taken in. This emphasis rewards the output (disclosures) rather than the input. Simply put, it undervalues the innovative steps companies take to minimize data and to limit its association with a given individual. As privacy professionals and participants in the data ecosystem, is this the true outcome we seek to create?
As individuals with deep knowledge of the tools available to us, I believe we have enough data to reevaluate the philosophy of the Fair Information Practices (FIPs) and to ask where certain principles need reprioritization to rebalance the framework toward its intended goals. By returning to the fundamentals of data privacy and accounting for today’s technological advances, we can adjust the weighted priorities of the principles to better create “an effective way to control (consumer) information.”
Here are two areas where we can move the needle on data privacy policy and practice this year:
1. Distinguish identity from identifiers
We need to spend more time understanding and distinguishing between “identity” and “identifiers.” Data privacy laws often assume that identifiers equal identity, and that assumption is incorrect. Laws enumerate certain identifiers, such as home address and IP address, but today these identifiers do not equal identity. For example, a home address may house a family of individuals, and an IP address can reflect an office of employees or a single person with 10 devices.
Further, the activity associated with a single identifier no longer paints a complete picture of someone’s behavior, another assumption baked into identifier-based thinking. For example, what I do on my smartphone is very different from the activities I engage in on my laptop, and my laptop activity splits between my professional and personal lives. Relying on either my smartphone or my laptop identifier in isolation presents a very narrow view. Each identifier is one piece in the puzzle of identity, certainly not a complete picture of my multifaceted identity.
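To make the point concrete, here is a minimal sketch (all IP addresses, sites, and data are hypothetical) of why grouping behavior by an identifier such as an IP address conflates the activity of many distinct people:

```python
# Toy illustration: one identifier (an IP address) fans out to many people,
# so identifier-keyed data is not identity-keyed data.
from collections import defaultdict

# Observed events are keyed by identifier, not by person.
events = [
    ("203.0.113.7", "news.example"),      # office IP: dozens of employees
    ("203.0.113.7", "payroll.example"),
    ("203.0.113.7", "sports.example"),
    ("198.51.100.4", "recipes.example"),  # home IP: a whole family
    ("198.51.100.4", "homework.example"),
]

# Grouping by identifier mixes together the behavior of many individuals.
by_identifier = defaultdict(list)
for ip, site in events:
    by_identifier[ip].append(site)

for ip, sites in by_identifier.items():
    print(ip, "->", sites)
```

The grouped result looks like a single profile per IP address, yet each bucket may blend the behavior of an entire household or office, which is exactly the inaccuracy the identifier-equals-identity assumption introduces.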
Contact tracing illustrates this point in finer detail. COVID-19 contact tracing technology proved that we can successfully reach individuals without confirming or validating their identity. Using AI and a web of identifiers carrying minimal or no identity, we can see where the virus spikes; that intelligence powers predictive models of further disease spread and informs those living within the affected radius accordingly. A geographic identifier targets me with the right message, for example, but that location data point does not equal my identity. In fact, it is no one’s identity. This is one example of technology delivering the right outcome as a result of an AI pattern, not because a company collected my data. Key to this success is prioritizing the collection limitation (data minimization) principle and focusing on the architecture and software overlays that enforce purpose specification.
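The mechanics can be sketched in a few lines. The following is a simplified illustration, not any production contact-tracing protocol: devices exchange rotating random tokens that carry no identity, and exposure matching happens locally on each device.

```python
# Simplified sketch of contact tracing on ephemeral identifiers.
# Tokens are random and reveal nothing about who carries the device.
import secrets

def new_ephemeral_id() -> str:
    # Each device periodically generates a fresh random token.
    return secrets.token_hex(16)

class Device:
    def __init__(self):
        self.my_ids = []          # tokens this device has broadcast
        self.heard_ids = set()    # tokens heard from nearby devices

    def rotate(self) -> str:
        token = new_ephemeral_id()
        self.my_ids.append(token)
        return token

    def observe(self, token: str) -> None:
        self.heard_ids.add(token)

    def check_exposure(self, published_positive_ids) -> bool:
        # Matching happens locally: the device learns only "I was near a
        # token later reported positive," never whose token it was.
        return bool(self.heard_ids & set(published_positive_ids))

# Usage: two devices meet; one later tests positive and publishes its tokens.
alice, bob = Device(), Device()
alice.observe(bob.rotate())
print(alice.check_exposure(bob.my_ids))  # True: exposure detected, no identity exchanged
```

Note how the design embodies collection limitation: no central party ever holds a mapping from tokens to people, because that mapping is never created in the first place.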
Technology will continue to improve the accuracy of associating observed behavior with an identity, but the two are not the same. In many areas and functions of our industry, identity need not even be present to accomplish our goals. Of course, the two can coincide, but this goes to the heart of emphasizing, for example, data minimization over other FIPs. We need to do a better job of rethinking the assumption that identifiers equate to identity.
2. Bring more individuals into the conversation
We need more voices in our debates, and more business voices in policymaking discussions. The issues I’ve described above create real roadblocks for companies trying to balance doing right by the consumer with staying in business. Those designing the technology also need to advocate and educate, in layman’s terms, on how they’re changing best practices and behaviors. Roundtable discussions that include cross-sections of our organizations would go a long way toward advancing our collective knowledge, expanding viewpoints, and creating more robust and applicable solutions.
I further challenge businesses to open up the data privacy conversation within their organizations. The solution is multidisciplinary, meaning representatives from engineering, technology, product, marketing, and legal all need a seat at the table. Without this multidimensional view, we’re missing out on the full picture, potential, and practice of privacy by design.
The start of a new year affords us an opportunity to reflect, recharge, and reset our priorities. I look forward to the robust data privacy conversations ahead and the learning that inevitably comes out of them. It’s time for us to pick up the story where we left off, revise where we can, and focus on writing a new chapter in data privacy.