
Customer anonymity can't be guaranteed, but differential privacy can help change the game


By Susan Raab, the CDP Institute

The relationship between companies and consumers is increasingly complex: concern about privacy is growing among consumers, while pressure is mounting on marketers to make greater use of customer data to engage existing customers and find new ones. The key question is how to strike a workable balance between the two.

With customer experience projected to become more important than price or product in securing loyalty, the more trust you engender, the more likely you are to grow your customer base. Customers want to understand how their data is being used and to know how they can participate in protecting their own privacy.

The first step for marketers is to recognize that data is at risk whenever it is shared. Studies have repeatedly shown that even anonymizing data doesn't always work. Back in 2007, Wired magazine reported that "Netflix published 10 million movie rankings by 500,000 customers, as part of a challenge for people to come up with better recommendation systems." The data was anonymized by removing personal details, but researchers at the University of Texas were able to de-anonymize it using only a small set of public data from the Internet Movie Database (IMDb). In December 2019, the New York Times' Privacy Project showed how a dataset of GPS pings from the cell phones of 12 million Americans could identify almost anyone when matched with public address data.

Re-identification has been a topic in the privacy field for decades and has been demonstrated with all kinds of data, including medical, financial and genetic records. It's scary and can seem overwhelming, yet companies are still expected to meet the legal requirements laid out in legislation such as GDPR and the new California Consumer Privacy Act (CCPA), and to show they are putting appropriate privacy protections in place.

While data security can never be guaranteed and breaches are always possible, regulators want to know that a company is being proactive in its compliance, and consumers value companies that are transparent about their process.

This doesn't mean you can't use data at all, but it does mean you have to be careful to anonymize it effectively. Experts say there is no guarantee, but one powerful approach is a statistical technique called differential privacy, used by Apple, Google and other big tech companies via an algorithm that "adds random data into an original data set" (Digiday, April 2019). In September 2019, Google announced that it was opening access to its differential privacy library to help developers at other companies achieve the same protection.
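To make the idea concrete, here is a minimal sketch of the core mechanism behind differential privacy: answering an aggregate query with carefully calibrated random noise so that no single individual's record can be inferred from the result. This is purely illustrative Python, not Google's library; the function name, the epsilon value and the sample data are hypothetical.

```python
import numpy as np

def private_count(values, epsilon=1.0):
    """Return a differentially private count of True values.

    Adding or removing any one person's record changes the true count
    by at most 1 (sensitivity = 1), so Laplace noise with scale
    1/epsilon satisfies epsilon-differential privacy for this query.
    """
    true_count = sum(bool(v) for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: how many customers opted in to a newsletter?
opted_in = [True, False, True, True, False, True]
print(private_count(opted_in, epsilon=0.5))  # noisy answer, e.g. ~3.8
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee: the marketer gives up a little accuracy in the aggregate in exchange for plausible deniability for every individual in the data set.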

Robin Röhm, chief executive officer of apheris AI GmbH, a start-up that develops artificial intelligence algorithms for biomedical data, says: "What we're addressing is really an operational question about whether a company has defined a top-down logic to ensure semantic-enabled data privacy protection. This means everyone in the organization needs to adhere to the process. Then you have privacy by design, in which the process is incorporated into the company's architecture with algorithmic rules customized to its needs."

We're in the early stages of understanding how best to ensure individual privacy, both tactically and legally, but those focused on this area agree that it takes collective knowledge and ongoing diligence to evolve strategies that work in the present and over the long term.

Get in touch: 
The CDP Institute

Follow Them: 
Twitter: @CDPInstitute
LinkedIn: The CDP Institute