How to make privacy compatible with personalization

AI ethics is in the media headlines today. There is a visible concern in our society about preventing artificial intelligence from developing into the Big Brother of George Orwell's 1984. We would like to shed some light, and offer some hope, amid this commotion.

In this article, we will show how delivering a customer experience personalized to selected customer traits can remain respectful of that customer's right to privacy. Let's start by understanding the common elements of privacy. We will then take a closer look at personalization and close by bringing the two together, with a policy recommendation for using AI with confidence.

Privacy

The first primary piece of legislation on privacy in the new era of Big Data was the General Data Protection Regulation (GDPR) of the European Parliament and the Council of the European Union. Recital 26 of its preamble offers a good global benchmark on where the limits of people's privacy lie regarding data: “The principles of data protection should apply to any information concerning an identified or identifiable natural person. Personal data which have undergone pseudonymization . . . should be considered to be information on an identifiable natural person. To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, . . . [S]uch as the costs of and the amount of time required for identification, . . . The principles of data protection should therefore not apply to anonymous information.”

Hence, we first need to treat pseudonymized and anonymized data differently, and obtain the customer's consent wherever the data is considered personal. Data owners are responsible for the encryption and security of personally identifiable information (PII): they must encrypt it before delivering the data set to the agents working on analytics and solution building.

There are two main categories of de-identification: pseudonymization and anonymization. Pseudonymized data has been encoded with a pseudo identity so that privacy-related information is not exposed, such as substituting a real name (John Smith) with a code (ID12345). Events, traits, actions, and behaviors are mapped to a user without letting the agent identify exactly “who” the user is. Only the data owners can reidentify the users.
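
As a purely illustrative sketch (the column names, the keyed-hash scheme, and the key handling below are our assumptions, not a prescribed implementation), pseudonymization before hand-off to an analytics agent could look like this:

```python
import hmac
import hashlib

# Secret key held only by the data owner; analytics agents never see it (assumed setup).
OWNER_SECRET_KEY = b"replace-with-a-key-managed-by-the-data-owner"

def pseudonymize(value: str) -> str:
    """Map a direct identifier (e.g. a real name) to a stable pseudo identity."""
    digest = hmac.new(OWNER_SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return "ID" + digest.hexdigest()[:8].upper()

# Hypothetical customer record before hand-off to the analytics team.
record = {"name": "John Smith", "last_purchase": "running shoes", "visits": 12}

# The data owner replaces the direct identifier and keeps the reverse mapping.
pseudo_id = pseudonymize(record["name"])
reverse_lookup = {pseudo_id: record["name"]}  # stays with the data owner only

shared_record = {
    "customer_id": pseudo_id,
    "last_purchase": record["last_purchase"],
    "visits": record["visits"],
}
print(shared_record)  # behaviors stay linked to a user, but not to "who" the user is
```

The reverse lookup never leaves the data owner, which is what keeps reidentification in their hands alone.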

On the other hand, anonymized user data has been detached from its privacy-related information altogether, for example by deleting the real-name variable from the data set. Depending on the anonymization technique, events, traits, actions, and behaviors can still be mapped to a user without letting either the data agents or the data owners identify precisely “who” the user is. Taking GDPR as a benchmark, a data set is deemed anonymous when the amount of effort needed to reidentify the data is beyond reasonable.
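
By contrast, a minimal anonymization sketch (again illustrative; the field names are our assumptions) simply detaches the direct identifiers and keeps no mapping at all:

```python
# Hypothetical raw records held by the data owner.
raw_records = [
    {"name": "John Smith", "age": 34, "city": "Pune", "last_purchase": "running shoes"},
    {"name": "Jane Doe", "age": 41, "city": "Madrid", "last_purchase": "yoga mat"},
]

DIRECT_IDENTIFIERS = {"name"}  # columns deleted outright, with no reverse mapping kept

def anonymize(record: dict) -> dict:
    """Drop direct identifiers; no key or lookup table is retained anywhere."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

anonymous_records = [anonymize(r) for r in raw_records]
print(anonymous_records)  # traits survive, but "who" each row belongs to does not
```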

Personalization

Personalization refers to displaying relevant content to customers based on their behavioral traits and demographic parameters. The personalization spectrum is broad and ranges from zero personalization, through segment-based personalization, to hyper-personalization (a brief sketch of all three follows the list below).
• Zero – “one size fits all”
• Segment-based – “small, medium & large”
• Hyper-personalization – “custom fit for every user”
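
To make the spectrum concrete, here is an assumed sketch; the banners, segments, and user traits are invented for illustration only:

```python
# Illustrative only: three levels of the personalization spectrum.
GENERIC_BANNER = "New season collection - shop now"  # zero: one size fits all

SEGMENT_BANNERS = {  # segment-based: "small, medium & large" style buckets
    "runners": "Top-rated running gear for you",
    "yogis": "Fresh yoga essentials this week",
    "default": GENERIC_BANNER,
}

def zero_personalization(_user: dict) -> str:
    return GENERIC_BANNER

def segment_personalization(user: dict) -> str:
    return SEGMENT_BANNERS.get(user.get("segment", "default"), GENERIC_BANNER)

def hyper_personalization(user: dict) -> str:
    # Custom fit for every user: built from that user's own traits and behavior.
    return f"Since you bought a {user['last_purchase']}, here are accessories picked for you"

user = {"segment": "runners", "last_purchase": "running shoe"}
print(zero_personalization(user))
print(segment_personalization(user))
print(hyper_personalization(user))
```

Note that only the hyper-personalized message needs traits tied to an individual user; the other two levels work on shared or segment-level content.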

These are pre-existing concepts that are now being re-applied in the digital space to increase relevance, comfort, and satisfaction for customers.

Compatibility

Now it is time to combine the two takeaways and see how personalization can be made compatible with privacy. The table below summarizes how, with the appropriate policy, one can achieve the desired personalization without compromising the user's privacy.

Policy recommendation    Hyper-personalization    Segment-level personalization
Pseudonymization         ✓                        ✓*
Anonymization            ✗                        ✓

*Possible but not recommended by default, unless justified.
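
For teams that want this rule baked into their tooling, the table above can be encoded directly; the function and label names below are our own illustration, not a standard API:

```python
# Illustrative encoding of the policy table above (not an official rule set).
# "pseudonymized" and "anonymized" describe how the data set was treated.
POLICY = {
    "pseudonymized": {
        "hyper_personalization": "allowed",
        "segment_personalization": "allowed, but only where justified",  # the starred cell
    },
    "anonymized": {
        "hyper_personalization": "not allowed",
        "segment_personalization": "allowed",
    },
}

def check_policy(data_treatment: str, personalization_level: str) -> str:
    """Return the recommendation from the table for a given combination."""
    return POLICY[data_treatment][personalization_level]

print(check_policy("anonymized", "hyper_personalization"))     # not allowed
print(check_policy("pseudonymized", "hyper_personalization"))  # allowed
```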

Segment-based personalization requires data rolled up by different parameters; hence, privacy concerns and PII protection are simpler to address. In hyper-personalization, the customization is unique for every user, and the security concerns are therefore higher. The implementation timeline is also longer and more complicated for hyper-personalization than for segment-based personalization.
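
To illustrate what “rolled up” means in practice (the event log below is invented), individual interactions can be collapsed into segment-level counts, so the shared view never needs row-level identities:

```python
from collections import Counter, defaultdict

# Hypothetical anonymized event log: one row per interaction, no identifiers kept.
events = [
    {"segment": "runners", "action": "viewed", "item": "trail shoes"},
    {"segment": "runners", "action": "bought", "item": "trail shoes"},
    {"segment": "yogis", "action": "viewed", "item": "yoga mat"},
    {"segment": "yogis", "action": "viewed", "item": "yoga block"},
]

# Roll events up to the segment level: counts per segment and action.
rolled_up = defaultdict(Counter)
for event in events:
    rolled_up[event["segment"]][event["action"]] += 1

print(dict(rolled_up))  # e.g. {'runners': Counter({'viewed': 1, 'bought': 1}), ...}
```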

It is worth noting that there are different techniques for anonymizing personally identifiable information, and not all of them preserve the information needed for personalization. Hence, make sure you have the right capabilities in place to deliver the customer experience you want.
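
As a final illustrative contrast (again with invented fields), suppressing a trait outright protects privacy but destroys the segment signal, whereas generalizing it into a band protects privacy while keeping the data usable for segment-based personalization:

```python
record = {"age": 34, "city": "Pune", "last_purchase": "running shoes"}

def suppress_age(r: dict) -> dict:
    """Delete the trait entirely: private, but useless for age-based segments."""
    return {k: v for k, v in r.items() if k != "age"}

def generalize_age(r: dict) -> dict:
    """Replace the exact age with a band: still private, still usable for segments."""
    band = f"{(r['age'] // 10) * 10}-{(r['age'] // 10) * 10 + 9}"
    return {**r, "age": band}

print(suppress_age(record))    # {'city': 'Pune', 'last_purchase': 'running shoes'}
print(generalize_age(record))  # {'age': '30-39', 'city': 'Pune', ...}
```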

Authors

Ritesh Thakur
Principal Consultant

Sagar Shah
Client Partner, Fractal Dimension