The availability of vast amounts of personal data on the internet, combined with the ability to find correlations and create links within this information, allows elements of an individual’s personality, behaviour, interests and habits to be determined, analysed and predicted.
This exponential surge in data analytics, machine learning and artificial intelligence is facilitating the creation of profiles and automated decision-making. The automated processing of personal data is increasingly applied in banking, healthcare, insurance services, marketing and across other sectors, with a view to achieving business efficiencies and improving customer service.
A major issue arising from this trend is the ability to make automated decisions affecting data subjects that entail a high degree of privacy risk. Such profiling may result in discrimination, for instance by denying individuals opportunities or access to employment or credit, or may perpetuate existing stereotypes and social segregation. It can also box a person into a specific category and restrict them to their “suggested preferences”, undermining their freedom of choice.
The General Data Protection Regulation (the “GDPR”) specifically regulates profiling and automated individual decision-making and addresses the risks arising from them. The Article 29 Working Party (“WP29”) Guidelines on Automated individual decision-making and Profiling (the “Guidelines”) give guidance on how the GDPR’s provisions should be interpreted.
Profiling
Profiling refers to the automated processing of personal data to evaluate personal aspects relating to a natural person and in particular to analyse or predict aspects concerning their performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.
The Guidelines identify three ways in which profiling may be used:
(i) general profiling;
(ii) decision-making based on profiling (which may incorporate an element of automated means); and
(iii) solely automated decision-making.
Whether the classification of individuals amounts to profiling will depend on the purpose of the classification. For example, a classification carried out for purely statistical purposes, without drawing conclusions about an individual’s behaviour, would not be characterised as profiling.
The Guidelines also give the following example, sketched in code after the list, to distinguish between (ii) and (iii) above. Where an individual applies for a loan online, either:
(a) a human decides whether to agree to the loan based on a profile produced by purely automated means (this would fall under (ii) above); or
(b) an algorithm decides whether the loan is agreed and the decision is automatically delivered to the individual, without any prior and meaningful assessment by a human (this would fall under (iii) above).
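To make the distinction concrete, the following minimal Python sketch contrasts the two scenarios. The names, the scoring function and the threshold are hypothetical illustrations, not anything prescribed by the GDPR or the Guidelines.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LoanApplication:
    applicant_id: str
    requested_amount: float

def automated_profile_score(app: LoanApplication) -> float:
    # Hypothetical stand-in for a "profile produced by purely
    # automated means"; a real model would score creditworthiness.
    return 0.72

# Scenario (a): decision-making based on profiling (category (ii)).
# A human makes the final decision, informed by the automated profile.
# Per the Guidelines, the human involvement must be meaningful, not a
# token gesture, for the decision to fall outside Article 22.
def human_reviewed_decision(
    app: LoanApplication,
    reviewer_approves: Callable[[LoanApplication, float], bool],
) -> bool:
    score = automated_profile_score(app)
    return reviewer_approves(app, score)

# Scenario (b): solely automated decision-making (category (iii)).
# The algorithm decides and the outcome is delivered automatically,
# so the Article 22 regime applies.
def solely_automated_decision(app: LoanApplication) -> bool:
    return automated_profile_score(app) >= 0.7
```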
Automated decision-making
Automated decision-making may partially overlap with or result from profiling, but it can also be entirely detached from profiling (and vice versa). Controllers can carry out profiling and automated decision-making provided they can meet all the GDPR principles and have a lawful basis for the processing.
Of pivotal importance to this discussion is Article 22 of the GDPR, relating to decision-making which is solely automated. Article 22 sets out a general prohibition on fully automated individual decision-making, including profiling, that has a legal or similarly significant effect.
The WP29 Guidelines explain that, for data processing to significantly affect someone, the impact of the processing must be sufficiently great or important to be worthy of attention. The Guidelines specifically note that the decision must have the potential to significantly affect the circumstances, behaviour or choices of the individuals concerned; have a prolonged or permanent impact on the data subject; or, at its most extreme, lead to the exclusion or discrimination of individuals.
There are exceptions to the Article 22 prohibition which, when applicable, are subject to measures being in place that safeguard the data subject’s rights and freedoms and legitimate interests. As such, processing data through solely automated decision-making should not be undertaken, except where the decision is (as illustrated in the sketch following this list):
(i) necessary for entering into, or the performance of, a contract between the data subject and a data controller;
(ii) authorised by Union or Member State law to which the controller is subject and which sets out suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
(iii) based on the data subject’s explicit consent.
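By way of illustration only, a controller’s system might encode this gate roughly as in the sketch below. The enum and function names are hypothetical, and the boolean check is a simplification of what is ultimately a legal assessment.

```python
from enum import Enum, auto
from typing import Optional

class Article22Basis(Enum):
    CONTRACT_NECESSITY = auto()   # Article 22(2)(a)
    AUTHORISED_BY_LAW = auto()    # Article 22(2)(b)
    EXPLICIT_CONSENT = auto()     # Article 22(2)(c)

def may_run_solely_automated_decision(
    basis: Optional[Article22Basis],
    safeguards_in_place: bool,
) -> bool:
    # No exception identified: the general prohibition applies.
    if basis is None:
        return False
    # Even where an exception applies, suitable measures safeguarding
    # the data subject's rights, freedoms and legitimate interests
    # must be in place.
    return safeguards_in_place
```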
Special Categories of Data
Under the GDPR, processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation is prohibited, unless one of the conditions in Article 9(2) of the GDPR is satisfied as well as a condition from Article 6. This includes ‘special category’ data derived or inferred from profiling activity.
Crucially, as highlighted in the Guidelines, profiling can create special category data by inference from data which is not in itself special category data but becomes special category data when it is combined with other data. For example, it may be possible to infer someone’s state of health from the records of their food shopping combined with data on the quality and energy content of foods.
Correlations may be discovered that indicate something about individuals’ health, political convictions, religious beliefs or sexual orientation, as the food shopping example above demonstrates. If sensitive preferences and characteristics are inferred from profiling, a controller should make sure that (a simple checklist sketch follows the list):
(i) the processing is not incompatible with the original purpose;
(ii) they have identified a lawful basis for the processing of the special category data; and
(iii) they inform the data subject about the processing.
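A hypothetical internal checklist mirroring these three conditions might look like the following sketch; the class and field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class InferredSpecialCategoryCheck:
    # Condition (i): compatibility with the original purpose.
    compatible_with_original_purpose: bool
    # Condition (ii): an Article 9(2) lawful basis has been identified.
    lawful_basis_identified: bool
    # Condition (iii): the data subject has been informed.
    data_subject_informed: bool

    def processing_may_proceed(self) -> bool:
        return (self.compatible_with_original_purpose
                and self.lawful_basis_identified
                and self.data_subject_informed)
```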
Automated decision-making based on special categories of data warrants extra safeguards: it is permitted only where one of the Article 22 exceptions applies and, in addition, point (a) (explicit consent) or point (g) (substantial public interest) of Article 9(2) applies. Furthermore, the controller must also put in place suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests.
Data subjects’ rights
The GDPR sets out a data subject’s rights, which are actionable against the controller engaged in profiling, as well as the controller making an automated decision regarding a data subject (with or without human intervention). Amongst other rights, a data subject has the right to object to the processing of his or her personal data for the purposes of direct marketing, including profiling to the extent that it is related to such direct marketing.
The GDPR provides that the data subject be informed of the use of automated decision-making and be given meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing. Data subjects also have the right to access details of any personal data used for profiling, including the categories of data used to construct a profile.
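As an illustration of how the direct-marketing objection might be operationalised, the sketch below flags a data subject so that related profiling stops. The storage class and its methods are hypothetical, not a prescribed mechanism.

```python
class MarketingProfileStore:
    """Hypothetical persistence layer for marketing profiles."""

    def __init__(self) -> None:
        self.suppressed: set[str] = set()
        self.profiles: dict[str, dict] = {}

    def handle_marketing_objection(self, subject_id: str) -> None:
        # Once the data subject objects, their data must no longer be
        # processed for direct marketing, including any profiling
        # related to such marketing.
        self.suppressed.add(subject_id)
        self.profiles.pop(subject_id, None)

    def may_profile_for_marketing(self, subject_id: str) -> bool:
        return subject_id not in self.suppressed
```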
Compliance
Profiling carries significant privacy risks, and businesses should conduct a thorough assessment and implement appropriate compliance processes before launching any such activity.
Conducting a data protection impact assessment before undertaking any processing that, by virtue of its nature, scope or purposes, is likely to result in a high risk to the rights and freedoms of individuals is itself a requirement under the GDPR (Article 35).
Businesses should also ensure they fully understand the GDPR’s definitions of profiling and automated decision-making. Drawing on these definitions and the WP29 Guidelines, a business should identify all instances where it employs automated decision-making, including profiling, that produces legal effects concerning a data subject or similarly significantly affects a data subject.
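One hypothetical way to keep such an inventory is a processing-activity register that flags Article 22 relevance, as in this sketch; the structure and example entries are illustrative, not drawn from the Guidelines.

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    name: str
    uses_profiling: bool
    solely_automated: bool
    legal_or_similarly_significant_effect: bool

    def engages_article_22(self) -> bool:
        # Article 22 is engaged by solely automated decisions with a
        # legal or similarly significant effect on the data subject.
        return (self.solely_automated
                and self.legal_or_similarly_significant_effect)

register = [
    ProcessingActivity("online credit scoring", True, True, True),
    ProcessingActivity("marketing segmentation", True, False, False),
]
in_scope = [a.name for a in register if a.engages_article_22()]
```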
Businesses should establish safeguards to protect the data subject’s fundamental rights when using profiling or automated decisions and put in place procedures enabling data subjects to exercise their rights.