

Data Anonymisation: Considerations for the ICO's revised guidance

Practice Notes

By Marc Dautlich and William Hewitt

Marc Dautlich and William Hewitt assess the requirements needed for data privacy during the ICO's upcoming revision of its guidance

While data anonymisation is an effective tool for businesses looking to retain datasets for secondary purposes, it should not be seen as a ‘silver bullet’ or a simple way to take data outside the scope of the General Data Protection Regulation (GDPR).

Unlike personal data, anonymised data does not relate to an identified or identifiable natural person. Therefore, the processing of anonymised data takes place outside the scope of the GDPR.

It is important to understand that the legal test for whether data is considered anonymous under the GDPR is not absolute. Organisations are not required to eliminate every possible risk of re-identification in order to conclude that a dataset is anonymous. Instead, the GDPR takes a risk-based approach: Recital 26 states that account should be taken of “all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments” before concluding whether a dataset is anonymous.

It is also a common misconception that the process of anonymisation is a permanent one. For the majority of datasets, there will always be some probability that the data can be re-identified. As re-identification techniques become more sophisticated over time, the level of identifiability is liable to change.

Unlike anonymous data, pseudonymous data is still considered personal data under the GDPR. However, it is important to note that the boundaries between these categories can be blurred and that identifiability should be seen as a spectrum.

Whether anonymisation as opposed to pseudonymisation is operationally or commercially appropriate for an organisation will depend on the context and purpose of the processing. For example, generalising the date of a hospital visit to a year, such as recording just “2020” instead of “4 July 2020”, will be acceptable for some purposes, such as calculating how many patients were seen in a year, but will render the data useless for others, such as identifying which months are busiest. Anonymisation typically involves a compromise: while it reduces the burden of compliance, it will often not satisfy considerations of utility and usability.
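By way of illustration only, the short sketch below (in Python, using invented example dates rather than any real dataset) shows the trade-off described above: once a visit date is generalised to its year, per-year counts remain possible but the month-level detail needed to identify busy periods is lost.

```python
from datetime import date

def generalise_to_year(visit_date: date) -> str:
    """Generalise a full visit date to its year only, e.g. 2020-07-04 -> '2020'."""
    return str(visit_date.year)

# Hypothetical visit dates, for illustration only.
visits = [date(2020, 7, 4), date(2020, 11, 12), date(2021, 3, 1)]
generalised = [generalise_to_year(v) for v in visits]

# The generalised values still support per-year counts...
per_year = {year: generalised.count(year) for year in set(generalised)}
print(per_year)  # e.g. {'2020': 2, '2021': 1}

# ...but month-level questions ("which months are busiest?") can no longer be
# answered, because the month was discarded in the generalisation step.
```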

ICO revising its legal guidance on anonymisation

The Information Commissioner's Office (ICO) has recognised that questions of whether data should be considered anonymous are challenging, and more important to organisations now than ever before. The ICO is currently reviewing proposed changes to its previous legal guidance, the Anonymisation Code of Practice, published in 2012, in order to reflect changes in the law and to provide organisations with increased clarity. The European Data Protection Board (EDPB) has also included preparing revised guidelines on anonymisation and pseudonymisation in its 2021/2022 Work Programme.

We believe there are two major challenges in particular that need to be addressed in the ICO's ongoing review of the 2012 guidance.

The role of environmental controls

If an organisation holds an anonymous dataset alongside the information required to identify the subjects of that data, is it reasonably likely that an individual can be identified, or should the “environmental controls” (the contractual, technical and organisational controls implemented by the organisation) also be considered?

There are a number of issues the ICO will need to consider in addressing this question in its updated guidance. The ICO 2012 guidance notes that re-identification risk “will vary according to the local data environment and particularly who has access to information” and that, therefore, factors other than those relating to the data itself should be considered. However, the guidance provides few examples of what controls should actually be taken into account.

The UK Anonymisation Network (UKAN), established through ICO funding to advance anonymisation best practice, recommends that the “data situation”, meaning the “aggregate set of relationships between some data and the set of their environments”, be considered in assessing re-identification risk under the GDPR.

In contrast, the EU Article 29 Working Party's Opinion 05/2014 (the “A29WP Opinion”) does not provide any specific guidance on the relevance of environmental controls. While it notes that “importance should be attached to contextual elements”, the A29WP Opinion focusses on the elements required to re-identify the dataset and omits useful commentary on the importance of environmental controls more generally.

The case of Vidal-Hall provides limited additional guidance on which environmental controls would be considered effective to achieve anonymisation. The court concluded that simply segregating the information required for re-identification was not enough, even where the organisation had no intention of recombining that information with the data. While the case was decided under the Data Protection Act 1998, the same conclusion would likely be reached under the GDPR.

However, this leaves open the question of what environmental controls would be considered sufficient to achieve anonymisation. Factors we believe the upcoming guidance should consider include:

- Human involvement: whether access to the datasets is restricted to nominated individuals.

- Technical measures: what security controls have been implemented.

- Organisational measures: any information security policies in place.

- Contractual measures: confidentiality obligations on employees or limitations on transfers within a group of companies.

Sharing an anonymous extract of a dataset containing personal data?

If an organisation anonymises an extract of a dataset and shares the anonymous extract with a third party, can the extract be considered anonymous where the disclosing organisation retains the ability to re-identify the data subjects to whom the data relates?

This question, which is of great interest to practitioners in a wide range of sectors, including healthcare, financial services and research projects of all kinds, is the subject of the next chapter of the updated ICO guidance. The guidance will need, among other things, to wrestle with apparently diverging UK and EU positions.

The ICO 2012 guidance, following UK case law on the point, allows organisations to assess identifiability from the data recipient’s position. The ICO therefore states “where an organisation converts personal data into an anonymised form and discloses it, this will not amount to a disclosure of personal data. This is the case even though the organisation disclosing the data still holds the other data that would allow re-identification to take place.”

Conversely, the A29WP Opinion states “…it is critical to understand when a data controller does not delete the original (identifiable) data at event-level, and the data controller hands over part of this dataset (for example after removal or masking of identifiable data), the resulting dataset is still personal data.”

While CJEU case law has cast doubt on whether the A29WP Opinion remains correct under the GDPR, that interpretation has not been expressly rejected. It is to be hoped that the ICO's forthcoming guidance will at least clarify the UK supervisory authority's approach to this difficult area, even though it cannot unilaterally reconcile the UK and EU approaches.

Marc Dautlich is a data protection expert and partner at Bristows. Will Hewitt is an associate at Bristows. bristows.com