When is Privacy Impact Assessment (PIA) or Data Protection Impact Assessment (DPIA) necessary and mandatory according to the General Data Protection Regulation (GDPR)? So far there has been a lot of ambiguity surrounding the issue.

I previously wrote about the DPIA guidelines (and their challenges) suggested by the Privacy Commission of Belgium. The details are now made even clearer by Working Party 29’s publication of the “Draft Guidelines on Data Protection Impact Assessment (DPIA)”.

This publication will be very impactful, as it will influence how Data Protection Authorities (such as the ICO or CNIL) will be “deploying” DPIA guidelines in practice.

First, the important notes in a “TL;DR” format:

  • The document is very interesting, but it is still “guidelines” - it is subject to change, and it will change - for sure!
  • The document is in certain parts worrying, and may result in watering down the concept of PIA. It is not clear why Working Party 29 has followed this path. The criteria for a basic PIA are pretty weak and actually somewhat inconsistent with other parts of the document
  • The document provides interesting insight into where a DPIA is required and where it might not be necessary. It’s difficult to say when a DPIA is not needed - and someone must make the decision
  • Virtually all processing activities will require a “small DPIA”, executed to assess whether an actual DPIA is necessary (a “Data Protection Threshold Assessment”, as opposed to a “Privacy Threshold Analysis”)
  • The DPIA may need to be made available publicly
  • Organizations are free to choose the most adequate DPIA methodology - there are some to choose from
  • The document creates a risk that “the checklist method” will be accepted as valid basic DPIAs
  • Brexit requires conducting DPIAs?

Okay, the last point is remarkable, so I will expand on it immediately.

Brexit increases regulatory uncertainty, WP29 says

The opinion of WP29 is the first official EU-level document highlighting the difficulties created by the United Kingdom’s departure from the European Union, also known as Brexit. The document suggests - pretty explicitly, literally - that companies based in the UK will most likely need to perform a DPIA as standard, and fast.

I agree. Perhaps the same comment should apply in the case of European Union companies dealing with those in the UK?

Furthermore, these uncertainties are elevated even further by the ePrivacy Regulation currently in the works.

Now to the actual analysis! (you can find the PDF of the Guidelines here)

First things first - there is a lot of ambiguity about the differences between PIA and DPIA. But the WP29 notes that a DPIA is essentially equivalent to the concept of PIA.

Analysis of WP29’s first DPIA guidelines

The document starts with a very important phrase: “DPIA is a process designed to describe the processing, assess the necessity and proportionality of a processing and to help manage the risks to the rights and freedoms of natural persons resulting from the processing of personal data“

So first:

  • DPIA is a process (not a check-list or product)
  • DPIA is an accountability tool: it can show compliance, but should also be helpful in decision-making during the design phase. A good DPIA is not only about compliance!
  • DPIA shows that appropriate measures are followed, and appropriate solutions are in place

In essence, a DPIA provides an answer to a potential future question an organisation or a company might be asked at some point. For example - when a mass privacy data leak occurs, someone might be compelled to ask some questions (with a possible risk of paying fines up to 10,000,000 EUR). WP29 says explicitly that DPIA is a process helping to build and demonstrate compliance. Reading “build” literally: it’s not just “demonstrate”. It’s not a check-list. The process must be conducted in a thorough and complete manner.

The WP29 guidelines suggest that DPIAs will be able to link to other DPIAs: “the data controller deploying the product remains obliged to carry out its own DPIA with regard to the specific implementation, but this can be informed by a DPIA prepared by the product provider, if appropriate“.
So when an organisation deploys a product, it does not need to go into the internals of the product - it can focus only on the actual deployment and configuration aspects, i.e. the ways the system will actually be used in practice. This point of the opinion includes an example of smart meter manufacturers providing a DPIA to utility companies. In those cases, utility companies would use the DPIAs from the smart meter manufacturers to inform their own DPIAs.

When a DPIA is necessary

In short, when the processing is “likely to result in a high risk to the rights and freedoms of natural persons”.
The focus here is on the right to privacy, but also on: freedom of speech, freedom of thought, freedom of movement, the prohibition of discrimination, and the rights to liberty, conscience and religion. Namely, WP29 says that a PIA/DPIA is not merely measuring the impact on privacy, but also on other fundamental rights, as laid out in the Charter of Fundamental Rights and in the recent influential judgments by the ECJ - specifically on data retention and the protection of identifiers.

WP29 is attempting to distill some general rules; let’s continue.

When analysing the guidelines from the Belgian Privacy Commission, I have indicated that it’s a challenge to identify places where a DPIA is not necessary (this is a risk in itself).

Examples of operations subject to a DPIA

It’s difficult to provide such a list. WP29 also had this difficulty. So WP29 proceeded by identifying “ten points of interest” and deciding that if at least two of the points apply, then a DPIA is mandatory. The points of interest are below (I’m taking them directly from the WP29 guidelines):

  • Profiling is in use. Example: a bank that screens its customers against a credit reference database, a biotechnology company offering direct-to-consumer genetic testing to assess and predict disease/health risks, or building behavioural or marketing profiles based on navigation on websites.
  • Automated decision-making with legal or similarly significant effect. Example: when processing leads to the potential exclusion or discrimination of individuals. Processing with little or no effect on individuals does not match this specific criterion.
  • Systematic surveillance. Processing used to observe, monitor or control data subjects.
  • Sensitive data. Including GDPR Article 9, such as: information about individuals’ political opinions, as well as personal data relating to criminal convictions or offences. Examples: a general hospital keeping patients’ medical records. Other sensitive private data of note: electronic communication data [ePrivacy regulation is also taking care of these], location data, financial data.
  • Large scale data processing. WP29 is not sure what “large-scale” means, but suggests the following criteria: the number of data subjects concerned, the volume of data and/or the range of different data items being processed [is it high-dimensional data? This would mean that “Big Data” processing always falls into a DPIA category], the duration of the processing, the geographical extent [this WP29 guideline is very ambiguous]
  • Linked databases - in other words, data aggregation. Example: two datasets merged in one, that could “exceed the reasonable expectations of the user”. This point specifically touches data merging and also possibly advertising technologies such as cross-device tracking, or cookie syncing, even.
  • Data concerning vulnerable data subjects, especially when power imbalances arise, e.g. employee-employer, where consent may be vague; data of children, the mentally ill, asylum seekers, the elderly, patients. This point refers to the broad sense of “power imbalance”.
  • New technologies in use. WP29 explicitly mentions the “Internet of Things” as in scope for a DPIA.
  • Data transfer outside of the EU
  • Unavoidable and unexpected processing. For example, processing performed in a public area that people passing by cannot avoid. My example: wifi tracking [ePrivacy is also regulating that point]. WP29 example: processing that aims at allowing, modifying or refusing data subjects’ access to a service or entry into a contract. Example: a bank screening its customers against a credit reference database in order to decide whether to offer them a loan.

The more of the points above that are “met”, the higher the chances that a DPIA is needed. WP29 says that if fewer than two points are met, the system potentially may not need a DPIA - unless it does need one after all, of course. But we at least know that if at least two of the points above are met - a DPIA is always needed. The decision “Do/Don’t” should always be documented, with appropriate reasons.

Simple decision ruleset:

  • Two or more points are met: needs a DPIA
  • Less than two points are met: maybe does not need a DPIA?

Very helpful, indeed.
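As a side exercise, the decision ruleset can be sketched in code. This is purely illustrative; the criterion names below are my own shorthand for the ten WP29 points, not official wording:

```python
# Illustrative sketch of the WP29 "two or more points" rule.
# The criterion labels are my own shorthand, not the guidelines' wording.
CRITERIA = {
    "profiling",
    "automated_decision_making",
    "systematic_surveillance",
    "sensitive_data",
    "large_scale",
    "linked_databases",
    "vulnerable_subjects",
    "new_technologies",
    "transfer_outside_eu",
    "unavoidable_processing",
}

def dpia_required(points_met: set) -> bool:
    """Two or more points met -> a DPIA is mandatory.
    Fewer than two -> maybe not; either way, document the decision."""
    unknown = points_met - CRITERIA
    if unknown:
        raise ValueError("Unknown criteria: %s" % unknown)
    return len(points_met) >= 2

# Example: a bank screening customers against a credit reference
# database both profiles them and gates access to a service.
print(dpia_required({"profiling", "unavoidable_processing"}))  # True
print(dpia_required({"profiling"}))  # False - but write down why!
```

The "False" branch is exactly where the guidelines leave the hard judgment call to the organisation, which is the point of the irony above.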

Example 1

As a side note - do Points 1 and 3 sometimes overlap? For example, “building behavioural or marketing profiles based on navigation on websites” and processing used to observe, monitor or control data subjects, including data collected through “a systematic monitoring of a publicly accessible area”. Are these equivalent? If so, are two points from the list of 10 met, or not?

Example 2

Points 1 and 2. Do they overlap? “Building behavioural or marketing profiles based on navigation on websites” and processing used to observe, monitor or control data subjects may sometimes be functionally and effectively equivalent. This means that there would be a risk of assigning this functionality to just a single point, and under-measuring the risks.

In point 10, WP29 explicitly says that processing in a public area that people passing by cannot avoid is subject to a DPIA. This is very interesting in light of one of the recitals in the currently proposed text of the ePrivacy Regulation. This ePrivacy draft text specifies that this kind of processing can take place with a “passive consent”, even if the users cannot avoid it.
One of the recent examples of this kind of processing was the Transport for London experiment in wifi tracking, where the “consent mechanism” was just a sign that Tube users could see.

The guidelines then present a table with examples clarifying “the threshold” - whether the data processing fulfils the requirements that make it subject to a mandatory DPIA.

For example:

  • “The gathering of public social media profiles data to be used by private companies generating profiles for contact directories” - requires a DPIA.
  • “An e-commerce website displaying adverts for vintage car parts only doing some limited profiling based on past purchases on certain parts of its website” - does not require a DPIA.

The second example is interesting. It’s the application of profiling on a very limited and strictly defined scale (purpose limitation strictly defined). It also clearly states that the profiling is done by a first party, the e-commerce website, based on information about the user’s activity on that particular site.

This means that all uses of third-party processing may likely be subject to a DPIA.

The actual processing activities subject to a DPIA will be determined by the DPAs of EU Member States. So for the UK it’s the ICO, for France it’s CNIL.

The list(s) above are intended to guide the national DPAs. The Privacy Commission of Belgium has already suggested some (link) - in a number of points stricter than those of WP29.

Small DPIA

WP29 acknowledges that according to GDPR, even if DPIA is not necessary, the data controller still must perform certain tasks, specifically “maintain a record of processing activities under its responsibility”.

Such actions - let’s call them a “small DPIA” or a “DPIA Threshold Assessment” - need to be documented, and performed regardless of whether a full-scale DPIA is made.

This decision - with an explanation - must be written down, even if an organization decides not to carry out a full-scale DPIA.

One observation is that if an organisation is not sure whether it needs to conduct a DPIA, it might be a good idea to do one anyway. It is also “strongly recommended”.

When is a DPIA not required?

There are specific points answering this question. Specifically:

  • the processing is not “likely to result in a high risk to the rights and freedoms of natural persons” (Article 35(1));
  • when a DPIA has already been conducted;
  • when a DPIA has been conducted in a case of “very similar processing” - in this case, the already-conducted DPIA might be re-purposed for the “new processing”, provided that the “very similar” condition is met. Someone should, of course, assess the DPIA similarity - during the threshold assessment;
  • when there is another legal basis for the data processing (for example due to EU or Member State law), and the DPIA “has already been carried out as part of the establishment of that legal basis”;
  • when the processing is on the list of operations for which a DPIA is not required. That only applies if the processing falls strictly within the items on the “No DPIA List”.

WP29 notes that DPIAs are subject to regular reviews. A review should be carried out whenever the operation (or the surrounding legal environment) changes. In this light, not reviewing DPIAs is directly a breach of the DPIA process. Reviews should be made at least every three years - and often sooner.
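To make the review cadence concrete, here is a minimal sketch. Only the three-year interval and the change trigger come from the guidelines; the function, names and dates are my own illustration:

```python
from datetime import date, timedelta

# "Reviews should be made at least every three years - often, sooner."
REVIEW_INTERVAL = timedelta(days=3 * 365)

def next_review_due(last_review: date, context_changed: bool = False) -> date:
    """A change in the operation or its legal environment triggers an
    immediate review; otherwise the three-year clock applies."""
    if context_changed:
        return date.today()  # review now, don't wait out the clock
    return last_review + REVIEW_INTERVAL

# A DPIA completed on GDPR Day (25 May 2018) would be due for review
# within roughly three years - unless the environment changes earlier.
print(next_review_due(date(2018, 5, 25)))
```

The `context_changed` flag is where Brexit-style environmental shifts, discussed below in this post, would reset the schedule.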

Existing operation

There are a lot of questions about whether DPIAs are needed only for “new systems”. WP29 clarifies that all processing operations (“new systems”) deployed after May 2018 are subject to the requirement. When already-deployed systems are subject to a significant change or update (“significant change”), they also fall under this requirement.

Same applies if there is a change to the “risk”. As we know from the risk-based approach, risk may also be subject to change due to environmental aspects - the surrounding ecosystem. So the changes might not only be the ones related strictly to the system itself.

Therefore it should be noted that the revision of a DPIA is not only useful for continuous improvement, but also critical to maintaining the level of data protection in a changing environment over a longer time.

Additionally, WP29 strongly recommends performing a DPIA for processings/projects undertaken prior to the GDPR Day (May 2018). There are regulatory clauses that could justify the verification of the systems deployed even prior to May 2018. And for large and significant systems, potentially even a day after the GDPR Day someone could argue that the “environment has changed” and ask an organization for a DPIA. Then what?

Brexit Clause - Increased Uncertainty

WP29 is explicitly referencing the process of leaving the European Union by the United Kingdom:

“Finally a DPIA may also become necessary because the organisational or societal context for the processing activity has changed, for example because the effects of certain automated decisions have become more significant, new categories of natural persons become vulnerable to discrimination or the data is intended to be transferred to data recipients located in a country which has left the EU.“

Brexit is a good enough reason to perform a DPIA. That is fair, as it’s currently unknown what the situation will look like over the next two years. It sounds like just about any business based in the United Kingdom, or one that is related to the UK, will be very interested in conducting a DPIA, and doing it quickly. National DPAs may have good justification for enforcing the “DPIA Brexit Clause” fast, even a single day after the GDPR Day.

Oddly, it’s now more than a week since the WP29 Guidelines were published, but so far apparently nobody has noted this pretty significant thing?

How to conduct a DPIA

WP29 guidelines provide hints about the actual carrying-out of the DPIA. In other words: how to conduct a data protection impact assessment.

When to start conducting a DPIA?

Before the processing starts, in line with privacy by design. The DPIA should be included in the actual development process, at its beginning - even if some details of the processing are not yet known. At this stage, a DPIA may obviously not be fully completed. In my opinion, a DPIA should be updated during development, and the recommendations made by the DPIA during the design and development stage should be incorporated. DPIA is a process, not a product; so is privacy, as well as security.

In other words, you cannot deploy a new high-risk system without a DPIA - it’s already a breach of the DPIA process.

DPIA is a process

I agree with this remark: “Doing a DPIA is a process, not a one-off exercise“. DPIA is an ongoing process. The DPIA should be adequately tailored and well-designed for a task; it’s a custom-made process.

Oh, if I haven’t said that yet - privacy is a process, too.

Who should conduct a DPIA (technically)

The WP29 makes the matter clear:
The controller is responsible for carrying out the DPIA (Article 35(2)). Carrying out the DPIA may be done by someone else, inside or outside the organization

Organisations are accountable for conducting DPIAs - they can do them in-house or use people from outside the organisation, such as consultants (think in terms of externally hired “penetration testers”).

If the organisation has assigned a Data Protection Officer (DPO), the DPO should be consulted. The DPO, if assigned, is the person who monitors the DPIA process. The DPO is not the person obliged to conduct the actual DPIA.

Public consultations

The data controller may sometimes need to consult external stakeholders, for example users, employee representatives, etc.

The persons involved

Privacy Teams

WP29 acknowledges that in-house business units (“Privacy Teams”?) of an organisation may be tasked with the responsibility of conducting a DPIA. In this case, they might either do the DPIA or provide input for it, depending on who is actually conducting the DPIA. But the specific team must then be involved in the validation process.

Independent Advice

Finally, “where appropriate”, the WP29 recommends seeking the independent advice of external experts, such as lawyers, technology experts, security experts, sociologists, ethics experts, etc. This point is very important, unprecedented and - let's note this - it decreases the risk of using a DPIA in a checklist manner.

Independent experts will be able to assess whether the project assumptions, the risks, or the DPIA itself make sense. This is a major point of the guidelines that should be endorsed as a good standard introducing openness and transparency. This point should also be welcomed by organizations; their DPIAs will be validated by externals who will put their professional credibility at stake.

The DPO may suggest specific processing operations that should be subject to a DPIA, and should be able to identify an appropriate methodology, as well as evaluate the relevance of the risk assessments and whether the proposed solutions are adequate. If appointed, the Chief Information Security Officer (CISO), or the IT department, should assist and could propose to take care of the DPIA on specific operations with regard to security aspects, as needed.

DPIA Methodology

Now the very important part - “how to do the DPIA” in practice.
There are methodologies, there are guidelines. It’s clear that certain fields or technologies are specific enough to warrant dedicated methodologies.

First: “the DPIA under the GDPR is a tool for managing risks to the rights of the data subjects, and thus takes their perspective, whereas risk management in some other fields (e.g. information security) is focused on the organization”.

This is just a long sentence communicating the following: DPIA is focused on privacy. Of course, information security is an integral part, too. But not only.

WP29 acknowledges different methodologies, but reminds that common criteria should be used. It’s acknowledged that organizations are free to choose the best methodologies, provided that the DPIA is a genuine assessment of risks, allowing to address them.

WP29 provides a list of possible methodologies, the upcoming Privacy Impact Assessment ISO/IEC 29134 is also referenced. Organisations are free to use whatever method they wish - provided that common criteria are met.

Sector-specific DPIAs

WP29 makes a good point: some sectors are specific enough (types of data, technologies) to warrant specialized methodologies. The threat models or actual threats might be very specific - and consequently, so are the risks and the impacts. So particular sectors or technologies may benefit from using specialized methodologies.

Should I publish a DPIA?

Yes. WP29 encourages making the DPIA public - at least partially. Since DPIAs are often technical documents, containing internal data about security and/or privacy controls, and other sensitive details - the published DPIAs may be sanitized to omit the sensitive parts. Such a public DPIA may be a summary of the actual DPIA. That’s also the reason that the privacy impact assessments you may find in public rarely contain technical details (at least let’s hope so).

This recommendation is intended to build trust.

My DPIA indicates high-risk. What now?

Analyse your risks. If in doubt, consult your favourite local Data Protection Authority. Attach a DPIA to your communication with the DPA.

Criteria for a DPIA

The document contains an annex - the “criteria for a DPIA”. This annex is meant to provide the basic “requirements for a DPIA”. This is meant to make sure that DPIAs or methodologies to conduct DPIAs are in line with GDPR. The document discusses the things that a DPIA should contain, such as the types of assets or processed data, the organizational ways to deploy the transparency side of GDPR (e.g. purpose limitation, access to data or right to be forgotten), as well as the more technical side such as risks and threats arising from illegitimate access, undesired modification, and disappearance of data.

I acknowledge that the more technical part is quite broad (and a document as such could not discuss the threats in more detail!), and broadly speaking references the Confidentiality-Integrity-Availability triad, known from information security.

It’s good that this kind of basic blueprint is compiled. However, I would like to highlight that this particular wording and selection of items creates a possible risk of letting the idea of a DPIA drift into a checklist-type operation. For example: “Q1: Do you process personal information? A: yes” or “Q23: Do you protect user data? A: Yes, we use encryption”. I’d immediately follow up: oh really? Tell me more.

Another issue is that these criteria do not assess internal and external factors, such as the availability of public data, or possible database merging. It has long been clear that identifiers (in the web realm - even cookies) are personal data, and this particular item is strictly technical. But how the study of identifiers would fall into the “illegitimate access, undesired modification, and disappearance of data” frame of thinking is difficult to assess.

How to address the issue? Include more generic terms or wording, such as “such as”?

Of course this is just an example, and the true differentiator will be the market and the actual suppliers of DPIA services. Some will provide only the most basic assessments (“checklist”); others will go deeper.

Problems with the guidelines:

  • Understanding the DPIA as merely a compliance tool, while a PIA process should be understood as going way beyond that - as good practice, in line with Privacy by Design and built into an organisational cybersecurity and privacy strategy.
  • The WP29 guidelines provide very simple examples. This is dangerous because certain industries (e.g. smartgrids, web services, or IoT devices) are specific enough to require their own specific methodologies. WP29 acknowledges the need for specialized methodologies
  • In some aspects, processing is clearly risky. For example, when genetic data are concerned, a DPIA should always be carried out. The rule “the more checkmarks you place next to the 10-point list” will cause problems. The Annex 2 (“Criteria”) creates a risk that the GDPR DPIA will, in fact, end up being a simple checklist. Was that the intention?
  • The guidelines also do not focus on data leaks or the protection of identifiers; merely focusing on “confidentiality-integrity-availability” misses the point. The guidelines do not even reference the technological practices of “privacy engineering”
  • The guidelines do not really reference ePrivacy and its potential to influence the actual requirements of a DPIA

WP29 will definitely be changing their guidelines.