One of the most discussed, and often framed as controversial, additions of the General Data Protection Regulation (GDPR) is its high fines.

Maximum fines of €10,000,000 (or 2% of annual worldwide turnover) or €20,000,000 (or 4%) are definitely significant. They could cripple an entire company. But can or will the maximum GDPR fines ever be used in practice? Yes, and not only because of recent trends in consumer protection fines. This does not mean maximum fines will be applied all the time; spreading fear and uncertainty here should be avoided. Instead, let’s focus on the mechanics of setting fines.

European Data Protection Authorities (DPAs) aim to keep fines balanced and somewhat comparable across EU Member States (following their guidelines document, which is admittedly very general).

GDPR specifies that all cases must be assessed individually, on a case-by-case basis. The regulation provides some guidance, albeit rather general. Among the points taken into account are the nature, gravity and duration of the breach; whether the breach resulted from intentional or unintentional acts; whether the data controller detected the breach and attempted to remediate it (or not); and so on.

An example? GDPR instructs that the controller’s past behaviour is considered, too. A controller that habitually mishandles user data is, of course, a different story from a one-off event. Investing in good communication with a DPA might be a good idea, then. Indeed, an important factor is whether the controller cooperates with the data protection authority, including the quality of that cooperation.

The case of issuing a fine following a data leak after a cyberattack is often discussed and sometimes seen as controversial. Why would a company be fined for falling victim to a cyberattack? The simplest answer: successful cyberattacks are the result of earlier decisions by the controller, decisions that are deliberate and often well documented.

It’s not as if someone magically breaks into a system. There has to be something to be “bypassed”. If it was bypassed, why and how did that happen? When faced with the question “what did we do to have security and privacy in place?”, how easy is it to find the answer?

That said, nobody should expect to see a “menu” of fines, a fixed list. These things are always flexible and scalable. Even within the example of a cyberattack, falling victim to phishing or SQL injection is in a different league than being targeted by a military intelligence operation using 0-days. I put those two extreme examples together deliberately.
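To illustrate why SQL injection sits at the “easily preventable” end of that spectrum, here is a minimal Python sketch using an in-memory SQLite database (the table and inputs are hypothetical, purely for illustration), contrasting a query built by string concatenation with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: attacker-controlled input is concatenated into the SQL text,
# so the injected OR clause matches every row in the table.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % malicious
).fetchall()

# Safe: the driver binds the value as data, not as SQL; no rows match.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

print(len(leaked), len(safe))  # → 1 0
```

The fix is a one-character change in effort terms, which is exactly why a regulator is unlikely to treat such a breach as an unavoidable act of God.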

To speak in more concrete terms, let’s look at a recent case that provides useful hints as to how GDPR fines might actually work in practice: the 2015 Carphone Warehouse cyberattack, which affected the private data of consumers. The British DPA, the Information Commissioner’s Office (ICO), issued a £400,000 fine. This is nowhere near the GDPR maximums. But what is interesting is the justification!

Context of the attack

The attackers used standard penetration testing tools to identify security vulnerabilities, notably in outdated software such as WordPress. The post-mortem analysis revealed that the WordPress vulnerability was not the actual entry point, because the attackers somehow had valid credentials anyway; we’ll see later why WordPress still matters here. The attacker then moved laterally and installed scripts allowing them to view system contents. Fortunately for the attacker, clear-text passwords were found in one of the files. Consequently, private data stored in databases became available, were accessed, and were likely exfiltrated.
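The clear-text password file was the pivot of the whole attack. As a point of contrast, here is a minimal sketch of salted password hashing using only the Python standard library (the iteration count and function names are illustrative assumptions, not anything from the Carphone Warehouse case):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Store only a random salt and the derived hash, never the password itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
```

With this scheme, a file read by a lateral-moving attacker yields salts and hashes rather than ready-to-use credentials, which is precisely the kind of measure a DPA will ask about.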

In the report accompanying the fine, the ICO concluded that there was no single identifiable cause of the attack. The most reasonable way forward was to take the whole situation into account, with all its details. This analysis was quite technical.

Weak points

These are the identified weak points that influenced the decision to issue the fine, and its level:

  • Outdated systems were present, some six years old, including a never-updated WordPress installation. It did not matter that the WordPress vulnerabilities weren’t exploited; for the ICO this served purely as an indicator. When a system has vulnerable entry points, it signals a potentially poor state of security overall.
  • Overall, inconsistent security patch management in the organisation
  • Lack of organisational measures to detect unauthorised use of credentials; the unauthorised access was detected only 15 days after it happened
  • Risk management not in effect. An assessment was conducted but never applied in practice. Risk management is not something you do for its own sake, and GDPR takes this requirement to the next level
  • Internal security scans did not reveal any vulnerabilities, and specifically did not reveal outdated software. The fact that the system was compromised and that easy-to-find vulnerabilities existed does not speak well of those scans.
  • No security audits conducted on a regular basis, and specifically none within the 12 months prior to the attack
  • No web application firewall installed. Here the ICO is not assessing whether a WAF would actually have prevented the attack (which could be debated); it is more about the security infrastructure in place relative to the size of the company
  • The ICO identified the lack of antivirus on the server as a weakness. While I’m skeptical here, the ICO notes this was against the company’s own internal policy. I did not review Carphone Warehouse’s security policy, so I can’t tell.
  • Administrator passwords were identical across many servers and were known to 30–40 people
  • Systems contained old and unnecessary user transaction data. The organisation argued that it did not even know about the data, which the ICO read negatively.
  • Historical transaction data were encrypted, but the decryption password was present in plaintext in application source code. So, effectively, no encryption was in use.
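The last point is worth dwelling on: encryption is only as strong as the handling of its key. A minimal sketch (the variable name and helper are hypothetical, purely illustrative) of keeping the secret out of source code entirely and failing fast when it is absent:

```python
import os

def load_encryption_key(var: str = "TXN_ENC_KEY") -> str:
    # Hypothetical helper: the key lives in the deployment environment
    # (or a secrets manager), never in the repository alongside the code.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; refusing to start")
    return key
```

The point is architectural rather than cryptographic: whoever can read the source tree should not automatically be able to decrypt the historical data.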

The ICO concluded that, considering the company’s size, its user base, and the sensitivity of the user data stored, the number of identified deficiencies was high, industry-standard good practices weren’t followed, and the deficiencies had persisted for a long time.

Considering the size of the database of user data (3 million records) and its sensitive nature (personal data, credit cards; a high risk of substantial distress to users), the technical measures applied were inadequate.

In summary, the following points were considered aggravating circumstances when deciding on the fine and its level:

  • the size of the organisation, its industry experience and its field, including the expected standards in that field
  • the technical security infrastructure in place, relative to the above
  • the full extent of stored user private data, and its types
  • lack of control over historical data (logs, archives)
  • the type, quality and frequency of security tests, considering the size of the organisation and the sensitivity of the data
  • the duration of the breach and its extent
  • remediation actions taken
  • quality of communication with the ICO
  • ineffective risk assessment
  • the organisation’s policies, and whether those policies are followed
  • the apparent lack of motivation to implement security measures, even when those were within reach
  • lack of adherence to industry compliance schemes (e.g. PCI DSS)

Positive side

Now, on the positive side, the mitigating circumstances:

  • there was a programme in place intended to improve the security of systems
  • risk assessment was done (even if not applied...)
  • remedial actions were taken
  • no evidence that any stolen data were ever used in practice (however, I note that this is difficult for the ICO to detect)
  • lack of clarity as to how the attackers obtained credentials
  • the organisation notified the ICO and cooperated voluntarily
  • it seems to me that the organisation submitted a great deal of information to the DPA

Some of the more interesting points from the final review: the extent of the breach was large and cannot be easily justified. No excuse was found for keeping this kind of system in place, considering the size and reputation of the organisation, including the public distress caused. The whole justification can be found here, and although my aim was to distil the most important and actionable content, it may be worth reading in detail, especially if your role is to manage organisational security or privacy.

In other words, the ICO did not focus only on the data breach itself. It took the occasion to perform a wide review of security and privacy practices, amounting to a kind of high-level security audit. Importantly, the scope of this review went well beyond the systems directly affected by the breach. Organisations should keep this in mind. One additional point of consideration: always choose to cooperate with the data protection authority.

Organisations preparing for GDPR will need to build their strategies, for example for communicating a data privacy breach. That opportunity could also be used to design the overall policy of communication with a DPA. When a breach is actually taking place, there may be no time for that.


In this post, I tried to show how GDPR breach cases will be weighed and assessed by Data Protection Authorities. It is important that organisations are prepared for the identification, detection, prevention and remediation of risks, and for actual events such as materialised breaches. It is also a matter of culture, for example conducting a technical Privacy Impact Assessment that is not a mere checklist or a filled-in form. It has to be technical to be meaningful.

Finally, I also hope this post will be informative for Data Protection Authorities. Designing the actual workflows and strategies at DPAs is surely fascinating.

If you’d be interested in further discussion of, or help with, some of the matters touched on in this post, feel free to contact: