Technology platforms create technologies with significant impacts on nations, governments, and societies. This impact keeps growing, reaching new heights. Many feel that the rules of the game are unclear, insufficient, or non-existent. Without rules, one can only wonder who will set out the road to the future. In other words, does anybody know what we are living for?

The European Commission wants to tackle the issue with new rules (“regulating the responsibilities of digital services”). The latest are the Digital Services Act (DSA) and the Digital Markets Act (DMA). The lengthy proposals are here: DSA, DMA.

The proposals have an impact on very important technology and technology policy realms. I approach this problem from a cybersecurity, privacy, and transparency angle. We will not discuss ‘hosting’, ‘caching’, or liability for content. We will not study the fascinating consequences of “the service provider shall not be liable for the information transmitted“ if it “does not select or modify the information contained in the transmission“. But we will have a look at some fundamental issues.

Online ads transparency is defined in Article 24:

“Online platforms that display advertising on their online interfaces shall ensure that the recipients of the service can identify, for each specific advertisement displayed to each individual recipient, in a clear and unambiguous manner and in real time”

This amounts to an obligation to inform users that something is an ad, and to include trace information explaining the nature of the ad: who placed it (its source), and who it was targeted at. This also covers targeting transparency, i.e. how the user was targeted (for example, programmatically?) with particular ad content or messaging. There are good technical ways this could be realised. Sadly, technology vendors were not interested in more transparency, which helped make some misuses easier (still the case even today). Sadly also, the DSA does not discuss this matter. The danger, therefore, is that the ways of providing such information will end up non-standardised, fragmented, obscured, or difficult for users to comprehend.
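What could such per-ad transparency information look like in practice? Below is a minimal sketch, assuming a hypothetical metadata format; the field names are my own and nothing here is mandated by the DSA.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-ad transparency metadata, loosely mirroring the elements
# named in Article 24: an ad marker, the sponsor, and the targeting parameters.
# None of these field names come from the DSA itself.
@dataclass
class AdTransparencyLabel:
    is_advertisement: bool                  # clear and unambiguous "this is an ad" marker
    sponsor: str                            # on whose behalf the ad is displayed
    targeting_parameters: dict              # main parameters used to target this recipient
    served_programmatically: bool = False   # e.g. delivered via real-time bidding
    campaign_id: Optional[str] = None       # internal trace identifier, if exposed

# Example of what a platform could expose alongside a displayed ad.
label = AdTransparencyLabel(
    is_advertisement=True,
    sponsor="Example Advertiser Ltd.",
    targeting_parameters={"age_range": "25-34", "interest": "cycling"},
    served_programmatically=True,
)
print(label)
```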

I know the risks here first-hand. I have researched the privacy and transparency aspects of Real-Time Bidding (dynamic ad targeting in real time). I also know that ad transparency is the primary tool to minimise misuse when such technologies are used to sway public opinion, as happens in practice, including before and during elections. It was possible to predict such misuses in advance, but there was no interest in doing so.
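For context, here is a heavily simplified sketch of the kind of data exchanged in a Real-Time Bidding transaction. The structure is loosely inspired by OpenRTB-style bid requests, but the fields are illustrative only and do not reproduce any actual specification.

```python
import json

# Illustrative bid request, as an ad exchange might send it to bidders during
# a real-time auction. Field names are loosely inspired by OpenRTB but are
# simplified and not an accurate rendering of the standard.
bid_request = {
    "id": "auction-12345",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],      # the ad slot up for auction
    "site": {"page": "https://news.example/article"},          # context where the ad would appear
    "device": {"geo": {"country": "PL"}, "ua": "Mozilla/5.0 ..."},
    "user": {"id": "cookie-or-device-identifier",              # the targeting hook
             "segments": ["politics", "local-news"]},          # illustrative audience data
}

# Each bidder decides, within milliseconds, whether and how much to bid for
# showing an ad to this specific user in this specific context.
print(json.dumps(bid_request, indent=2))
```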

Risk assessment framework - big impact issues?

This is the key part of the regulation. Large platforms will have to “identify, analyse and assess … at least once a year, any significant systemic risks stemming from the functioning and use made of their services”; in other words, risks stemming from the technology itself.

Risk assessments, similar to privacy, data protection, human rights, or other types of impact assessments? I’m a big fan of such tools.

The lack of such risk assessments is part of the reason why technology seemingly gets out of hand. It is also why the big platforms failed to predict, in a systemic and concerted manner, some of the potential misuses of their technologies. For example, the fact that real-time bidding technology introduced unprecedented ways of impacting societies, dividing or polarising public opinion, or sometimes injecting “poisoned” elements into public debate. Or, in other words, how to hack elections (note that this article is from Summer 2016, some time before the topic reached public attention). It was possible to see this and other misuses in advance, but there was no motivation to do so. Will there be now? This will revolve around the quality of the assessments.

The key question is who will be tasked with making such risk assessments. We know from cybersecurity and privacy practice that the likely answer in many cases is “lawyers/legal experts will”. But this may not always be optimal or well suited to the task, especially for the big-picture, high-level analysis of complex aspects such as those covered here. This is especially true for big-impact technologies, where you simply need skilled and experienced people to threat-analyse and find the big-picture or high-impact risks, which are often elusive or difficult to notice. In other words, it would be better not to treat this as a check-box exercise.

Why do I think this risk assessment should be broad? Because the regulation says so! It says that this obligation covers:

the dissemination of illegal content through their services

(you can imagine many types of such…)

any negative effects for the exercise of the fundamental rights to respect for private and family life, freedom of expression and information, the prohibition of discrimination and the rights of the child

(these are very broad considerations about privacy, human rights, etc.)

intentional manipulation of their service, including by means of inauthentic use or automated exploitation of the service, with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security

(high-impact, even if elusive, risks with negative effects, also directly about elections, so also disinformation channels)


Why is advertising technology so important?

Because “when conducting risk assessments, very large online platforms shall take into account, in particular, how their content moderation systems, recommender systems, and systems for selecting and displaying advertisement influence any of the systemic risks referred to in paragraph 1, including the potentially rapid and wide dissemination of illegal content and of information that is incompatible with their terms and conditions.”

Risk assessment is the gem of this regulation. Risks will have to be mitigated. Again, even here the Regulation says: “targeted measures aimed at limiting the display of advertisements in association with the service they provide“. Finally, someone appreciated the systemic risks of unsanctioned and unprotected advertisement channels and content? It was a long road to get here.

There will also be a mandatory ads transparency repository, requiring the storage of all the ads displayed during the previous year, with at least the following data (a minimal sketch of such a record follows after the list):

the content of the advertisement
the natural or legal person on whose behalf the advertisement is displayed
the period during which the advertisement was displayed
whether the advertisement was intended to be displayed specifically to one or more particular groups of recipients of the service and if so, the main parameters used for that purpose (parameters of targeting/micro-targeting)
the total number of recipients of the service reached and, where applicable, aggregate numbers for the group or groups of recipients to whom the advertisement was targeted specifically.
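
What might one repository record look like? A minimal sketch, assuming a hypothetical schema; the field names below are my own shorthand for the data points listed above, not an official format from the DSA.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record for the ad repository described above. Field names are
# illustrative shorthand for the listed data points, not an official schema.
@dataclass
class AdRepositoryEntry:
    content: str                             # the content of the advertisement
    sponsor: str                             # natural or legal person on whose behalf it is displayed
    display_start: date                      # period during which the ad was displayed
    display_end: date
    targeted: bool                           # whether specific groups were targeted
    targeting_parameters: Optional[dict]     # main parameters used, if targeted
    total_recipients_reached: int            # total recipients of the service reached
    reached_per_targeted_group: Optional[dict[str, int]] = None  # aggregate numbers per targeted group

entry = AdRepositoryEntry(
    content="https://cdn.example/ad-creative-123.png",
    sponsor="Example Advertiser Ltd.",
    display_start=date(2021, 3, 1),
    display_end=date(2021, 3, 14),
    targeted=True,
    targeting_parameters={"location": "DE", "age_range": "18-24"},
    total_recipients_reached=120_000,
    reached_per_targeted_group={"DE, 18-24": 120_000},
)
```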


This introduces a potential issue. The ad display ecosystem is currently undergoing rapid and major changes. Rules such as the above should be well aware of, and synchronised with, those changes. So far, they might not be. For example, some of the required information might not be available because of privacy-preserving solutions deployed by ad providers. The effect would be that the regulation could actually stifle privacy-preserving changes. Then what?

Transparency of moderation

Transparency reports will have to be provided.

In many cases, big platforms such as Apple, Google, Twitter, or TikTok already publish them regularly. But now the requirement will be standardised and codified. Transparency reports must list such things as the number of take-down orders issued by each country. This is about content moderation, so sensitive matters: guarding the rules (according to some), censorship (according to others), or a compromise/trade-off (according to still others). In other words, this regulation speaks about content moderation and account suspensions.

Fines

Big fines are reserved for the big platforms, those with at least 45M users in the European Union. This means that all the big platforms you can think of likely fall under this regime.

“very large online platform concerned fines not exceeding 6% of its total turnover in the preceding financial year“

(and a 1% fine in some other cases, but the maximum is 6%).

Additionally, the DMA includes fines of up to 10% of yearly turnover for the ‘gatekeeper platforms’, those that, among other criteria, have a significant impact on the internal market and have at least 45M users in Europe.
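To put these caps in perspective, here is a trivial back-of-the-envelope calculation with a made-up yearly turnover figure (the turnover is purely illustrative):

```python
# Illustrative only: a made-up turnover, used to show how the fine caps scale.
yearly_turnover_eur = 50_000_000_000        # hypothetical EUR 50 billion turnover

dsa_cap = 0.06 * yearly_turnover_eur        # DSA: up to 6% of total turnover
dsa_minor_cap = 0.01 * yearly_turnover_eur  # DSA: 1% cap for some other infringements
dma_cap = 0.10 * yearly_turnover_eur        # DMA: up to 10% for gatekeeper platforms

print(f"DSA cap (6%):  EUR {dsa_cap:,.0f}")
print(f"DSA cap (1%):  EUR {dsa_minor_cap:,.0f}")
print(f"DMA cap (10%): EUR {dma_cap:,.0f}")
```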

Limits on big platforms

The Digital Markets Act notably prohibits the gatekeeper platforms “from combining personal data sourced from these core platform services with personal data from any other services offered by the gatekeeper or with personal data from third-party services“. This means reduced possibilities of data re-use, which in turn is meant to limit the ability to quickly scale new services using existing data sets as the training input.

It also has this: “refrain from requiring business users to use, offer or interoperate with an identification service of the gatekeeper in the context of services offered by the business users using the core platform services of that gatekeeper”. This means that big platforms are prohibited from forcing business users to rely on the gatekeeper’s own services. For example, a gatekeeper like Apple may be prohibited from demanding that application developers use its own ID/authentication system?

Big platforms will also need to undergo independent audits. Those audits will likely be performed by the big audit providers, so nothing of particular interest here.

Summary

An interesting batch of technology policy rules. We will see where this leads (and how long it will take), but the links to cybersecurity, privacy, and transparency are clear.

Did you like the assessment and analysis? Any questions, comments, complaints or maybe even offers? Feel free to reach out: me@lukaszolejnik.com