The European Commission announced the European Democracy Action Plan. It is a rather general strategic document. What stands out is the recognition that new digital technologies have great impacts on democratic societies: both positive (e.g. the ability to reach new groups of people) and negative (e.g. non-transparent campaigning, disinformation). The Plan attempts to tackle the negative ones.

The Plan wants to build on compliance with the GDPR when it comes to the processing of political campaign data. Microtargeting and psychological profiling depend on data, so the starting point should be that the data is collected and used appropriately, not hijacked or misused.

Regulations for microtargeted ads?

To put it simply, microtargeting involves (1) identifying small groups of people based on demographic or interest-based traits, and (2) directing specific content, such as online ads or political messaging (i.e. political ads), at those groups. The technology layer very often involves programmatic advertising, such as Real-Time Bidding.
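The two steps above can be sketched in a few lines of code. This is a minimal illustration only; the profile fields, segment criteria, and example data are my own assumptions, not any real ad platform's API.

```python
# Hypothetical sketch of the two-step microtargeting process:
# (1) identify a small group by traits, (2) direct content at that group.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    age: int
    region: str
    interests: set = field(default_factory=set)

def build_segment(profiles, min_age, region, interest):
    """Step 1: select the small group matching demographic/interest traits."""
    return [p for p in profiles
            if p.age >= min_age and p.region == region and interest in p.interests]

def direct_content(segment, message):
    """Step 2: pair each segment member with the tailored message (e.g. a political ad)."""
    return {p.user_id: message for p in segment}

profiles = [
    UserProfile("u1", 54, "PL-MZ", {"hunting", "tax policy"}),
    UserProfile("u2", 23, "PL-MZ", {"gaming"}),
    UserProfile("u3", 61, "PL-MZ", {"tax policy"}),
]
segment = build_segment(profiles, min_age=50, region="PL-MZ", interest="tax policy")
ads = direct_content(segment, "Candidate X will lower your taxes")
print(sorted(ads))  # only the matching users receive the tailored message
```

In a real programmatic setup, step 1 happens inside data management platforms and step 2 inside the Real-Time Bidding auction, which is precisely why neither step is visible to the targeted user.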

In general, the microtargeting process and the delivery of content are very often non-transparent to users. Users have no idea why they were targeted with particular content, or how the political actor obtained any data on them at all. The EU Plan would improve the transparency part first. While we do not yet know what requirements will be enacted, technically improving ad transparency could be very simple, assuming that one actually wanted to offer a working solution. Should ad technology providers want to be transparent, every ad served could be accompanied by appropriate metadata describing exactly how the targeting was done. A simple click (or an action via a web browser extension) would then allow the user to learn the underlying logic behind the targeting, and the nature of the content. Sadly, the industry has famously been reluctant to introduce transparency, especially in this case (even though the landscape has admittedly improved slightly these days). Indeed, it really could have been as simple as annotating ads with appropriate informational metadata! Just imagine: there would never have been any need for cumbersome transparency frameworks. It could always have been very simple, provided that transparency had been included in the design of technologies such as Real-Time Bidding. Sadly, it was not: 1, 2. Real-Time Bidding infrastructures are the primary tools behind microtargeting, including political microtargeting and disinformation. I explained the risks here (and back in 2016 it was even picked up there). In 2020, the European Parliament even called for a ban.
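The "annotate ads with metadata" idea can be made concrete with a short sketch. The field names and schema below are purely illustrative assumptions on my part; no standard or real serving API is implied.

```python
# Hypothetical sketch: attach transparency metadata to an ad at serving time,
# so that a single click (or a browser extension) could reveal to the user
# why they were targeted and who paid for the message.

import json

def serve_ad(creative_url, sponsor, targeting_criteria, data_source):
    """Bundle the ad creative with metadata describing the targeting logic."""
    return {
        "creative": creative_url,
        "transparency": {
            "paid_for_by": sponsor,
            "targeting_criteria": targeting_criteria,  # why this user was selected
            "data_source": data_source,                # where the targeting data came from
            "is_political": True,
        },
    }

ad = serve_ad(
    creative_url="https://example.com/ad123.png",
    sponsor="Example Campaign Committee",
    targeting_criteria={"age_range": "50-65", "region": "PL-MZ", "interest": "tax policy"},
    data_source="first-party sign-up data",
)
print(json.dumps(ad["transparency"], indent=2))
```

The point is not the particular schema but that the serving infrastructure already holds all of this information at auction time; exposing it is an engineering decision, not a technical obstacle.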

How to identify political content?

The big problem facing any such regulation will also be the very definition of political content. It is straightforward when the message is simple agitation (i.e. mentioning political parties directly). But what if the messaging is a bit subtler? What if no political party or person is mentioned, but, say, a topic, even a potentially non-political one? When the topic is directly linked to political activities, it may still be discernible. But plenty of topics, while perhaps political, may not be exclusively political. Some even hold the extreme view that everything is politics. I do not know what your individual position is here, but this view still highlights the difficulty of deciding when content should trigger regulatory oversight of political microtargeted ads.
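The definitional difficulty becomes concrete if one imagines trying to automate the distinction. The deliberately naive matcher below (keyword list and example texts are my own illustrative assumptions) catches direct agitation but lets topical, indirect messaging slip through:

```python
# A deliberately naive classifier: explicit agitation is easy to flag,
# but subtler topical messaging evades any simple keyword-based rule.

POLITICAL_KEYWORDS = {"party x", "candidate y", "vote for"}

def looks_political(text):
    """Flag only explicit political mentions; indirect messages are missed."""
    lowered = text.lower()
    return any(kw in lowered for kw in POLITICAL_KEYWORDS)

explicit = looks_political("Vote for Candidate Y on Sunday!")       # caught
indirect = looks_political("Our pensions deserve better funding.")  # missed
print(explicit, indirect)
```

A pension-funding message may well be part of a coordinated political campaign, yet nothing in the text itself triggers the rule. This is the gap any regulatory definition of "political content" has to grapple with.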

Digital Services Act and risk assessment

Rules on online advertising will also be introduced in the upcoming EU Digital Services Act (DSA), but those will not focus on elections or democratic aspects per se. So the challenge of defining political content is left for the future. The DSA will still impose new obligations on big digital platforms, such as the obligation to study potential harms and risks. This may be a fascinating new type of impact assessment risk study (“assess the risks their systems pose not only as regards illegal content and products but also systemic risks to the protection of public interests and fundamental rights, public health and security“). We’ll see.

Disinformation problem

In 2018, the European Commission was greatly worried about the risk of disinformation in the upcoming elections. Back then, I quickly explained why the risk of a significant disinformation campaign was very low. But with disinformation on the radar of top-level policymakers, it is understandable that any such risk is treated very seriously, even when it is low (and subsequently does not materialize, as in the case of the 2019 elections). The EU Democracy Action Plan now apparently comprehends the complex information and disinformation environment (it even contains a definition of an information influence operation!), and a dedicated approach to disinformation will be announced in the future. For obvious reasons, it should also consider political microtargeting, as this should have been assumed to be the primary candidate channel/vehicle for disinformation (and not social media bots!). Real-Time Bidding and microtargeting technology was very likely used in Europe to bypass some country-level rules, for example those on election silence. It will be used again. As for disinformation itself, any approach should be a careful one: careful not to amplify obscure sources from external countries, sources that would probably never reach the consciousness of European public opinion if not through the very mechanism intended to “call out” the disinformation. In fighting disinformation, sometimes the well-suited approach may involve simply not mentioning some things. But a policy of inaction, of keeping quiet (and doing nothing), may be difficult to respect if one instead prefers to be proactive. Such is the fascinating realm of policies to fight harmful disinformation. Let’s trust that Europe picks the best approaches.

Did you like the assessment and analysis? Any questions, comments, complaints, or maybe even offers? Feel free to reach out: