The European Union is regulating disinformation. Well, sort of. While the issue is indeed addressed in regulations such as the Digital Services Act, the “executive” arm appears to be the Code of Practice on Disinformation, now strengthened. It builds on the previous 2018 version, which I criticised here. The new version is greatly improved and includes practical commitments to be implemented, such as the need to deploy systems to detect and take down deepfakes, which in practice would make deepfakes largely unusable. This time, the disinformation Code is not skewed toward that overhyped deepfake threat. It seems that awareness and expertise have improved.

This new document puts a heavy focus on digital ads. Out of 44 commitments, 12 concern digital/web ads or the associated technology! This clearly (and rightly so) highlights the critical matter of interest when tackling disinformation/propaganda: big platforms and ad channels. That is the dissemination infrastructure, and it is now acknowledged as such. In this way, disinformation/propaganda becomes less of a fluffy topic.

The first interesting thing is here:

(i) “Code of Practice aims to become a Code of Conduct under Article 35 of the DSA
(j) As indicated in the Guidance, Very Large Online Platforms need to take robust measures to identify (risk assessment) and address (risk mitigation) the relevant systemic risks ….”

This means that the Code of Practice will become a formal policy tool, a “Code of Conduct”. This may help in demonstrating compliance with the Digital Services Act. Later on, there are more links to other regulations, such as the Artificial Intelligence Act.

Advertising stuff

The prominent role of advertisements is quickly identified.

“(d) The Signatories recognise the need to combat the dissemination of harmful Disinformation via advertising messages and services.
(e) Relevant Signatories recognise the need to take granular and tailored action to address Disinformation risks linked …”

Along these lines, the actual commitments are action-oriented. The measures are strongly related to ad-distribution channels. For example:

“Commitment 1. Relevant Signatories participating in ad placements, commit to defund the dissemination of Disinformation”
“... Signatories will set up a working group and work on developing a methodology to report on demonetisation efforts…”

Some of these measures go beyond a strict disinformation focus, like: “Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising”.

Others will require the inspection of website content: “Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation”.
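To give a sense of what “avoiding the placement of advertising next to Disinformation content” could look like technically, here is a minimal sketch of a pre-placement check, assuming the buyer (or its agency) has access to some third-party list of domains flagged as repeatedly publishing disinformation. Everything here, names and list alike, is illustrative; the Code does not prescribe any particular implementation.

```typescript
// Hypothetical pre-placement check on the ad-buying side.
// The flagged-domain list would in practice come from a vetted
// third-party source (e.g. a brand-safety or source-rating provider).

interface PlacementRequest {
  pageUrl: string;      // page where the ad would be displayed
  advertiserId: string; // who is buying the placement
}

// Illustrative blocklist, not a real one.
const flaggedDomains = new Set<string>([
  "example-disinfo-site.test",
  "another-flagged-outlet.test",
]);

function isPlacementAcceptable(req: PlacementRequest): boolean {
  const host = new URL(req.pageUrl).hostname;
  // Refuse the placement if the destination domain is on the flagged list.
  return !flaggedDomains.has(host);
}

// Example: filter candidate placements before any bidding happens.
const candidates: PlacementRequest[] = [
  { pageUrl: "https://example-disinfo-site.test/article", advertiserId: "brand-1" },
  { pageUrl: "https://reputable-news.test/story", advertiserId: "brand-1" },
];
console.log(candidates.filter(isPlacementAcceptable).map(c => c.pageUrl));
```

The hard part is obviously not this filter but who maintains the flagged list, how transparently, and with what appeal process.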

Should some of the web technologies that could help here be standardised? If so, any such standard should refer to this Code of Practice.

Further tasks also go deeper into measures guaranteeing “brand safety”. The ad angle is strong:

“Commitment 2. Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.”

But is the disinformation Code simply focused on ads? No. While I was among the first to identify the potential for misusing real-time bidding infrastructure for disinformation (there has to be some channel for disseminating the content; it does not show up magically…), the Code in general goes broader. But it obviously touches on the important aspect of political advertising.

What is political and issue advertising?

“Commitment 4. Relevant Signatories commit to adopt a common definition of “political and issue advertising”.

This means that ads about directly political matters will have to be defined, and the definition must be common among the regulated companies. So, for example, Google, Twitter, Facebook, etc. will have a common definition to work with. Likely, this definition will apply to the whole world.

More interesting is “issue advertising”. What is this creature? It is speaking about policy or political matters without naming any particular political party. For example, speaking about global warming, gender equality, coal, gun control, vaccines, etc. can potentially be identified as engaging with “issues”. These are likely the “issues” that are never actually named in this Code. It is not surprising that they are not named; it might be uncomfortable for the European Commission to do so... So this uncomfortable task is delegated to the Signatories. Impressive move, European Commission.

Further commitments prescribe that such ads (political/issue) will be marked or labelled in some way. That is to say, they will become “distinct” content: “in a way that allows users to understand that the content displayed contains political or issue advertising”. Clearly political matters will be labelled. But how about ads relating to, say, raising awareness of issues such as global warming or (in the US) gun control?
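As a purely illustrative sketch of what such labelling might involve (the categories, topic lists, and names below are my assumptions, not anything defined in the Code), a platform could classify an ad from declared or inferred topics and attach a user-visible label:

```typescript
// Hypothetical labelling flow for political/issue ads.
// Which topics count as "issues" is exactly the definitional work
// that the Code delegates to the Signatories.

type AdCategory = "political" | "issue" | "other";

interface Ad {
  id: string;
  text: string;
  declaredTopics: string[]; // declared at submission time or inferred by a classifier
}

// Illustrative topic lists only.
const politicalTopics = new Set(["election", "candidate", "party", "referendum"]);
const issueTopics = new Set(["climate", "gun control", "vaccines", "gender equality"]);

function classifyAd(ad: Ad): AdCategory {
  if (ad.declaredTopics.some(t => politicalTopics.has(t))) return "political";
  if (ad.declaredTopics.some(t => issueTopics.has(t))) return "issue";
  return "other";
}

// The label is what makes the content "distinct" to users.
function renderLabel(ad: Ad): string | null {
  const category = classifyAd(ad);
  return category === "other" ? null : `This content contains ${category} advertising.`;
}

console.log(renderLabel({ id: "ad-1", text: "…", declaredTopics: ["climate"] }));
```

Whether such a list-based approach would catch an awareness campaign about global warming or gun control depends entirely on how the common definition ends up being written.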

What this means

In the end, all this means that ad companies will not only have to tackle security, privacy, and transparency, but also disinformation and the political/issue aspects of ads.

This means a requirement for more people, new tasks, and new tools.

Also, research and communication: “will publish relevant research into understanding how users identify and comprehend labels on political or issue ads”.

This is actually made explicit. One commitment assumes the allocation of budgetary resources:

“Commitment 38. The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.”

Monitoring for disinformation, political advertising, etc. is now a matter of compliance. It is not merely a voluntary choice made by a company.

What all this means is that fighting disinformation will be a formal task of certain companies. There must be people doing this work, a formal team, for example.

Manipulative behaviour  

These are to be restricted. I am citing the identified issues verbatim:

“- The creation and use of fake accounts, account takeovers and bot-driven amplification,
- Hack-and-leak operations,
- Impersonation,
- Malicious deep fakes,
- The purchase of fake engagements,
- Non-transparent paid messages or promotion by influencers,
- The creation and use of accounts that participate in coordinated inauthentic behaviour,
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.”

They of course also speak of the “latest evidence on the conducts and tactics, techniques and procedures” (it must be cool to draft such a document and use such terminology, mustn’t it?).

But some of these tasks are already performed by some platforms. This will now be a formal requirement, and one applied more broadly.
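To make the last item on that list slightly more concrete, here is a toy heuristic of the kind a platform could run to spot artificial amplification: flag URLs that many distinct accounts share within a short time window. This is a minimal sketch under assumed names and thresholds, not any platform’s actual detection method.

```typescript
// Toy detection of possible coordinated or bot-driven amplification:
// many distinct accounts pushing the same URL in a short time window.
// Thresholds and field names are illustrative assumptions.

interface ShareEvent {
  accountId: string;
  url: string;
  timestamp: number; // milliseconds since epoch
}

function findSuspectClusters(
  events: ShareEvent[],
  windowMs = 10 * 60 * 1000, // 10-minute window
  minAccounts = 20           // flag if this many distinct accounts share the same URL
): Map<string, Set<string>> {
  const byUrl = new Map<string, ShareEvent[]>();
  for (const e of events) {
    const list = byUrl.get(e.url) ?? [];
    list.push(e);
    byUrl.set(e.url, list);
  }

  const suspects = new Map<string, Set<string>>();
  for (const [url, shares] of byUrl) {
    shares.sort((a, b) => a.timestamp - b.timestamp);
    // Slide a time window over the shares of this URL.
    let start = 0;
    for (let end = 0; end < shares.length; end++) {
      while (shares[end].timestamp - shares[start].timestamp > windowMs) start++;
      const accounts = new Set(shares.slice(start, end + 1).map(s => s.accountId));
      if (accounts.size >= minAccounts) suspects.set(url, accounts);
    }
  }
  return suspects; // URL -> accounts that amplified it
}
```

Real systems combine many more signals (account age, posting cadence, content similarity, network structure); the point is only that “coordinated inauthentic behaviour” eventually has to be operationalised in code somewhere.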

Deepfakes are unwelcome

“Commitment 15. Relevant Signatories that develop or operate AI systems and that disseminate AI generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act. “

No comment needed. Deepfakes are a thing, though with the ability to detect and take down such content, their risk becomes low (the hypothetical problem was blown greatly out of proportion over the previous years; perhaps we can simply move on now?).

Research, media literacy, educating users, fact-checking

I skip the research and media-literacy/outreach measures. They are less direct, though still notable.

New ad-serving technologies and compliance

A separate issue arises due to recent changes in ad infrastructures, for example the privacy-preserving redesigns. Those systems (Apple’s ad standards, Google’s Privacy Sandbox, etc.) will have very different properties from the previous ones. Misuse analyses will have to be redone, including broad risk models that cover threats of use in disinformation or propaganda. However, some of these privacy-preserving technologies will defer the choice of ads to the user’s platform (so that data does not needlessly leave the user’s device).
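As a conceptual sketch only (the structures below are assumptions of mine, not the actual Privacy Sandbox or Apple APIs), “deferring the choice of ads to the user’s platform” roughly means that the device scores candidate ads against locally held interest signals and reveals only the outcome:

```typescript
// Conceptual on-device ad selection: the interest profile stays local,
// and only the chosen ad is reported back. All names are illustrative.

interface CandidateAd {
  id: string;
  topics: string[]; // coarse interest topics the ad targets
}

interface OnDeviceProfile {
  interests: Set<string>; // derived locally, e.g. from browsing history
}

function chooseAdOnDevice(
  profile: OnDeviceProfile,
  candidates: CandidateAd[]
): CandidateAd | null {
  let best: CandidateAd | null = null;
  let bestScore = -1;
  for (const ad of candidates) {
    // Score by overlap between the ad's topics and local interests.
    const score = ad.topics.filter(t => profile.interests.has(t)).length;
    if (score > bestScore) {
      best = ad;
      bestScore = score;
    }
  }
  // The profile itself never leaves the device; that is the privacy win,
  // and also what complicates external monitoring for misuse.
  return best;
}
```

Whatever the exact scheme, much of the data that outside observers previously used to study ad-based influence campaigns is precisely the data that now stays on the device.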

Will it then still be possible to monitor for misuse in the ways that used to be possible? Are remakes of 2016-grade abuses possible?

That will certainly be a fascinating topic. One possible approach for implementers of the new schemes would be to consider these risks during the transition to the new infrastructures. But who is doing that? Anyway, if you have heard of any such effort, please reach out (me@lukaszolejnik.com), as I am very curious; feel free to contact me should you want any help with this topic, too.

The new Code of Practice is here.

Summary

Security and privacy are among the key tasks of big platforms. Now they will also be tasked with tackling disinformation/propaganda, political/issue expression, etc. This means that tools and technology must be developed and maintained, and that appropriate teams must be formed and maintained. The matter effectively becomes mandated by law, a compliance obligation. In a sense, this is great, as this esoteric and arcane stuff becomes formalised. The risk is whether these efforts will be hindered by the added bureaucracy.

However, in the end, it is civilising the process. I am also happy that the role of information dissemination and distribution is finally clearly acknowledged. The 2018 Disinformation Code was premature and not well formed. The 2022 one is clearly orders of magnitude better.

It was a conscious corporate choice not to analyse certain technologies with respect to societal risks. Maybe at least some of those angles will improve now. Let’s just hope that the right people will be put to these tasks.