There’s no question that disinformation, propaganda, and manipulation threaten the election process. The ongoing shift of societies toward online interaction amplifies these vulnerabilities. Technical developments such as generative AI content and the ability to reach wide audiences only make the creation of digital propaganda easier. Fortunately, we are not exactly doomed. Basic news consumption hygiene can be of great help. Also helpful is the adoption of a slow-news cycle: not tracking developments minute-by-minute or hour-by-hour. Most news isn’t really urgent or breaking. Limiting the impulsive use of social media may also reduce risks. Okay, now let’s get serious: nobody is going to do any of this.

EU to the rescue?

The European Commission released new guidelines on the protection of online elections. Unlike the largely preemptive, even inadequate, recommendations of 2018 with their out-of-proportion fears of deepfakes, this time the guidelines are legally grounded in the Digital Services Act. The legal requirements are directed at Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), meaning the big providers of tech/IT services such as social media communication. Think Google, Facebook, Instagram, TikTok, etc. “Pursuant to Article 35(3) of Regulation (EU) 2022/2065, the Commission may issue guidelines on the risk mitigation measures…”. And so they did. These are legally binding, and infringements may mean hefty fines (up to 6% of turnover, so potentially billions of euros). Why issue the guidelines now? Because of the “high number of elections taking place in the EU in 2024”, including the June European Parliament elections. And this even though European Union institutions in Brussels are facing plenty of off-line challenges at the moment.

Are propaganda or disinformation operations likely during the 2024 elections?

In 2018 I argued that coordinated influence campaigns against the European Elections would be very limited. And guess what, I was right: a European Commissioner later admitted that, perhaps much to his disappointment, there was no disinformation in those elections. This could be because those elections are naturally protected by their very design: they happen in 27 Member States at the same time, often in different languages, and under different local conditions (culture, language, internal events). This time it’s different. The differences are technological and political.

Technological risk angle

The technological dimension of the qualitative change is owed to AI. Large Language Models can quickly create plenty of propaganda content and effortlessly translate it into many languages on the fly. The remaining problem is delivery, which is why this last link in the chain is so important now, and why VLOPs have to tackle the issue: they maintain the potential propaganda battle arena.

The political risk angle

The political dimension is also completely different from 2018. There are now ways to unify common concerns despite cultural or language barriers, due to two developments. One is the war in Ukraine and the controversy around it voiced by growing numbers of (still small!) circles. The second risk point is the Green Deal: specifically, the growing hostility toward the European Commission-advertised changes of lifestyle and increases in costs, as voiced by the massive pan-European farmer protests in 2024, which were supported by a great majority of society in multiple EU Member States (so this is not merely about farmers). These two issues, geopolitical (Ukraine/Russia) and lifestyle change (Green Deal), are the two big vulnerabilities in the European Union. Exploiting them is possible, and to some degree even likely. Furthermore, these topics will also likely be picked up by legitimate political parties in Europe, exacerbating the difficulties in fighting any so-called ‘illegal influences’. Because by definition, when political expression is voiced by legitimate political movements in the EU, it is not illegal, and the European Court of Human Rights guarantees an extremely high level of freedom of expression when it comes to political content. In other words, there will be risks in identifying actual abuses or misuses in campaigns, and interventions will need to be tuned to what is legal, to avoid the perception that big tech platforms are aligning with some political forces and, for example, censoring political expression (which could even be perceived as grounds to question the validity of the election outcome, including in legal terms). Navigating this will be extremely difficult and goes far beyond the mere technology realm.

Do the new guidelines of the European Commission consider these two objective and most likely risk themes of the 2024 elections? No.

Instead, they focus on some other, also relevant, issues. So let’s turn to them now.

Platforms as town halls for debate

First, the guidelines correctly admit that “Online platforms and search engines have become important venues for public debate and for shaping public opinion and voter behaviour”. They consider issues of “heightened risk to election integrity”. It is, however, stressed that any measures must respect “freedom of expression and information”. And this is critical for the electoral process to be legitimate.

The guidelines do take into account domestic peculiarities, such as language, culture, or debates: “Internal processes should be reinforced to ensure that the design and calibration of mitigation measures is appropriate for the specific regional, local, and linguistic context in which they will be employed”. That is good and well considered: any decisions must also take the local context into account. It also means that VLOPs wishing to duly comply with the guidelines should probably hire a few experts in international relations and in the political science of specific Member States. Let’s say 2–3 people per Member State; with 27 Member States, that’s around 50–80 at each VLOP. Not to mention the need for providers to have “adequate content moderation resources with local language capacity and knowledge of the national and/or regional contexts and specificities”. It is unlikely that a few such moderators per Member State would be sufficient to manage millions of incoming messages.

Discussion of the most relevant recommendation points

  • Provide official information about elections or the process, for example in popups or banners. In other words, the European Commission is apparently interpreting the Digital Services Act as a law that justifies requirements to improve voter turnout. Whether that is really supported by the DSA text is for others to analyze.
  • Facilitate fact-checking. I'm skipping this issue due to the growing scientific and policy evidence pointing to the risks of misuse of fact-checking and its lack of actual helpfulness.
  • “Tools and information to help users assess the trustworthiness of information sources, such as trust marks focused on the integrity of the source”. That is a very tricky recommendation. How to assess the trustworthiness of a source? If it’s a political news site that is, say, aligned with certain political inclinations, not all users may consider such “trust labels” trustworthy. And how would they perceive such a news source being labeled as “trusted”? In other words, this may badly backfire. Backfire in what ways? Well, people with different views could come to consider such labels as social engineering or even manipulation. This was apparently not considered at all when designing the guidelines, and it is among their major risks and weaknesses. How about the seal below? Looks pretty highly trusted.

[Image: a mock “trusted seal” featuring a kitten, “97% trusted”]

  • “Establishing measures to limit the amplification of deceptive, false or misleading content generated by AI in the context of elections through their recommender systems”. This is a fair recommendation. It is the VLOPs that may fight the dissemination of such harmful content; they just need to identify it first, and also consider when an action is warranted. Recommender systems and moderation can then be used to make such content less popular (de-promote it), i.e. to censor it; a minimal sketch of such down-ranking follows this list.
  • Recommendation to “label in a clear, salient and unambiguous manner and in real-time” any political advertising content, including easy access to the sponsor’s identity, or the entity controlling the sponsor. Though it’s difficult to expect VLOPs to know that, say, some NGO is funded from Moscow or Beijing. It could be funded by another NGO, which is in turn funded by another NGO, which is in turn funded by a company, and so on.
  • “Maintain a publicly available, searchable repository of political ads, updated in as close as possible to real-time”. This already exists, and it makes great sense.
  • Influencers introduce some dangers: “Influencers can have a significant impact on the electoral choices made by recipients of the service”. The European Commission requests VLOPs to introduce labels for influencers posting political advertising content.
  • Demonetization of disinformation content; that’s fair.
  • “Put in place appropriate procedures to ensure the timely and effective detection and disruption of manipulation of the service”, including bot detection and impersonation (e.g. of candidates in elections): “preventing deception through impersonation of candidates, the deployment of deceptive manipulated media, the use of fake engagements”. A toy illustration of bot-detection heuristics also follows this list.
  • Information Operations teams of different VLOPs should cooperate and exchange information. This is an important point.
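
To make the recommender-system point concrete, here is a minimal, hypothetical sketch of what de-promotion could look like inside a ranking pipeline: content flagged as deceptive AI-generated material keeps circulating, but receives a score penalty so the recommender amplifies it less. All names and the penalty factor are my illustrative assumptions, not anything the guidelines prescribe.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float     # e.g. predicted interaction probability
    flagged_deceptive_ai: bool  # set by an upstream detection/moderation step

# Hypothetical penalty factor; the guidelines give no concrete number.
DEPROMOTION_FACTOR = 0.1

def ranking_score(post: Post) -> float:
    """Down-rank (de-promote) flagged content instead of removing it."""
    score = post.engagement_score
    if post.flagged_deceptive_ai:
        score *= DEPROMOTION_FACTOR  # still visible, just far less amplified
    return score

posts = [
    Post("a", 0.90, flagged_deceptive_ai=True),
    Post("b", 0.40, flagged_deceptive_ai=False),
]
# The flagged post ranks below the organic one despite higher raw engagement.
for post in sorted(posts, key=ranking_score, reverse=True):
    print(post.post_id, round(ranking_score(post), 2))
```

The ranking arithmetic is trivial; the hard part, as noted above, is the upstream detection step that sets the flag in the first place.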
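Similarly, for the manipulation-detection point, here is a toy heuristic in the same spirit. Real systems combine many more signals (account age, network patterns, device fingerprints); both thresholds below are arbitrary assumptions, for illustration only.

```python
from collections import Counter

# Hypothetical thresholds, purely illustrative.
MAX_POSTS_PER_HOUR = 30
MAX_DUPLICATE_RATIO = 0.8

def looks_automated(posts_last_hour: int, messages: list[str]) -> bool:
    """Crude bot heuristic: extreme posting rate or near-total duplication."""
    if posts_last_hour > MAX_POSTS_PER_HOUR:
        return True
    if messages:
        most_common_count = Counter(messages).most_common(1)[0][1]
        if most_common_count / len(messages) > MAX_DUPLICATE_RATIO:
            return True
    return False

# An account pasting the same slogan 9 times out of 10 gets flagged.
print(looks_automated(12, ["Vote X!"] * 9 + ["hello"]))  # -> True
```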

Generative AI

This is a timely issue.

  • The basics: labeling deepfake content (audio/video). Of course, this requires detecting them first, so the guidelines qualify such measures as applying “to the extent that they are technically feasible”.
  • Recommendation to watermark images created by generative AI. Good luck with that: there will surely be plenty of tools to remove digital watermarks. Specifically, no solution based on metadata would work (a small demonstration follows this list). Perhaps cryptographic marking could fare better, but it would have to be embedded in the image content itself, altering it, and it would have to ensure that, for example, format conversion retains the watermark signal. This may sometimes be doable, but it is still difficult. The alternative is to only allow the publication of images generated by approved sources and carrying an appropriate digital signature, rejecting any image without one. But this approach would fundamentally change how people interact online, reducing freedom. It would be very unpopular, at least today, and it is not being considered. What the guidelines do consider is: “taking into account existing standards”. In other words: nothing for the time being (such standards don’t exist in March 2024), nor in time for the 2024 elections. The guidelines could have simply skipped the watermark point as moot.
  • “Clearly label, or otherwise make distinguishable through prominent markings, synthetic or manipulated images, audio or videos that appreciably resemble existing persons, objects, places, entities, events, or depict events as real that did not happen or misrepresent them, and falsely appear to a person to be authentic or truthful (i.e., deepfakes).” What’s crucial here is that political staff would be legally required to mark their creations as AI-generated. To do so, it would be best to use some standard, visible labels/seals, ideally built into graphical software. The Commission “recommends that providers of VLOPs and VLOSEs apply efficient labels, easily recognized by users”.
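
To illustrate why metadata-based watermarking is a dead end, here is a small, self-contained demonstration using Pillow; the “AI-Generated” metadata key is an arbitrary assumption of mine, not an existing standard. A provenance tag stored in PNG metadata simply disappears after a routine re-encode to JPEG.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a stand-in "AI-generated" image and tag it in PNG metadata.
img = Image.new("RGB", (64, 64), "white")
meta = PngInfo()
meta.add_text("AI-Generated", "true")  # hypothetical provenance key
img.save("generated.png", pnginfo=meta)

# The tag is readable as long as nothing touches the file...
print(Image.open("generated.png").info.get("AI-Generated"))  # -> 'true'

# ...but a trivial format conversion (think upload pipelines, screenshots,
# crops) silently drops it. No dedicated removal tool is even needed.
Image.open("generated.png").convert("RGB").save("reencoded.jpg")
print(Image.open("reencoded.jpg").info.get("AI-Generated"))  # -> None
```

A watermark that actually survives would have to live in the pixel data itself and withstand re-encoding, which is exactly the hard part noted above.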

Labeling advertisements using generative AI

Ads must be labeled. Now this is new and important at the intersection of advertising and generative AI content: “The Commission recommends that providers adapt their advertising systems, for example by providing advertisers with options to clearly label content created with generative AI in advertisements”. In other words, ads created with generative AI will have to be labeled. Would that eventually become the case for all ads…?
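
What such labeling could look like in practice is left open; below is a minimal sketch, assuming a simple visible banner stamped onto an ad creative with Pillow. The wording and placement are my assumptions; the guidelines do not prescribe any concrete label format.

```python
from PIL import Image, ImageDraw

def stamp_ai_label(path_in: str, path_out: str,
                   text: str = "Contains AI-generated content") -> None:
    """Stamp a visible, hard-to-miss label onto an ad image."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    # A dark banner along the bottom edge keeps the label legible
    # regardless of the underlying image.
    banner_height = max(20, img.height // 12)
    draw.rectangle(
        [0, img.height - banner_height, img.width, img.height],
        fill=(0, 0, 0),
    )
    draw.text((8, img.height - banner_height + 4), text, fill=(255, 255, 255))
    img.save(path_out)

# Example: stamp_ai_label("ad.png", "ad_labeled.png")
```

Unlike a metadata tag, a burned-in visual label survives re-encoding, though it can of course still be cropped away.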

Protection of political expression

We’re speaking of censorship, but that’s not the only point of view to be considered. Content cannot simply be scrubbed or purged arbitrarily: in the European Union, political expression is protected, so it all has to be balanced. “When providers of VLOPs and VLOSEs address legal but harmful forms of generative AI content that can influence voters’ behaviour, they should give particular consideration to the impact their policies and measures may have on fundamental rights, notably freedom of expression, including political expression, parody and satire”. This is the task for a fundamental rights impact assessment: “when developing policies on what type of deceptive generative AI content a provider of a VLOP or VLOSE does not allow”.

Summary

This topic is at the intersection of technology, law, policy, international relations, and European affairs. Which makes it so exciting! The recommendations are, in general, good. In some places they suffer from technological impossibility, and in others from previous years’ spin that some approaches to disinformation (like fact-checking) are actually useful. The text considers the need to conduct a fundamental rights impact assessment to balance interventions against freedom of political expression, the protection of which is famously very strong in Europe, so scrubbing content may cause some legal issues, even…