Today, disinformation is a broad problem touching national, international, and cyber security policies, as well as domains such as the social sciences and technology, including technical cybersecurity and privacy. Different tactics are used by state and non-state actors, both internal and external. Various protective measures can deliver different outcomes, for good but also for bad. Let’s look at one of the latest ones.

In August 2016 I published a research-based assessment of how modern technologies enabling the targeting of messages (i.e. real-time bidding, 1, 2) could be abused in election influencing. Soon afterwards, disinformation was found in use throughout the globe. It is a paradox that today, when an important event takes place, disinformation is expected, even if none occurs. This alone creates the potential for inadvertently causing distrust.

Countries struggle to respond. Is it a question of internal, external, or maybe military affairs? Many approaches and solutions have been proposed. Some may be more or less efficient; others, plainly inadequate. The fact of the matter is that there are still only isolated cases where concerted disinformation events (utilising new technologies) may have had an actual effect. Drawing conclusions from isolated and limited events makes foresight especially difficult. There is a lot of noise, and a lot of misinformation and disinformation about disinformation.

The old problem of our times

Countries and international blocs struggle with public pressure to do something. The latest proposed activities are moves by the European Union: the Action Plan against Disinformation. The document is very well referenced, this much is clear. It is also strict in its diagnosis. It names many actors (but singles out only one; guess who). It acknowledges that the tools and techniques are changing fast, so there is a need for adaptation.

The immediate motivation of the Plan is to protect the European elections in 2019, and other elections in the long term. The short-term goal is ambitious: the elections are in six months, and it may well already be too late to deploy meaningful solutions.

The document starts by defining disinformation:

“verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm”

This makes sense and frames the context. That’s a good start. The Action Plan acknowledges that “Evidence shows that foreign state actors are increasingly deploying disinformation strategies to influence societal debates”. This is an accurate assessment.
What about the solutions?

The Plan expects disinformation operations in the 2019 elections

The sole announcement of the Plan, and specifically its stated expectation of increased levels of disinformation, provides a warning. The warning is meant to raise awareness of the problem. However, the psychology underlying the efficiency of disinformation matters here: being alarmist risks inducing expectations that certain things will happen.

The document is too alarmist on deepfakes for 2019

I see this as a major issue of the Action Plan. It was tempting to have something fancy like “deepfakes”, the manipulated video content, mentioned (example). I also saw how high-level policymakers gave in to admiring the beauty of deepfake technology. But the problem is that this technology has not actually been meaningfully used so far, and will likely not be in use on a broad and effective scale in the 2019 European elections. The technology is currently very limited. While an important risk, it may be concerning that so much focus appears to be put on deepfakes, rather than on the existing and real problems, and platforms. Fortunately, the “old” (in fact, the current: today’s and tomorrow’s) techniques are at least remarked on at the end:

“Techniques include video manipulation (deep-fakes) and falsification of official documents; [...] more traditional methods such as television, newspapers, websites and chain emails [...]”

Falsification of documents and image manipulation are ancient and already well-known techniques. It is striking, and rather amusing, to see the official Action Plan include them as something novel.

Seeing how current operations look, the “paradox” is that the actual need for this technique in practice might be limited. Sure, it may sound and look cool, and dangerous (imagining abuse scenarios is not too difficult). But to achieve an actual goal you do not go after cool; you go after efficient. To achieve the objective, what you need most is the right message, streamed via the right delivery channel, to the right group, and at the right time. It might be as simple as a modified image, or even a true message (TrueNews) taken out of context, at the right time.

Techniques actually in use not mentioned

What is striking is that the risk of microtargeted content and Real-Time Bidding is not mentioned in the document, not even once. While advertisement content is mentioned, the underlying technology facilitating the actual targeting is not. It looks like an omission.
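
To illustrate the mechanics, and why the omission matters, here is a minimal Python sketch of how an RTB-style bid request exposes targeting attributes. It is a heavily simplified, illustrative subset of OpenRTB-like data; the field names and segment labels are assumptions made for this example.

```python
# Minimal, illustrative sketch: why Real-Time Bidding enables microtargeting.
# The bid request below is a simplified subset of OpenRTB-style data that a
# bidder receives for every single ad impression (field names are examples).

bid_request = {
    "id": "req-123",
    "device": {"geo": {"country": "PL", "city": "Warsaw"}},
    "user": {
        "id": "cookie-or-device-id-456",
        # Audience segments attached by data brokers (illustrative labels).
        "segments": ["age_50_plus", "interest_politics", "rural_region"],
    },
    "site": {"page": "https://news.example/article"},
}

def should_bid(request, target_segments):
    """Bid only when the impression matches the campaign's target profile."""
    user_segments = set(request["user"].get("segments", []))
    return target_segments.issubset(user_segments)

# A campaign can select its recipients one impression at a time:
if should_bid(bid_request, {"age_50_plus", "interest_politics"}):
    print("Bid placed: the tailored message reaches this exact user.")
```

The point is that a bidder sees per-impression user attributes and can choose to pay only to reach a precisely selected group. The same machinery that sells shoes can deliver a political message.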

New resources

The Plan speaks of more funding and resources (human power, analysts) engaged in the short and medium term. This is very good. Additionally, the Plan foresees cooperation with country-based task forces (which have the right local context and understanding). This is good especially because there are quite a lot of languages in use in the European Union. It is all about the local context.

Rapid Alert System

The problem with disinformation is that it is a fast-paced phenomenon, evolving quickly. The idea of establishing 24/7 monitoring centers and rapid notification of incidents is therefore good.

The challenge here is how to grade the severity of a campaign to avoid ringing overly alarmist bells. In traditional computer security, such severity and likelihood metrics are well established. This is not yet the case for disinformation operations: the metrics simply do not exist. In itself, this is a cause for concern, because it becomes difficult to classify, understand and communicate well. This is a significant risk.
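
To make the gap concrete, here is a purely hypothetical Python sketch of what such a severity metric could look like, loosely inspired by CVSS-style weighted scoring. Every factor, weight, and number below is an illustrative assumption; no established standard exists.

```python
# Purely hypothetical sketch of a disinformation severity metric,
# loosely inspired by CVSS-style scoring. The factors and weights are
# illustrative assumptions, not an established standard (none exists yet).

def campaign_severity(reach, velocity, amplification, topic_sensitivity):
    """Combine normalized factors (each in [0, 1]) into a 0-10 score.

    reach             -- share of the target audience exposed
    velocity          -- how fast the content spreads
    amplification     -- degree of coordinated/bot amplification
    topic_sensitivity -- e.g. election-related content scores high
    """
    weights = {"reach": 0.3, "velocity": 0.2,
               "amplification": 0.2, "topic_sensitivity": 0.3}
    score = (weights["reach"] * reach
             + weights["velocity"] * velocity
             + weights["amplification"] * amplification
             + weights["topic_sensitivity"] * topic_sensitivity)
    return round(10 * score, 1)

# A small, heavily automated operation on an election topic might score:
print(campaign_severity(reach=0.02, velocity=0.3,
                        amplification=0.9, topic_sensitivity=1.0))  # -> 5.5
```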

Example

If you have a 50-strong Twitter botnet in operation and you blow this “incident” out of proportion, you risk delivering and amplifying the botnet owner’s message. Even if you do not deliver the message itself, you still risk delivering a message that potentially undermines public confidence (“something was wrong”).

Internet platform transparency

The Plan intends to compel internet platforms (think Twitter, Facebook, Google) to boost ads transparency. This need is long overdue, and the platforms have famously been hesitant. Fortunately, these days things appear to be changing. The devil is of course in the details: how the transparency layers will be implemented. No established transparency standards exist, so there is a risk that users will struggle to actually understand things. I know this well from a security and privacy research perspective, where users generally have a hard time understanding how technology works, and security/privacy user interfaces are not a solved problem. They are even less of a solved problem in the disinformation sphere.
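
As an illustration of the missing standard, here is a hypothetical sketch of a per-ad transparency record. Every field name and value is an assumption; neither the Plan nor the platforms define such a schema. Even a record this simple hints at the usability challenge: raw targeting criteria are not self-explanatory to most users.

```python
# Hypothetical per-ad transparency record; every field here is an
# assumption, since no established disclosure schema exists.

from dataclasses import dataclass
from typing import List

@dataclass
class AdTransparencyRecord:
    ad_id: str
    sponsor: str                   # who paid for the ad
    spend_eur: float               # how much was spent
    targeting_criteria: List[str]  # why a given user saw it
    impressions: int               # how many users saw it

record = AdTransparencyRecord(
    ad_id="ad-789",
    sponsor="Example Political Group",
    spend_eur=12000.0,
    targeting_criteria=["country:PL", "age:50+", "interest:politics"],
    impressions=250000,
)
print(record)
```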

Self-regulation is limited

The European Union intends to have internet platforms self-regulate (and hopes this works, hinting at regulation if the test fails). We know how this worked (or rather, did not) in data protection and privacy. For a recent example, look at the struggle around the ePrivacy regulation: it was supposed to be a landmark privacy framework, and now there is a risk of it being watered down. Technology companies often resist regulation, only to send mixed signals soon after that in fact they want to be regulated. Why would self-regulation work in the case of disinformation?

The positive aspect of self-regulation is that it can happen fast, especially around sensitive matters like election interference. The big bet is on being fast enough for the 2019 elections. In the long run, this might be an unsustainable approach. It might even introduce business uncertainty for the companies themselves.

New research initiatives

The document speaks of strengthening the role of academic and independent researchers. This is very good. Resources, funding, and access to data would be welcome. Sadly, no details are provided. Will that be millions to big consortiums (inefficient), or maybe 50-200k to dynamic, small teams, possibly even individuals (the DARPA style)?

Furthermore, countries will also be encouraged to create “multi-disciplinary independent fact-checkers and researchers”. We know how this will sadly and likely end: there will be teams composed of people with expertise from different domains. But rather than teams formed out of people each coming from a single distinct discipline, what you need is teams of people with genuinely multi-disciplinary skills. This is a problem because “interdisciplinary research” in Europe (and in some countries in particular) is not really popular (the preferred way is strict divisions).

Fact-checkers

The Plan leans heavily on fact-checkers, supporting their strengthening and cooperation. However, recent research points to the existing limitations of fact-checkers. Meanwhile, a bunch of well-deployed bots can outmaneuver and outpace even the most refined and exquisite boutique fact-checking service. This is obviously a problem.
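
A back-of-envelope calculation illustrates the throughput mismatch. All the numbers below are assumptions picked purely for illustration.

```python
# Back-of-envelope illustration (all numbers are assumptions) of why
# bots outpace fact-checkers: raw throughput.

bots = 50
posts_per_bot_per_day = 100                # a modest automated posting rate
bot_output = bots * posts_per_bot_per_day  # 5,000 items/day

fact_checkers = 10
checks_per_person_per_day = 5              # careful verification is slow
check_capacity = fact_checkers * checks_per_person_per_day  # 50 items/day

print(f"Bot output:     {bot_output} items/day")
print(f"Check capacity: {check_capacity} items/day")
print(f"Ratio:          {bot_output // check_capacity}:1")   # 100:1
```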

Disinformation is a complex problem. We have reached a world where some types of information about current events, or even scientific results, are read differently by different people, based on their personal opinions, beliefs, and agendas. No amount of fact-checking will help if nobody is looking for fact-checked content.

Summary

I am a little concerned that it may be too late for this Action Plan to make an actual difference in the 2019 elections. The Plan would work especially well if there were no real threat actors with actual intent to mount a significant disinformation operation. Luck can also come in handy. Following the elections, this success story could then be advertised as resulting from the Action Plan.

However, there is also some good news for the 2019 elections to the European Parliament. They are a notoriously difficult target for the existing and currently feasible disinformation tactics. There are many countries with many different languages spoken, different local contexts, traditions and dynamics. A broad and concerted disinformation campaign and exploitation simply sounds really ambitious.

You can find the Action Plan document here.

Did you like this analysis? Feel free to reach out: me@lukaszolejnik.com