European Union regulating AI - allowing the use of deepfakes

After the success of the GDPR, Europe is doubling down on setting the standards in Artificial Intelligence. This became clear to everyone when a version of the “REGULATION ON A EUROPEAN APPROACH FOR ARTIFICIAL INTELLIGENCE” draft leaked. While it contains interesting AI governance ideas, I will withhold a fuller assessment until the final version. Here I limit myself to a small observation.

[Edit 21/04/21: I have slightly updated this post to reflect the officially released proposal]

The European Union will allow the use of deepfake technologies. Yes, the technology that can put someone’s face into realistic-looking images or videos, or synthetically create audio content, to make anyone appear to be saying anything.

This is evident from Article 1 of the leaked regulation: “It also lays down harmonised transparency rules for AI systems intended to interact with natural persons and AI systems used to generate or manipulate image, audio or video content”.

Which is retained in the officially-released proposal:

"harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content"


But it is even more evident in Article 43 (“Transparency”): “users of AI systems who use the same to generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a reasonable person to be authentic [or truthful], shall disclose that the content has been artificially created or manipulated. This obligation shall not apply where necessary for the purposes of safeguarding public security [and other prevailing public interests] or for the exercise of a legitimate right or freedom of a person and subject to appropriate safeguards for the rights and freedoms of third parties.”

Which is rephrased in the official proposal:

"Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence."
...
Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences (...)



What’s that? Yes, the European Union’s regulation allows the use of deepfakes.

This is also present in Recital 68: “(...) Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic ...

Which is retained in Recital 70 of the official release:

"Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin."


So again, the European Union is foreseeing, and allowing, the use of deepfakes.

The use of deepfake technology must be disclosed: “providers of AI systems shall ensure that AI systems intended to interact with natural persons are designed and developed in such a manner that natural persons are notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use”.

We know how such disclosures might end up looking. A tiny-font disclaimer? Perhaps not even overlaid on the media itself, but buried in some “transparency policy” on the provider’s website? Sadly, in practice people are very bad at detecting manipulated content.

Risks of deepfakes

Is it a risk? I’ll let you be the judge, but let’s imagine how this could be used or misused:

  • In automatic, microtargeted commercials where a “famous face” synthetically delivers a fixed script. Think Einstein selling pizza.
  • In the industrial-grade creation of political messages or spots: PR, political PR, speech generation. Think of a political leader or a Famous Face sipping coffee and eating cheesecake while hundreds or thousands of pieces of multimedia content are automatically manufactured, (micro)targeted, and delivered to viewers. Such a Famous Face probably does not even need to be alive anymore. Perhaps this makes political PR, or disinformation, a bit simpler?
  • In an Enemy Leader scenario: a message announcing that Your Country’s Forces are obliterated, so Prepare to Surrender. Or not.

It doesn't need to go this way

While some rumors circulated that deepfakes were already being used by cybercriminals, these turned out to be false. At least one politician has used deepfake technology to create video material for his political campaign, making himself speak in different languages. So yes, it is possible to use the technology in legitimate ways. Meanwhile, security authorities are increasingly concerned with deepfakes being used for nefarious purposes (disinformation). Facebook and Twitter have banned the use of deepfakes. The US Department of Defense prohibits their production. In 2018, Europe was concerned about deepfakes appearing in the U.S. 2019 elections; back then I explained why this was not an issue.

The use of deepfakes has the potential to transform how societies, and political debate, function. This might be a route to a point of no return.

Did you like the assessment and analysis? Any questions, comments, complaints, or offers for me? Feel free to reach out: me@lukaszolejnik.com