Artificial Intelligence and AI governance are hot topics this decade. The European Union has a rather ambitious attempt to regulate AI (project here). In this post, I look at the proposal through a technical lens, paying particular attention to cybersecurity and privacy.

The goal of the regulation is “the development, marketing and use of artificial intelligence in conformity with Union values”. But what is AI? AI is the algorithmic processing of data to reach some conclusions, feedback, or classification. The regulation defines it as follows:

“‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”

Can you regulate the use of math? Sure (well, its uses, in some contexts). The following artificial intelligence techniques and approaches are subject to the regulation (Annex I):

  • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
  • Statistical approaches, Bayesian estimation, search and optimization methods.

It may seem as if someone took a random book on machine learning and copied the titles from the table of contents. Not that I'm complaining. I leave it to the reader to establish whether simple heuristics of the kind if(random()>0.5) { … } are also subject to this regulation (a “logic-based approach”?).
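For illustration only, such a trivial “decision system” might look like this (a toy sketch of my own, not anything the regulation names):

```python
# A deliberately trivial "decision system" (my own toy example): it produces a
# "decision influencing the environment" from a single hard-coded random rule.
# Whether this counts as a "logic-based approach" under Annex I is the question.
import random

def approve_request() -> bool:
    return random.random() > 0.5

print("approved" if approve_request() else "rejected")
```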


The regulation envisages special treatment for so-called high-risk AI, for which very specific requirements must be in place. There are also cases exceeding this mark (prohibited AI) and uses that fall outside the regulation altogether: for example, some uses of AI by law enforcement, or AI used in military settings (“This Regulation shall not apply to AI systems developed or used exclusively for military purposes“), such as weapons systems, or low-risk AI. Most work is needed in the case of high-risk AI, the focus of this post. But here’s an example of forbidden systems:

“market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities “

So it’s about the uses. Another noteworthy prohibition is the social score:

“The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited“

Social scoring is prohibited in Europe.


List of high-risk AI systems

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, workers management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

Sometimes with interesting sub-details. For example, under law enforcement we find “AI systems intended to be used by law enforcement authorities to detect deep fakes”.

Some other examples of high-risk AI systems:

“machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices … the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity … AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property “.

In the case of law enforcement uses, the following applications are examples of high-risk AI: “polygraphs and similar tools or to detect the emotional state of natural person, to detect ‘deep fakes’”.

Transparency in AI systems

This regulation requires the ability to explain to the user if and when AI is used, but also how it’s used, including how the individual decisions were made (logging requirements). It does not mention explainable AI directly. Figuring this out will likely be up to the implementor/user, unless the European AI Board issues guidance.

“a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately”
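How this, together with the logging requirement, could be met in practice is left open. As a minimal sketch (entirely my own, with hypothetical field names), per-decision record-keeping in the spirit of Article 12 could look like this:

```python
# Sketch of per-decision record-keeping in the spirit of Article 12.
# The wrapper and field names are hypothetical, not taken from the Regulation.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai_decision_log")

def log_decision(model_version: str, inputs: dict, output: str, confidence: float) -> None:
    # One structured, timestamped record per automated decision, so that
    # individual outputs can later be traced, audited and interpreted.
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    log.info(json.dumps(record))

log_decision("credit-scoring-v1.3",
             {"age_band": "30-39", "income_band": "B"},
             "refer_to_human", 0.62)
```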

Concerning the technical documentation needed (Article 11), it is fortunately made clear that the onus is on technical, rather than purely legal, aspects. The list of points to cover is very long and detailed. It includes not only “the versions of relevant software or firmware and any requirement related to version update”, but also:

  • “the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices”
  • “the description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system”
  • “data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection)”
  • “metrics used to measure accuracy, robustness, cybersecurity”
  • “the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system”
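How such documentation could be kept in a structured, machine-readable form is up to the provider. A minimal sketch (the field names are my own shorthand for the points quoted above, and all values are made up):

```python
# Sketch of a structured record for Article 11 technical documentation.
# Field names are my own shorthand for the documentation points quoted above.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TechnicalDocumentation:
    system_name: str
    software_versions: list          # versions of relevant software/firmware
    general_logic: str               # general logic of the AI system and algorithms
    key_design_choices: str          # rationale and assumptions made
    system_architecture: str         # how components build on / feed into each other
    compute_resources: str           # resources used to develop, train, test, validate
    training_data_provenance: str    # where the training data came from
    labelling_and_cleaning: str      # labelling procedures, outlier handling, etc.
    metrics: dict = field(default_factory=dict)  # accuracy, robustness, cybersecurity

doc = TechnicalDocumentation(
    system_name="resume-screening-assistant",
    software_versions=["model 2.1", "serving stack 0.9"],
    general_logic="Gradient-boosted classifier ranking applications by fit score.",
    key_design_choices="Trained only on anonymised historical hiring data.",
    system_architecture="Feature pipeline -> model -> human review queue.",
    compute_resources="Single GPU node, about 6 hours of training.",
    training_data_provenance="Internal HR records, 2015-2020, EU subsidiaries only.",
    labelling_and_cleaning="Labels from hiring outcomes; outliers removed by IQR rule.",
    metrics={"accuracy": 0.87, "accuracy_women": 0.84, "accuracy_men": 0.88},
)
print(json.dumps(asdict(doc), indent=2))
```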


This assessment and transparency exercise will be heavily technology-driven and informed.


Prohibited AI

The following uses of AI are prohibited (i.e. placing them on the market, putting them into service, or using them):

“placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm”


Social scoring is prohibited

“classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score“

AI assessment

High-risk AI will have to undergo a very specific type of assessment, including:

“identification and analysis of the known and foreseeable risks … estimation and evaluation of the risks that may emerge when the high-risk AI system …”

This also includes validations, tests, etc. Does this sound like a complex process?

Risk management must be applied. Such analysis must be technical and made with knowledge of AI methods. Think of this as an advanced data protection impact assessment. Advanced, because it is less about legal aspects and more about technical aspects.
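As a rough illustration (an entirely hypothetical structure of my own), such a risk-management exercise could maintain a register along these lines:

```python
# Hypothetical sketch of a risk register for a high-risk AI system, covering
# identified risks and planned mitigations. Entries and ratings are made up.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str            # identified risk
    affected: str        # who or what is affected
    likelihood: str      # e.g. low / medium / high
    severity: str        # e.g. low / medium / high
    mitigation: str      # planned technical or organisational measure
    residual_risk: str   # assessment after mitigation

register = [
    RiskEntry("Training-data poisoning via third-party data feed", "model integrity",
              "medium", "high", "provenance checks, hold-out canary set", "low"),
    RiskEntry("Lower accuracy for under-represented age groups", "natural persons",
              "high", "medium", "re-sampling, per-group accuracy reporting", "medium"),
]
for entry in register:
    print(f"- {entry.risk}: {entry.likelihood}/{entry.severity}, mitigation: {entry.mitigation}")
```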

Details of technical assessment

Further grounds for advanced technical assessments.

“the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider; ... the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used ... where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics“

How to do this in practice? There are no established standards for the kind of advanced transparency reporting required by this regulation. One part is describing how the AI system is designed and how it works, which may be done with, for example, a solution such as model cards (or the toolkit):

“Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups … Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information“
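A minimal sketch of a hand-rolled model card, in plain Python rather than the actual toolkit API, with made-up names and figures:

```python
# Sketch of a minimal model card rendered to Markdown. Plain Python is used
# here instead of the actual model-card toolkit API; all content is illustrative.
model_card = {
    "model": "pedestrian-detector v0.4",
    "intended_use": "Driver-assistance alerts; not for fully automated braking.",
    "training_data": "Proprietary dashcam dataset, EU roads, 2018-2020.",
    "evaluation": {
        "overall_recall": 0.93,
        "recall_daylight": 0.96,
        "recall_night": 0.81,   # disaggregated results expose weak conditions
    },
    "limitations": "Degraded performance at night and in heavy rain.",
}

def render(card: dict) -> str:
    lines = [f"# Model card: {card['model']}", ""]
    for key, value in card.items():
        if key == "model":
            continue
        lines.append(f"**{key}**: {value}")
    return "\n".join(lines)

print(render(model_card))
```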

Quality of data

On technical grounds, the regulation requires a technical approach to ensuring that quality data is used for training and testing:

“High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system … training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting”

Very often, ML engineers simply get data and run their magic; they do not necessarily delve into the details of the data, its nature, its representativeness, and so on. But the regulation explicitly covers the data pipeline:

“training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria … Training, validation and testing data sets shall be subject to appropriate data governance and management practices.”

Article 10.2 lists the required steps of the analysis and its angles: “the relevant design choices; data collection; relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation; the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent; a prior assessment of the availability, quantity and suitability of the data sets that are needed; examination in view of possible biases; the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed”.
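In practice, at least part of Article 10.2 can be supported by automated checks over the data pipeline. A minimal sketch (synthetic records, my own check names):

```python
# Minimal sketch of automated data-governance checks in the spirit of Article 10.2:
# completeness, duplicates, and per-group counts. Records are synthetic.
records = [
    {"age_group": "18-29", "income": 2100, "label": 1},
    {"age_group": "30-49", "income": 3400, "label": 0},
    {"age_group": "30-49", "income": None, "label": 1},   # missing value
    {"age_group": "50+",   "income": 2900, "label": 0},
    {"age_group": "30-49", "income": 3400, "label": 0},   # duplicate
]

def check_completeness(rows):
    # Fraction of records with at least one missing field.
    missing = sum(any(v is None for v in r.values()) for r in rows)
    return missing / len(rows)

def check_duplicates(rows):
    # Number of exact duplicate records.
    seen = {tuple(sorted(r.items())) for r in rows}
    return len(rows) - len(seen)

def group_counts(rows, key):
    counts = {}
    for r in rows:
        counts[r[key]] = counts.get(r[key], 0) + 1
    return counts

print("fraction of incomplete records:", check_completeness(records))
print("duplicate records:", check_duplicates(records))
print("records per age group:", group_counts(records, "age_group"))
```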

This is a lot of work. But perhaps, as a result of this regulation, the assessment of AI (privacy, cybersecurity, robustness, ethics) will be less about hand-waving and more about actual assessment? This also means having appropriate technical documentation (Article 11), use logs (Article 12), etc. Article 15.2 makes it clear that the following must be accounted for: “The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use”.
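Declaring accuracy, including per group of persons, is straightforward to support technically. A small sketch with made-up predictions:

```python
# Sketch: overall and per-group accuracy, as could be declared under Article 15.2
# and the documentation requirements above. Groups and predictions are made up.
samples = [
    # (group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]

def accuracy(rows):
    return sum(t == p for _, t, p in rows) / len(rows)

print("overall accuracy:", accuracy(samples))
for group in sorted({g for g, _, _ in samples}):
    subset = [r for r in samples if r[0] == group]
    print(f"accuracy for {group}:", accuracy(subset))
```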

AI is about responsibility:

“The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws.”

Note about adversarial AI

Consider data poisoning attacks, or the newer data reordering attacks, which demonstrate the possibility of integrity or availability attacks and the risk of backdooring models. While poisoning is covered, data reordering attacks are not mentioned in the Regulation text, because they are new. This highlights how quickly the regulation can fall behind the techniques: only days passed between the announcement of the regulation and the disclosure of the new attack techniques targeting the pipeline layer that provides data to ML training.

But was it a matter of being up to date, or merely of using some fashionable keywords? The regulation does not reference any privacy attacks, such as data inference, where private data is retrieved directly from the learned model. It is difficult to justify such an omission.


Adversarial attacks include techniques of planting tampered training (or input) data to distort the operation of an AI/ML system, for example to make the model classify cats as dogs. Indeed, AI/ML systems may be biased in many ways. While the AI Regulation strives to be up to date and speaks about cybersecurity risks, privacy risks such as inferring data from the model (model inversion, i.e. inference attacks, or model stealing) are not mentioned for some reason. Perhaps this highlights the bias of the AI Regulation authors. It turns out it is not only AI models that may be biased.
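To make the poisoning risk concrete, here is a toy sketch of my own (synthetic data, a deliberately simple 1-nearest-neighbour “cat vs dog” classifier): flipping a fraction of the training labels noticeably degrades accuracy.

```python
# Toy label-flipping data poisoning demonstration on synthetic 2-D "cat"/"dog"
# clusters with a 1-nearest-neighbour classifier. Entirely illustrative.
import random

random.seed(0)

def make_data(n, label, centre):
    # Gaussian cluster of 2-D feature vectors standing in for one class.
    return [([random.gauss(centre[0], 1.0), random.gauss(centre[1], 1.0)], label)
            for _ in range(n)]

def predict(train, x):
    # 1-nearest-neighbour: copy the label of the closest training example.
    nearest = min(train, key=lambda ex: (x[0] - ex[0][0]) ** 2 + (x[1] - ex[0][1]) ** 2)
    return nearest[1]

def accuracy(train, test):
    return sum(predict(train, x) == y for x, y in test) / len(test)

train = make_data(200, "cat", (0, 0)) + make_data(200, "dog", (4, 4))
test  = make_data(100, "cat", (0, 0)) + make_data(100, "dog", (4, 4))
print("clean accuracy:   ", accuracy(train, test))

# Poisoning: an attacker controlling part of the data pipeline flips 40% of labels.
poisoned = [(x, ("dog" if y == "cat" else "cat")) if random.random() < 0.4 else (x, y)
            for x, y in train]
print("poisoned accuracy:", accuracy(poisoned, test))
```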


Deep fakes

The regulation allows the use of deep fakes, subject to some transparency measures such as the annotation of such synthetic content. This still carries risk. I have a dedicated analysis of it.

Is it a risk? I’ll let you be the judge, but let’s imagine how this could be used or misused:

  • In automatic, microtargeted commercials where a “famous face” synthetically delivers a fixed script. Think Einstein selling pizza.
  • In the industrial-grade creation of political messages or spots, so in PR, political PR, and speech generation. Think of a political leader or a Famous Face sipping coffee and eating cheesecake while hundreds or thousands of pieces of multimedia content are automatically manufactured, (micro)targeted and delivered to viewers. Such a Famous Face probably does not even need to be alive anymore. Perhaps this makes political PR, or disinformation, a bit simpler?


The application is broad

Article 2.1(c) makes it clear that this Regulation applies to the whole world: “providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union”. Specifically, even if the use of AI happens in a third country, if the output or effects reach Europe, such use of AI is subject to this regulation. This is clarified in a recital: “...that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law“. This means that the effects/outcomes of AI processing may fall under the regulation, including its prohibitions, even when the processing is done in third countries.


AI requirements may differ

“Training, validation and testing data sets shall be relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used ... to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within”

This focus on intended use means that the required data properties (statistical representativeness, etc.) may differ between applications, for example ones intended for different markets. The data needs for a system deployed in France might differ from those for one deployed in Poland.
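One crude way to make “representative for the intended setting” operational is to compare the training set’s distribution of a relevant attribute against the target population. A sketch with hypothetical figures:

```python
# Sketch: comparing the age-group distribution of a training set against a
# hypothetical target population for the intended market. A large total
# variation distance would flag that the data set is not representative there.
training_share   = {"18-29": 0.45, "30-49": 0.40, "50+": 0.15}
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}  # intended market

def total_variation_distance(p, q):
    # 0 means identical distributions, 1 means completely disjoint.
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

tvd = total_variation_distance(training_share, population_share)
print(f"total variation distance: {tvd:.2f}")
if tvd > 0.2:   # hypothetical acceptance threshold
    print("training data does not look representative of the intended setting")
```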

AI incident notification

Incidents related to AI will need to be notified to an AI authority.

“‘serious incident’ means any incident that directly or indirectly leads, might have led, or might lead to any of the following:
(a) the death of a person or serious damage to a person’s health, to property or the environment,
(b) a serious and irreversible disruption of the management and operation of critical infrastructure.”


Penalties

The AI Regulation will give grounds for issuing administrative fines for non-compliance:

  • Up to 30 000 000 EUR or up to 6% of total worldwide annual turnover for the preceding financial year, whichever is higher, for “non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5” (“prohibited AI”) or with Article 10, “data governance”;
  • Up to 20 000 000 EUR or 4% for non-compliance with any other requirement of the Regulation;
  • Up to 10 000 000 EUR or 2% for lying (providing incomplete or misleading information) to the national competent authorities.

Such penalties, especially the 30M/6% tier that covers the obligation to ensure data quality, are high. Such a regime may well transform how AI systems are developed and used.
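For a sense of scale, the “whichever is higher” rule is simple arithmetic; a small sketch:

```python
# Sketch: the "whichever is higher" rule for the top fine tier
# (EUR 30 000 000 or 6% of total worldwide annual turnover).
def max_fine(turnover_eur: float, fixed_cap: float = 30_000_000, pct: float = 0.06) -> float:
    return max(fixed_cap, pct * turnover_eur)

# For a company with EUR 2 billion annual turnover, the 6% branch dominates:
print(f"EUR {max_fine(2_000_000_000):,.0f}")   # EUR 120,000,000
```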

Summary

This is the first attempt in the world to regulate AI. In this sense it is an ambitious and impressive undertaking, demonstrating what AI governance may look like in the future.

The regulation is pretty good, even though, for some reason, it is biased in that it does not reference the data protection and privacy risks that may exist in AI systems.

We should be happy that the AI Regulation offers detail as to the demands of the technical assessments. Such assessments do not have a formal name; we may simply call them AI assessments. What matters is that they will be technical and rather advanced.

I am certainly interested in the assessment aspects.

Did you like the assessment and analysis? Any questions, comments, complaints, or offers for me? Feel free to reach out: me@lukaszolejnik.com