Antitrust, Public Procurement and State Aid Law

The Regulation of the Use of Artificial Intelligence in the European Union

Written by

Susanne Zühlke

Dr. Matthias von Kaler

Draft Regulation of the European Commission

The European Commission has presented a draft regulation on the use of artificial intelligence (AI) in the European Union (Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) COM/2021/206 final). The following is a brief summary of the main contents of the draft regulation.

Purpose of the regulation

The purpose of the regulation is to improve the functioning of the internal market by establishing a single legal framework for the development, marketing and use of artificial intelligence ‘in accordance with the values of the Union’. Specifically, it aims to protect the health, safety and fundamental rights of citizens and businesses.

Subject of the regulation

The draft regulation consists of five elements: 

  1. harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union,
  2. prohibitions of certain artificial intelligence practices,
  3. specific requirements for high-risk AI systems and obligations for operators of such systems,
  4. harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content, and 
  5. rules on market monitoring and surveillance.

Scope of the regulation

The regulation shall apply to three groups:

  • providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; 
  • users of AI systems located within the Union; 
  • providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.

Definition of an AI system

According to the draft Regulation, an ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I to the regulation and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

Annex I to the regulation lists the following techniques and approaches:

  • machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning,
  • logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems,
  • statistical approaches, Bayesian estimation, search and optimisation methods.

Prohibited practices in the field of AI 

The following AI practices are to be banned:

  • AI systems that deploy subliminal techniques to materially distort a person’s behaviour in a manner that causes or is likely to cause physical or psychological harm,
  • AI systems that exploit a weakness or vulnerability of a specific group of persons due to their age or physical or mental disability in order to influence their behaviour in a way that causes or is likely to cause physical or psychological harm,
  • so-called ‘social scoring’: the use of AI systems by public authorities to evaluate the trustworthiness of persons is to be prohibited where it inadmissibly places persons or groups of persons in a worse position or puts them at a disadvantage,
  • finally, with a few exceptions, the use of biometric ‘real-time’ remote identification systems in publicly accessible spaces (‘facial recognition’) for law enforcement purposes is also to be prohibited.

Classification of certain AI systems as high-risk systems 

AI systems that fulfil certain characteristics are to be classified as high-risk AI systems. Annex III sets out the system characteristics that lead to classification as a high-risk AI system for a variety of areas of life, from biometrics and law enforcement to the administration of justice and democratic processes. This list can be expected to grow continuously as AI applications increase.

AI systems classified as high-risk must meet certain additional requirements:

  • a risk management system must be established, documented and maintained,
  • high-risk AI systems which make use of techniques involving the training of models with data must be developed on the basis of training, validation and testing datasets that meet certain quality criteria,
  • technical documentation of a high-risk AI system must be drawn up before the system is placed on the market or put into service,
  • high-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events during the operation of the system (‘logs’),
  • ‘transparency and provision of information to users’ is required,
  • human oversight: high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.

Further requirements relate to ‘accuracy, robustness and cybersecurity’.

Obligations of providers and users of high-risk AI systems

The regulation contains an extensive catalogue of obligations for providers of high-risk AI systems. They must ensure that their high-risk AI systems comply with the requirements set out above, put in place a quality management system, draw up the technical documentation, carry out the conformity assessment, keep the automatically generated logs, take corrective actions where necessary, notify certain risks to the competent authorities and cooperate with them. In addition, the proposal also imposes obligations on product manufacturers, distributors, importers and users.


Notifying authorities

The proposal provides that each Member State shall ensure the designation or establishment of an authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies, and for their monitoring. The proposal also includes rules on the notification procedure.


European Artificial Intelligence Board

The proposal provides for the establishment of a ‘European Artificial Intelligence Board’ to advise and assist the Commission.

EU database for stand-alone high-risk AI systems

Under the proposal, the Commission, in cooperation with the Member States, shall establish and maintain an EU database containing certain information on registered stand-alone high-risk AI systems.


Outlook

The reactions of experts to the draft were mixed, as a public hearing of the German Bundestag’s Committee on Digital Affairs revealed last autumn. Among other things, it was criticised that the definition of ‘artificial intelligence’ is too broad: it would bring within the scope of the regulation software that has nothing to do with AI, such as calculators. The Commission is currently revising the draft. It remains to be seen whether it will succeed in balancing the facilitation of AI development with regulating AI sufficiently to prevent harm.