Public Business Law

The new EU AI-Act and its impact on the public sector

Written by

Dr. Nicolas Sonder

The European Union has taken a significant step towards a harmonised regulatory framework for Artificial Intelligence (AI). On 14 June 2023, the European Parliament adopted its position on the AI Regulation, first proposed by the European Commission in April 2021. The regulation, which is expected to be finalised by the end of the year through an agreement with the EU Council, aims to create a comprehensive legal framework for the use of AI systems within the EU. The AI Regulation opens up new possibilities and opportunities, particularly in the public sector. This blog post examines the regulation's impact on the public sector and its potential for modernisation.

A risk-based approach

The AI Regulation takes a risk-based approach: AI systems are analysed and classified according to the risk they pose to users, and are subject to varying degrees of regulation depending on the level of risk. In the public sector, AI systems offer enormous opportunities, especially in the context of smart cities. By analysing data collected by networked systems, AI can identify patterns, optimise its algorithms and support better decisions. Given the risk potential of such intelligently networked government and administrative action, these systems will likely be subject to a comprehensive assessment under the provisions of the AI Regulation. A legally compliant design may well require strict limits on a system's autonomous decision-making power: the system might, for example, only be permitted to propose improvements and solutions, which would then require human review and validation. AI systems such as chatbots and personal voice assistants can also significantly speed up and improve administrative processes; under the proposed regulation, such systems mainly need to comply with transparency and legality requirements.

Creation of a legally secure experimentation space

The EU regulation aims to make AI systems safe, transparent, accountable, non-discriminatory, environmentally friendly and subject to human oversight. This creates a legally secure space for experimentation that should build trust in and acceptance of AI among citizens and businesses. This is particularly important in the traditionally risk-averse public sector. At the same time, deploying AI requires a careful analysis of the existing legal framework and the available scope for design. This brings key challenges, including determining legal platform governance, protecting data and safeguarding against liability risks. In addition, AI-based interventions must be integrated into security law, taking into account the protection of fundamental rights and the responsibilities of different authorities. In mass procedures, the legal conditions must guarantee data protection, equal treatment and efficient administrative procedures. These challenges show that defining the legal framework for AI in the public sector is a complex task: it requires a careful balancing of the opportunities and risks of AI, as well as continuous adaptation of the legal framework to technological developments.

Implementation at national level

The success of the AI Regulation depends largely on its implementation at national level. Member States must, for example, establish a competent supervisory authority and a national AI transparency register, and should work continuously on the development of industry standards and norms. Under these conditions, the proposed law offers the public sector a good opportunity to modernise within a legitimised yet flexible legal framework.


Conclusion

The European Union AI Regulation establishes a harmonised regulatory framework for the use of AI in the public sector. It is based on a risk-based approach that provides for different rules depending on the level of risk. The regulation offers opportunities for the modernisation of the public sector, but requires careful analysis of the legal framework and implementation at national level. Successful implementation will enable a legally secure space for experimentation and boost confidence in AI.