The EU’s new “European AI Office” – a focus on financial services
RegCORE Client Alert | EU Digital Single Market
QuickTake
On 24 January 2024, the European Commission (the Commission) published a Decision establishing the “European Artificial Intelligence Office” (European AI Office or EU AIO), which is tasked with coordinating (without duplicating) the activities of national competent authorities (NCAs) and of the bodies, offices and agencies of the EU in the supervision of AI systems, as contemplated by the EU’s “AI Act”. The AI Act has a number of potentially transformational effects on financial services and market participants (see our separate Client Alert on this EU Regulation).
The Decision entered into force on 21 February 2024 and the EU AIO is beginning to take operational shape (although, at the time of writing, little beyond the EU AIO’s website, last updated 8 March 2024, was available). In terms of hierarchy, the EU AIO forms part of the administrative structure and the annual management plan of the Commission’s Directorate-General for Communications Networks, Content and Technology (aptly shortened to DG-CNECT).
As discussed in this Client Alert, the European AI Office’s responsibilities apply to the Single Market as a whole but its operations may also have a specific impact on supervised firms, markets and their counterparties, clients and customers.
While the EU AIO’s responsibilities and tasks may grow over time, and its name may change too, for now its acronym carries a perhaps unintended echo of the children’s classic “Old MacDonald Had a Farm”. In some ways the EU AIO will have to ensure that all the different stakeholders it has to harmonise can hum the same tune, so that the AI Act and the EU’s approach to AI yield their intended benefits. Effective delivery of that goal will, of course, be considerably more complex to orchestrate.
Key responsibilities of the EU AIO
The overarching mission of the European AI Office is to “support the development and use of trustworthy AI, while protecting against AI risks” and thus to support the “EU’s approach to AI”. In delivering that mission, the EU AIO aims to play a key role, for and on behalf of the Commission and in engagement with various other Directorates-General (including DG-FISMA), in implementing the AI Act. It will do this by coordinating and supporting the governance bodies of the EU-27 Member States in their tasks. The EU AIO will also enforce the rules for general-purpose AI (GPAI) models and may do so alongside other enforcement bodies acting under their own mandates.
The European AI Office is empowered to exercise directly certain powers that the AI Act allocates to the Commission. These include the ability to perform assessments of GPAI models, to solicit data and metrics from providers of those models, to draft or contribute to rulemaking instruments and guidelines, to provide technical support and input into supervisory expectations, and to impose penalties. To carry out all of its tasks effectively, on the basis of evidence and foresight, the European AI Office aims to continuously monitor the AI ecosystem and technological and market developments, as well as the emergence of systemic risks and any other relevant trends.
The European AI Office states that it will have the following tasks:
- Supporting the AI Act and enforcing general-purpose AI rules – The European AI Office will make use of its expertise to support the implementation of the AI Act by:
- Assisting the Commission in the preparation of relevant Commission Decisions and of implementing and delegated acts;
- Contributing to the coherent and uniform application of the AI Act across the Member States, including the set-up of advisory bodies at EU level, facilitating support and information exchange as well as assisting the Commission in the preparation of standardisation requests, evaluation of existing standards and the preparation of common specifications for the implementation of the EU AI Act;
- Developing tools, methodologies and benchmarks for evaluating capabilities and reach of GPAI models, and classifying models with systemic risks;
- Drawing up state-of-the-art codes of practice to flesh out the rules, in cooperation with leading AI developers, the scientific community and other experts;
- Investigating possible infringements of rules, including evaluations to assess model capabilities and requesting providers to take corrective action;
- Preparing guidance and guidelines, implementing and delegated acts, and other tools to support monitoring of compliance with the AI Act;
- Strengthening the development and use of trustworthy AI across the Single Market – The European AI Office, in collaboration with relevant public and private actors and the startup community, contributes to this by:
- Advancing actions and policies to reap the societal and economic benefits of AI across the EU;
- Providing advice on best practices and enabling ready access to AI sandboxes, real-world testing and other European support structures for AI uptake;
- Supporting the accelerated development, roll-out and use of trustworthy AI systems and applications that bring societal and economic benefits and that contribute to the competitiveness and economic growth of the EU, in particular by promoting innovation ecosystems in cooperation with relevant public and private actors and the startup community;
- Aiding the Commission in leveraging the use of transformative AI tools and reinforcing AI literacy;
- Fostering international cooperation – contributing to a strategic, coherent and effective EU approach by:
- Promoting the EU’s approach to trustworthy AI, including collaboration with similar institutions worldwide;
- Fostering international cooperation and governance on AI, with the aim of contributing to a global approach to AI; and
- Supporting the development and implementation of international agreements on AI, including by supporting Member States.
In delivering the above, the European AI Office will have to collaborate with a diverse range of national and EU-level institutions, experts and stakeholders. At an EU institutional level, the European AI Office will work closely with the European Artificial Intelligence Board, formed by Member State representatives, and with the Commission’s European Centre for Algorithmic Transparency (ECAT). Moreover, a Scientific Panel of independent experts is intended to ensure a strong link with the scientific community. Further technical expertise will be gathered through an Advisory Forum, whose members represent a balanced selection of stakeholders, including industry, startups and SMEs, academia, think tanks and civil society. Further initiatives to foster trustworthy AI development and uptake across the EU are mapped in the “Coordinated Plan on AI”.
In addition to these standing channels of communication, the European AI Office may also partner with individual experts and organisations. It will also create fora for cooperation among providers of AI models and systems, including GPAI, and similarly for the open-source community, to share best practices and contribute to the development of codes of conduct and codes of practice.
On wider-reaching outreach channels, including into the EU financial services community, the European AI Office will also be tasked with overseeing the “AI Pact”. The AI Pact, which forms part of the European AI Alliance and predates the implementation of the AI Act, aims to facilitate engagement between businesses, the Commission and other stakeholders in sharing best practices and joint activities.
The impact on financial market participants
The European AI Office’s activities are likely to have an impact on financial market participants and how they use AI, in accordance with the EU AI Act as well as with the supervisory expectations on such topics set by the combined breadth of NCAs and EU-level bodies and agencies that form part of the European System of Financial Supervision (ESFS), including the European Banking Authority (EBA), the European Securities and Markets Authority (ESMA) and the European Insurance and Occupational Pensions Authority (EIOPA).
As discussed in our Client Alert on the AI Act’s impact on the EU financial services sector, the EU AIO will become relevant “when AI meets financial services”. Its impact may be significant in shaping the direction of supervisory scrutiny within the ESFS, in particular where it concerns how to monitor, or alternatively directly monitoring, the compliance of financial services providers and market participants with the EU AI Act and with supervisory expectations on AI-related topics such as data quality, governance, transparency, accountability and ethics.
The EU AIO’s activities may also have a specific impact on how financial services providers and market participants use AI in their business models, products and services. For example, the EU AI Act requires high-risk AI systems to undergo a conformity assessment before being placed on the market or put into service, which may entail additional costs and delays for the developers and users of such systems. The EU AI Act also imposes obligations on the providers and users of high-risk AI systems to ensure adequate human oversight, information provision, accuracy, robustness, security and protection of fundamental rights. These obligations may affect the design, functionality and performance of AI systems, as well as the liability and redress mechanisms for any harm or damage caused by them.
The EU AIO’s role is not only to supervise and enforce the EU AI Act, but also to support and promote the development and uptake of AI in the EU, in line with the EU’s broader strategy and objectives on digital transformation, innovation and competitiveness. The EU AIO will also have to engage with international partners and stakeholders, such as third countries, international organisations, industry associations, civil society groups and academia, to foster cooperation and dialogue on AI-related issues and challenges, both generally and as they apply to financial services.
The bigger picture and outlook ahead
While the European AI Office is perhaps more than just a taskforce within the Commission’s DG-CNECT, questions remain as to whether it will have sufficient autonomy and budget to act fully and to challenge the EU-level and national authorities with which it will have to cooperate in order to deliver on its mission. Although the European AI Office is at the start of its institutional journey, the EU is committed to implementing substantial measures, including ones that go beyond the AI Act, to enhance the framework of supervision and of assistance to national authorities and their mandates in how AI can be advanced for the benefit of the EU’s Single Market overall and, in particular, “when AI meets financial services”.
In summary, just as Old MacDonald had different animals on his farm, each with their own sound and function, the EU AIO will have to deal with different types of AI systems in financial services, each with their own features and risks. For example, the EU AIO may encounter AI systems for credit scoring, fraud detection, robo-advice, algorithmic trading, insurance underwriting, risk management and compliance, among others. Just as Old MacDonald had to take care of his animals and keep them healthy and happy, the EU AIO will have to ensure that AI systems in financial services are safe, trustworthy and beneficial for users, customers and society. Just as Old MacDonald had to orchestrate and sing along with his animals, the EU AIO will have to coordinate with the NCAs and the ESFS and encourage them to cooperate on a harmonised supervision of AI systems used in financial services. And just as Old MacDonald had a farm that was part of a larger ecosystem and community, the EU AIO will have to operate within the EU's legal and policy framework and engage with the global AI landscape.
About us
PwC Legal is assisting a number of financial services firms and market participants in forward planning for changes stemming from relevant related developments. We have assembled a multi-disciplinary and multijurisdictional team of sector experts to support clients in navigating challenges and seizing opportunities, as well as in proactively engaging with their market stakeholders and regulators.
Moreover, we have developed a number of RegTech and SupTech tools for supervised firms, including PwC Legal’s Rule Scanner tool, backed by a trusted set of managed solutions from PwC Legal Business Solutions. Rule Scanner allows for horizon scanning and risk mapping of legislative and regulatory developments, as well as sanctions and fines, from more than 1,500 legislative and regulatory policymakers and other industry voices in over 170 jurisdictions that impact financial services firms and their business.
Equally, leveraging our Rule Scanner technology, we offer a further solution that allows clients to digitise financial services firms’ relevant internal policies and procedures and to create a comprehensive documentation inventory, with an established documentation hierarchy, an embedded glossary and version control over a defined backward- and forward-looking timeline. This ensures that changes in one policy are carried through to other policy and procedure documents, that critical path dependencies are mapped and that legislative and regulatory developments are flagged where they may require action to be taken in such policies and procedures.
If you would like to discuss any of the developments mentioned above, or how they may affect your business more generally, please contact any of our key contacts or PwC Legal’s RegCORE Team via de_regcore@pwc.com or our website.