Agreement on the EU AI Act - what we know and what it means...
- Adam Smith

- Dec 14, 2023
- 7 min read
On Saturday, 9th December 2023, the EU's institutions came together to announce (somewhat triumphantly) that the previous day they'd reached full political agreement on the content of the EU's so-called AI Act. The landmark legislation looks set to be the first piece of AI-focused legislation that makes it across the line in any major nation or bloc...although as the final text of the regulation is likely some weeks away, technically there's still time for a competitor to come along and steal that particular accolade.
Without the final text, it's impossible to provide definitive advice to companies who currently use or are contemplating the introduction of artificial intelligence systems within their businesses. That said, between them the European Commission, Parliament and Council have provided sufficient information to at least begin to prepare for compliance.
What we know
The AI Act will take the form of a regulation, meaning it will apply directly across all EU member states and the additional EEA states of Norway, Iceland and Liechtenstein, just like the GDPR. The aim of the legislation is to balance regulation with innovation, protecting individuals' rights, the rule of law and democracy while providing a landscape that enables companies - with a particular focus on assisting SMEs - to develop the systems that the EU hopes will put it at the forefront of the global AI sector.
Definition and scope
The Council of the EU explains that the definition of an 'AI system' will play it safe and align with the definition proposed by the OECD:
An AI system is a machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The regulation will not seek to apply itself outside the scope of EU law, and it is not intended to affect EU member states' primacy on national security and defence matters. Likewise, R&D will be exempt, as will the use of AI systems for 'non-professional' (personal?) reasons.
Prohibited applications
Without even thinking about the potential End-Of-Days scenarios many (quite rationally) suggest AI could bring about, the EU institutions have agreed that certain applications pose unacceptable risks to citizens' rights and democracy, and have thus decided to prohibit:
- biometric categorisation systems using sensitive characteristics such as special categories of personal data (racial/ethnic background, sexual orientation, health status etc.);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent people's free will; and
- AI used to exploit the vulnerabilities of individuals (due to their age, disability, or social or economic situation).
Two-tier approach to regulation
The AI Act will regulate AI systems according to the level of risk they pose. The Commission has noted that the vast majority of AI systems pose minimal risk and will thus benefit from a "free-pass and absence of obligations". AI systems falling into the high-risk category - for example those used to determine access to educational institutions or in recruitment, or those that are medical devices - will be subject to stringent requirements, from ensuring human oversight to providing detailed documentation and undertaking impact assessments.
'General purpose AI systems' and foundation models
So-called ‘general purpose AI systems’, which can be utilised for various purposes, will be subject to specific transparency requirements, which will include drafting technical documentation, complying with EU copyright law and providing detailed summaries of the content used to train the models. Where they carry systemic risks, additional obligations will apply, including evaluations, assessment, testing and risk mitigation.
Foundation models - those large-scale models trained on a broad range of generalised and unlabelled data and capable of performing a vast range of general tasks such as understanding and conversing in normal language, and creating text and images - will also need to comply with specific transparency obligations before being placed on the market, with stricter requirements for ‘high impact’ models.
Promoting innovation
With MEPs seeking to ensure that businesses, particularly SMEs, can develop AI systems without excessive pressure from the industry giants controlling the value chain, the AI Act will promote the use of regulatory sandboxes - which aim to provide a controlled environment for developing and interrogating cutting-edge systems - and real-world testing. These are to be set up by national authorities so that innovative AI systems can be developed and trained before they are placed on the market.
Law enforcement exceptions
Subject to appropriate safeguards, the AI Act will permit certain practices for law enforcement purposes that would otherwise be banned. Perhaps the most controversial area is the use of real-time remote biometric identification systems, which will be permitted for law enforcement only: (i) where there is a foreseeable threat of terrorist attack; (ii) in searches for victims; or (iii) in the prosecution of serious crime.
Governance
A new governance framework will be introduced under the Act. An AI Office within the European Commission will have oversight over the most advanced AI systems and will help establish standards and testing practices, taking advice from a panel of independent scientific experts.
An AI Board comprised of member state representatives will be formed, providing a platform for coordination and serving as an advisory body to the Commission. It will contribute to the design of codes of practice for foundation models. Although details are currently sketchy, it sounds like the AI Board may be similar in its makeup and role to the European Data Protection Board under the GDPR. An advisory forum will also be created for stakeholders including industry representatives, SMEs, academia and civil society, which it is hoped will allow European society to help shape AI regulation as technology develops.
Penalties
Financial penalties under the AI Act are steep. Violations regarding banned AI applications will be subject to GDPR-busting sanctions of the higher of €35 million or 7% of global annual turnover. Breaches of obligations under the Act could attract fines of the higher of €15 million or 3% of global annual turnover, and the supply of incorrect information could result in fines of €7.5 million or 1.5% of global turnover, again whichever is the higher. More proportionate caps are expected in relation to fines for SMEs and start-ups.
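For those gauging exposure, the "higher of" mechanics of the announced fine tiers can be sketched as follows. This is purely illustrative: the caps and percentages are those announced, but the example turnover figure is invented and the final text may yet adjust the tiers.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Return the maximum fine for a tier: the higher of a fixed cap
    or a percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Announced tiers: (fixed cap in EUR, percentage of global annual turnover)
TIERS = {
    "prohibited practices": (35_000_000, 0.07),
    "breach of obligations": (15_000_000, 0.03),
    "incorrect information": (7_500_000, 0.015),
}

# Hypothetical company with EUR 1bn global annual turnover:
# the 7% tier (EUR 70m) exceeds the EUR 35m cap, so turnover governs.
turnover = 1_000_000_000
for tier, (cap, pct) in TIERS.items():
    print(f"{tier}: EUR {max_fine(cap, pct, turnover):,.0f}")
```

For smaller companies the fixed caps dominate instead, which is why more proportionate caps for SMEs and start-ups are expected to matter.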
Timing
Friday’s political agreement will need to be documented in a final version of the AI Act, which will then be subject to formal approval by the Parliament and Council before entering into force 20 days after its publication in the EU’s Official Journal. Most provisions in the Act will apply two years after its entry into force, meaning we can expect compliance to be necessary from early 2026. There are a couple of areas where application will be expedited: the prohibitions will apply after six months, while the rules for general purpose AI will apply after one year.
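The staggered application dates above can be worked through with simple date arithmetic. The publication date below is a placeholder - the real Official Journal date is not yet known - so the computed dates are illustrative only.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Naive month arithmetic; adequate for this sketch.
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

# Hypothetical Official Journal publication date (placeholder).
published = date(2024, 5, 1)
in_force = published + timedelta(days=20)  # entry into force: publication + 20 days

print("Entry into force:  ", in_force)
print("Prohibitions apply:", add_months(in_force, 6))    # + 6 months
print("GPAI rules apply:  ", add_months(in_force, 12))   # + 1 year
print("Most provisions:   ", add_months(in_force, 24))   # + 2 years
```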
What it all means
Those with a background in EU data protection may have guffawed a little at Commissioner Breton's press conference statement that the EU always looks to regulate "as little as possible but as much as necessary", and without the text of the AI Act it is difficult to attempt a proper appraisal. From the information available, however, those not engaging in high-risk activities will be tempted to breathe a sigh of relief...provided that the text gives sufficient detail on where the line between low-risk and high-risk systems lies.
Although two years has so far proved to be a long time in the progression of AI and the AI Act won't fully come into force until 2026, the EU seems convinced that it has future-proofed the legislation so that it will still be fresh when it begins to apply. It also must be said that, while there will always be differences in balance and emphasis, the EU's approach is broadly coherent with the US approach, as documented in President Biden's recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. And while the UK may be tempted to finally make something of its 'Brexit Benefits' by taking a more hands-off approach to regulation, its white paper, A Pro-Innovation Approach to AI Regulation, makes the same noises about balancing innovation and regulation, and providing business incentives while addressing risks and protecting fundamental rights and values. Consensus may not be far off.
What can companies do now?
Companies whose systems may fall within the scope of the AI Act should review their existing AI governance regime and assess whether there are any gaps or weaknesses likely to require remediation under the Act. Fundamentally, they should consider whether any of their systems appear on the prohibited list and, if so, make plans to withdraw them from the EU market or amend them.
Although some specifics must await the final text of the AI Act, it is not too early to begin working on the required documentation and impact assessments, albeit these will need to be reviewed against the requirements of the Act and any guidance from the relevant authorities as and when published. The same applies to assessing whether any of the AI systems the company develops or uses will be classed as high-risk, general purpose, or foundation models. It is also worth considering the technical implications of compliance and engaging technical departments early. Key stakeholders across the business should be involved from the outset.
Getting a head start will not only make compliance in time for the introduction of the AI Act a much less painful exercise; it may also confer a reputational advantage when promoting the company and its AI products to a consumer base that understandably has concerns about the potential power of artificial intelligence.
AI Pact
The EU is keen for companies to begin their compliance efforts as soon as possible, with the Commission introducing an AI Pact designed to give companies the opportunity to demonstrate and share their commitment to the objectives of the future AI Act and to prepare for its implementation. It encourages companies to share knowledge about safeguards and steps towards compliance.
Application of other laws
Importantly, companies developing and using AI systems must not lose sight of the laws that are already in place and remain relevant to the industry. The risks inherent in many AI systems mean that various legal instruments on subjects as diverse as employment, consumer law, data protection, equality and intellectual property must all be respected when developing systems. When engaging in compliance efforts to prepare for the AI Act, it will be essential to ensure that these laws continue to be considered.