The AI Act is coming, here's what you need to know

Siyanna Lilova

CEO of CuratedAI

December 10, 2023

5 min read

European Union policymakers have reached an agreement on the Artificial Intelligence Act (AI Act), setting a global benchmark for AI regulation. After a grueling 36-hour negotiation marathon that concluded on 8 December 2023, the trilogue between the Commission, the Council and the Parliament reached a consensus, overcoming substantial disagreements. The final text of the AI Act will be published in January, but here's an overview of the main aspects of the new regulation.

Defining Artificial Intelligence:

  • The AI Act adopts a broad definition of AI systems in alignment with OECD guidelines. It encompasses machine-based systems that infer from inputs to generate outputs such as predictions, content, recommendations, or decisions, influencing both physical and virtual environments.
  • The scope of the Act's application is extensive, covering various sectors. However, it explicitly excludes AI usage in military and defense sectors, as well as research and non-professional applications, acknowledging the unique requirements of these domains.

Classification of AI Systems:

  • All AI systems, irrespective of their risk level, are subject to basic transparency obligations. These obligations are designed to ensure a minimal level of clarity and understanding of AI functionalities and implications.
  • High-risk AI systems are held to more comprehensive regulatory standards. This includes a clearer apportionment of responsibilities between developers and users, and an obligation for developers to assist users in meeting the assessment requirements for high-risk AI systems.
  • The classification criteria, including computational thresholds measured in FLOPs, can be revised over time, allowing the framework to adapt as AI technology evolves.

Prohibited Practices and Exemptions:

  • The Act prohibits certain AI applications deemed detrimental to individual rights and societal values. These include manipulative techniques, untargeted scraping of facial images to build facial recognition databases, emotion recognition systems in workplaces and educational settings, social scoring, and certain applications of predictive policing targeting individuals.
  • Exemptions are provided for law enforcement uses of AI, subject to stringent conditions and prior judicial authorization. These exemptions apply in scenarios like targeted searches for victims of serious crimes or prevention of specific, present terrorist threats.

High-Risk Use Cases and Law Enforcement Exemptions:

  • The Act categorizes certain use cases as high risk due to their significant potential to harm people's safety and fundamental rights. These areas include education, employment, critical infrastructure, public services, law enforcement, border control, and the administration of justice.
  • For law enforcement purposes, the Act allows narrow exceptions for the use of real-time remote biometric identification (RBI) in publicly accessible spaces, subject to prior judicial authorization and strictly defined lists of crimes.

General Purpose AI and Foundation Models:

  • General-purpose AI systems and foundation models are subject to specific obligations under the Act, including transparency requirements such as technical documentation, compliance with EU copyright law, and detailed summaries of the content used for training. Notably, providers of foundation models must supply a sufficiently detailed summary of the training data regardless of where the training took place.
  • High-impact general-purpose AI models posing systemic risk must conduct model evaluations, assess and mitigate systemic risks, perform adversarial testing, report serious incidents to the Commission, ensure robust cybersecurity, and report on their energy efficiency.

Governance and Compliance:

  • The AI Act establishes a European AI Office responsible for monitoring the most complex AI models. This office will collaborate with national authorities and integrate perspectives from various stakeholders through a scientific panel and advisory forum.
  • The role and powers of national AI authorities, and the potential for political friction between them, are significant considerations. The AI Act aims to reduce the risk of inconsistent approaches across the EU, drawing on the experience with the GDPR.

Penalties and Support for Innovation:

  • The Act establishes a system of penalties set as a percentage of a company's global annual turnover or a fixed amount, whichever is higher: up to €35 million or 7% of global turnover for violations involving banned AI applications, €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for supplying incorrect information (a small worked example follows this list).
  • To balance regulation with technological advancement, the Act caps penalties at more proportionate levels for SMEs and startups. It also promotes regulatory sandboxes, enabling businesses to develop and test AI solutions in real-world conditions without undue regulatory pressure.
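To make the "whichever is higher" rule concrete, here is a minimal sketch using the fine tiers quoted above. The function name and the €2 billion turnover figure are hypothetical illustrations, not anything taken from the Act itself.

```python
# Minimal sketch (not legal advice): how the "whichever is higher" cap works,
# using the fine tiers quoted in the bullet above.

def max_fine(global_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the applicable maximum fine: the higher of the fixed amount
    and the given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, global_turnover_eur * turnover_pct)

# Hypothetical example: a company with €2 billion global turnover violating a
# banned-practice rule (tier: €35 million or 7% of turnover, whichever is higher).
turnover = 2_000_000_000
print(max_fine(turnover, 35_000_000, 0.07))  # 7% of €2bn = €140m > €35m, so €140m applies
```

For a smaller company whose 7% share falls below €35 million, the fixed amount would set the ceiling instead.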

Timeline and Implementation:

  • The AI Act sets out a phased timeline for its applicability, with transition periods of six months before the bans apply, one year for the rules on general-purpose AI systems and foundation models, and two years for most other AI systems.
  • The final text of the AI Act will be published in the Official Journal of the European Union in January 2024, marking the beginning of the implementation timelines.

List of useful resources:

  1. Council of the European Union Press Release - Dec 9, 2023
  2. European Parliament Press Release - Dec 9, 2023
  3. Midnight Press Conference Video - Dec 8, 2023
  4. European Union squares the circle on the world’s first AI rulebook – EURACTIV.com - Dec 9, 2023 
  5. AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement – EURACTIV.com - Dec 7, 2023