Understanding General-Purpose AI in the AI Act

Olena Barda


April 11, 2024

The discourse surrounding general-purpose AI (GPAI) models was vibrant and contentious throughout the legislative journey of the AI Act, culminating in a significant regulatory milestone in March 2024, when the EU Parliament voted to adopt the regulation.

The latest version of the AI Act includes a specific section dedicated to GPAI. Previously mired in ambiguity, the legislative text has evolved to offer clearer differentiation and regulatory direction for GPAI: it removes provisions that blurred the line between general-purpose AI systems and foundation models and adds GPAI-specific obligations.

The path to consensus was fraught with challenges, underscored by intense debates among EU member states. Key countries like Germany and Italy initially resisted more stringent controls, while France sought to block the regulation altogether. Nevertheless, a compromise was reached, paving the way for harmonised rules for GPAI models.

Definition of GPAI models

A GPAI model is defined as "an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications".

Additionally, the AI Act now establishes a distinct classification for GPAI, following the risk-based approach: GPAI models with systemic risk. The classification of GPAI models with systemic risk is outlined in Article 52a of the AI Act. A GPAI model is considered to pose a systemic risk if it has high-impact capabilities or is identified as such by the Commission.

The EU Commission is tasked with assessing whether a GPAI model poses systemic risks. This is the case if a model exceeds a technical threshold of computational resources used in training (a cumulative amount of compute greater than 10^25 floating-point operations) or if the Commission determines that it has capabilities or reach with foreseeable negative effects.

New Requirements for GPAI in the AI Act

Providers of all GPAI models must adhere to a series of obligations, ranging from documentation and transparency duties to cooperation with regulatory authorities. Providers of GPAI models with systemic risks must additionally ensure robust cybersecurity measures, perform model evaluations that include adversarial testing, and assess and mitigate potential systemic risks. Furthermore, they must track and report serious incidents.

What is next?

The AI Act is expected to be formally adopted in the coming month, with most of its provisions taking effect two years thereafter. The rules concerning GPAI models, along with the related governance and penalty provisions, will become applicable after 12 months. The European AI Office, a new regulatory body established within the EU Commission, will oversee the enforcement and monitoring of GPAI model providers. Additionally, a scientific panel of independent experts will be established with the authority to issue alerts to the AI Office.