02-11-2025 | Article

Update IP, Media & Technology No. 114

The practical guide to general-purpose AI models – an overview of the content, guiding principles and regulatory structures of the first two drafts

On December 19, 2024, the EU Commission published the second draft of a Code of Practice for general-purpose AI models (GPAI).

It represents a snapshot of an intensive drafting process: the first draft was published on November 17, 2024, the third draft is to follow in the week of February 17, 2025, and the final version of the guidelines must be available by May 2, 2025 at the latest. Until then, the draft will be continuously developed in coordination between the four working groups made up of independent experts and around 1,000 stakeholders.

This article is intended to provide an overview of what is likely to remain. An outline of the contents, guiding principles and regulatory structures of the first two drafts should make it easier to start working with the final Code of Practice from May 2025.

I. Contents

The Code of Practice is intended to make it easier for providers of GPAI models to implement their obligations under the EU Regulation on Artificial Intelligence (AI Act).

1. Introduction: GPAI models and their providers

In addition to AI models such as AlphaZero, which serve only one purpose (in this case, playing chess), there are general-purpose AI (GPAI) models that can do much more, e.g. conduct research, write code, create images and draft newspaper articles. These include GPT-4, DALL-E and Midjourney. The AI Act distinguishes between two groups of GPAI models: "normal" models and models with systemic risk, i.e. models that pose a risk of particularly negative effects on public health, democratic processes or public safety.

A provider is anyone who develops a GPAI model (or has one developed) and puts it into operation under their own name, as well as anyone who places a GPAI system on the market. An AI model is the algorithmic core, e.g. GPT-4, while an AI system is the surrounding structure that makes the model practically usable through user interfaces, e.g. ChatGPT. The provider of ChatGPT, for example, is OpenAI.

2. Provider obligations

Accordingly, the Code of Practice is divided into two sections: (1) obligations for providers of GPAI models and (2) obligations for providers of GPAI models with systemic risk. The rules for GPAI models without systemic risk apply generally, i.e. also to GPAIs with systemic risk. The riskier the GPAI model, the stricter the provider obligations under the AI Act (an illustration is shown in the chart).

a. Obligations for GPAI providers

For providers of GPAI models, Art. 53 para. 1 AI Act stipulates a number of obligations aimed at transparency, copyright protection and ongoing responsibility "along the value chain" (i.e. from the manufacturer via the processor to the end user). Only providers of open-source models are exempt from some of these obligations (Art. 53 para. 2 sentence 1 AI Act).

To create transparency and responsibility along the value chain, providers must collect key information about how the GPAI model works and make it available to authorities and/or third parties that build on the model (Art. 53 para. 1 lit. a, b AI Act). The second draft contains an overview of the specific information to be disclosed (p. 12 et seq.). In addition, an Acceptable Use Policy (AUP) must be submitted for the respective GPAI model; the draft also specifies its essential elements (p. 18 et seq.).

To ensure the protection of copyright, which can be infringed when an AI is trained on copyrighted works, providers must also submit a copyright compliance strategy (Art. 53 para. 1 lit. c AI Act). The second draft sets out formal and substantive requirements for this from p. 20 onwards.

b. Obligations for providers of GPAI models with systemic risk

Providers of GPAI models with systemic risk are subject to the stricter requirements of Art. 55 AI Act. The particularly serious risks of these GPAI models are to be countered with various (preventive) measures, in particular risk assessment and risk mitigation measures, measures to ensure cybersecurity and reporting obligations in the event of serious incidents (Art. 55 para. 1 AI Act).

The second draft contains indicators for the existence of a systemic risk, in particular use cases and typical sources of risk (p. 29 et seq.). In addition, the draft specifies (p. 33 et seq.) a safety and security framework that providers can use to demonstrate compliance with their obligations pursuant to Art. 55 para. 2 sentence 1 AI Act. Finally, specific risk assessment and risk mitigation measures are proposed to protect against systemic risks. These include technical and governance-related measures, e.g. establishing a risk committee within the company and performing background checks on employees who have access to unpublished training parameters.
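Taken together, the two obligation tiers can be pictured as a set union: the base obligations of Art. 53 apply to every GPAI provider, and systemic risk only ever adds obligations on top. The following sketch is a hypothetical illustration of that relationship; the obligation labels are informal paraphrases, not official terms from the AI Act or the draft:

```python
# Hypothetical sketch of the two obligation tiers described above.
# Labels are informal paraphrases of Art. 53 / Art. 55 AI Act, not official terms.

BASE_OBLIGATIONS = {  # Art. 53 para. 1 AI Act - all GPAI providers
    "technical documentation",
    "information for downstream providers",
    "copyright compliance strategy",
}

SYSTEMIC_RISK_OBLIGATIONS = {  # Art. 55 para. 1 AI Act - added on top
    "risk assessment and mitigation",
    "cybersecurity measures",
    "serious incident reporting",
}

def obligations(has_systemic_risk: bool) -> set:
    """Base rules apply generally; systemic risk only ever adds obligations."""
    extra = SYSTEMIC_RISK_OBLIGATIONS if has_systemic_risk else set()
    return BASE_OBLIGATIONS | extra
```

So `obligations(True)` is always a superset of `obligations(False)`, mirroring the point above that the rules for models without systemic risk apply to all GPAI models.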

II. Guiding principles

The Code of Practice is to be developed deductively: First, general guiding principles are defined, then specific measures are developed along these principles.

The most important guiding principles of the Code of Practice are:

1. Proportionality

According to Art. 56 para. 4 AI Act, the authors of the Code of Practice should ensure that the targets are clearly defined, that the measures are suitable and necessary to achieve these targets and that the needs and interests of all interested parties at Union level are taken into account.

The density of regulation depends in particular on the

  • Probability and severity of the imminent risk: the requirements are to be relaxed, for example, if GPAI providers can demonstrate that the probability of damage occurring is low;
  • Size of the provider: small and medium-sized GPAI providers (especially start-ups) should be privileged due to their typically lower financial resources.

2. Effectiveness

The Code of Practice should fulfill its guiding function in the long term by being

  • Concrete and unambiguous: vague or ambiguous expressions should be avoided; instead, specific "if – then" rules are to be established;
  • Open to development, so as not to be left behind by the rapid pace of technological progress.

3. Harmony

The Code of Practice should comply with the requirements of the AI Act, European primary law and international approaches to AI regulation (e.g. standards developed by AI safety organizations).

4. Safety and innovation

Finally, the aim is to create a safe and innovation-friendly environment for GPAI in the EU. In particular, GPAI providers should be able to exchange ideas and support each other, e.g. through common best-practice standards or safety guidelines.

III. Regulatory structures

The regulations are based on a clear structure of "Measure – Sub-Measures – Key Performance Indicators (KPIs)". Each (possibly still abstractly formulated) measure is broken down into concrete sub-measures, which in turn are made measurable by KPIs. This results in the formula: "The provider must take measure A by implementing sub-measures B, C and D, which are measured using KPIs E, F and G." In practice, this can look as follows: "The provider implements a copyright protection strategy (measure) by drafting, updating and publishing such a strategy (sub-measure), which can be measured by presenting the strategy in a document, documenting all changes and uploading a summary to its website (KPIs)" (an illustration is shown in the chart).
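The Measure – Sub-Measures – KPIs hierarchy can also be sketched as a simple data model. The class and field names below are hypothetical and chosen only to mirror the copyright example from the text:

```python
from dataclasses import dataclass, field

@dataclass
class SubMeasure:
    name: str
    kpis: list  # measurable indicators for this sub-measure

@dataclass
class Measure:
    name: str
    sub_measures: list = field(default_factory=list)

    def all_kpis(self):
        # Flatten the KPIs of all sub-measures, e.g. for a compliance report
        return [kpi for sm in self.sub_measures for kpi in sm.kpis]

# The copyright example from the text, encoded in this structure
copyright_measure = Measure(
    name="Implement a copyright protection strategy",
    sub_measures=[
        SubMeasure(
            name="Draft, update and publish the strategy",
            kpis=[
                "Strategy presented in a document",
                "All changes documented",
                "Summary uploaded to the provider's website",
            ],
        ),
    ],
)
```

Each level concretizes the one above it: the measure states the goal, the sub-measures state the actions, and the KPIs make those actions verifiable.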

IV. Conclusion

The second draft of the Code of Practice for general purpose AI models is an important step towards a clear and practicable set of rules for providers of GPAI models. Even if some changes are still to be expected before the final version is published by May 2, 2025 at the latest, the cornerstones of the Code of Practice are likely in place:

Content: The Code of Practice specifies the provider obligations for GPAI models and for GPAI models with systemic risk. For providers of GPAI models, it details how to develop a copyright protection strategy and which information must be disclosed to ensure transparency and responsibility along the value chain. For providers of GPAI models with systemic risk, it provides indicators for the existence of systemic risk, requirements for a safety and security framework, and various other risk identification and mitigation measures.

Guiding principles: All measures should be proportionate (above all, appropriate, taking into account the probability and severity of the impending risk and the size of the provider), concrete, clear and open to development, in line with applicable EU law and, as a result, create an environment for GPAI in the EU that is both conducive to innovation and safe.

Regulatory structures: The individual provisions follow a clear structure of "Measure – Sub-Measures – Key Performance Indicators/KPIs", with each element concretizing the previous one.

This article was written in collaboration with our research assistant, Lena Rosenau.
