Update Data Protection No. 204
GDPR compliance when using ChatGPT & Co.
The EU's new AI Regulation is currently a much-discussed topic. The use of artificial intelligence (AI) offers companies numerous opportunities to increase efficiency and drive innovation. However, especially when using AI from third countries, such as ChatGPT from the USA or DeepSeek from China, companies must comply with numerous legal requirements. These arise not only from the AI Regulation but also from the General Data Protection Regulation (GDPR), as personal data is almost always processed when AI is used.
Scope of application of the GDPR: triggered even by training data
The Bavarian State Commissioner for Data Protection (BayLfD) addressed this topic in a brief information paper (in German only) published at the end of March 2025, summarizing the data protection requirements for the use of AI from non-EU countries. Although the paper is expressly aimed only at the Bavarian administration, the corresponding requirements apply equally to all other companies that use AI.
The BayLfD clarifies that the GDPR applies not only when personal data is entered during productive use of the AI, but already when the training data alone contains personal data. The Hamburg Commissioner for Data Protection and Freedom of Information recently took a much more differentiated view of this (in German only). According to that view, personal data of employees who use the AI system is also processed; this applies, for example, to login data as well as input data and prompts. When using large language models (LLMs) such as ChatGPT, Gemini or DeepSeek, companies must therefore always ensure that they also comply with the GDPR.
General requirements: Companies as operators and controllers
According to the BayLfD, data protection responsibility is allocated as follows: if a company uses AI systems and is therefore an operator within the meaning of the AI Regulation, it is also the controller within the meaning of the GDPR for all data processing that takes place in connection with the use of the AI. This applies in particular to the input of personal data into the AI, but also to personal data of employees processed at the controller's instigation and to the controller's own training data.
If the AI has already been trained with personal data beforehand, the provider of the AI system is generally responsible for this from a data protection perspective and must ensure that all data protection regulations have been complied with.
As controller, the company must comply with the GDPR in full. For example, any processing of personal data must be based on a valid legal basis (e.g. the consent of the data subject), the data may only be processed for the predetermined purpose, and every person whose data is processed by the AI system must be informed of this in advance (Art. 12 et seq. GDPR). In addition, data subjects have the right to access, rectify or erase their data (Art. 15 et seq. GDPR). Implementing these rights becomes more difficult if personal data is also used as training data and the aggregated data can no longer be deleted from the training data set. However, many AI providers offer a settings option that can be used to deactivate the use of personal data for training purposes.
It must also be determined how the AI provider is to be classified under data protection law. If the provider acts as a processor pursuant to Art. 28 GDPR, a corresponding data processing agreement must be concluded with it. The provider must actually be able to ensure sufficient data protection in accordance with the GDPR.
The BayLfD recommends that Bavarian public bodies refrain from commissioning providers that do not provide transparent information about the further processing of personal data. In case of doubt, on-premise operation of the AI is the safest option.
Special requirements for AI from third countries
When using AI applications from third countries (including the USA!), additional requirements from Art. 44 et seq. GDPR come into play, which govern the lawfulness of transfers of personal data to third countries. This expressly applies even if only metadata or telemetry data is transferred. Before using such AI systems, companies must ensure that either an adequacy decision by the European Commission covers the data transfer - for the USA, this means that the specific company must have certified under the EU-US Data Privacy Framework - or that other suitable safeguards pursuant to Art. 46 GDPR are in place, usually the Standard Contractual Clauses (SCCs) adopted by the European Commission. However, a data transfer to the USA based solely on the Data Privacy Framework should be treated with caution, as it cannot be ruled out that the EU Commission will suspend this adequacy decision due to current political developments in the USA.
The BayLfD also outlines the obligations of AI providers from third countries, in particular the appointment of a representative in the EU (Art. 27 GDPR). It also points out that AI providers from third countries acting as processors must ensure data processing in accordance with the GDPR. However, if an AI provider collects, for example, login data and input prompts without a corresponding instruction from the controller, or uses them for its own purposes such as further training of the AI, it is itself a controller within the meaning of the GDPR and must implement the corresponding obligations.
Increased risk of warnings for GDPR violations
As if these requirements were not already difficult enough to implement, the German Federal Court of Justice has recently strengthened the rights of consumer associations and competitors in three rulings (I ZR 186/17, I ZR 222/19, I ZR 223/19): they can also issue warnings for GDPR violations and assert claims for injunctive relief and damages.
A lack of GDPR compliance therefore no longer merely carries the risk of fines from the supervisory authorities or claims from data subjects. It is to be expected that consumer associations and competitors will become more active in the future and issue warnings for easily verifiable issues, such as the fulfillment of information obligations on the internet (usually via the privacy policy) or missing or inadequate consent forms.
In this context, companies can expect to be asked to submit cease-and-desist declarations with a penalty clause. In our experience, companies are quick to issue such a declaration, as the matter then appears to be settled for the time being. However, if the infringement is not remedied or is repeated, substantial contractual penalties, sometimes in the five-figure range, can quickly become due.
To be prepared for this, externally verifiable GDPR compliance in particular should be up to date. With regard to your own website, the following points, among others, are important:
- Current and complete data protection information;
- Obtaining any necessary declarations of consent;
- Information on joint controllership pursuant to Art. 26 GDPR, if applicable;
- Information about video surveillance in outdoor areas, if applicable.
AI literacy
There is also a need for action under the AI Regulation: since February 2, 2025, all companies that use AI systems must ensure that their employees have the necessary AI literacy through appropriate training measures (Art. 4 AI Regulation). You can find more information in this article. We are currently conducting many AI literacy training courses for our clients.