05-15-2024 – Article

Update Data Protection No. 178

Data protection compliant use of artificial intelligence – Data protection authorities publish guidance

The use of artificial intelligence (AI) offers companies numerous benefits, from automating processes to improving customer service. However, these benefits also come with challenges, particularly in the area of data protection. The recently published guidance from the German Data Protection Conference (Datenschutzkonferenz, DSK) provides companies with valuable guidelines for the data protection-compliant use of AI applications. This article summarizes the key points of the guidance and provides concrete recommendations for implementation in practice, structured chronologically along the stages of AI use in the company.

1. Conception of the use and selection of AI applications

According to the guidance, before implementing an AI application, companies should explicitly define the fields in which the AI is to be used and the purpose it is to fulfill. Only on this basis can it be assessed whether the processing of personal data is necessary. In practice, such a determination is typically made by adopting an internal company AI guideline.

The new requirements of the AI Regulation, which prohibit certain (for example manipulative) AI systems and impose strict conditions on other (high-risk) AI systems, should be taken into account as part of this definition.

As part of the preliminary assessment, the authorities also consider that it should be clarified at an early stage whether personal data must be used for training and, if so, whether there is a corresponding legal basis for this. Errors made during the training phase must not have a negative impact on subsequent data processing in the company.

A legal basis under data protection law is therefore required for any processing of personal data using AI. This can vary depending on the area of application (for example human resources, healthcare).

According to the guidance, decisions with legal effect may not be taken by the AI system alone; the final decision must be made by a human. The purely formal involvement of a human being is not sufficient.

When choosing an AI system, it is recommended to opt for closed systems, i.e. those that work in a restricted and technically closed environment. This is because open systems that are accessible to an undefined group of users via the internet, for example, harbour the risk of input data being further processed for other purposes or becoming accessible to unauthorized third parties.

The DSK points out that sufficient information on how the AI works must be provided before AI systems are used in order to meet the transparency requirements of the GDPR. Users must also be informed about whether and how their inputs are used as training data, and must be given the option to object to such use. In the opinion of the authorities, the storage of the input history should also be optional.
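
Such requirements can be captured in a short deployment checklist that is reviewed before an AI tool is approved. The following Python sketch is purely illustrative; the class, field names and acceptance logic are assumptions, not part of the DSK guidance.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AIPrivacySettings:
    """Privacy-relevant properties of an AI tool to verify before roll-out (hypothetical)."""
    inputs_used_for_training: bool    # does the vendor use input data for training?
    training_opt_out_available: bool  # can users object to such use?
    input_history_storage: str        # "off", "opt_in" or "always"


def settings_acceptable(s: AIPrivacySettings) -> bool:
    """Plausibility check loosely reflecting the recommendations summarized above."""
    no_forced_training_use = (not s.inputs_used_for_training) or s.training_opt_out_available
    history_is_optional = s.input_history_storage in ("off", "opt_in")
    return no_forced_training_use and history_is_optional


print(settings_acceptable(AIPrivacySettings(True, True, "opt_in")))   # True
print(settings_acceptable(AIPrivacySettings(True, False, "always")))  # False
```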

With regard to data subject rights, companies must ensure that such rights (such as rectification or erasure) can also be exercised in connection with AI applications, which requires suitable organizational and technical measures.
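
One possible technical measure is a single entry point that forwards a rectification or erasure request to every store the AI application writes to (for example prompt logs, caches or vector indexes). The Python sketch below assumes hypothetical store interfaces and only illustrates the pattern.

```python
from typing import Protocol


class PersonalDataStore(Protocol):
    """Hypothetical interface for any store holding personal data used by the AI application."""

    def delete_subject(self, subject_id: str) -> int: ...
    def rectify_subject(self, subject_id: str, field: str, new_value: str) -> int: ...


def erase_data_subject(stores: list[PersonalDataStore], subject_id: str) -> int:
    """Forward an erasure request (Art. 17 GDPR) to every store; returns the number of records removed."""
    return sum(store.delete_subject(subject_id) for store in stores)


def rectify_data_subject(stores: list[PersonalDataStore], subject_id: str,
                         field: str, new_value: str) -> int:
    """Forward a rectification request (Art. 16 GDPR) to every store."""
    return sum(store.rectify_subject(subject_id, field, new_value) for store in stores)
```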

It is explicitly pointed out that both data protection officers and works or staff councils should be involved in decision-making processes regarding the use of AI.

2. Implementation of AI applications

With regard to the implementation of AI systems, the authorities also call for clear responsibilities to be defined, ideally within an AI policy that also contains guidelines for the use of such applications.

Where there is a high risk to the rights and freedoms of natural persons, reference is made to the need to carry out a data protection impact assessment.

If employees are to use AI applications for work purposes, it must be ensured that such use is only possible via devices and accounts provided by the employer in order to avoid the creation of profiles.
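
Technically, such a restriction can be enforced, for example, by gating access on the corporate identity provider and the device management status. The short Python sketch below is illustrative only; the domain name and the specific checks are assumptions.

```python
ALLOWED_ACCOUNT_DOMAIN = "example-company.com"  # assumption: the employer's SSO domain


def ai_access_permitted(account_email: str, device_is_managed: bool) -> bool:
    """Allow AI use only from employer-provided accounts on company-managed devices."""
    corporate_account = account_email.lower().endswith("@" + ALLOWED_ACCOUNT_DOMAIN)
    return corporate_account and device_is_managed


print(ai_access_permitted("jane.doe@example-company.com", device_is_managed=True))  # True
print(ai_access_permitted("jane.doe@privatemail.example", device_is_managed=True))  # False
```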

According to the guidance, AI applications must also meet the general IT security requirements set out in Art. 32 GDPR in addition to data protection requirements.
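
Some Art. 32 measures, such as pseudonymization, can be implemented close to the AI integration itself, for instance by replacing direct identifiers in prompts before they are sent to an external service. The Python sketch below shows one such step under simplified assumptions; key handling and pattern matching are deliberately minimal.

```python
import hashlib
import hmac
import re

PSEUDONYMIZATION_KEY = b"replace-with-a-secret-from-a-key-vault"  # assumption: centrally managed secret


def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed, non-reversible token."""
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]


def scrub_prompt(prompt: str) -> str:
    """Replace e-mail addresses in a prompt before it is sent to an external AI service."""
    return re.sub(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", lambda m: pseudonymize(m.group()), prompt)


print(scrub_prompt("Please summarize the complaint sent by max.mustermann@example.com."))
```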

Employees should be made aware of the data protection-compliant use of AI through training and guidelines.

Legal and technical developments regarding the use of AI must be regularly reviewed by those responsible and internal guidelines adapted if necessary.

3. Use of AI applications

With regard to the use of AI systems in the company, data subjects (especially employees, but also customers) must be informed transparently in advance about the use of their data.

The processing of special categories of personal data is generally prohibited and only permitted under the conditions of Art. 9 GDPR. The processing of health data or data on political opinions in AI systems therefore generally requires the prior explicit consent of the data subjects.
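
A simple technical safeguard is a consent gate that blocks records containing special categories of data unless an Art. 9 exception, typically explicit consent, is documented. The Python sketch below uses hypothetical field names and, for brevity, treats consent as the only exception.

```python
SPECIAL_CATEGORIES = {"health", "political_opinions", "religious_beliefs", "ethnic_origin"}  # Art. 9(1) examples


def may_process(record: dict, documented_consents: set[str]) -> bool:
    """Block processing of special-category data unless explicit consent is documented for each category.

    Note: consent is only one of the exceptions in Art. 9(2) GDPR; others are omitted here for brevity.
    """
    sensitive = set(record.get("data_categories", [])) & SPECIAL_CATEGORIES
    return sensitive <= documented_consents


record = {"subject_id": "4711", "data_categories": ["health"]}
print(may_process(record, documented_consents=set()))       # False: no consent on file
print(may_process(record, documented_consents={"health"}))  # True: explicit consent documented
```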

To ensure data accuracy, the data protection authorities take the view that the results of AI applications must be critically scrutinized and verified in order to avoid unlawful processing.
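
In practice this can be supported by a release step that refuses to pass on AI output that has not been reviewed and approved by a human. The following Python sketch is a minimal illustration with assumed field names.

```python
from dataclasses import dataclass


@dataclass
class AIResult:
    text: str
    reviewed_by: str | None = None  # name of the human reviewer, if any
    approved: bool = False


def release(result: AIResult) -> str:
    """Refuse to pass on AI output that no human has reviewed and approved."""
    if not (result.reviewed_by and result.approved):
        raise PermissionError("AI output must be checked by a human before further use.")
    return result.text


draft = AIResult(text="Suggested reply to the customer ...")
draft.reviewed_by, draft.approved = "A. Mueller", True  # documented human check
print(release(draft))
```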

Furthermore, the results of AI applications must not have any discriminatory effects. Controllers must ensure that the results remain within the applicable legal framework.

4. Conclusion and checklist

The guidance provided by the data protection authorities offers comprehensive guidelines for the data protection-compliant use of AI applications. Companies must proceed carefully at every stage – from conception to implementation and use – in order to protect the rights of data subjects.

The following steps are recommended for the planned use of AI systems in the company:

  1. Define fields of application and purposes: Clearly define the fields of use and purposes of the AI application, ideally through an AI policy.
  2. Check the legal basis: Ensure that there is a legal basis under data protection law for any processing of personal data.
  3. Ensure transparency: Inform data subjects comprehensively and comprehensibly about data processing.
  4. Ensure data subject rights: Implement organizational and technical measures to guarantee data subject rights.
  5. Carry out a data protection impact assessment: Evaluate the risks of data processing and carry out a data protection impact assessment if the risk is high.
  6. Ensure data security: Comply with the general IT security requirements as stipulated in Art. 32 GDPR.
  7. Sensitize employees: Train your employees in the data protection-compliant use of AI.
  8. Ongoing review: Follow legal and technical developments and regularly adapt your internal guidelines.
  9. AI Regulation: Observe the new requirements of the AI Regulation (AI Act), which is expected to enter into force shortly.