11-11-2024 · Article

Data Protection Update No. 190

Draft bill of the Employee Data Protection Act (BeschDG): New regulations for the use of artificial intelligence in the employment relationship

The Federal Ministry of Labor and Social Affairs (BMAS) and the Federal Ministry of the Interior and Community (BMI) have presented a new draft bill for an Employee Data Protection Act (BeschDG), dated October 8, 2024, as reported in our Update No. 188.

The entry into force of the General Data Protection Regulation (GDPR) in May 2018 reignited the debate on data protection in the employment relationship. This led to the introduction of Section 26 of the German Federal Data Protection Act (BDSG), which currently sets out the rules on data processing for the purposes of the employment relationship. This provision does not contain any special regulations for the use of artificial intelligence.

I. Innovations in the field of AI through the draft bill

The importance of AI in the employment relationship is steadily increasing, and AI increasingly shapes decision-making processes in companies. For example, AI-supported systems are used to automatically screen applications and select suitable candidates, or to optimize employee productivity by analyzing work data. However, these technologies also raise data protection issues, particularly with regard to the transparency and fairness of such decisions.

In order to regulate the responsible use of AI, the European Union adopted the AI Regulation (Regulation on harmonized rules for artificial intelligence) this year, which creates a comprehensive legal framework for the use of AI technologies.

At the same time, with the publication of the draft bill, the government's plan to fill the existing gaps in Section 26 BDSG with a standalone BeschDG has picked up speed again.

The following provides an overview of the main innovations in the draft with regard to the use of AI systems and the associated obligations:

1. Fundamentals of the AI Regulation

The AI Regulation entered into force on August 1, 2024. It imposes obligations not only on the manufacturers of AI systems but also on operators (users), as well as importers and distributors, especially where so-called high-risk AI systems are used (we reported on this in Data Protection Update No. 162, No. 146, No. 121 and No. 94).

High-risk systems include many AI systems that are used in the employment relationship, such as systems for automated decision-making or for evaluating employees. In future, employers who operate such systems must implement suitable technical and organizational measures in accordance with Art. 29 of the AI Regulation and ensure that AI decisions are monitored by qualified personnel. In addition, regular cybersecurity checks and the use of representative data are required.

The monitoring of AI operations and the immediate reporting of incidents to providers and authorities in the event of a threat to security or public interests are also mandatory. In addition, employee representatives must be consulted and a data protection impact assessment must be carried out. If decisions affect natural persons, they must be informed. Before a high-risk AI system is used for the first time, a risk impact assessment must also be carried out that takes into account fundamental rights and risks for vulnerable groups.

2. Regulations of the Employee Data Protection Act (BeschDG)

The Employee Data Protection Act (BeschDG) is intended to build on the provisions of the AI Regulation and specify them specifically for the use of AI in the employment context. In particular, the Act addresses the processing of employee data by AI systems and automated decision-making processes.

a) Scope of application (Section 1 BeschDG)

The scope of application of the BeschDG covers the processing of personal data of employees by employers in connection with an employment relationship. It applies to both public and private employers and covers all types of employment relationships, e.g. employment contracts, training relationships and application procedures.

Data processing that is necessary for the performance, establishment or termination of an employment relationship is particularly relevant.

b) Protective measures (Section 9 (1) no. 11 BeschDG)

According to Section 9 (1) No. 11 BeschDG, employers who use AI systems to process employee data must take comprehensive protective measures to safeguard the fundamental rights of employees. These measures are:

  • Regular evaluation of input and output data: Employers must regularly check which personal data is relevant and necessary for processing by the AI system. It must be ensured that only the necessary data is processed and that no superfluous or irrelevant information flows into the process.
  • Anonymization of the results: As far as technically possible, the results and interim results generated by the AI system must be anonymized. This is to prevent the data from being traced back to individual employees. It must also be ensured that this data is not used inappropriately, i.e. that it is only used for the specified processing purpose.
  • Checking for discriminatory and incorrect results: The employer is obliged to regularly check the AI system for discriminatory or incorrect results, insofar as this is technically possible. This serves to protect employees from incorrect or unfair decisions that could result from algorithmic distortions or incorrect data processing.

c) Transparency and information obligations (Section 10 (2) and (3) BeschDG)

Section 10 (2) and (3) of the BeschDG sets out further obligations for employers with regard to transparency when using AI systems.

According to Section 10 (2) BeschDG, the employer is obliged to inform the employees concerned that an AI system is being used to process their personal data at the latest when processing begins. This information must also refer to the employees' right to information in accordance with paragraph 3.

Pursuant to Section 10 (3) BeschDG, the employees concerned have a right to information on two key aspects:

  • Meaningful information on the functioning of the AI system: Employees are entitled to detailed information on how the AI system works, including how their personal data is processed within the system.
  • Protective measures pursuant to Section 9 (1) No. 11: In addition, employees must be informed of the protective measures taken by the employer to meet the requirements of Section 9 (1) No. 11 BeschDG.

d) Profiling with AI (Section 25 and Section 26 BeschDG)

Sections 25 and 26 BeschDG constitute more specific regulations compared to the general provisions of Section 10 BeschDG and specifically govern the employer's information obligations and the employee's right to information where AI systems are used for profiling.

Here too, employees have a right to information and employers have a duty to provide information as to whether an AI system is being used and what protective measures have been taken in accordance with Section 9 (1) no. 11 BeschDG.

II. Conclusion and outlook

After a long period of uncertainty, the BeschDG is now intended to create clear regulations for the use of AI systems and the handling of employee data, thereby modernizing employee data protection.

The German government plans to bring the law into force by August/September 2025. However, there are concerns within the coalition, particularly among the FDP, about excessive bureaucratic burdens. In view of this internal resistance, it is questionable whether the bill will be passed in the current legislative period.

For the time being, affected employers must therefore base their data processing on the current regulations, in particular Section 26 BDSG and, outside the employment context, Article 6 (1) (b) GDPR. In addition, the new provisions of the AI Regulation must be observed and implemented.
