Update Data Protection No. 146
Artificial Intelligence: The European Parliament clears the way for the new EU Regulation (AI Act)
Following the submission of the first Draft of the Artificial Intelligence Act ("AI Act") on April 21, 2021 by the European Commission for regulating artificial intelligence (AI) (we reported here), the European Parliament published its final position on June 14, 2023. The next step is thus the trilogue between the Parliament, the Council and the Commission. Since the European Parliament is to be re-elected in less than a year, the Regulation is expected to be adopted quickly and with only minor changes. Now that the Parliament has reached agreement, companies can draw up more concrete plans and test their (planned) applications against the new rules. This article is intended to assist you in this regard.

1. Concept of AI Systems

The scope of application of the AI Act is very broad and can cover numerous applications which you would not expect to be AI-related at first glance. This is all the more the case after the European Parliament changed the definition of Artificial Intelligence Systems (AI Systems) once again. Nevertheless, not every piece of software qualifies as an AI System. The term rather covers a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments." The European Parliament thereby adopts the AI definition developed by the OECD in 2019.

2. Levels of risk

Whether a company that uses an AI System must take action and implement comprehensive measures depends on the risk associated with the use of the respective application. To this end, the AI Act defines four levels of risk, which are distinguished based on the following criteria:

a) AI with unacceptable risk

Art. 5 AI Act defines practices in the area of Artificial Intelligence which are associated with such a high level of risk for persons, security and fundamental rights that the legislature considers them simply unacceptable. Here the European Parliament recently tightened the rules further, so that more AI Systems are prohibited than under the previous draft. Accordingly, systems with the functions listed under Step 2 of the checklist below are impermissible.
b) AI with high level of risk (high-risk AI)

High-risk AI may be used only if comprehensive requirements are met, such as transparency obligations, approval for the European market as well as a risk and quality management system (see Section 3 lit. a). Due to this high level of requirements, it is important to recognize high-risk AI and take appropriate measures. Otherwise, there is a risk of fines of up to EUR 20 million or up to 4% of worldwide annual revenues, whichever is higher.
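The "whichever is higher" mechanism means that the fine ceiling scales with company size. As a minimal illustration of the calculation described above (a sketch for orientation, not legal advice; the helper function is our own and not part of the AI Act):

```python
def max_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Illustrative fine ceiling as described above:
    EUR 20 million or 4 % of worldwide annual revenues, whichever is higher."""
    return max(20_000_000.0, 0.04 * worldwide_annual_revenue_eur)

# Example: at EUR 1 billion in annual revenues, 4 % (EUR 40 million)
# exceeds the fixed amount, so the ceiling is EUR 40 million.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 40,000,000
```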
In addition, the classification as high-risk AI with the associated obligations under Art. 6(2) AI Act requires that the system poses a significant risk to the health, safety and fundamental rights of natural persons. When assessing the significant risk for fundamental rights, particular attention must be paid to how the AI is used; threats to curtail the freedom of expression, for example, can justify the classification as high-risk AI. In any case, AI applications for medical diagnosis and treatment are generally included.

c) AI with low risk

In the case of the use of AI with low risk, certain transparency requirements must already be complied with. This concerns, for example, systems which interact with natural persons, such as chatbots or deepfakes.

d) AI with minimal or no risk

The use of AI with minimal risk should, however, be possible without restriction. This includes, e.g., search algorithms, spam filters or AI-supported video games. In addition, many AI systems used in manufacturing companies fall under this category, e.g. systems for predictive maintenance or industrial applications which do not process any personal data or make predictions influencing natural persons.

e) Generative AI

For the first time, the current Draft also contains regulations expressly dealing with the foundation models on which so-called Generative AI is based, including e.g. the language models behind ChatGPT and LaMDA (Art. 28b AI Act). These regulations apply above all to the providers of such systems and include transparency obligations, a registration obligation, appropriate safeguards against unlawful content as well as measures to protect the freedom of expression. Moreover, generative systems must disclose that their content was generated by AI.

3. Obligations for companies

a) High-risk AI

In addition to the provisions that apply to AI with low or minimal risk, the users of high-risk AI have comprehensive obligations under Art. 29 AI Act to ensure the secure use of the systems.
If the AI system is placed on the market or operated in the company's own name, further obligations apply in addition.
b) AI with low risk & AI with minimal risk

According to Art. 4a AI Act, certain minimum requirements, which should in principle be complied with by all AI providers, also apply to AI with low risk: these are general principles such as human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, and social and environmental well-being.
Compliance with these principles can also be ensured by adhering to harmonized standards, technical specifications and codes of conduct in accordance with Art. 69 AI Act. Such codes of conduct can be drawn up not only by individual AI providers but also by interest groups. In addition, AI systems intended for interaction with natural persons must ensure that the user is informed that AI is being used, unless this is already obvious. For AI with minimal or no risk there are no further requirements, but it is recommended to draw up a code of conduct for this AI as well.
4. Checklist

Step 1 – Is this an AI system?
- machine-based system (in particular software);
- autonomous generation of results based on specified (including implicit) objectives, in particular the output of values based on probabilities;
- given the broad scope of the definition, an AI system should usually be assumed in cases of doubt.
If no: End of the check; no measures according to the AI Act are required.
If yes:
Step 2 – Is this an AI with unacceptable risk?
This is the case if the AI has one of the following functions:
- subconscious influencing of the behaviour of persons;
- the exploitation of persons’ weaknesses and vulnerabilities;
- biometric categorization based on sensitive or protected features or traits (e.g. gender, ethnicity, nationality, religion, political orientation);
- social scoring;
- real-time remote biometric identification;
- predictive policing;
- creation of facial recognition databases via web-scraping;
- emotion recognition systems, in particular at the workplace and in educational institutions;
- retrospective biometric evaluation of public-space surveillance footage.
If yes: End of the check; the AI system may not be used.
If no:
Step 3 – Is this a high-risk AI?
This is the case if the AI has one of the following functions:
- applicant selection; access to education and training, assessment of examination results in education and training, assessment of the appropriate level of education as well as detection of prohibited conduct during tests;
- credit checks;
- asylum and visa checks;
- supporting judges and authorities in the application of the law, assessments in connection with law enforcement and similar measures restricting personal freedom;
- influencing elections and voting decisions (e. g. by controlling political campaigns);
- biometric identification of natural persons (not: mere identity verification);
- operation of critical infrastructures such as water, gas and electricity supply and transport;
- recommender systems of very large online platforms (VLOPs);
and
- the system poses a significant risk to the health, safety and fundamental rights of natural persons.
If yes: High-risk AI, which must comply with the specifications presented under Section 3 lit. a) and b) above.
If no: This is an AI with low, minimal or no risk. Where applicable, the specifications under Section 3 lit. b) must be complied with. If the system is a chatbot or a deepfake, it must be labelled accordingly.
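The three-step check can be read as a simple decision procedure. The following Python sketch is a simplified, non-authoritative illustration of the logic of the checklist above; the function names, keyword lists and risk labels are our own shorthand and are not taken from the AI Act itself.

```python
from enum import Enum

class RiskLevel(Enum):
    NOT_AI = "not an AI system - AI Act does not apply"
    UNACCEPTABLE = "unacceptable risk - use prohibited (Art. 5)"
    HIGH = "high risk - obligations under Section 3 lit. a) and b)"
    LOW_OR_MINIMAL = "low/minimal risk - transparency duties; label chatbots/deepfakes"

# Illustrative (non-exhaustive) shorthand lists mirroring Steps 2 and 3 above.
PROHIBITED_FUNCTIONS = {
    "subconscious behavioural influencing", "exploitation of vulnerabilities",
    "biometric categorization", "social scoring",
    "real-time remote biometric identification", "predictive policing",
    "facial recognition database via web scraping", "emotion recognition",
    "retrospective biometric evaluation of public spaces",
}
HIGH_RISK_FUNCTIONS = {
    "applicant selection", "education assessment", "credit check",
    "asylum and visa check", "support of judges and authorities",
    "election influencing", "biometric identification",
    "critical infrastructure operation", "vlop recommender system",
}

def classify(is_ai_system: bool, functions: set[str],
             significant_risk_to_persons: bool) -> RiskLevel:
    """Steps 1-3 of the checklist as a decision tree (simplified sketch)."""
    if not is_ai_system:                  # Step 1
        return RiskLevel.NOT_AI
    if functions & PROHIBITED_FUNCTIONS:  # Step 2
        return RiskLevel.UNACCEPTABLE
    # Step 3: a listed function AND a significant risk to health, safety
    # or fundamental rights of natural persons.
    if functions & HIGH_RISK_FUNCTIONS and significant_risk_to_persons:
        return RiskLevel.HIGH
    return RiskLevel.LOW_OR_MINIMAL

# Example: a hiring tool screening applicants with significant impact on persons.
print(classify(True, {"applicant selection"}, True))  # RiskLevel.HIGH
```

In practice, of course, each step calls for a documented legal assessment rather than a keyword match; the sketch merely makes the order of the checks explicit.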
5. Conclusion
It is possible that the above Draft of the AI Act, which has now been approved by the European Parliament, will largely correspond to the final version. It is therefore advisable to carry out an internal assessment of company applications based on this Draft now. The above checklist should be helpful in this regard. In this context, one should also take into account the EU Commission's other current legislative initiatives relating to EU digital law, such as the Digital Services Act or the new IT security provisions from the recent DORA Regulation and the upcoming Cyber Resilience Act (see related Article).