Prohibited Uses of AI in the New EU AI Regulation

30.5.2024

The European Union is the first in the world to adopt comprehensive legal regulation of artificial intelligence through the groundbreaking AI Act. The regulation lays the foundation for governing the development, placing on the market, and use of artificial intelligence (AI) within the EU. AI is a tool with the potential to facilitate work in nearly every field of human endeavor. However, as broad as AI's benefits can be, equally broad are the possibilities for its misuse. Some uses of AI are therefore prohibited or permitted only under strict conditions.

Firstly, Article 5(1)(a) and (b) of the regulation prohibit the use of AI systems that exploit the vulnerabilities of specific groups of people due to their age or physical or mental disability, with the intention of influencing their behavior in ways that may cause psychological or physical harm. This includes, for example, AI algorithms that personalize advertising based on an analysis of text written by the user or of their current emotional state (e.g., inferred from facial expressions captured during a video call).

AI can also analyze large amounts of data about an individual to identify their vulnerabilities or preferences and then target them with subliminal signals aimed at manipulating their behavior, a practice that is likewise banned under the new regulation.

Social Credit

A currently much-debated topic is the concept of social credit used in the People's Republic of China. Simply put, social credit means that a public authority uses AI-assisted computer programs to monitor and evaluate how people behave in public spaces and what opinions they hold, and to predict their future behavior or even personality traits. Based on the collected data, the program assigns a social score, which influences access to loans, job opportunities, or state support, and thereby affects people's quality of life.

The use of this concept of social credit, especially by public authorities, is prohibited by Article 5(1)(c) of the AI Act. Under point (i), its use is forbidden where the resulting social score leads to disadvantageous or adverse treatment of individuals in contexts unrelated to the context in which the data were originally collected. For example, a system that learns of a person's traffic violation and, on that basis, denies them housing benefits would fall under this prohibition.

Point (ii) then prohibits the use of AI systems for social credit where the resulting disadvantageous or adverse treatment of individuals is unjustified or disproportionate to their behavior or its severity. An example is a situation where a person crosses a street against a red light and, because of the resulting lowered social credit score, is denied a housing loan.

Biometric Identification

The last prohibited practice is the use of real-time remote biometric identification systems in publicly accessible spaces, governed by Article 5(2) to (4) of the AI Act. Real-time remote biometric identification systems use cameras and sensors in public places, together with AI, to capture unique biological features, primarily faces. These data are compared against databases, allowing individuals to be identified within seconds. Unregulated use of this technology poses high risks to individuals' rights to freedom and privacy.

Considering the numerous risks of biometric identification, the AI Act restricts its use to cases where it is necessary to search for missing persons or victims of crime, to prevent an imminent threat to life, to the safety of persons, or of a terrorist attack, or to detect, locate, identify, or prosecute perpetrators of certain serious crimes.

Even in these exceptional cases, the regulation sets conditions that must be met. The severity of the situation and the possible harm if the system is not used must be considered. In addition, the impact on the rights and freedoms of all individuals affected by the use of biometric identification must be weighed.

Each individual use of real-time remote biometric identification systems in publicly accessible places also requires authorization from a judicial or independent administrative authority of the member state where the use is to take place.

The authority grants authorization on the basis of a reasoned request only if it is satisfied, on the evidence or clear data presented, that the use is necessary for and proportionate to achieving one of the objectives permitted by the AI Act and identified in the request. In duly justified urgent situations, biometric identification may be used without prior authorization, which must then be requested during or after the use.

According to Rostam J. Neuwirth, "Prohibited Artificial Intelligence Practices in the Proposed EU Artificial Intelligence Act" (University of Macau – Faculty of Law, 2022, available from SSRN, cited 2024-05-23), these conditions for the use of biometric identification are formulated too broadly, making it very difficult to delineate the range of situations to which they apply.

For example, where biometric identification is used without the prior authorization of the competent authority, the interpretation of the term "duly justified urgent situation" is left to the police or other security bodies, which carries a significant risk of disproportionate use of this technology.

Shana Lynch's article for the Stanford Institute for Human-Centered AI, "Analyzing the European Union AI Act: What Works, What Needs Improvement" (Stanford HAI, 2023, cited 2024-05-23), likewise points out that without clear and unambiguous definitions there is a risk of inconsistent interpretation of these rules across the authorities of the member states, which could lead to legal uncertainty and numerous complications. It would therefore be appropriate to define the prohibited AI practices more specifically than Article 5 of the regulation currently does.

The AI Act is a groundbreaking regulation that seeks to ensure the ethical and transparent use of artificial intelligence in the EU. By setting rules for AI use in high-risk areas, it aims to protect citizens from manipulative systems, exploitation of vulnerable groups, and invasions of privacy, including through stringent conditions on the use of biometric identification. However, given its broadly formulated definitions, misuse and differing interpretations among member states cannot be ruled out.
