AI DEVELOPMENT – THE LINK BETWEEN THE GDPR AND THE AI ACT

22.10.2024

AI technologies have recently undergone rapid development and have become part of everyday life. This is also why state authorities are taking a close interest in the field. A good example of this practice is the General Secretariat of the Belgian Data Protection Authority, which has published a brochure specifically on the link between AI and the requirements of the GDPR and the AI Act. This article outlines the basic requirements and the important connections between these regulations in the context of data protection when using and deploying AI.

AI systems in general

First, it is necessary to clarify what the term “AI system” means. The term is defined in Art. 3(1) of the AI Act; in simplified terms, it can be described as a computer system designed to analyze data, identify patterns, and use that knowledge to make informed decisions or predictions.

AI systems are not limited to ChatGPT or chatbots; you encounter them on a daily basis. Examples range from spam filters that distinguish between spam and legitimate emails, to series and movie recommendations on streaming platforms, voice-controlled virtual assistants, and AI-powered medical imaging analysis – all of these can be considered AI systems.

The AI Act distinguishes four categories of AI systems according to the degree of risk:

  • Systems with unacceptable risk;
  • High-risk systems;
  • Other systems that are neither high-risk nor unacceptable risk;
  • Systems falling outside the scope of the AI Act.

More details on the AI Act, its effectiveness, and penalties can be found in the article here.

What are the requirements of the GDPR and AI Act for AI systems?

  • Lawful, fair and transparent processing

The GDPR establishes six legal bases for processing personal data: consent, contract, legal obligation, vital interests, public interest, and legitimate interests. It is important to keep in mind that these same legal bases remain applicable to AI systems that are covered by the AI Act. In addition, the AI Act also prohibits certain AI systems (such as social scoring systems or AI systems for real-time facial recognition in public places).

Although the AI Act does not have a dedicated section titled “fairness”, it builds upon the GDPR's principle of fair processing as provided in Art. 5(1)(a) GDPR. In terms of transparency, the AI Act requires a baseline level of transparency for all AI systems (i.e., users must be informed that they are interacting with an AI system). In addition, the AI Act requires a higher level of transparency for high-risk AI systems.

  • Purpose limitation and data minimization

The GDPR requires purpose limitation and data minimization, which means personal data must be collected for specific and legitimate purposes and limited to what is necessary for those purposes. This principle is further strengthened for high-risk AI systems in the AI Act by emphasizing the need for a well-defined and documented intended purpose.

  • Data accuracy

The GDPR requires personal data to be accurate and, where necessary, kept up to date; organizations themselves must ensure this. The AI Act builds upon this by requiring high-risk AI systems to use high-quality, unbiased data to prevent discriminatory outcomes (e.g., an AI system automating loan approvals could, if trained on biased data, systematically deny loans to applicants from lower-income neighborhoods).

  • Storage limitation

The GDPR requires personal data to be stored only for as long as necessary to achieve the purposes for which it was collected. This requirement is not explicitly extended in the AI Act, even for high-risk systems.

  • Automated decision-making

Both regulations address the importance of human involvement in automated decision-making. The GDPR gives individuals the right not to be subject to decisions based solely on automated processing (Art. 22), while the AI Act requires proactive human oversight for high-risk AI systems to safeguard against potential biases and to ensure the responsible development and use of such systems.

  • Security of processing

Both the GDPR and the AI Act emphasize the importance of securing personal data. More specifically, the GDPR requires organizations to implement technical and organizational measures (e.g., data encryption, access controls) that are appropriate to the specific risks posed by the data processing activities. The AI Act builds on the GDPR's foundation by requiring robust security measures for high-risk AI systems (e.g., anomaly detection or human review of high-risk data points).

  • Data subject rights

The GDPR guarantees natural persons a set of data subject rights, including access (Art. 15), rectification (Art. 16), erasure (Art. 17), restriction of processing (Art. 18), and data portability (Art. 20). The AI Act supports these rights by emphasizing the importance of providing clear explanations of how data is used in AI systems.

  • Accountability

The GDPR requires accountability for personal data processing through several measures, such as transparent processing, policies and procedures for handling personal data, documented legal bases for processing, and record-keeping, among other security measures.

The AI Act does not contain a separate section on demonstrating accountability but builds upon the GDPR's principles. It requires organizations to implement a two-step risk management approach for AI systems, maintain clear documentation, and ensure human oversight of high-risk AI systems.

Summary

In conclusion, the link between the GDPR and the AI Act is key to fully understanding the regulation of AI systems. It is evident that these regulations are interrelated and cannot be considered in isolation. The AI Act refines and tightens the conditions for AI systems, especially for high-risk ones, building on the GDPR’s established principles.

Our law firm specializes in AI implementation and is known for its innovative approach to technologies. As a result, we are well-positioned to provide our clients with legal assistance regarding the implementation of AI systems.

Thanks to its use of AI, our law firm received the “Legal Innovation of the Year 2023/2024” award in the Czech office category.

If you have any questions regarding this topic or related issues, please do not hesitate to contact us. We would be happy to learn more about your case and provide you with our legal assistance.

Responsible lawyer: Mgr. Filip Ondřej. Kateřina Chaloupková contributed to this article.
