ARTIFICIAL INTELLIGENCE REGULATION IN THE EU UNDER PRESSURE FROM GERMANY, ITALY AND FRANCE

17.1.2024

The draft European Union Regulation on Artificial Intelligence (AI Act) currently before the European Parliament is facing major challenges from Germany, France and Italy. Europe's three largest economies have raised objections to the current form of the proposal, particularly regarding the scope of regulation of AI technology, self-regulation and the focus on the application of AI. The three countries have circulated a non-paper that leaves little room for compromise and points in a very different direction from the current proposal.

Resistance to regulating the technology itself

Germany, France and Italy are fundamentally opposed to an approach that focuses on regulating AI technology itself. They stress that overly strict regulation of the technology could stifle innovation and technological progress. Their concern stems in particular from the belief that AI innovation should be allowed to develop with minimal restrictions so that it can deliver positive results and is not disadvantaged in the global marketplace. They argue that regulation should be more flexible, focusing on specific applications and uses of AI rather than on the technology itself.

This approach reflects the view that the risks associated with AI arise from its use in specific contexts, rather than from the technology itself. For example, the same AI system can be used for innocuous purposes, such as spam filtering, or for controversial purposes, such as mass surveillance. "If we want to play in the world's top AI league, we need to regulate the application, not the technology," said Volker Wissing, Germany's minister for digital affairs.

Self-regulation as a key

Berlin, Paris and Rome strongly support the concept of self-regulation, which is generally an approach whereby industries and businesses themselves set and adhere to certain standards and rules of conduct. They propose so-called "mandatory self-regulation" through codes of conduct, which allows for greater flexibility in a rapidly changing technological environment.

To implement this approach, representatives of these countries suggest that creators of foundational AI models, which are designed to produce a wide range of outputs, define model characteristics and technical documentation summarising information about the trained models for the general public. "Defining model characteristics and making them available for each model is a mandatory element of this self-regulation," the non-paper states, stressing that these characteristics will need to include relevant information about the model's capabilities and limitations and will be based on best practices within the developer community. This would include, for example, the number of parameters, the intended use, potential limitations, the results of bias studies and red-teaming (testing to assess safety).

The document also suggested that the AI governance body could help develop guidelines, have the ability to monitor the use of model characteristics and be required to provide an easy way to report any breaches of the code of conduct. "Any suspected breaches should be made public by the authority in the interests of transparency," the document goes on to say.

Another point that goes against the concept of the original proposal is the demand by Italy, France and Germany that sanctions should not be applied initially. According to them, a sanctioning regime should only be introduced after systematic breaches of codes of conduct and after a proper analysis and impact assessment of the misconduct detected.

The pitfalls of regulating only large AI models

During discussions in June, the European Parliament proposed that the code of conduct should initially be binding only on large AI providers, which are primarily based in the United States. However, the governments of the three member states argued that this apparent competitive advantage for smaller European providers could in fact undermine their credibility and cost them customers. They therefore insist that rules of conduct and transparency should be binding on all providers.

Impact on the Legislative Process

This dissenting position represents a significant turning point in the AI Act approval process. While the European Parliament is prepared to continue the trilogue negotiations, the objections of these three member states suggest that reaching agreement will require compromises and possibly significant changes to the draft Regulation itself.

Conclusion

The developments around the AI Act exemplify the complexity of balancing regulation, innovation and economic competition in the modern technological era. Resolving the conflict between the different approaches to AI regulation will have a major impact on Europe's future standing in the digital world.

Matěj Morávek collaborated on the article.
