How to approach artificial intelligence (AI) in healthcare from a legal and regulatory perspective

3.7.2025

We often find that discussions about AI in healthcare tend to veer toward two extremes. On the one hand, there is a utopian vision in which AI solves all of medicine's problems. On the other hand, there is a dystopian fear of faulty algorithms and the replacement of doctors. The reality that our lawyers at ARROWS see every day when dealing with these cases lies somewhere in between: in the complex cooperation between humans and machines. The real challenge for hospital management and technology companies is not deciding whether to use humans or machines. It is about setting clear rules, processes, and responsibilities between humans and machines. And this is where technology meets the law.

Author of the article: ARROWS (JUDr. Jakub Dohnal, Ph.D., LL.M., office@arws.cz, +420 245 007 740)

A new era of medicine is here. Are you legally prepared for it?

Imagine a radiologist who, thanks to an artificial intelligence (AI) assistance system, detects a barely visible early-stage tumor on an X-ray that would otherwise escape the human eye.

Or imagine hospital software that optimizes bed occupancy and staff allocation in real time during a crisis, saving lives. This is not science fiction. This is the present reality of Czech and European healthcare, where artificial intelligence is already transforming diagnostics, personalizing treatment, and streamlining operations.

This technological revolution brings enormous opportunities: more accurate diagnoses, reduced workload for overburdened staff, and ultimately better and more accessible care for patients. However, these opportunities also come with a new, unprecedented level of legal and regulatory complexity. Regulation (EU) 2024/1689 of the European Parliament and of the Council, known as the AI Act, represents as fundamental a change in the rules of the game as the GDPR did in its day. Innovation and regulation have become two sides of the same coin. Ignoring one means jeopardizing the success of the other.

This report is your strategic guide to this new environment. It is not just a list of paragraphs. It is a practical map that shows you how to avoid risks, meet new obligations, and turn regulatory burdens into a demonstrable competitive advantage. The lawyers at ARROWS specialize in this area and help clients ensure that their innovative projects are not only functional but also legally watertight.

Part 1: AI as the new standard of care – Why is this debate relevant to your practice and business?

The debate about artificial intelligence in healthcare is no longer a futuristic vision. It is a tangible reality that directly affects your daily operations, whether you run a hospital, a clinic, or develop new medical technology. According to a survey by the Czech Association for Artificial Intelligence, 64% of Czech hospitals already use AI in some form. And it's not just large teaching hospitals; the technology is penetrating the entire sector.

The benefits are demonstrable and measurable. For example, at the AGEL Nový Jičín hospital, the use of AI to evaluate mammograms led to a five percent increase in cancer detection. Other systems help doctors analyze chest X-rays and alert them to potential findings, reducing the risk of oversights in smaller hospitals where a radiologist is not always physically available, especially during night shifts. AI also dramatically streamlines administration: in hygiene control, for example, it can go through thousands of pages of records, a volume no human could realistically review, and identify areas at risk of infection.

This trend is also driven by the dynamic Czech ecosystem of technology companies. Companies such as Carebot (assistance with X-ray image analysis), MAIA (radiology solutions), Kardi Ai (detection of cardiac arrhythmias from ECG) and Aireen (detection of diabetic retinopathy) are proof that we have top experts and innovative products in the Czech Republic that are already being successfully tested and deployed in practice.

However, this rapid and often decentralized adoption of technology creates significant, albeit often hidden, risks. While individual departments (e.g., radiology) may enthusiastically implement the latest AI tools, it is very likely that organization-wide legal and ethical frameworks are lagging behind. This creates a “gap between adoption and governance”. It is highly likely that while doctors are already benefiting from AI, the hospital's legal department is not yet fully prepared for an audit under the strict rules of the AI Act. ARROWS lawyers encounter this situation and know that the first step towards safe innovation is not just advising on new projects, but conducting a compliance gap analysis of existing AI systems. Identifying and closing this gap is key to minimizing future risks.

Part 2: AI Act – New rules of the game you need to know

The Artificial Intelligence Regulation, known as the AI Act, which came into force on August 1, 2024, is the world's first comprehensive legal framework for AI. It is not merely a recommendation, but a directly applicable regulation with far-reaching implications for anyone who develops, deploys, or uses AI in the European Union. Knowledge of the AI Act is absolutely essential for the healthcare sector.

The AI Act introduces a risk-based approach and divides AI systems into four categories that can be likened to a pyramid:

  1. Unacceptable risk (prohibited practices): At the top of the pyramid are practices that are considered so dangerous to fundamental rights that they are completely prohibited. These include, for example, social scoring systems that could affect access to healthcare, or AI that manipulates behavior and exploits patients' vulnerabilities (e.g., their age or health status). These prohibitions have applied since February 2, 2025, six months after the regulation entered into force.
  2. High risk: This is the most important category for healthcare. It includes virtually all AI systems that could have a significant impact on the health and safety of individuals. The AI Act explicitly classifies as high risk medical devices subject to certification, as well as systems designed for triage in emergency situations or systems for assessing claims for healthcare benefits and services. These systems are subject to the strictest rules and obligations.
  3. Limited risk: This includes systems where transparency is the main obligation. Chatbots are a typical example. The user must be clearly informed that they are communicating with a machine, not a human.
  4. Minimal risk: The base of the pyramid includes most commonly used AI applications, such as spam filters or AI in computer games. These systems are not subject to any new specific obligations.

AI Act implementation timeline

Understanding the timeline is key to planning. Although the regulation is already in force, individual obligations are phased in gradually to give companies and institutions time to prepare:

  • August 1, 2024: The regulation enters into force.
  • February 2, 2025 (6 months after entry into force): Prohibitions on practices posing an unacceptable risk apply.
  • August 2, 2025 (12 months after entry into force): Rules for general-purpose AI models (e.g., large language models) apply.
  • August 2, 2026 (24 months after entry into force): Most obligations for high-risk systems apply.
  • August 2, 2027 (36 months after entry into force): The remaining obligations apply to high-risk systems that are also products subject to other EU regulations (e.g., medical devices).

Double certification: A challenge for innovators in MedTech

A specific and challenging situation is emerging for AI developers in healthcare. Their software is often both a medical device under the MDR (Medical Device Regulation) or IVDR (In Vitro Diagnostic Regulation) and a high-risk AI system under the AI Act. In practice, this means that they must undergo dual certification and comply with two parallel, complex, and demanding sets of rules.

This dual regulatory burden significantly increases the cost, time, and administrative complexity of bringing a new product to market. For small and medium-sized enterprises and innovative Czech start-ups, this can represent a significant competitive disadvantage compared to large multinational corporations with extensive compliance departments. ARROWS lawyers specialize in helping clients develop integrated compliance strategies that effectively link the requirements of the MDR/IVDR and the AI Act. The aim is to streamline the entire process, save resources, and enable even smaller innovators to successfully navigate this complex environment.

Key obligations for high-risk AI systems

If you are developing or deploying a high-risk AI system in your hospital or clinic, you must meet a number of strict requirements. These obligations apply to both providers (developers) and users (e.g., hospitals). The following table summarizes the most important ones.

Table 1: Overview of key obligations for high-risk AI systems under the AI Act

| Obligation | Key requirement under the AI Act | Practical impact on your organization | AI Act article |
|---|---|---|---|
| Risk management system | Establish, implement, document, and maintain a risk management system throughout the AI lifecycle. | You must create and continuously update a living document that identifies, assesses, and mitigates all foreseeable risks to health, safety, and fundamental rights. This is not a one-time task. | Article 9 |
| Data management and quality | Ensure that training, validation, and test data are relevant, representative, as error-free and complete as possible, and have appropriate statistical properties. Check for possible bias. | You are responsible for the quality of the data you feed into the AI. Providing poor-quality or unrepresentative data (e.g., with demographic bias) may lead to discriminatory results and your joint liability. | Article 10 |
| Technical documentation | Create and maintain detailed technical documentation before the system is placed on the market. The documentation must demonstrate compliance with all requirements. | You must be able to provide regulatory authorities at any time with a detailed description of the system's functioning, its purpose, the data used, the algorithms, testing, and safeguards. | Article 11, Annex IV |
| Record keeping | Ensure that the system is capable of automatically recording events (logs) during operation. | Your AI system must create an audit trail that allows you to trace its operation and investigate any incidents or incorrect outputs. You must keep these logs for at least six months. | Article 12 |
| Transparency and information | Design the system so that its operation is transparent to users. Provide users with clear and understandable instructions for use. | A physician using AI must understand its capabilities and limitations. You must provide detailed instructions explaining how to use the system, how to interpret its outputs, and what the risks are. | Article 13 |
| Human oversight | The system must be designed so that it can be effectively overseen by humans. There must be a possibility for human intervention, review, and overriding of AI decisions. | AI does not replace doctors. You must implement processes ensuring that the final decision (e.g., on diagnosis or treatment) is always made, or can be reviewed and reversed, by a qualified professional. | Article 14 |
| Accuracy, robustness, and cybersecurity | Achieve an appropriate level of accuracy, robustness, and cybersecurity throughout the system lifecycle. | The system must be resistant to errors and external attacks. You must perform testing and ensure robust protection against cyber threats that could compromise patient data or system functionality. | Article 15 |
| Conformity assessment and registration | Perform a conformity assessment before placing the system on the market and register the high-risk system in the EU public database. | As with other certified products, you must undergo a formal assessment process and formally declare that your system meets all legal requirements. | Article 43, Article 49 |
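
Several of these obligations translate directly into day-to-day technical practice. Purely as an illustration of the record-keeping duty (Article 12), the following Python sketch shows one possible way a hospital's integration layer could log every AI-assisted finding. The field names and the flat-file format are our own assumptions; the AI Act prescribes no particular log structure, only that events must be recorded automatically and retained.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_finding(log_file, model_id, model_version, study_id, ai_output, reviewed_by):
    """Append one audit-trail entry for an AI-assisted finding (illustrative sketch only)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,            # which AI system produced the output
        "model_version": model_version,  # the exact version matters for later incident analysis
        "study_id": study_id,            # pseudonymized reference, not raw patient data
        "ai_output": ai_output,          # e.g. the finding and a confidence score
        "reviewed_by": reviewed_by,      # supports the human-oversight requirement (Article 14)
    }
    # A simple integrity hash makes later tampering easier to detect.
    entry["entry_hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage:
log_ai_finding(
    "ai_audit.log",
    model_id="chest-xray-assistant",
    model_version="1.4.2",
    study_id="STUDY-2025-00123",
    ai_output={"finding": "suspected nodule", "confidence": 0.87},
    reviewed_by="dr_novak",
)
```

In a real deployment such records would typically sit in the hospital information system rather than a flat file, but the principle is the same: which model version produced which output, when, and who reviewed it.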

 

Part 3: Data – Fuel for AI and a GDPR nightmare

Artificial intelligence is hungry for data. The higher the quality and the broader the scope of the data available for training, the more accurate and reliable the results. In healthcare, however, this data is extremely sensitive. This is where the world of innovation collides with the strict rules of the General Data Protection Regulation (GDPR).

Health data as a “special category”

The GDPR defines health data as a “special category of personal data” (previously referred to as sensitive data). This means that its processing is generally prohibited unless it falls under one of the precisely defined exceptions in Article 9 of the GDPR.

For the purposes of training and operating AI in healthcare, two legal bases are primarily applicable:

  1. Explicit consent of the patient (Article 9(2)(a) of the GDPR): Obtaining valid, freely given, specific, informed, and unambiguous consent from each patient whose data is to be used is the legally cleanest route. In practice, however, this is very difficult, especially with the large data sets required for AI training. The patient must know exactly what their data will be used for and has the right to withdraw their consent at any time.
  2. Processing necessary for the purposes of... (Article 9(2)(h) and (i) of the GDPR): Processing is permitted if it is necessary for the purposes of preventive or occupational medicine, medical diagnosis, or the provision of health care, or for reasons of public interest in the area of public health. This legal basis is relevant for the operation of AI in clinical practice, but relying on it for the development and training of new, commercial AI models is legally more complex and requires careful assessment.

In addition, legitimate interest (Article 6(1)(f) of the GDPR) may also be relevant for processing that does not involve special categories of data. The European Data Protection Board (EDPB) has acknowledged that legitimate interest may be a legal basis for AI development, but only if a rigorous three-step balancing test is carried out. The controller must demonstrate that its interest is legitimate, that the processing is necessary for it, and that the rights and freedoms of patients do not override its interest.

Obligation to carry out a DPIA (Data Protection Impact Assessment)

For any deployment of a new AI system that will process health data on a large scale, a DPIA is practically always mandatory. This is because such processing meets several high-risk criteria under the GDPR: it involves the large-scale processing of special categories of data, often includes systematic evaluation (profiling), and uses new technologies. A DPIA is a process that aims to identify and minimize risks to patients' rights and freedoms before processing begins. ARROWS regularly assists clients with the preparation and review of DPIAs to ensure that these documents stand up to any inspection by the Office for Personal Data Protection.

From data controller to data curator

The AI Act's requirements for the quality of training data fundamentally change the role and responsibilities of healthcare providers. It is no longer enough to be a “data controller” within the meaning of the GDPR, whose main responsibility is to protect data.

A hospital or clinic that provides data for AI development becomes a “data curator.” It thus bears a new responsibility for ensuring that the data set provided is of high quality, representative, and free of bias that could lead to discriminatory or otherwise harmful AI outcomes.

If a hospital provides a developer with a data set that is demographically biased and the resulting algorithm then diagnoses a certain group of the population less accurately, the hospital may bear joint responsibility for the harm caused. It is no longer just a matter of protecting data, but of actively ensuring that it is suitable for creating a safe and fair high-risk system. This opens up the need for new specialized legal services, such as audits of AI data sets and the preparation of data curation agreements that clearly define responsibility for data quality and the performance of the resulting model. ARROWS lawyers are ready to provide expert advice in this area as well.
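
What a data curation audit looks like in practice always depends on the specific project, but one of its typical building blocks is a simple representativeness check of the training set against the population the system is meant to serve. The following Python sketch is only an illustration of that idea; the column names, reference shares, and tolerance threshold are hypothetical, and a real bias audit would also examine label quality, outcome rates, and model performance per group.

```python
import pandas as pd

def representativeness_report(df, column, reference_shares, tolerance=0.05):
    """Compare group shares in a training set against a reference population
    and flag any group whose share deviates by more than `tolerance`."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "expected_share": expected,
            "observed_share": round(share, 3),
            "flagged": abs(share - expected) > tolerance,
        })
    return pd.DataFrame(rows)

# Hypothetical example: age distribution in a screening data set
# versus the screened population it is intended to serve.
training_data = pd.DataFrame({"age_band": ["45-59"] * 700 + ["60-74"] * 250 + ["75+"] * 50})
reference = {"45-59": 0.50, "60-74": 0.35, "75+": 0.15}
print(representativeness_report(training_data, "age_band", reference))
```

In this invented example the oldest age band is clearly underrepresented, which is exactly the kind of finding a hospital would want to document and address before handing the data set to a developer.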

Part 4: Who is responsible when an algorithm makes a mistake?

Imagine a scenario that is every doctor and patient's nightmare: an AI diagnostic system recommends the wrong treatment and the patient suffers harm. Who is legally responsible? Is it the software developer, the hospital that deployed the system, or the doctor who followed the recommendation? The answer is not straightforward and involves an entire chain of responsibility.

Current legislation, both in the EU and in the Czech Republic, is based on the fundamental principle that ultimate responsibility lies with humans, not robots or algorithms. AI is seen as a tool, albeit a very advanced one. The final decision and responsibility for it remains in the hands of human experts.

In the event of misconduct, liability is examined at several levels:

  • Liability of the AI provider (developer): The software manufacturer is liable for defects in its product. If the damage was caused by an error in the algorithm, incorrect system design, or insufficient testing, the manufacturer will be primarily liable. The Czech Civil Code provides for liability for damage caused by a product defect (Section 2939 of the Civil Code), with software also considered a product. However, it is extremely difficult for an injured patient to prove a defect in a complex “black box” algorithm.
  • User responsibility (hospitals, clinics): The AI Act introduces specific obligations for "deployers" of high-risk systems, i.e., the hospitals and clinics that use them. Hospitals are required to use the system in accordance with the instructions for use, ensure human oversight, monitor its operation, and have a risk management system in place. If a hospital neglects these obligations (for example, by failing to adequately train staff or by ignoring obvious system error messages), it will bear joint responsibility. It may also be liable as the operator of the facility under Section 2924 of the Civil Code.
  • Responsibility of healthcare professionals (doctors): The use of AI does not relieve doctors of their professional duty to act with due professional care (lege artis). Doctors must not blindly accept AI outputs. They must critically evaluate them in the context of their own knowledge and experience and, if in doubt, set the AI's recommendation aside or verify it using other methods. The final clinical decision remains theirs.

AI literacy as a new element of lege artis

With the massive advent of AI in medicine, the very interpretation of the standard of appropriate professional care will gradually change. Once the use of a particular diagnostic AI tool that demonstrably increases accuracy (e.g., by 5% in cancer detection) becomes the norm in the field, physicians will be expected to know how to use that tool.

This means that the professional duties of doctors will expand. They will no longer include only medical knowledge, but also “AI literacy”: the ability to correctly use, interpret, and, if necessary, critically evaluate and reject the outputs of standard technological tools. Failure in this area could be considered a form of professional misconduct in the future. This creates a new obligation for hospitals and clinics: not only to purchase these tools, but also to ensure and thoroughly document the training of their staff. ARROWS lawyers can help update employment contracts and internal regulations to reflect these new technological competencies and responsibilities.

Legal vacuum and the importance of contracts

The situation is further complicated by the fact that the proposed EU directive on liability for artificial intelligence, which was intended to make it easier for injured parties to prove their claims, was withdrawn at the beginning of 2025. This has created a legal vacuum. In practice, it means that a precisely drafted contract becomes the key, and essentially the only, effective tool for allocating risks and responsibilities between the developer and the hospital.

Part 5: Practical compliance manual: How to prepare and avoid multimillion-euro fines

Theoretical knowledge is important, but in business, practical steps and the ability to manage risks are what matter. Failure to comply with the rules set out in the AI Act and the GDPR is not an academic matter; it carries draconian financial penalties that can be devastating for a company or hospital.

Penalties that cannot be ignored

The AI Act, like the GDPR, sets fines based on the severity of the violation and the global turnover of the company. It is important to note that these risks can add up—a single mistake in AI implementation can lead to penalties under both regulations at the same time.
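
To make the mechanism concrete: both regulations cap the fine at a fixed amount or a percentage of global annual turnover, whichever is higher (for SMEs, the AI Act applies the lower of the two), so the exposure grows with the size of the organization. The short sketch below simply applies that rule to a hypothetical provider; the turnover figure is invented, and the percentages correspond to the table that follows.

```python
def max_fine(turnover_eur, fixed_cap_eur, pct_of_turnover):
    """Upper limit of the fine: the fixed cap or the turnover-based cap, whichever is higher."""
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

turnover = 1_000_000_000  # hypothetical global annual turnover of EUR 1 billion

print(max_fine(turnover, 35_000_000, 0.07))  # prohibited AI practices      -> EUR 70 million
print(max_fine(turnover, 15_000_000, 0.03))  # high-risk AI obligations     -> EUR 30 million
print(max_fine(turnover, 20_000_000, 0.04))  # GDPR special-category breach -> EUR 40 million
```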

Table 2: Comparison of sanctions – Risks of non-compliance with the AI Act and GDPR

| Violation | Maximum penalty under the AI Act | Maximum penalty under the GDPR | Practical example |
|---|---|---|---|
| Use of prohibited AI practices | Up to €35 million or 7% of global annual turnover (whichever is higher) | N/A | A hospital implements a system that prioritizes access to care based on the patient's social profile (e.g., income). |
| Failure to comply with obligations for high-risk AI | Up to €15 million or 3% of global annual turnover | N/A | A manufacturer places diagnostic AI software on the market without a conformity assessment, without technical documentation, or without ensuring human oversight. |
| Violation of personal data processing principles | N/A | Up to €20 million or 4% of global annual turnover | An AI system is trained on patients' health data without a valid legal basis (e.g., without consent), or the data is insufficiently secured. |
| Providing incorrect information to authorities | Up to €7.5 million or 1% of global annual turnover | Up to €10 million or 2% of global annual turnover | During an inspection, a company submits false or misleading documentation about its AI system or personal data processing to the supervisory authority. |

 

In addition to direct financial penalties, there are other serious consequences: orders to withdraw the system from the market, lawsuits for damages brought by injured patients, and, last but not least, enormous reputational damage.

Contract with the AI supplier: Your first line of defense

Given the uncertainties surrounding legal liability, the contract between you (as a hospital or clinic) and the AI solution provider is an absolutely essential risk management tool. As a user, you are required by the AI Act to vet your suppliers. Your contract must be your insurance policy. When preparing and reviewing these contracts, ARROWS lawyers place particular emphasis on the following clauses:

  • Scope of license and intellectual property rights (IP): It is necessary to clearly define what rights you acquire to the software itself and, above all, to the outputs it generates. Can you use them freely? Who owns them? Can the supplier use your data (even anonymized) for further training of its models?
  • Supplier warranties and representations: The contract must include an explicit statement by the supplier that its AI system is fully compliant with the AI Act, has undergone a conformity assessment, and has all the necessary documentation.
  • Indemnification: A key clause stipulating that if the hospital is penalized or sued due to a defect in the AI system, the supplier will indemnify it and cover all costs, fines, and damages.
  • Data Processing Agreement (DPA): If the supplier comes into contact with patient personal data in any way, a DPA is an absolute necessity under Article 28 of the GDPR. It must clearly define the technical and organizational measures for data protection.
  • Cooperation and audit obligations: The contract should require the supplier to provide you with all documentation and cooperation necessary for your own compliance (e.g., for DPIA) and allow for audits to be conducted.

Case study: How ARROWS helped Hospital XY safely implement AI for diagnostics

A fictional example based on real-life experience

A regional hospital decided to implement an innovative AI tool from a Czech startup for analyzing CT scans. Management was aware of the enormous clinical benefits, but at the same time had concerns about new legal risks. They turned to the ARROWS law firm.

  1. Compliance Audit: The ARROWS team first conducted an audit of the proposed solution. It confirmed that it was a high-risk AI system under the AI Act and also a Class IIa medical device under the MDR.
  2. Revision of the contract with the supplier: ARROWS lawyers completely reworked the supplier's draft contract. They added robust clauses on guarantees of compliance with the AI Act and MDR, a clear definition of liability for defects, and a commitment to compensate the hospital in the event of problems.
  3. DPIA preparation: ARROWS, in cooperation with the hospital's IT department, prepared a detailed data protection impact assessment (DPIA) that identified risks and proposed specific measures to minimize them.
  4. Internal guidelines and training: Based on ARROWS' recommendations, the hospital introduced new internal guidelines for the use of AI in clinical practice and organized mandatory, documented training for all doctors involved.

Result: The hospital was able to deploy the innovative technology with confidence that it met all regulatory requirements. It had a legally secure relationship with the supplier and minimized the risk of penalties and liability disputes. A potential legal nightmare turned into a safe and successful innovation.

Conclusion: The future belongs to those who are prepared. ARROWS is your partner on the path to safe and successful innovation.

Artificial intelligence undoubtedly represents one of the greatest revolutions in the history of medicine. Its potential to improve diagnostics, streamline treatment, and save lives is enormous.

However, as we have shown in this report, this technological wave brings with it a complex and uncompromising regulatory framework in the form of the AI Act and stricter GDPR requirements. For hospital management, clinics, and medtech innovators, it is no longer a question of whether, but how to deal with this new reality.

Ignoring these rules is not a strategy; it is gambling with the risk of millions in fines, lawsuits, and damage to the reputation you have built over many years. On the contrary, a proactive and strategic approach to legal compliance is becoming a key factor for success. It not only minimizes risks but also builds trust with patients and partners and gives you a decisive competitive advantage.

ARROWS Law Firm stands by your side in this new era. Our experts specialize in connecting the worlds of technology and law. We understand not only the law, but also the practical needs of your business and clinical practice. We don't just provide general legal advice; we offer concrete, tailor-made solutions that allow you to innovate with confidence and peace of mind. Our services include:

  • Comprehensive compliance audits with the AI Act and GDPR.
  • Preparation and review of contracts with AI solution providers.
  • Processing of data protection impact assessments (DPIAs).
  • Creation of internal guidelines and policies for the safe use of AI.
  • Representation in disputes or negotiations with regulatory authorities.

Artificial intelligence is revolutionizing healthcare. Don't let legal uncertainty slow down your progress. Contact us today to find out how ARROWS' team of experts can help you turn regulatory challenges into safe and successful innovation.