
We often find that discussions about AI in healthcare veer toward two extremes. On one side stands a utopian vision in which AI solves all of medicine's problems; on the other, a dystopian fear of faulty algorithms and the replacement of doctors. The reality that our lawyers at ARROWS see every day when dealing with these cases lies somewhere in between: in the complex cooperation between humans and machines. The real challenge for hospital management and technology companies is not choosing between humans and machines; it is setting clear rules, processes, and responsibilities for their cooperation. And this is where technology meets the law.
Author of the article: ARROWS (JUDr. Jakub Dohnal, Ph.D., LL.M., office@arws.cz, +420 245 007 740)
Imagine a radiologist who, thanks to an artificial intelligence (AI) assistance system, detects a barely visible early-stage tumor on an X-ray that would otherwise escape the human eye.
Or imagine hospital software that optimizes bed occupancy and staff allocation in real time during a crisis, saving lives. This is not science fiction. This is the present reality of Czech and European healthcare, where artificial intelligence is already transforming diagnostics, personalizing treatment, and streamlining operations.
This technological revolution brings enormous opportunities: more accurate diagnoses, reduced workload for overburdened staff, and ultimately better and more accessible care for patients. However, these opportunities also come with a new, unprecedented level of legal and regulatory complexity. Regulation (EU) 2024/1689 of the European Parliament and of the Council, known as the AI Act, represents as fundamental a change in the rules of the game as the GDPR did in its day. Innovation and regulation have become two sides of the same coin. Ignoring one means jeopardizing the success of the other.
This report is your strategic guide to this new environment. It is not just a list of paragraphs. It is a practical map that shows you how to avoid risks, meet new obligations, and turn regulatory burdens into a demonstrable competitive advantage. The lawyers at ARROWS specialize in this area and help clients ensure that their innovative projects are not only functional but also legally watertight.
The debate about artificial intelligence in healthcare is no longer a futuristic vision. It is a tangible reality that directly affects your daily operations, whether you run a hospital, a clinic, or develop new medical technology. According to a survey by the Czech Association for Artificial Intelligence, 64% of Czech hospitals already use AI in some form. And it's not just large teaching hospitals; the technology is penetrating the entire sector.
The benefits are demonstrable and measurable. At the AGEL Nový Jičín hospital, for example, the use of AI to evaluate mammograms led to a five percent increase in cancer detection. Other systems help doctors analyze chest X-rays and alert them to potential findings, reducing the risk of oversights in smaller hospitals where a radiologist is not always physically available, especially during night shifts. AI also dramatically streamlines administration: in hygiene control, for example, it can review thousands of pages of records, a volume no human could realistically process, and identify areas at risk of infection.
This trend is also driven by the dynamic Czech ecosystem of technology companies. Companies such as Carebot (assistance with X-ray image analysis), MAIA (radiology solutions), Kardi Ai (detection of cardiac arrhythmias from ECG) and Aireen (detection of diabetic retinopathy) are proof that we have top experts and innovative products in the Czech Republic that are already being successfully tested and deployed in practice.
However, this rapid and often decentralized adoption of technology creates significant, albeit often hidden, risks. While individual departments (e.g., radiology) may enthusiastically implement the latest AI tools, organization-wide legal and ethical frameworks tend to lag behind. This creates a “gap between adoption and governance”: doctors are already benefiting from AI while the hospital's legal department is not yet fully prepared for an audit under the strict rules of the AI Act. ARROWS lawyers encounter this situation regularly and know that the first step towards safe innovation is not only advising on new projects but also conducting a compliance gap analysis of the AI systems already in use. Identifying and closing this gap is key to minimizing future risks.
The Artificial Intelligence Regulation, known as the AI Act, which came into force on August 1, 2024, is the world's first comprehensive legal framework for AI. It is not merely a recommendation, but a directly applicable regulation with far-reaching implications for anyone who develops, deploys, or uses AI in the European Union. Knowledge of the AI Act is absolutely essential for the healthcare sector.
The AI Act introduces a risk-based approach and divides AI systems into four categories that can be likened to a pyramid:

- Unacceptable risk: practices banned outright, such as social scoring or manipulative systems.
- High risk: systems with a significant impact on health, safety, or fundamental rights. Most medical AI, from diagnostic software to triage tools, falls into this category and is subject to the strictest obligations.
- Limited risk: systems subject mainly to transparency obligations, such as chatbots that must disclose to users that they are interacting with AI.
- Minimal risk: the vast majority of applications (e.g., spam filters), which face no specific new obligations.
Understanding the timeline is key to planning. Although the regulation is already in force, the individual obligations are phased in gradually to give companies and institutions time to prepare:

- August 1, 2024: the AI Act entered into force.
- February 2, 2025: the bans on prohibited practices and the AI literacy obligations apply.
- August 2, 2025: the rules for general-purpose AI models and the governance and penalty provisions apply.
- August 2, 2026: most of the remaining obligations apply, including those for most high-risk systems.
- August 2, 2027: the obligations apply to high-risk AI embedded in regulated products, such as medical devices under the MDR/IVDR.
A specific and challenging situation is emerging for AI developers in healthcare. Their software is often both a medical device under the MDR (Medical Device Regulation) or IVDR (In Vitro Diagnostic Regulation) and a high-risk AI system under the AI Act. In practice, this means that they must undergo dual certification and comply with two parallel, complex, and demanding sets of rules.
This dual regulatory burden significantly increases the cost, time, and administrative complexity of bringing a new product to market. For small and medium-sized enterprises and innovative Czech start-ups, this can represent a significant competitive disadvantage compared to large multinational corporations with extensive compliance departments. ARROWS lawyers specialize in helping clients develop integrated compliance strategies that effectively link the requirements of the MDR/IVDR and the AI Act. The aim is to streamline the entire process, save resources, and enable even smaller innovators to successfully navigate this complex environment.
If you are developing or deploying a high-risk AI system in your hospital or clinic, you must meet a number of strict requirements. These obligations apply to both providers (developers) and users (e.g., hospitals). The following table summarizes the most important ones.
Table 1: Overview of key obligations for high-risk AI systems under the AI Act
| Obligation | Key requirement under the AI Act | Practical impact on your organization | AI Act article |
| --- | --- | --- | --- |
| Risk management system | Establish, implement, document, and maintain a risk management system throughout the AI lifecycle. | You must create and continuously update a living document that identifies, assesses, and mitigates all foreseeable risks to health, safety, and fundamental rights. This is not a one-off task. | Article 9 |
| Data governance and quality | Ensure that training, validation, and test data are relevant, representative, as error-free and complete as possible, and have appropriate statistical properties. Check for possible bias. | You are responsible for the quality of the data you feed into the AI. Supplying poor-quality or unrepresentative data (e.g., with demographic bias) may lead to discriminatory results and your joint liability. | Article 10 |
| Technical documentation | Create and maintain detailed technical documentation before the system is placed on the market. The documentation must demonstrate compliance with all requirements. | You must be able to provide regulators at any time with a detailed description of the system's functioning, its purpose, the data used, the algorithms, testing, and safeguards. | Article 11, Annex IV |
| Record keeping | Ensure that systems are capable of automatically recording events (logs) during operation. | Your AI system must create an audit trail that allows you to trace its operation and investigate any incidents or incorrect outputs. You must keep these logs for at least six months. | Article 12 |
| Transparency and information | Design the system so that its operation is transparent to users. Provide users with clear and understandable instructions for use. | A physician using AI must understand its capabilities and limitations. You must provide detailed instructions explaining how to use the system, how to interpret its outputs, and what the risks are. | Article 13 |
| Human oversight | The system must be designed so that it can be effectively supervised by humans. There must be scope for human intervention, review, and overriding of AI decisions. | AI does not replace doctors. You must implement processes ensuring that the final decision (e.g., on diagnosis or treatment) is always made, or can be reviewed and reversed, by a qualified professional. | Article 14 |
| Accuracy, robustness, and cybersecurity | Achieve an appropriate level of accuracy, robustness, and cybersecurity throughout the system lifecycle. | The system must be resilient to errors and external attacks. You must carry out testing and ensure robust protection against cyber threats that could compromise patient data or system functionality. | Article 15 |
| Conformity assessment and registration | Carry out a conformity assessment before placing the system on the market and register the high-risk system in the public EU database. | As with other certified products, you must undergo a formal assessment process and publicly declare that your system meets all legal requirements. | Article 43, Article 49 |
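To make the record-keeping (Article 12) and human-oversight (Article 14) rows more tangible, below is a minimal, illustrative Python sketch of how a hospital might wrap an AI diagnostic model so that every prediction is written to an audit trail and no result becomes final without clinician sign-off. All names here (`AuditRecord`, `assisted_diagnosis`, the model's `predict` method, and so on) are our own assumptions for the example, not part of any specific product or of the AI Act itself.

```python
# Illustrative sketch only: one way to reflect the spirit of AI Act
# Articles 12 (event logging) and 14 (human oversight) in code.
# All names and fields are hypothetical assumptions for this example.
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    event_id: str          # unique ID so each event can be traced
    timestamp: str         # when the event occurred (UTC, ISO 8601)
    model_version: str     # which model produced the output
    input_ref: str         # reference to the input (never raw patient data)
    ai_output: str         # the model's suggestion
    confidence: float      # the model's reported confidence
    reviewed_by: str | None = None   # clinician who confirmed or overrode
    final_decision: str | None = None

def log_event(record: AuditRecord, path: str = "audit_log.jsonl") -> None:
    """Append the event to an append-only audit trail (kept >= 6 months)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

def assisted_diagnosis(image_ref: str, model) -> AuditRecord:
    """Run the model, log its suggestion, and leave the decision open."""
    output, confidence = model.predict(image_ref)  # hypothetical model API
    record = AuditRecord(
        event_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version="demo-model-1.0",
        input_ref=image_ref,
        ai_output=output,
        confidence=confidence,
    )
    log_event(record)
    return record  # incomplete until a clinician signs off

def clinician_sign_off(record: AuditRecord, clinician_id: str,
                       decision: str) -> AuditRecord:
    """Article 14 in practice: a human makes (or reverses) the final call."""
    record.reviewed_by = clinician_id
    record.final_decision = decision  # may differ from record.ai_output
    log_event(record)
    return record
```

The design choice worth noting: the AI output and the human decision are logged as separate events, so the audit trail itself documents that human oversight actually took place.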
Artificial intelligence is hungry for data. The higher the quality and the greater the volume of data available for training, the more accurate and reliable the results. In healthcare, however, this data is extremely sensitive. This is where the world of innovation collides with the strict rules of the General Data Protection Regulation (GDPR).
The GDPR defines health data as a “special category of personal data” (previously referred to as sensitive data). This means that its processing is generally prohibited unless it falls under one of the precisely defined exceptions in Article 9 of the GDPR.
For the purposes of training and operating AI in healthcare, two legal bases are primarily applicable: the patient's explicit consent (Article 9(2)(a) GDPR) and processing necessary for scientific research purposes (Article 9(2)(j) GDPR), the latter subject to appropriate safeguards such as pseudonymization.
In addition, legitimate interest (Article 6(1)(f) of the GDPR) may also be relevant for processing that does not involve special categories of data. The European Data Protection Board (EDPB) has acknowledged that legitimate interest may be a legal basis for AI development, but only if a rigorous three-step balancing test is carried out. The controller must demonstrate that its interest is legitimate, that the processing is necessary for it, and that the rights and freedoms of patients do not override its interest.
For any deployment of a new AI system that will process health data on a large scale, a data protection impact assessment (DPIA) is practically always mandatory. This is because such processing meets several of the GDPR's high-risk criteria: it involves large-scale processing of special categories of data, often includes systematic evaluation (profiling), and uses new technologies. A DPIA is a process that aims to identify and minimize risks to patients' rights and freedoms before processing begins. ARROWS regularly assists clients with the preparation and review of DPIAs to ensure that the document stands up to any inspection by the Office for Personal Data Protection.
The AI Act's requirements for the quality of training data fundamentally change the role and responsibilities of healthcare providers. It is no longer enough to be a “data controller” within the meaning of the GDPR, whose main responsibility is to protect data.
A hospital or clinic that provides data for AI development becomes a “data curator.” It thus bears a new responsibility for ensuring that the data set provided is of high quality, representative, and free of bias that could lead to discriminatory or otherwise harmful AI outcomes.
If a hospital provides a developer with a data set that is demographically biased and the resulting algorithm then diagnoses a certain group of the population less accurately, the hospital may bear joint responsibility for the harm caused. It is no longer just a matter of protecting data, but of actively ensuring that it is suitable for creating a safe and fair high-risk system. This opens up the need for new specialized legal services, such as audits of AI data sets and the preparation of data curation agreements that clearly define responsibility for data quality and the performance of the resulting model. ARROWS lawyers are ready to provide expert advice in this area as well.
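As an illustration of what such a data-set audit might look for in practice, the short sketch below checks whether demographic groups in a training set are represented in proportions comparable to a reference population. It is a simplified example under our own assumptions (the column name, the tolerance threshold, and the made-up numbers are all hypothetical); a real Article 10 audit would be far broader.

```python
# Illustrative sketch: a simple representativeness check on a training set,
# in the spirit of the AI Act's Article 10 data-quality duties.
# The column name and the 20% tolerance are assumptions for the example.
import pandas as pd

def representativeness_report(train: pd.DataFrame,
                              reference_shares: dict[str, float],
                              group_col: str = "sex",
                              tolerance: float = 0.20) -> pd.DataFrame:
    """Compare group shares in the training data with reference shares."""
    observed = train[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        # Flag groups deviating from the reference by more than the tolerance
        flagged = abs(share - expected) > tolerance * expected
        rows.append({"group": group,
                     "expected_share": expected,
                     "observed_share": round(share, 3),
                     "flagged": flagged})
    return pd.DataFrame(rows)

# Usage with made-up numbers: the population is roughly 51% female and
# 49% male, but the training set is skewed by how the data was collected.
train = pd.DataFrame({"sex": ["F"] * 300 + ["M"] * 700})
print(representativeness_report(train, {"F": 0.51, "M": 0.49}))
```

In this toy run both groups are flagged, which is exactly the kind of finding a data curation agreement should say something about: who must detect the skew, and who bears the consequences if it reaches the model.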
Imagine a scenario that is every doctor's and patient's nightmare: an AI diagnostic system recommends the wrong treatment and the patient suffers harm. Who is legally responsible? Is it the software developer, the hospital that deployed the system, or the doctor who followed the recommendation? The answer involves a complicated chain of responsibility.
Current legislation, both in the EU and in the Czech Republic, is based on the fundamental principle that ultimate responsibility lies with humans, not robots or algorithms. AI is seen as a tool, albeit a very advanced one. The final decision and responsibility for it remains in the hands of human experts.
In the event of misconduct, liability is examined at several levels:

- the physician, who is obliged to proceed with due professional care (lege artis) and to critically assess the AI's output;
- the healthcare provider (hospital or clinic), which is liable for the actions of its staff and for the proper selection, deployment, and maintenance of the technology; and
- the manufacturer or developer, who may be liable for a defective product under product liability rules.
With the massive advent of AI in medicine, the very interpretation of the standard of appropriate professional care will gradually change. Once the use of a particular diagnostic AI tool that demonstrably increases accuracy (e.g., by 5% in cancer detection) becomes the norm in the field, physicians will be expected to know how to use that tool.
This means that the professional duties of doctors will expand. They will no longer include only medical knowledge, but also “AI literacy”: the ability to correctly use, interpret, and, if necessary, critically evaluate and reject the outputs of standard technological tools. Failure in this area could be considered a form of professional misconduct in the future. This creates a new obligation for hospitals and clinics: not only to purchase these tools, but also to ensure and thoroughly document the training of their staff. ARROWS lawyers can help update employment contracts and internal regulations to reflect these new technological competencies and responsibilities.
The situation is further complicated by the fact that the proposed European directive on liability for artificial intelligence (the AI Liability Directive), which was intended to make it easier for injured parties to prove their case, was withdrawn at the beginning of 2025. This has created a legal vacuum. In practice, it means that a precisely drafted contract becomes the key, and essentially the only effective, tool for allocating risks and responsibilities between the developer and the hospital.
Theoretical knowledge is important, but in business, practical steps and the ability to manage risks are what matter. Failure to comply with the rules set out in the AI Act and the GDPR is not just an academic offense; it is associated with draconian financial penalties that can be devastating for a company or hospital.
The AI Act, like the GDPR, sets fines based on the severity of the violation and the global turnover of the company. It is important to note that these risks can add up—a single mistake in AI implementation can lead to penalties under both regulations at the same time.
Table 2: Comparison of sanctions – Risks of non-compliance with the AI Act and GDPR
| Violation | Maximum penalty under the AI Act | Maximum penalty under the GDPR | Practical example |
| --- | --- | --- | --- |
| Use of prohibited AI practices | Up to €35 million or 7% of global annual turnover (whichever is higher) | N/A | A hospital implements a system that prioritizes access to care based on the patient's social profile (e.g., income). |
| Failure to comply with obligations for high-risk AI | Up to €15 million or 3% of global annual turnover | N/A | A manufacturer places diagnostic AI software on the market without a conformity assessment, without technical documentation, or without ensuring human oversight. |
| Violation of personal data processing principles | N/A | Up to €20 million or 4% of global annual turnover | An AI system is trained on patient health data without a valid legal basis (e.g., without consent), or the data is insufficiently secured. |
| Providing incorrect information to authorities | Up to €7.5 million or 1.5% of global annual turnover | Up to €10 million or 2% of global annual turnover | During an inspection, a company submits false or misleading documentation about its AI system or personal data processing to the supervisory authority. |
In addition to direct financial penalties, there are other serious consequences: orders to withdraw the system from the market, lawsuits for damages brought by injured patients, and, last but not least, enormous reputational damage.
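To make the “whichever is higher” mechanics in Table 2 concrete, here is a small, illustrative Python sketch of how the applicable cap scales with turnover. The percentage and fixed caps are those cited in the table; the example turnover figure is made up.

```python
# Illustrative sketch: how the "whichever is higher" fine caps scale
# with global annual turnover. The caps are those cited in Table 2;
# the example turnover is a hypothetical figure.
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical €2 billion global annual turnover

# Prohibited AI practices under the AI Act: up to €35M or 7% of turnover
print(max_fine(35_000_000, 0.07, turnover))   # 140000000.0 -> €140 million

# GDPR data-processing violations: up to €20M or 4% of turnover
print(max_fine(20_000_000, 0.04, turnover))   # 80000000.0 -> €80 million
```

For a large group, in other words, the turnover-based cap quickly dwarfs the fixed amount, which is why these ceilings cannot simply be budgeted for as a cost of doing business.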
Given the uncertainties surrounding legal liability, the contract between you (as a hospital or clinic) and the AI solution provider is an absolutely essential risk management tool. As a user, you are required by the AI Act to vet your suppliers. Your contract must be your insurance policy. When preparing and reviewing these contracts, ARROWS lawyers place particular emphasis on the following clauses:

- warranties that the system complies with the AI Act and, where applicable, the MDR/IVDR, backed by the supplier's documentation and certifications;
- a clear allocation of liability and indemnification between supplier and user for harm caused by incorrect outputs;
- data protection terms, including a data processing agreement and responsibility for the quality and bias of training data;
- the supplier's duty to maintain logs and technical documentation and to cooperate in audits and regulatory inspections; and
- update, maintenance, and incident notification obligations throughout the system's lifecycle.
A fictional example based on real-life experience
A regional hospital decided to implement an innovative AI tool from a Czech start-up for analyzing CT scans. Management was aware of the enormous clinical benefits but also had concerns about the new legal risks, so it turned to the ARROWS law firm. Our lawyers conducted a compliance gap analysis of the planned deployment, reviewed and renegotiated the supplier contract, and prepared the required data protection documentation, including the DPIA.
Result: The hospital was able to deploy the innovative technology with confidence that it met all regulatory requirements. It had a legally secure relationship with the supplier and minimized the risk of penalties and liability disputes. A potential legal nightmare turned into a safe and successful innovation.
Artificial intelligence undoubtedly represents one of the greatest revolutions in the history of medicine. Its potential to improve diagnostics, streamline treatment, and save lives is enormous.
However, as we have shown in this report, this technological wave brings with it a complex and uncompromising regulatory framework in the form of the AI Act and stricter GDPR requirements. For hospital management, clinics, and medtech innovators, it is no longer a question of whether, but how to deal with this new reality.
Ignoring these rules is not a strategy; it is gambling with the risk of millions in fines, lawsuits, and damage to the reputation you have built over many years. On the contrary, a proactive and strategic approach to legal compliance is becoming a key factor for success. It not only minimizes risks but also builds trust with patients and partners and gives you a decisive competitive advantage.
ARROWS Law Firm stands by your side in this new era. Our experts specialize in connecting the worlds of technology and law. We understand not only the law, but also the practical needs of your business and clinical practice. We don't just provide general legal advice; we offer concrete, tailor-made solutions that allow you to innovate with confidence and peace of mind. Our services include:

- compliance gap analyses of AI systems already in use;
- integrated compliance strategies linking the requirements of the MDR/IVDR, the AI Act, and the GDPR;
- preparation and review of DPIAs and other data protection documentation;
- drafting and negotiating contracts with AI suppliers, including data curation agreements;
- updating internal regulations and employment documentation to reflect new technological competencies, and documenting staff training; and
- representation in dealings with supervisory authorities.
Artificial intelligence is revolutionizing healthcare. Don't let legal uncertainty slow down your progress. Contact us today to find out how ARROWS' team of experts can help you turn regulatory challenges into safe and successful innovation.