The issue of liability in the use of AI in healthcare

19.4.2024

In the previous article, we outlined the potential benefits of AI in healthcare, the specific fields where it can be put to use, and the legal aspects of its deployment. One of the biggest issues in incorporating AI into healthcare is liability for harm arising from its use. In this article, we therefore outline both the European and the Czech national approaches.

Robots in healthcare

Many of us can picture a cinematic scenario in which a machine, rather than a doctor, operates on us in the operating theatre and we no longer encounter a single human being in healthcare. This scenario is unrealistic, however, mainly because AI and robotics have not reached a level at which the human element can be dispensed with, especially in healthcare.

The European Parliament's Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) (hereinafter the "Civil Law Rules on Robotics") clearly requires the highest possible level of training and appropriate education of healthcare personnel, including in cases where robotic technology is to be used, for example in surgery. The recommendation is therefore based on the premise that even when robotic technology is used in healthcare, human involvement and supervision of the procedure remain necessary.

In this resolution, the European Parliament called on the European Commission to submit a legislative proposal that does not in any way limit the type or extent of damage for which compensation may be claimed, nor the form of compensation that may be offered to the injured party, solely on the ground that the damage was not caused by a human being. Crucially, the European Parliament itself points out that, at least at the present stage, it should be a human and not a robot that is held liable. The Civil Law Rules on Robotics were issued back in 2017; of particular importance now is the AI Act, which was adopted on 13 March 2024.

Regulation at EU level

At EU level, three important instruments have recently emerged to regulate AI liability in the healthcare sector: the AI Act, the new Product Liability Directive intended to replace the still-effective Directive 85/374/EEC, and the AI Liability Directive. These instruments are intended to harmonise national liability rules in the field of AI, which should simplify the process of claiming compensation for victims who have suffered damage in connection with the use of AI.

Given that AI has very specific characteristics, be it its complexity, autonomy or opacity, injured parties may find it disproportionately difficult to identify the liable party, making a successful claim for compensation very difficult. The aim is therefore to substantially improve the position of injured parties and to ease the proof of damage caused by AI.

This will typically be the case where the injured party argues that the AI system does not comply with the requirements of the AI Act or other relevant national or EU legislation, relying on a presumption that the defendant has culpably breached the relevant duty, so that a causal link between that breach and the damage is presumed.

The AI Liability Directive strengthens the position of the injured party by introducing a "presumption of causation" where the relevant fault has been established and a causal link to the performance of the AI system appears reasonably likely. At the same time, the Directive gives injured parties the right to access evidence held by companies and suppliers in cases involving high-risk AI systems, which will also significantly improve their position in any litigation.

Czech legislation

The current national legislation on damages is not well suited to dealing with product liability claims in the context of artificial intelligence.

On a theoretical level, the following provisions of Act No. 89/2012 Coll., the Civil Code (hereinafter the "Civil Code"), are the most relevant, mainly because AI does not form a homogeneous group and no single provision can cover it.

In the context of providing health services, it will be possible to rely on Section 2936 of the Civil Code, which regulates situations where damage is caused by a defective thing. In the provision of health services to a patient, the things used will typically be various medical devices and machines (surgical robots, etc.). The practical problem in applying this provision will be proving the defect in the thing itself; in the case of AI, this means proving the existence of a defect in the AI algorithm, which will often be very difficult for the injured party.

The provisions of Section 2939 of the Civil Code, which govern liability for damage caused by a defect in a movable thing as a product, could also be applied to damage caused by the use of AI-based applications, since, according to the commentary literature, software also has the nature of a product. Liability for such damage would then lie with the person who manufactured the product or its part and, jointly and severally with that person, with anyone who marked the product or its part with their name, trademark or otherwise, as well as with anyone who imported such a product for the purpose of placing it on the market in the course of their business.

Liability for damage caused by the operation of a plant or other establishment used for gainful activity under Section 2924 of the Civil Code could also be considered, since such establishments may include hospitals. Under this provision, the operator of such a plant or establishment shall compensate the damage arising from the operation, whether caused by the operation itself, by the things used in the operation or by the effect of the operation on the surroundings. The operator is exonerated if it proves that it exercised all the care that can reasonably be required to prevent the damage.

National legislation is thus applicable only to a certain extent, and in this regard European legislation must be taken into account, namely the AI Act together with the AI Liability Directive and the new Product Liability Directive, the latter two of which have yet to be adopted.

Kateřina Chaloupková collaborated on this article.

If you have any questions in relation to health law or related issues, please do not hesitate to contact us. We would be happy to learn more about your case and provide you with appropriate legal assistance.
