Artificial intelligence in pharmaceuticals:

legal challenges and opportunities for the future of medicine

19.6.2025

Artificial intelligence (AI) is becoming a driving force for innovation in many industries, and the pharmaceutical industry is no exception. From developing new drugs to optimizing clinical trials, AI offers revolutionary opportunities. But with these opportunities come complex legal challenges. As potential clients of law firms, you may be asking yourself: How do we navigate this rapidly evolving field and minimize the risks?

Author of the article: ARROWS (JUDr. Jakub Dohnal, Ph.D., LL.M., office@arws.cz, +420 245 007 740)

The benefits of AI in drug development: efficiency and speed like never before

Imagine being able to compress years of drug research and development into mere months. AI can analyse vast amounts of data, identify potential molecules, predict their effects and optimise manufacturing processes with unprecedented efficiency. This not only accelerates the time to market for innovative drugs, but also reduces costs and makes medicine more accessible to a wider range of patients. For pharmaceutical companies this means a competitive advantage; for patients, the hope of previously unavailable therapies.

AI has the potential to transform every step of a drug's journey:

  • Drug discovery and development: AI algorithms can search huge databases of chemical compounds and identify those with the greatest potential for therapeutic use. This dramatically speeds up the discovery phase, reduces screening costs and allows researchers to focus on promising candidates. AI can simulate the interactions of molecules and predict their stability and toxicity before synthesis, saving time and resources.
  • Clinical trial optimization: AI helps to select appropriate patients for clinical trials, monitor their condition and analyse the results. This increases the efficiency of studies and reduces their duration. AI can identify patients who are likely to respond to a given treatment, which improves trial outcomes and streamlines recruitment. Predictive AI models can also detect potential problems or adverse effects early, allowing for timely study adjustments.
  • Personalised medicine: by analysing genetic, disease history and lifestyle data, AI can help doctors design tailored therapies that are more effective and have fewer side effects. Instead of a "one-size-fits-all" approach, AI enables targeted treatments that maximize benefits for the individual patient. This is particularly valuable in oncology or rare disease treatment.
  • Predictive analytics and pharmacovigilance: AI can predict epidemics, track the spread of diseases and identify risk factors, enabling early intervention and better public health management. In pharmacovigilance, AI monitors huge volumes of data from adverse event reporting and social media to identify safety signals for already approved medicines more quickly. This improves patient safety and allows regulators to react faster.

New challenges to familiar rules: accountability and regulation in the AI era

Although the benefits of AI are undeniable, its advent raises questions to which current legislation often has no clear answers. This is where we encounter the biggest challenges for your business.

1. Liability for a defective AI-developed drug: who will bear the consequences?

Imagine a situation in which an AI designs a drug that turns out to be harmful after it is launched. Who is responsible in this case? The AI developer? The pharmaceutical company that produced and marketed the drug? Or the programmers who wrote the AI system? This question is fundamental, and answering it is key to building trust in AI in medicine.

Czech and European product liability law is complex. Strict liability commonly applies to the manufacturer, meaning that liability arises without proof of fault. With AI, however, the question arises as to who the real "manufacturer" is when a system makes decisions autonomously. If an AI system is shown to have made a "mistake", it must be assessed whether the fault lies in the design, in the data on which the AI was trained, or in the algorithm itself. Is AI a tool or a separate actor? And what if the error results from a complex interaction between multiple AI systems, or between AI and humans?

Risks and penalties: getting liability wrong can lead to huge financial penalties that can paralyse even large companies. There is also the risk of damage to a company's reputation, which takes years to build and can be destroyed in a matter of days. In extreme cases, the individuals responsible for bringing a defective product to market may even face criminal liability. It is crucial to be clear about the risks you are taking and how to defend against them effectively.

ARROWS lawyers deal with product liability issues, including those involving AI components, on a daily basis and are ready to help you navigate these complex issues. Our experience allows us to identify weaknesses in contractual arrangements and propose solutions that minimize your risks.

2. Intellectual property: who owns the AI creation?

AI systems are capable of generating new molecules, algorithms, procedures, and even entire therapeutic plans. Who then owns the intellectual property of these creations? Can AI be the author of a patent?

Current patent law is built on the concept of the human inventor. An "invention" or "work" created by AI is a novelty that existing frameworks were not designed to accommodate. Most jurisdictions currently recognise only a natural person as an author or inventor. This means that if AI creates something unique without significant human intervention, legal protection is uncertain. Without clear ownership of intellectual property, you risk losing your R&D investment and facing legal disputes with competitors who might freely exploit your innovation.

Risks and penalties: IP uncertainty can lead to arbitration, litigation and loss of valuable innovation if your rights are not properly protected. Imagine AI discovers a breakthrough molecule, but you can't patent it because you don't know who the "inventor" is. You would lose exclusivity and massive investment.

ARROWS lawyers can advise you on how to ensure that your intellectual property is protected even as the boundaries between human and machine creation are blurring. We help clients create robust contractual frameworks with AI developers that clearly define ownership of IP rights to AI outputs, while monitoring developments in the field to find the best possible solutions for you.

3. Privacy and Cybersecurity: sensitive data in the hands of AI

Pharmaceutical research and development often works with extremely sensitive personal patient data, including medical history, genetic information, treatment results and biometric data. AI systems analyse this data on a large scale, which carries huge risks in terms of data protection (GDPR) and cybersecurity.

The GDPR imposes strict requirements on the processing of personal data, especially sensitive data. This includes the principles of data minimisation, purpose limitation, accuracy, transparency and accountability. The use of AI in medicine often involves the processing of huge data sets, increasing the risk of data leakage, misuse or unauthorised access.

Risks and penalties: breaches of the GDPR can lead to astronomical fines - up to €20 million or 4% of total worldwide annual turnover, whichever is higher. In addition, there is the risk of reputational damage and loss of trust from patients and partners, which can have long-term negative effects on your business. Securing data against cyber attacks is absolutely key: any data leak or cyber incident can have fatal consequences. Imagine the sensitive medical records of thousands of patients being leaked to the public. It is every company's nightmare and a threat that must be taken very seriously.
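To illustrate how that fine ceiling works in practice, here is a minimal sketch (for illustration only, not legal advice; the turnover figures are hypothetical):

```python
def gdpr_fine_cap_eur(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound for the most serious GDPR infringements (Art. 83(5)):
    EUR 20 million or 4% of total worldwide annual turnover,
    whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_worldwide_turnover_eur)

# A mid-sized firm with EUR 300 million turnover: 4% is EUR 12 million,
# so the flat EUR 20 million ceiling applies instead.
print(gdpr_fine_cap_eur(300_000_000))    # 20000000.0

# A large firm with EUR 2 billion turnover: 4% is EUR 80 million.
print(gdpr_fine_cap_eur(2_000_000_000))  # 80000000.0
```

For large pharmaceutical groups, the turnover-based limb almost always governs, which is why the exposure scales with the size of the business.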

ARROWS lawyers are experts in data privacy and cybersecurity issues and will help you set up robust systems and processes to avoid these risks. We routinely handle GDPR compliance issues in the pharmaceutical industry and AI implementation, including the preparation of Data Protection Impact Assessments (DPIAs), which are often necessary for AI systems handling sensitive data.

4. Ethical Aspects and Discrimination: Justice in an Algorithmic World

In addition to legal issues, AI in the pharmaceutical industry brings a number of profound ethical dilemmas that need to be actively addressed.

AI algorithms learn from data. If that data is skewed, incomplete or unrepresentative, AI can reproduce or even exacerbate existing discrimination and biases contained in the data. For example, if AI learns from data that is not representative of different ethnic groups, ages, or genders, AI-designed treatments may be less effective for some populations or lead to misdiagnoses. This raises serious ethical questions and may lead to legal challenges on the basis of discrimination.

Bias in AI models is a major concern, especially in healthcare, as systems trained on biased or unrepresentative data can produce skewed results, leading to inequities in diagnosis, treatment, and drug discovery.

If an AI model is built or trained on biased data, these biases can manifest in its outputs and exacerbate existing inequalities. To mitigate these risks, AI models must be trained on diverse and representative datasets.  
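One practical first step toward that goal is a simple representativeness check on the training data before a model is built. The sketch below is a minimal illustration (the record format, the "sex" field and the reference shares are hypothetical):

```python
from collections import Counter

def subgroup_shares(records, key):
    """Share of each subgroup (e.g. sex or age band) in a dataset."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def underrepresented(records, key, reference_shares, tolerance=0.10):
    """Flag subgroups whose share falls more than `tolerance` (absolute)
    below their share in a reference population."""
    observed = subgroup_shares(records, key)
    return [g for g, ref in reference_shares.items()
            if observed.get(g, 0.0) < ref - tolerance]

# Hypothetical trial cohort: 20 women, 80 men, against a 50/50 reference.
cohort = [{"sex": "F"}] * 20 + [{"sex": "M"}] * 80
print(underrepresented(cohort, "sex", {"F": 0.5, "M": 0.5}))  # ['F']
```

A check like this does not remove bias by itself, but it documents whether the data even permits fair outcomes, which is exactly the kind of evidence regulators and courts may ask for.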

It is also important to ensure that AI decision-making processes are transparent and explainable. Many AI systems, especially deep learning models, act as "black boxes": their decision-making processes are opaque and difficult for humans to understand. This lack of transparency is particularly problematic in medicine, because clinicians need to understand how an AI system arrives at its recommendations in order to ensure patient safety.

Such opaque decision-making is unacceptable in medicine. What if an AI "decides" that a certain patient will not receive a specific treatment? How can we ensure that this decision is fair, ethical and justifiable? Patients and doctors need to trust that AI-supported decisions are objective and based on the best available information.

Human oversight, or the human-in-the-loop model, is key to mitigating risks and liability issues. The EU AI Act requires that high-risk AI systems be subject to appropriate human oversight. This means that humans must have the ability and authority to change AI decisions and ensure that oversight is meaningful, not just symbolic. In the pharmaceutical industry, AI must complement (not replace) clinical and regulatory judgment.  

Risks and penalties: discrimination is not only ethically unacceptable but also punishable under anti-discrimination laws. It can lead to lawsuits, huge fines and irreparable damage to reputations. It is essential to ensure that AI systems are designed and tested to minimise the risk of discrimination and comply with ethical standards. This includes careful selection and validation of training data, transparent algorithms and mechanisms for human oversight and control.

At ARROWS, we focus on the ethical aspects of AI and help clients avoid the pitfalls associated with algorithmic discrimination, including through ethical audits of AI systems.

5. Regulation and approval processes: new rules for new technologies

Existing regulatory frameworks for the approval of medicines and medical devices were designed for traditional pharmaceutical research and development. The advent of AI requires them to be updated and adapted. How can we ensure that medicines developed or tested with AI are safe and effective? How do we verify and validate algorithms that can learn and change their behaviour over time? Existing clinical trial and validation procedures may not be sufficient for AI.

Under the EU Artificial Intelligence Act (AI Act), which regulates AI systems according to their level of risk, AI systems used in medicine and in drug development are likely to fall into the "high-risk" category. This means stricter requirements for transparency, human oversight, accuracy, cybersecurity and pre-market conformity assessment. Meeting these requirements will require significant effort and investment.

Risks and penalties: failure to comply with new regulatory requirements can lead to denial of drug approval (which can mean the loss of hundreds of millions to billions in development investment), product recalls and massive fines. It is essential to be prepared for these changes and proactively adapt your research, development and approval processes.

With ARROWS lawyers, you will always be up-to-date on the latest regulatory changes and be able to respond in a timely manner. ARROWS lawyers are constantly monitoring the development of AI regulation and its impact on the pharmaceutical industry, both at EU and national legislative levels. We can help you interpret new regulations and prepare for certification and approval processes.


A proactive approach to legal protection: your key to success with AI

Although the legal landscape of AI in pharmaceuticals is still taking shape, one thing is clear: a passive approach does not pay. On the contrary, proactive legal advice will allow you to not only minimize the risks, but also to take full advantage of the potential that AI in medicine offers.

What should you consider and how can ARROWS lawyers help you?

  1. Thoroughly review contractual relationships: every partner you work with on AI development (data suppliers, algorithm developers, cloud service providers) must have clearly defined responsibilities. Ensure contracts have clear clauses on liability for defects, intellectual property for AI outputs (who owns the data, models, resulting discoveries?) and data protection. Regularly check that your contracts reflect the specifics of working with AI, including rules for updates and maintenance of AI systems.
  2. Audit your privacy and cybersecurity processes: before you start working with AI on patient data, make sure your processes comply with GDPR and other relevant privacy regulations. This includes implementing the principles of data minimization, anonymization and pseudonymization where possible, thoroughly securing data against cyber-attacks, and transparently informing patients how their data will be used. Regular audits, testing and staff training are essential. Don't forget the obligation to carry out Data Protection Impact Assessments (DPIAs) for high-risk operations, such as AI processing of sensitive health data.
  3. Setting up internal ethical guidelines and risk management: in addition to legislation, it is important to have internal ethical guidelines for the use of AI. Consider how to ensure fairness, transparency and accountability at all stages of AI system development and deployment. This includes defining mechanisms for human oversight of AI decisions, ensuring algorithms are explainable, and establishing rules for dealing with situations where an AI system fails or behaves unexpectedly. Establishing a code of ethics for AI and regular staff training will help build trust and minimise the risks associated with ethical controversies or algorithmic discrimination.
  4. Monitoring legislative developments and proactive adaptation: AI legislation is constantly evolving, both at the European Union level (e.g. AI Act) and in individual Member States. It is crucial to be informed about upcoming legislation and to prepare for it in a timely manner. That way, you can proactively adjust your practices, technologies and business models to comply with new requirements and avoid unpleasant surprises in the form of fines or denials of approval.
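The pseudonymization recommended in point 2 above can be sketched in a few lines. This is a minimal illustration of keyed pseudonymization (HMAC-SHA256); the identifier format and key are hypothetical:

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    Under the GDPR this is pseudonymization, not anonymization: whoever
    holds the key can re-link the data, so the output remains personal
    data, and the key must be stored separately and securely."""
    return hmac.new(secret_key, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = b"keep-this-key-outside-the-dataset"  # illustrative only
token = pseudonymize("patient-12345", key)

# Deterministic: the same ID always maps to the same token, so records
# can still be joined across tables without exposing the raw identifier.
assert token == pseudonymize("patient-12345", key)
assert token != pseudonymize("patient-67890", key)
```

The design choice matters legally: a plain unsalted hash of a patient ID is often trivially reversible by enumeration, whereas a keyed hash pushes the re-identification risk onto the secrecy of the key, which is what the separate-storage requirement addresses.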

The future is now: don't be left behind

Artificial intelligence in pharmaceuticals is not a distant prospect, but today's dynamic reality. Those who understand and master the legal challenges it brings early on will gain a huge competitive advantage: they will be able to innovate faster, more efficiently and with fewer risks. Those who hesitate risk losing market share and facing legal difficulties that could threaten their very existence.

At ARROWS, we understand that you may be feeling uncertain in this rapidly changing area. Our mission is to be your trusted partner to help you navigate the complexities of AI regulation. We are ready to support you at all stages - from strategic planning and process setup, to resolving specific legal issues, to representing you in potential litigation. We are your insurance policy for innovation.

Don't wait for problems to appear; stay one step ahead. Investing in legal advice on AI will pay you back many times over in minimized risks and maximized potential of what AI offers in medicine.

Don't want to deal with this problem yourself? More than 2,000 clients trust us, and we have been named Law Firm of the Year 2024. Take a look HERE at our references.