Legal Challenges of Human Control in Autonomous Military Systems

In an era of rapid technological advancement, the debate surrounding military unmanned systems focuses on a critical question: how to keep a human in the decision-making loop? This article explains the role of human decision-making, legal obligations under international humanitarian law, and the risks for states and commanders. You will discover why the ethics of war cannot be automated and where the greatest legal pitfalls lie under Czech and international law.

The photograph shows an expert during a consultation regarding autonomous weapons systems.

Quick Summary

  • Human decision-making is a legal necessity: International humanitarian law requires that a human remain in meaningful control during the critical phases of an operation.
  • Technology fails where context is missing: Algorithms can fail to distinguish civilians from combatants and to assess the proportionality of attacks.
  • Violations risk serious sanctions: States and individuals deploying drones without adequate oversight risk war crime charges and international prosecution.

What is meaningful human control and why is it so important in law?

The term "meaningful human control" has become a key concept, although a uniform codified definition does not yet exist. In practice, it means that a person must understand how the system works, have the ability to intervene, and bear responsibility for decisions – whether at the level of weapon development, activation, or during the operation itself.

However, lawyers and experts are divided on what specifically such control should look like.

The International Committee of the Red Cross emphasizes that without meaningful human control, it is extremely difficult to ensure compliance with the principles of distinction, proportionality, and precaution. In other words: for a military operation to be legitimate, a human must act within it as the guarantor of legality.

Three critical phases of decision-making on legality

Our attorneys in Prague frequently observe that many military commanders or developers underestimate the complexity of these decision-making cycles. There are three main phases where human control is formed:

Development and testing phase: Already during the design of a drone, parameters must be implemented that limit its operation in accordance with Article 36 of Additional Protocol I to the Geneva Conventions. This is not just a technical issue, but a legal obligation of the state to ensure that the weapon can be used in accordance with International Humanitarian Law (IHL).

Activation and deployment phase: The commander decides whether to deploy a given system in a specific context. At this moment, they must have access to relevant information and must be able to evaluate whether the deployment contradicts the law.

Operational (attack) phase: When the drone is already active, there should be a mechanism to stop or redirect it if the situation changes or if new information suggests that the attack would violate IHL.
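The abort mechanism required in the operational phase can be pictured as a gate that is re-evaluated at the moment of attack, not only at activation. The following Python sketch is purely illustrative; the names `EngagementContext` and `may_engage` are invented for this article and do not describe any real system.

```python
from dataclasses import dataclass

@dataclass
class EngagementContext:
    target_confirmed_military: bool  # distinction: verified military target
    civilians_detected: bool         # new information during the attack phase
    operator_confirmed: bool         # meaningful human control: explicit approval

def may_engage(ctx: EngagementContext) -> bool:
    # All gates must hold at the moment of attack, because the
    # situation can change while the drone is already in flight.
    if ctx.civilians_detected:
        return False  # presumption of civilian status: abort or redirect
    if not ctx.target_confirmed_military:
        return False  # principle of distinction not satisfied
    if not ctx.operator_confirmed:
        return False  # no human confirmation, no attack
    return True
```

The point of the sketch is that the human confirmation is one gate among several, and that any gate failing after launch must be able to stop the attack.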

The problem lies in the fact that the pace of decision-making is constantly accelerating. In a real conflict, decisions are often made in minutes or seconds. This is precisely where the greatest risk lies – that automated systems will outpace human judgment.

Related questions regarding the drone decision-making phase

  1. What happens if a commander allows a drone to hit civilians without adequate review?
    This can be classified as a war crime under Article 8 of the Rome Statute of the International Criminal Court (ICC). Criminal liability falls on the person who had control and knew or should have known about the risk of violation and failed to take preventive steps.
  2. Can technical errors of the weapon serve as an excuse for violating IHL?
    Generally, no. Our Czech legal team points out that states are responsible for ensuring that weapons are predictable and reliable. If a state deploys untested technology that causes disproportionate damage, it is not relieved of responsibility.
  3. How does "meaningful control" differ from the mere presence of a human?
    Meaningful control requires cognitive engagement – the person must truly understand the situation and have a real possibility to influence the decision. Merely mechanically pressing a button without a conscious assessment of legality is not sufficient.

The Principle of Distinction: Where technology fails and humans are essential

One of the most fundamental principles of international humanitarian law is the principle of distinction. This stipulates that parties to a conflict must always distinguish between the civilian population and combatants. An attack directed specifically against civilians is always prohibited.

At first glance, it seems that drones should be ideal for fulfilling this principle thanks to advanced sensors. In practice, however, it turns out that technology is only a tool – and the interpretation of data can be very deceptive.

Signature strikes versus person identification

Security forces in some conflicts utilize so-called "signature strikes" – strikes aimed at individuals whose identity is not fully verified, but whose behavior matches the "profile" of a combatant. Our Prague-based attorneys warn that this approach is on very thin legal ice.

If an algorithm identifies a group of people moving in a "suspicious" manner and the system evaluates them as a target, this may not be sufficient to satisfy the principle of distinction. They could be civilians fleeing combat or individuals who, while communicating with combatants, are not directly participating in hostilities.

Loss of context and the human factor

Drone operators often monitor their "targets" over the long term. Unlike fighter pilots, they see details of the daily lives of the monitored persons. At the moment of the decision to attack, the human ability to evaluate context that escapes the algorithm also plays a role – for example, the presence of children, who may appear in the data only as "smaller objects."

It is not that humans are infallible. It is that the decision to use lethal force must carry legal and moral responsibility, which an algorithm, as an object, cannot bear.

How the principle of distinction is violated in practice

The history of conflicts shows cases where, over time, the requirements for verifying the identity of a target were lowered. Repeatedly, situations occurred where an attack was conducted based on metadata (e.g., SIM card location) rather than visual confirmation of a combatant, leading to civilian casualties.

The legal issue is clear. If a state, within its military doctrine (Rules of Engagement), overly broadens the definition of what constitutes a "legitimate military target," it risks systematic violations of the principle of distinction and the commission of war crimes.

Related questions on distinction and drones

1. If target data is obtained from allies, is the state responsible for its accuracy?
Yes. Our attorneys in Prague point out that a state is responsible for its own decision to launch an attack. If it uncritically accepts intelligence from a partner known to use overly broad definitions of a "combatant," it may be held jointly responsible for violations of international law.

2. Is target identification via AI sufficient to carry out an attack?
Generally, not on its own. International Humanitarian Law (IHL) requires a commander or operator to do everything feasible to verify that the target is military. AI can serve as a supporting tool for analysis, but the final confirmation and decision must remain human.

3. What happens if an algorithm incorrectly identifies civilians as combatants?
If this occurs due to negligence, insufficient system testing, or ignoring the principle of precaution, it constitutes a violation of IHL. Responsibility lies with the state or the commanders who deployed the system.

The Principle of Proportionality: Balancing military advantage and civilian losses

The second key principle is proportionality. It prohibits attacks that may be expected to cause incidental civilian harm which would be excessive in relation to the concrete and direct military advantage anticipated.

In theory, this sounds logical. In practice, it is extremely difficult to program an algorithm to solve the equation of whether civilian deaths are proportionate to the elimination of a military target. This is a value judgment, not a mathematical calculation.

Quantification cannot replace legal judgment

Attorneys at the ARROWS law firm in Prague frequently encounter efforts to quantify Collateral Damage Estimation (CDE). Systems can predict the likely blast radius and the number of persons within it.

The fundamental problem, however, lies in the definition of "military advantage," which is fluid over time and highly contextual.

Algorithms tend to reduce these complex questions to statistics. The result can be a decision that is "data-driven" but legally indefensible because it fails to account for cumulative impacts or humanitarian considerations.
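To see why quantification alone cannot settle the question, consider a deliberately naive collateral-damage estimate. The function below is a hypothetical illustration, not an actual CDE methodology: it computes the one side of the "equation" that is computable, while the military-advantage side has no numeric counterpart at all.

```python
import math

def estimated_persons_in_blast(blast_radius_m: float, persons_per_km2: float) -> float:
    """Naive estimate: average number of persons inside the blast circle."""
    area_km2 = math.pi * (blast_radius_m / 1000.0) ** 2
    return persons_per_km2 * area_km2

# The harm side of the proportionality test can be approximated numerically
# (the inputs here are assumed values, chosen only for illustration):
harm = estimated_persons_in_blast(blast_radius_m=50.0, persons_per_km2=5000.0)
print(round(harm, 1))  # prints 39.3

# The advantage side cannot: "concrete and direct military advantage"
# has no unit, so no threshold comparison can replace legal judgment.
```

A system can therefore label an attack "data-driven" while the decisive legal comparison has never actually been performed.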

In practice, we observe the phenomenon of automation bias, where operators tend to uncritically accept algorithmic recommendations. If a system labels an attack as "proportionate," an operator may subconsciously suppress their own doubts.

How the ARROWS law firm approaches proportionality analysis

As part of our advisory services for state institutions and the defense industry, we emphasize the necessity of a comprehensive analysis:

  • Data Verification: Collection of all available data on civilian presence (including patterns of life in the given area).
  • Legal Assessment of Military Advantage: It must be a concrete and direct advantage, not a hypothetical or long-term political effect.
  • Alternative Means: Considering whether the military advantage can be achieved through other means with less risk to civilians.

Related questions on proportionality and drones

1. If more precise guidance reduces estimated civilian casualties, does the attack automatically become proportionate?
Not necessarily. Even a single civilian casualty can be disproportionate if the military advantage is negligible. The precision of a weapon does not automatically guarantee the legality of an attack.

2. Who is legally responsible for an incorrect proportionality estimate – the programmer or the commander?
Primarily the commander who decided on the attack. A programmer could only be held liable if they intentionally manipulated the software to cause harm, which is less common in practice.

3. Must proportionality be reassessed if the situation changes during the drone's flight?
Yes. The duty to exercise precautionary measures lasts throughout the entire duration of the attack. If civilians appear in the target area, the operator must have the ability to abort the attack.

Command Responsibility: Who is responsible for what the drone does?

In international humanitarian law, the doctrine of command responsibility applies: a commander is criminally responsible for the acts of subordinates if the commander knew, or should have known, that they were about to commit a violation of IHL and failed to take all necessary and reasonable measures to prevent such conduct.

With autonomous or semi-autonomous systems, this concept becomes more complex. Can a commander be responsible for a "decision" made by a machine?

Where responsibility begins and ends

The legal consensus, which our Czech legal team works with, is as follows:

The state and the high command are responsible for ensuring that a weapon has undergone a proper legal review and that its behavior is predictable.

Upon deployment, the specific commander bears responsibility for the decision to deploy an autonomous system in a given environment. If they deploy a drone into a densely populated area knowing that the drone's sensors cannot reliably distinguish civilians in a crowd, they bear full responsibility for the consequences.

Command must ensure that operators are properly trained not only in operating the machine but also in the rules of international humanitarian law.

A problem arises in the so-called "accountability gap," where everyone attempts to shift blame to a "software error." However, the law requires that a human always be responsible for the use of force.

Current Cases

In 2024, the International Criminal Court (ICC) issued arrest warrants for high-ranking Russian officials in connection with attacks on Ukraine's energy infrastructure. Although the attacks were often carried out using loitering munitions, the core of the accusation is the commanders' decision to conduct attacks while knowing of excessive civilian damage.

Related questions on command responsibility

1. Is a commander responsible if a drone fails technically and hits the wrong target?
If it was an unpredictable technical failure, it may not constitute a war crime. However, if the commander knew about the system's unreliability and still deployed it in a high-risk area, they bear responsibility for negligence, which can lead to criminal prosecution.

2. If a drone is developed by a third state and our military only purchases it, are we responsible for its behavior?
Yes. The state using the weapon is obliged to conduct its own legal review and ensure it is used in accordance with IHL. One cannot hide behind the manufacturer.

3. Can soldiers defend themselves by claiming they "followed the system's order"?
No. The duty to disobey a patently illegal order applies even if the "order" or recommendation to fire is issued by an algorithm.

The Problem of Time: Why accelerating decision-making poses a risk

The main argument for autonomy is that it increases the tempo of operations and shortens the decision-making cycle: automated systems react faster than humans.

However, our Prague-based attorneys warn of the risk where human oversight becomes a mere formality. If an operator has only seconds to confirm an attack, they cannot perform a high-quality legal and ethical assessment.

Speed versus Legality

In practice, situations arise where an operator, under time pressure and automation bias, fails to cancel an attack even when they should. In such cases, liability under Czech and international law remains with the operator, even if the system's settings effectively prevented them from performing a proper check.

Recommendations for Process Setup

To ensure legality under the Czech legal system and international standards, it is necessary to distinguish between types of decision-making:

  • Deliberate Targeting: This requires a thorough legal analysis, which may take days.
  • Dynamic Targeting: Even here, boundaries must be set so that speed does not come at the expense of the law.

Related Questions on Time and Decision-Making

1. If an operator does not have time for a full legality review, should they refuse the attack?
From a legal perspective, yes. In case of doubt, the presumption of civilian status applies, along with the obligation to abstain from the attack.

2. How is "time available for decision" assessed in practice?
It is assessed through the lens of a "reasonable commander" in the given situation. If the situation allowed for waiting and verifying the target, and the commander failed to do so, they acted in violation of the precautionary principle.

Main Legal Risks in Deploying Drones Without Adequate Human Control

| Risks and Sanctions | How ARROWS Assists (office@arws.cz) |
| --- | --- |
| Violation of the Principle of Distinction: Attacking individuals without sufficient verification of their combatant status (a war crime under the ICC Statute). | Implementation of Rules of Engagement (ROE): Our Prague-based attorneys help formulate clear rules for target identification in accordance with international law. |
| Violation of the Principle of Proportionality: Civilian damage exceeds military advantage; risk of international prosecution and reputational disaster. | Training and Methodology: We provide training for commanders and staff on applying the proportionality principle to the modern battlefield. |
| Failure of Command Responsibility: Insufficient control over the system and subordinates. | Setting up Control Mechanisms: We design internal reporting and control systems that protect commanders from unconscious negligence. |
| Deployment of Untested Technology: Use of a weapon that has not undergone review under Art. 36 of Additional Protocol I. | Legal Review of Weapons: We provide legal audits of new weapon systems before their introduction into service. |
| Criminal Liability of Individuals: Prosecution of operators and commanders before domestic or international courts. | Legal Defense and Consulting: Our Czech legal team provides legal representation and consultations on matters of international criminal and military law. |

Autonomous Weapons: Where is Technology Heading and Where is the Line?

The discussion is shifting toward Lethal Autonomous Weapon Systems (LAWS), which would select and engage targets on their own without human intervention.

Definitions of Autonomy and Its Levels

  • Human-in-the-loop: The drone waits for a human command before attacking (e.g., MQ-9 Reaper).
  • Human-on-the-loop: The drone selects targets itself, but a human can veto the attack.
  • Human-out-of-the-loop: The system operates without the possibility of human intervention after activation.
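The three levels differ in what human silence means: in-the-loop, silence blocks the attack; on-the-loop, silence lets it proceed; out-of-the-loop, human input is ignored entirely. A minimal sketch makes the distinction precise; the enum and function names below are invented for this article, not taken from any real system.

```python
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # attack only on explicit human command
    HUMAN_ON_THE_LOOP = auto()      # system proposes, human may veto
    HUMAN_OUT_OF_THE_LOOP = auto()  # no intervention possible after activation

def attack_proceeds(mode: ControlMode, human_command: bool, human_veto: bool) -> bool:
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_command   # silence means no attack
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_veto  # silence means the attack proceeds
    return True                # out-of-the-loop: human input is ignored
```

The legally decisive line is crossed between the second and third mode: once silence and protest produce the same result, no human act remains to which responsibility can attach.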

A Legal Vacuum?

The biggest problem with fully autonomous weapons is the absence of a "moral actor." A machine cannot be held accountable. If LAWS commit a war crime, liability must necessarily return to the beginning of the chain – to those who programmed and deployed the system.

International Regulation

Long-term negotiations on the regulation of LAWS are ongoing at the UN within the framework of the Convention on Certain Conventional Weapons. The attorneys at our Prague-based law firm monitor these developments to provide clients with up-to-date advice in a rapidly changing legislative environment.

Related Questions on Autonomous Weapons

1. Is the development of autonomous weapons currently prohibited?
It is not explicitly prohibited by international treaty, but their use must always comply with general rules of International Humanitarian Law (distinction, proportionality). If a weapon inherently cannot meet these rules, its use is unlawful.

2. Can an algorithm distinguish better than a human?
In certain parameters (data processing speed), yes, but in understanding context, intent (e.g., a surrendering soldier), and ethics, not yet.

Specific Risks in Involving Artificial Intelligence in Decision-Making

| Risks and Consequences | How ARROWS Assists (office@arws.cz) |
| --- | --- |
| Black Box Effect: The inability to retrospectively explain why the AI decided to attack prevents legal review and the assignment of liability. | Requirement for Explainability (XAI): We advocate for legal standards requiring the auditability of AI decision-making processes in military applications. |
| Data Bias: An algorithm trained on flawed data can systematically discriminate against certain groups of people. | Compliance and Data Audit: We collaborate with experts to assess the risk of datasets from the perspective of international law. |
| Hacking and Spoofing: The enemy taking control of an autonomous system. | Cybersecurity Legal Framework: Our Prague-based attorneys help establish legal liability for the cybersecurity of weapon systems. |

International Accountability Mechanisms

Violations of International Humanitarian Law during drone deployment can have an impact at several levels.

International Criminal Court (ICC)

The ICC prosecutes individuals for the most serious crimes. The court has jurisdiction if the state is a party to the Rome Statute or if the matter is referred by the UN Security Council.

State Responsibility and Reparations

A state whose armed forces have violated international law is obliged to provide reparations to the injured state or victims. Although enforceability can be complex in international law, reputational damage and diplomatic isolation are real sanctions.

Domestic Criminal Law

The primary responsibility for prosecuting war crimes lies with the states themselves. The Czech Criminal Code contains specific provisions that also apply to drone operations conducted under Czech jurisdiction.

Practical recommendations for states and institutions

If your organization is involved in the development, procurement, or operation of unmanned systems, you should address the following under Czech and international law:

1. Have we conducted a Legal Review?
    Every new weapon must be assessed for compliance with international law before deployment. ARROWS provides expert legal opinions in this field within the Czech Republic and abroad.

2. Are operators trained in IHL?
    Technical training is not enough. Personnel must understand the legal limits of the use of force under international humanitarian law.

3. Have we established clear accountability processes?
    It must be clear who in the chain of command bears responsibility for pulling the trigger or authorizing autonomous mode.

Conclusion

The role of the human in the drone decision-making cycle is not just a technological issue, but primarily a legal and ethical one. International humanitarian law was created for humans and assumes human judgment, compassion, and responsibility. The pursuit of full automation encounters limits that can lead to tragic errors.

The attorneys at the ARROWS law firm in Prague provide expertise in international humanitarian law, military law, and compliance.

We help clients navigate the complex legal environment of modern warfare and minimize risks associated with deploying advanced technologies. For a consultation regarding the defense industry in the Czech Republic, contact us at office@arws.cz.

FAQ – Legal queries regarding the human role and drones

1. What is the main legal difference between remotely piloted and autonomous drones?
With remotely piloted drones, there is a direct line of responsibility to the operator. For autonomous systems that select targets themselves, determining specific criminal liability is more complex and shifts more towards the commander or the state.

2. Can a state be held jointly responsible for an attack carried out by an ally based on shared data?
Yes. If a state provides intelligence with the knowledge that it will be used for an unlawful attack, it may bear international legal responsibility for aiding and abetting an unlawful act.

3. What should be done if a violation of IHL is suspected?
Soldiers have a duty to report suspected war crimes to their superiors or law enforcement authorities. Legal assistance in such cases, including whistleblower protection, is part of the agenda of our specialized law firm in Prague.

4. Is there an international treaty banning "Killer Robots"?
There is currently no specific global treaty that explicitly bans fully autonomous weapons (LAWS). The use of these weapons is therefore governed by the general rules of the Geneva Conventions.

Disclaimer: The information contained in this article is for general informational purposes only and serves as a basic guide to the issue. Although we ensure maximum accuracy of the content, legal regulations and their interpretation evolve over time. To verify the current wording of regulations and their application to your specific situation, it is essential to contact the ARROWS law firm in Prague directly (office@arws.cz). We bear no responsibility for any damages or complications arising from the independent use of information from this article without our prior individual legal consultation and professional assessment. Every case requires a tailor-made solution under the Czech legal system; therefore, do not hesitate to contact us.
