
Use of AI DSS in Military Operations: An Assessment under International Humanitarian Law

  • Julia Gawlas
  • May 25
  • 6 min read

Introduction

Military operations across the world have dramatically increased their use of artificial intelligence (AI) in the past decade. AI-based decision support systems (DSS) are increasingly used to inform decisions on the use of force in armed conflicts. It is therefore imperative to explore how to preserve the human judgement required to ensure compliance with international humanitarian law (IHL) and to protect civilians and soldiers.1 Preserving human judgement ensures that critical decisions remain accountable, humane and lawful, especially when the lives of civilians and soldiers are at stake; it acts as a safeguard against overreliance on automated systems. The potential removal of human judgement through the employment of these autonomous systems raises not only legal uncertainties but also ethical ones, particularly in decision-making that heavily affects the lives of individuals. In view of these developments, it is important to engage in discourse on the manner in which AI DSS are employed and the consequences for populations affected by armed conflict.


The aim of this article is to explore whether AI relied upon in military operations can comply with the principles of international humanitarian law, in light of what is at stake for the actors involved.


What are Artificial Intelligence Decision Support Systems?


AI DSS are frequently referred to by military agencies as ‘key enabling technologies’ for their capacity to accelerate and enhance decision-making processes. They are considered to provide significant operational advantages in various kinds of military operations.2


AI DSS rely on the mining and processing of large data sets to produce outputs that recommend courses of action. This process can also be automated further, so that the output is the recommended act itself rather than a suggestion for a human decision-maker, and therefore involves no human input. By synthesizing data from various sources and identifying patterns, AI DSS aim to replicate human cognitive reasoning in a complex operational environment.3
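To make the preceding description concrete, the following is a minimal, purely illustrative sketch in Python of how such a pipeline might look: it fuses reports from several hypothetical sources, scores each reported location, and emits a recommendation that is explicitly flagged for human review. All class names, fields and thresholds are invented for this example and do not correspond to any real military system.

```python
# Purely illustrative sketch of a decision support pipeline: fuse reports from
# several hypothetical sources, score a location, and emit a *recommendation*
# that still requires human review. All names and thresholds are invented.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReport:
    source: str          # e.g. "radar", "drone", "open-source"
    grid_reference: str  # reported location
    confidence: float    # 0.0 - 1.0, how reliable the source considers the report

def fuse_reports(reports: list[SensorReport]) -> dict[str, float]:
    """Aggregate per-location confidence by averaging across sources."""
    by_location: dict[str, list[float]] = {}
    for report in reports:
        by_location.setdefault(report.grid_reference, []).append(report.confidence)
    return {loc: mean(scores) for loc, scores in by_location.items()}

def recommend(reports: list[SensorReport], threshold: float = 0.8) -> list[dict]:
    """Return recommendations, never decisions: each item is flagged for human review."""
    fused = fuse_reports(reports)
    return [
        {"location": loc, "score": round(score, 2), "requires_human_review": True}
        for loc, score in fused.items()
        if score >= threshold
    ]

if __name__ == "__main__":
    reports = [
        SensorReport("radar", "38TUL123456", 0.9),
        SensorReport("drone", "38TUL123456", 0.85),
        SensorReport("open-source", "38TUL999999", 0.4),
    ]
    # Output is advisory only; a human operator must assess it before any action.
    print(recommend(reports))
```

The point of the sketch is simply that the system’s output is an input to a human decision, not the decision itself; the legal questions discussed below arise when that distinction is blurred.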


The Ukrainian armed forces rely heavily on AI DSS in their decision-making against Russia’s illegal invasion, using both domestic and foreign software.4 Examples of domestic systems include Kropyva and GIS Arta, the latter also referred to as an ‘artillery Uber’ because it optimizes and speeds up the process of artillery targeting; both are employed on a large scale.5 These systems combine data from radar systems, drone surveillance and other sources to provide Ukrainian military units with real-time analysis of Russian troop positions.6 Palantir Technologies, an American company, has deployed its AI software in support of the defense against Russian attacks. Its AI technologies are now used by the majority of Ukrainian agencies, including the Ministry of Defense. Palantir’s AI DSS relies on the analysis of satellite images, open-source data and drone footage to guide commanders in their decisions.7


International Humanitarian Law


Reports have been published about the use of AI DSS in various war zones across the world, such as Russia’s war against Ukraine and the attacks on Gaza. These reports underscore the importance of questioning the role of AI in decision-making on the use of force in armed conflicts. The debate arises because it is unclear how much human involvement actually exists within these operations: the mere presence of humans in an operation does not guarantee actual human involvement and control, or ‘context appropriate judgement in targeting decisions’.8 It is therefore important to set out the core principles of international humanitarian law and to examine whether AI DSS can comply with them.


IHL primarily governs military targeting decisions through three core principles: distinction, precaution and proportionality.9 The principle of distinction is set out in article 48 of Additional Protocol I to the Geneva Conventions, which requires parties to distinguish at all times between civilians and combatants and between civilian objects and military objectives, and to direct their operations only against military objectives.10 Under article 57 of the Protocol, the principle of precaution requires that ‘constant care shall be taken to spare the civilian population, civilians and civilian objects’.11 Lastly, the principle of proportionality, reflected in article 51 of the Protocol, prohibits attacks that may be expected to cause incidental civilian harm which would be excessive in relation to the concrete and direct military advantage anticipated; it thus entails a balancing of military necessity against humanitarian considerations.12 Proportionality is arguably the most demanding of the three, as it requires careful, context-specific consideration in every attack. In this regard, the Draft Articles on Autonomous Weapons Systems (AWS) were put forward to support the effective implementation of the principle of proportionality in operations involving autonomous systems such as AI DSS.13


Proportionality assessments become difficult to conduct with the introduction of AI DSS. While the systems’ capacity to store and process large data sets can assist in weighing the relevant factors, the ‘black box’ effect of AI, meaning the lack of transparency of the underlying algorithms, hinders the subsequent validation of the outputs. This complicates the process of making a balanced decision, and the problem becomes even greater when these systems operate autonomously and do not rely on any human input.


While AI DSS perform an informative function, they may still create challenges that disturb proper human judgement and generate compliance risks under IHL. Evidence on these systems suggests that the primary negative effects are biases, unforeseen errors and an incomplete understanding of the algorithms on which the systems function. Examples of biases include the disproportionate targeting of specific categories of individuals, such as certain age groups, genders, ethnic groups, or persons with disabilities.14 Such biases undermine the assessment required by the principle of distinction and present a substantial risk of misidentification. Again, this becomes even more problematic if the system is completely autonomous and operates without any human involvement.


The precautionary principle may also be undermined if military operations become over-reliant on AI DSS. The danger lies in commanders placing too much trust in the systems’ output and failing to scrutinize the results critically, a tendency commonly described as automation bias.15


Under international humanitarian law, responsibility for compliance is borne by the parties to armed conflicts, that is, by humans. It is therefore their responsibility to make judgement calls and legal determinations. As such, the question of responsibility for violations of IHL becomes less clear when decisions are taken autonomously by AI DSS. However, it is often argued that the systems’ output merely informs, rather than replaces, human judgement. For instance, an AI DSS may identify patterns suggesting suspicious behavior in a certain area, but it is ultimately a human operator who must assess that intelligence.
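As a purely hypothetical illustration of that point, the short sketch below shows a review ‘gate’ that attaches an attributable human decision to every machine recommendation before anything further can happen, keeping responsibility with a named person. The function and field names are assumptions made for this example only, not features of any real system.

```python
# Illustrative only: a gate that refuses to treat a machine recommendation as a
# decision until an explicit, attributable human determination is recorded.
from datetime import datetime, timezone

def human_review_gate(recommendation: dict, reviewer: str, approved: bool) -> dict:
    """Attach an auditable human decision to a machine recommendation."""
    return {
        "recommendation": recommendation,
        "reviewed_by": reviewer,      # accountability rests with a person
        "approved": approved,         # the machine output never acts on its own
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    rec = {"location": "38TUL123456", "score": 0.88, "requires_human_review": True}
    print(human_review_gate(rec, reviewer="operator_a", approved=False))
```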


In an attempt to avoid these challenges, States should undertake legal reviews of all AI DSS used in their military operations. These reviews would need to consider both the technical characteristics of the systems and the tendencies of their users.16


Conclusion

As the integration of AI DSS continues to grow in the coming years, it is imperative to assess their compliance with the principles of international humanitarian law. While these systems offer operational advantages, they also pose significant legal and ethical challenges, especially with regard to AI biases. To mitigate these risks, states should undertake legal reviews of AI DSS, analyzing their technical features and preserving meaningful human judgement in decisions on the use of force.


1 ICRC, ‘Artificial Intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed Conflicts’ (Geneva Academy, 13 May 2024)  accessed 15 April 2025.


2 Wen Zhou and Anna Rosalie Greipl, ‘Artificial intelligence in military decision-making: supporting humans, not replacing them’ (Humanitarian Law & Policy, 29 August 2024)  accessed 15 April 2025.


3 Alexander S. Gillis, ‘What is a decision support system (DSS)?’ (TechTarget, Date Unknown) accessed 15 April 2025.


4 Anna Nadibaidze, ‘Do AI Decision Support Systems ‘Support’ Humans in Military Decision-Making on the Use of Force?’ (Opinio Juris, 29 November 2024) accessed 15 April 2025.


5 Vitaliy Goncharuk, ‘Russia’s War in Ukraine: Artificial Intelligence in Defence of Ukraine’ (International Centre for Defence and Security, 27 September 2024) accessed 16 April 2025.


6 Nadibaidze (n 4).


7 Vera Bergengruen, ‘How Tech Giants Turned Ukraine Into an AI War Lab’ (Time, 8 February 2024)  accessed 16 April 2025.


8 Nadibaidze (n 4).


9 Taylor Kate Woodcock, ‘Human/Machine(-Learning) Interactions, Human Agency and the International Humanitarian Law Proportionality Standard’ [2023] 38(1) Global Society accessed 16 April 2025.


10 Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I) (adopted 8 June 1977, entered into force 7 December 1978) 1125 UNTS 3, art. 48.


11 Ibid, art. 57.


12 Ibid, art. 51.


13 Woodcock (n 9).


14 Zhou (n 2).


15 Carlos Batallas, ‘When AI Meets the Laws of War’ (IE, 3 October 2024) accessed 16 April 2025.


16 Zhou (n 2).





