The use of Artificial Intelligence (AI) in the military is perceived as revolutionary as well as controversial. AI has the potential to deeply transform military practice, offering enhanced situational awareness, a better-informed decision-making process, improved (cyber) security and threat warning, and greater operational efficiency. It is easy to be overwhelmed by the remarkable capabilities and new possibilities that this technology unlocks. The controversial part is the (ab)use of AI to violate or restrict human rights and humanitarian law, or to facilitate internal repression. The development of AI (and of the technology and systems that enable it) is therefore a matter of international security. We must remember that the military use of AI must always be guided by fundamental principles that guard what is most valuable, namely human life and dignity, which are the foundation of international law and human rights. RSM attended the REAIM 2023 summit organized by the Government of the Netherlands, where the responsible use of artificial intelligence in the military was heavily debated. In this short article, we present in a nutshell the challenges, risks and opportunities shared by key participants. We begin by outlining what, in our view, are the considerations and red lines that can keep the use of AI in the military grounded, and provide a few use cases to illustrate the argument.


Red lines

In the context of respecting international law and fundamental rights, our argument focuses on the potential of AI as a supportive tool to enhance the decision-making process and enable (cyber) security and threat monitoring. Instead of autonomously making sensitive decisions, AI would primarily serve to process and present information. Sensitive actions refer to those with the potential to impact fundamental rights or cause unintended large-scale political consequences, such as those involving human lives.

Under this framework, the responsibility for sensitive actions would still rest with humans, while AI functions as a tool to enhance human capabilities. Thus, addressing potential biases and ensuring privacy in data processing are important considerations, but the use of AI in this manner does not introduce fundamentally new complexities to the ongoing debate.

However, when AI systems are assigned the ability to undertake sensitive actions themselves, such as operating autonomous lethal weapons or conducting cyber surveillance (e.g. facial recognition or data traffic monitoring), additional security and ethical concerns arise. These encompass transparency, accountability, human dignity, international security, the violation of human rights and humanitarian law, and the potential for unintended harm when AI technology falls into the wrong hands.

While the relationship between transparency, accountability, and human dignity warrants further exploration, it is worth noting that transparency plays a critical role. Transparency enables understanding of the intended end use and end users, as well as how a specific recommendation is derived, facilitating ethical, political, security and legal evaluations of its justifiability. Moreover, transparency allows for identifying the source of a decision, ensuring that the appropriate individuals, entities or even countries are held accountable for any unintended or illegitimate consequences. Without transparency, accountability becomes elusive, and the absence of an accountable political and/or military force could lead to unrestrained abuse.

In sum, our argument posits that AI, when respecting international law and fundamental rights, can be employed as a supportive tool that enhances decision-making: it processes and presents information, while responsibility for sensitive actions remains with humans. Transparency, accountability, and human dignity are essential considerations in this context, and a comprehensive exploration of their relationship will be undertaken separately. Transparency, in particular, is indispensable for understanding a system's recommendations, reviewing their ethical, political, and legal justifiability, and ensuring appropriate accountability. Without it, the potential for abuse by a non-accountable military force cannot be ignored.

Enhanced situational awareness

An example of what is considered a non-controversial use of AI in the military, one that does not entail autonomous decision-making, is enhancing the user's situational awareness. A common argument in favor of employing artificial intelligence is that military decisions are often made under time pressure. In such scenarios, decision-makers must act quickly despite having limited access to relevant information. AI systems, by contrast, possess the capacity to rapidly process extensive volumes of data from multiple sources, even in real time, within fractions of a second. Additionally, AI can swiftly and accurately identify patterns and trends, a task beyond the capabilities of the human mind. Consequently, military leaders utilizing AI would benefit from a deeper comprehension of the situation, enabling them to detect and respond to threats more swiftly. This enhanced situational awareness, achieved through AI, can be harnessed for legitimate military purposes.

AI-powered logistics

Another valid application of AI in the military lies in the domain of logistics and supply chain management. AI systems can be utilized to ensure the timely and efficient delivery of critical supplies by predicting demand and monitoring inventories in real time. Furthermore, by fitting equipment with appropriate sensors connected to AI systems, predictive maintenance can be performed, allowing for efficient scheduling of critical gear upkeep and minimizing unexpected downtime.
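To make the predictive-maintenance idea concrete, the following is a minimal sketch, using entirely hypothetical sensor readings and a simple statistical rule: a reading that deviates sharply from its recent baseline flags the component for inspection before it fails outright. Real systems would of course use far richer models and data.

```python
# Minimal predictive-maintenance sketch (hypothetical data and thresholds).
# A reading far outside the trailing-window baseline flags the component.

from statistics import mean, stdev

def maintenance_flags(readings, window=5, tolerance=3.0):
    """Flag indices where a reading deviates more than `tolerance`
    standard deviations from the trailing-window baseline."""
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > tolerance * sigma:
            flags.append(i)
    return flags

# Hypothetical vibration readings from an engine sensor: stable, then a spike.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 0.95, 4.2, 1.0, 1.1]
print(maintenance_flags(vibration))  # the spike at index 7 is flagged: [7]
```

The point is not the statistics but the workflow: the system surfaces an anomaly, and a human maintainer decides what to do about it.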

When combined with basic geographic information systems (GIS), AI can optimize operational costs by tracking the locations of military assets and leveraging this information to devise more efficient logistics strategies. By utilizing predictive AI capabilities, military resources can be strategically deployed, ensuring proximity to areas of need, reducing transportation costs, and circumventing bottlenecks.
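As an illustration of the kind of placement decision described above, the following sketch picks a supply depot that minimizes demand-weighted travel distance to a set of demand points. All coordinates, candidate sites, and demand weights are hypothetical; an operational GIS-linked system would solve far larger instances with real terrain and routing data.

```python
# Minimal depot-placement sketch (hypothetical coordinates and demands).
# Chooses the candidate site with the lowest demand-weighted total distance.

from math import hypot

def best_depot(candidates, demands):
    """Pick the candidate site minimizing the sum of
    (demand weight x straight-line distance) over all demand points."""
    def cost(site):
        return sum(w * hypot(site[0] - x, site[1] - y)
                   for (x, y), w in demands)
    return min(candidates, key=cost)

# Hypothetical grid coordinates (km) and demand weights per location.
candidates = [(0, 0), (5, 5), (10, 0)]
demands = [((4, 4), 3), ((6, 5), 2), ((5, 7), 1)]
print(best_depot(candidates, demands))  # the central site wins: (5, 5)
```

Straight-line distance stands in here for whatever travel-cost model a real system would use; the structure of the optimization is the same.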

Bias in AI data processing

Even when AI systems are employed solely to enhance information processing capabilities, valid concerns can arise regarding the presence of biases in the AI-generated intelligence. It can be argued that such biases may steer decision-makers in a specific direction. It is important to recognize that bias is fundamentally a human attribute, inherent to human cognition. Thus, if bias permeates an automated data processing system, it is a consequence of human action or omission.

While the question of completely eliminating biases from AI systems extends beyond the scope of this article, biases can certainly be addressed and minimized. One approach involves maintaining continuous human oversight over the data used to train the system. This oversight ensures that the data is diverse and representative, to the extent possible, of the reality it aims to capture. Similarly, the factors utilized by the AI to generate recommendations must be clearly identifiable, allowing trained human individuals to make necessary corrections if biases are detected.

By maintaining human involvement and scrutiny throughout the AI system's development and operation, efforts can be made to minimize biases and uphold fairness in the decision-making process. It is essential to acknowledge that addressing biases in AI systems is an ongoing endeavor, requiring vigilance and proactive measures to ensure ethical and unbiased outcomes.
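The data-oversight step described above can be partly automated. The following is a minimal sketch, with hypothetical group labels and reference shares, of a representativeness check that compares each group's share in a training set against its known share in the population it is meant to reflect, so that human reviewers can spot under- or over-represented groups before training begins.

```python
# Minimal representativeness check (hypothetical labels and reference shares).
# Reports groups whose share in the sample deviates beyond a tolerance.

from collections import Counter

def representation_gaps(samples, reference_shares, max_gap=0.05):
    """Compare each group's observed share in `samples` with its
    expected share; return groups off by more than `max_gap`."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > max_gap:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical dataset: group A is over-represented, B under-represented.
data = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
expected = {"A": 0.5, "B": 0.4, "C": 0.1}
print(representation_gaps(data, expected))  # {'A': 0.2, 'B': -0.2}
```

A check like this does not remove bias by itself; it makes the gap visible so that the humans overseeing the data can decide how to correct it.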

What to do?

Artificial Intelligence has the potential to bring about both positive and controversial transformations in the political and military sphere. However, the current landscape lacks clear demarcation lines. It is imperative for the international community to engage in a substantive debate on the ethical implications of AI's use in decision-making. Ideally, this discourse should result in well-defined guidelines embedded in international law, accompanied by broad consensus from academia and politics. Until such clarity emerges, a cautious approach for businesses concerned with ethically enhancing AI-based military capabilities would be to refrain from developing autonomous decision-making tools. Instead, efforts should be directed towards applications that improve information capacity, operational efficiency, and techniques that promote transparency in AI processes.

During this pursuit, developers and users of AI systems must remain mindful of the ongoing discussions around regulations, particularly at the European Union level. For instance, the AI Act and the Directive on AI Liability (addressed in a separate article) exemplify proposed regulations that will outline specific requirements for AI systems. It is crucial to adhere to these guidelines from the design phase of new applications, preventing the need for costly rework in later stages of development.

Furthermore, it is vital to note that AI technology, especially in the military domain, including dual-use applications, falls under stringent export controls and sanctions regulations within the context of international trade compliance. Safeguarding human rights, protecting international humanitarian law and avoiding internal repression have become more explicit principles underlying both EU and American export control regulations. In the EU, these principles have already been laid down in the eight criteria that EU Member States must apply in their decisions on the export of military and dual-use items, set out in the Council Common Position (2008/944/CFSP). Non-compliance can not only be prosecuted as an economic crime, subjecting individuals and entities to severe penalties under criminal law, but may also damage companies' reputations and lead to their exclusion from trusted-party communities and international supply chains. This underscores the importance of adhering to regulatory frameworks and of monitoring and considering geopolitical developments to ensure the lawful and responsible deployment of AI in the military domain.