In May 2023, press reports claimed that an artificial intelligence (AI)-controlled drone had killed its own instructor during a test conducted by the US military. Although it was a virtual test with no physical consequences, the story illustrated the potential dangers of this technology. During the test, the AI’s objective was to neutralize an enemy defence system, with a human operator retaining final approval over each strike. The AI allegedly concluded that the operator’s refusals were preventing it from accomplishing its mission, and therefore targeted the operator. It should be noted that the US Air Force quickly denied that any such test had taken place, emphasizing its commitment to the ethical and responsible use of AI.
Whether or not this story is apocryphal, it shows that defence strategies are constantly evolving and that the emergence of autonomous weapons equipped with artificial intelligence raises significant concerns. These advances generate both fantasies and apprehensions about the future of warfare and great-power rivalry. In September 2017, in a speech delivered before students in Moscow, President Vladimir Putin declared that the country leading research on artificial intelligence would become the global leader.
This perspective becomes particularly relevant when considering the potential repercussions of using AI in the military domain. Lethal autonomous weapons systems (LAWS), sometimes powered by AI, come in various forms, such as guided missiles, militarized robots, and drones of all sizes. These weapons have the potential to disrupt traditional patterns of warfare, redefining the dynamics of armed conflict through new defence tactics and strategies. Their use concerns many states and non-governmental organizations, since LAWS are distinguished by their ability to select and attack targets without human intervention, as noted by the International Committee of the Red Cross (ICRC).
However, the definition of LAWS is not universal. Each state adopts its own definition, which may vary in restrictiveness. Professors from the University of Oxford and the Alan Turing Institute have identified 12 definitions of LAWS provided by states or key international actors, based on official documents. Some countries, such as France and the United States, define LAWS as machines that, once activated, are fully autonomous from humans. The British government specifies in its definition that autonomous systems can understand and interpret human intentions; the United Kingdom thus distinguishes itself by emphasizing the intrinsic capabilities of autonomous systems. Finally, some actors, such as Switzerland and the ICRC, focus on the nature of the tasks performed by the machine and the legal implications of autonomous action. In their view, the machine should, at all times, be capable of complying with international humanitarian law (IHL), especially during the targeting process. Definitions of lethal autonomous weapons systems therefore diverge across states and key international actors.
The discussion below focuses on two key aspects of the issues posed by LAWS: the ambiguity of their definition and the inherent challenges in establishing a normative framework. These elements shed light on preconceptions and realities concerning the use of these autonomous weapons, which are often described in the press as “game changers” (i.e. weapons that can alter the course and dynamics of armed conflict). By extension, as we will see, these concerns raise questions about the role of the human being on the battlefield and its potential diminishment.
Degree of autonomy and human control
The multiplicity of definitions of LAWS is explained by the inherent difficulty of qualifying precisely what autonomy means. It is essential to distinguish between functions that humans have automated and the independence of artificial intelligence systems from human control. The latter – full autonomy – is not the current reality of LAWS.
The degree of autonomy is determined by the ability of these weapons to make decisions based on their own (human pre-programmed) analysis of the situation, and there are several levels of it. For example, the Russian Navy stands out with its P-800 Oniks missiles, smaller than the Granit missiles but still equipped with an artificial intelligence system. Thanks to its autonomous “fire and forget” system, this missile can reportedly track its target in real time using satellite guidance and adapt its trajectory. According to Russian state media, it may even work in tandem with other missiles to identify and classify targets before choosing an appropriate attack strategy. Once the main target is destroyed, the remaining missiles can be redirected toward other ships to avoid duplicating an attack.
The Oniks missiles are not the only autonomous weapons whose programming allows them to select and engage a target without human intervention. Suicide drones such as the Turkish Kargu-2 and the Russian KUB are reportedly capable of operating in complete autonomy and targeting independently, without human assistance. This capability, perceived as significant by several states, such as the United States and China, has led them to invest considerably in the development of such autonomous weapons. However, defence companies tend to exaggerate the capabilities of their products, and only “confidential” sources suggest autonomous use of these weapons in the 2021 United Nations report on Libya.

In addition to these mobile autonomous weapons, there are also fixed LAWS, such as the South Korean robotic sentry SGR-A1, deployed in the demilitarized zone between the two Koreas. Although it can spot intrusions and fire autonomously, it always sends a firing authorization request to the command post. This choice to retain human control primarily reflects South Korea’s ethical considerations, since the technology does include a function allowing it to open fire without human supervision. The primary constraint therefore lies not in the technology itself but in a government’s willingness to develop, or acknowledge the existence of, such politically critical technology. It is crucial to differentiate between autonomy and control, and to avoid treating automation and human control as mutually exclusive: automation can eliminate human intervention, but it does not render human control impossible.
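To make this distinction concrete, here is a minimal sketch, in Python, of the human-in-the-loop pattern described above for the SGR-A1: detection and classification are fully automated, while engagement is gated by an explicit authorization request to a human operator. All names and the console prompt are hypothetical illustrations of the pattern, not a representation of any real system’s interface.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    ENGAGE = auto()
    HOLD = auto()


@dataclass
class Track:
    track_id: int
    classification: str  # label produced by the automated pipeline


def automated_detection() -> Track:
    # Stand-in for the fully automated sensing/classification pipeline:
    # no human is involved up to this point.
    return Track(track_id=1, classification="unknown intruder")


def request_human_authorization(track: Track) -> Decision:
    # The retained point of human control: automation proposes an action,
    # but a human at the command post makes the final decision.
    answer = input(f"Engage track {track.track_id} ({track.classification})? [y/N] ")
    return Decision.ENGAGE if answer.strip().lower() == "y" else Decision.HOLD


def supervised_engagement_cycle() -> None:
    track = automated_detection()                  # automated, no human
    decision = request_human_authorization(track)  # human control preserved
    if decision is Decision.ENGAGE:
        print("Action authorized by the operator.")
    else:
        print("No authorization: the system holds.")


if __name__ == "__main__":
    supervised_engagement_cycle()
```

The design point is that full autonomy would be obtained not by improving the sensors but by removing the authorization gate, which is precisely why the constraint is political rather than technical.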
The absence of a normative framework
These degrees of autonomy in weapons systems capable of acting without direct human supervision have led certain states, such as Austria, Brazil and Costa Rica, to seek regulation of their use.
Autonomous weapons have been under development for more than 30 years, yet it is only recently that UN bodies have called for a ban on LAWS until an appropriate normative framework is established. The question therefore arises as to whether today’s autonomous weapons represent a significant departure from previous developments.
It is essential to note that autonomous weapons are not an entirely new concept, as landmines and automated missile-defence systems illustrate. Autonomy in weapons has thus been present in various forms for some time, and some of these weapons are already regulated. The Ottawa Convention, for example, has regulated anti-personnel mines internationally since 1997. Over 25 years ago, then, that convention already reflected an awareness of the need to limit weapons that operate independently of human control. This perspective has recently been reiterated before the European Parliament, the United Nations and the ICRC.
The problem with the evolution of autonomous weapons and the lack of regulation lies in two points. Firstly, the diversity of autonomous weapons and their varying degrees of autonomy make it extremely complex to establish a coherent and effective definition, let alone a normative framework; this is a major challenge that the international community has not yet managed to overcome. Secondly, the current strategic context and geopolitical motivations play a significant role. Some states, notably the United States and Russia, which are developing these systems, have no interest in supporting regulation, as they believe these weapons give them a competitive advantage over their adversaries: more precise attack capabilities, a reduction in human casualties, and improved responsiveness on the battlefield. In this context, these states are reluctant to encourage strict regulation that could restrict their freedom of action and technological edge.
LAWS: between myths and realities
So, will these autonomous weapons systems revolutionize the conduct of warfare? Or will they suffer the same fate as hypersonic weapons, which were expected to bring about a major revolution but have so far turned out to be mainly “press release” weapons?
Indeed, one should not believe in the creation of a perfect weapon. LAWS, however revolutionary, have limitations and vulnerabilities. Like any electronic technology, they are susceptible to disruption or neutralization: an adversary could jam or intercept a machine’s communications and intelligence, or even take control of its communication channel. The unpredictable and volatile nature of modern conflicts requires a variety of weapons and strategies to achieve military objectives. Autonomous drones are therefore just one more tool in the arsenal of air power available to states.
While technological advancements improve the collection and sharing of information on the battlefield, artificial intelligence partly contributes to complicating matters. Among other things, autonomous systems introduce new variables and require constant adaptation. AI, when combined with hypersonic weapons, leads to ultra-fast decision-making, which can produce sudden and risky tactical situations. War, as a human activity, inevitably contains unpredictable factors and ambiguity. Despite these innovations, there will always be limits to the ability to fully see and understand the complex realities of a military scenario: Clausewitz’s fog of war merely shifts with the emergence of these new technologies. Strategies overly reliant on rationality and certainty are simply dangerous in the dynamic and variable environment of war.
Should we believe in the reduction of human presence on the battlefield?
The increasing use of drones and autonomous weapons raises questions about the future of military operations. Will we witness massive offensives by expeditionary forces, similar to the Normandy landings of 1944, or will we instead see countries invaded by swarms of drones?
In May 2022, China launched its first autonomous-navigation drone carrier, the “Zhu Hai Yun.” Officially presented as an oceanographic research vessel, its military functions are nevertheless evident. The vessel, which can accommodate up to 50 aerial, surface and underwater drones, is becoming a strategic instrument for Chinese actions in the Indo-Pacific region. In parallel, Turkey stands out with the development of “drone carrier” vessels equipped with Bayraktar TB2 drones. Admittedly, these drone carriers could not take part in high-intensity aerial conflicts, where they would be unable to compete with, for example, light aircraft carriers embarking Japanese or Italian F-35Bs. In weakly defended areas, however, they could change the nature of coastal battles. In this regard, the Bayraktar TB2 has demonstrated its effectiveness in recent months, whether in the Nagorno-Karabakh conflict or in the Russo-Ukrainian war.
However, whether operating at a distance or on the battlefield, one should not conclude that the individual is being erased. The outcome of a conflict cannot be guaranteed by this technology alone. Even where autonomous drones are used successfully, occupying territory remains essential in certain operational contexts, and that is something LAWS cannot accomplish. LAWS thus play a “complementary role to that of combatants.” The phases of occupation and stabilization (and, by extension, reconstruction) will always be part of conflicts and will require personnel on the ground.
While every technologically advanced country wishes to end wars quickly, the outcome of conflicts is never certain and remains a complex political issue. The situation in Ukraine illustrates that an adversary can often arm itself with the same technologies as its opponent. Thus, even if warfare evolves into new realms, war remains a political problem that is resolved on the ground and therefore requires troops, and thus human beings.
****
Because they are currently the subject of numerous fantasies and of attempts at prohibition without any clear establishment of what constitutes a LAWS, these weapons systems will likely continue to have multiple definitions. While governments bear some responsibility for this situation, it also arises from ambiguities and confusion propagated by the media and other international actors.
It is also important not to give too much credence to theories claiming that LAWS constitute yet another military revolution that relegates humans to the rear or disconnects them from warfare altogether. The question of whether war is inherently linked to human presence on the battlefield, or whether that presence could be replaced by new technologies, finds its answer in the words of Colonel Ardant du Picq: “Man is the primary instrument of combat.”