How AI Changes the Future of Warfare
Image: U.S. Navy photo by Mass Communication Specialist 2nd Class Gary Granger Jr., released by the United States Navy with the ID 130521-N-YR391-001.
In the past, civil society has often benefited from innovations in the military sector. The microwave, the ambulance and the Global Positioning System (GPS) are just a few examples of the components of our everyday lives that have emerged from military research. This logic could be reversed in the future. Rapid advances in civilian research into artificial intelligence will radically change the future of warfare.
Algorithms allow military systems to form their own picture of the situation and to make decisions based on it. This technology will soon revolutionize many areas of the armed forces, such as reconnaissance, logistics, transport, communications and even medical services. However, it will also open up many opportunities for the development and deployment of weapon systems. Such autonomous weapons, stigmatized by opponents as "killer robots", are currently the subject of heated debate, and not only in Europe.
Autonomous weapons are systems that can independently identify, track and engage targets. Whether a weapon is considered autonomous depends on its software. In this sense, a Kalashnikov assault rifle from 1946 could also be converted into an intelligent weapon if it were controlled by an appropriate algorithm. This would require a platform, such as a ship or a drone, and sensors for situational awareness and target tracking.
A largely uncontroversial form of such weapons are systems that protect their platform from incoming munitions. Just think of the Rolling Airframe Missile (RAM) systems that German frigates use to defend against missiles. Much more problematic are autonomous weapons that can target manned vehicles or soldiers. Such systems are referred to as Lethal Autonomous Weapon Systems (LAWS).
AI as Game Changer?
In contrast to systems traditionally controlled by humans, autonomous weapon systems offer two advantages: First, they process information far faster than human cognition ever could. Second, they take the place of people on the battlefield, who are then no longer exposed to the risks of armed combat.
Autonomous weapon systems are still in their infancy and exist mainly as prototypes and test versions. We know that the armed forces of Israel, China, Russia, the USA, France and Great Britain are conducting intensive research into artificially intelligent weapons.
This development has the potential to significantly change the military balance of power on the planet. Emerging powers like China will still need decades to close the USA's lead in conventional high technologies such as combat aircraft. The use of artificial intelligence, however, gives them the opportunity to achieve what Walter Ulbricht used to dream of in economic policy: overtaking without catching up.
China wants to be the leader in artificial intelligence by 2030. The Russian armed forces have announced their intention to operate a third of their systems autonomously by 2025. Whether these goals are credible remains to be seen. However, they indicate an accelerating race for a new type of weapon.
Ethical Questions
This raises urgent ethical questions for the international community. Is it acceptable for machines to decide whether and how a person is killed? Dr. Frank Sauer from the University of the Federal Armed Forces in Munich says: "For the person killed, it may not matter whether he was killed by a machine or by a human being. The question, however, is what it means for society and the principle of human dignity." While human beings, by virtue of being human, are aware of the consequences of killing, the same cannot be assumed of software.
If a Bundeswehr soldier kills someone abroad, the public prosecutor's office investigates and the soldier must account for his actions. But who bears the responsibility when an autonomous system kills? The software developer? The operator? Nobody? Artificial intelligence thus also raises the question of accountability.
It is also questionable to what extent algorithms are capable of processing complex relationships and correctly assessing situations. Frank Sauer emphasizes that "artificial intelligence has nothing to do with intelligence. Computers are programmed to perform very narrowly defined tasks." On today's battlefields, however, combatants are often no longer clearly identifiable in the way international law requires. Modern wars are frequently waged in false uniforms or with no uniforms at all, they draw in the civilian population, and they deliberately abuse international symbols of protection such as the Red Cross - a context that algorithms would struggle to comprehend.
Finally, the risk of unintended escalation must also be considered. Analogous to a "flash crash" triggered by computers trading on the stock exchange, one could also imagine a "flash war" triggered by weapon systems reacting to one another, according to Sauer.
One suggestion to counter these problems is that the entire process of target selection and engagement should remain under human supervision. However, this would mean forgoing the desired speed advantage. The Dutch MEP Samira Rafaela (D66) also warns that computers could adopt human prejudices in the course of machine learning. A Muslim, for example, could be more likely to be classified as a terrorist by an algorithm.
In view of the ethical risks, the European Parliament voted overwhelmingly in September 2018 in favour of an international ban on lethal autonomous weapons. According to a survey, this also reflects the opinion of a majority of European citizens. Nevertheless, Austria is so far the only EU member among the 30 countries worldwide that officially support such a ban.
The Convention on Certain Conventional Weapons (CCW) in Geneva, a UN framework regulating weapons deemed excessively injurious or indiscriminate, has been dealing with autonomous weapons since 2014. The last meeting in November ended with the conclusion that the talks would continue – in other words, there is no result so far.
An international agreement on the regulation of autonomous weapon systems has so far foundered on several problems. First of all, the five permanent members of the UN Security Council are themselves carrying out intensive research in this field. Secondly, there is no internationally recognised definition, which would be a prerequisite for clear rules. Thirdly, compliance with any rules would be hard to verify. Tanks, nuclear warheads and missiles can be counted and, if need be, located on satellite images. But how do you verify that the software running in drones, ships or land vehicles complies with the rules?
In previous military revolutions, the international community has usually failed to agree on effective rules in good time. The Russian-American author Isaac Asimov said: "Science produces technology faster than society produces wisdom". In the field of autonomous weapons systems, the international community now has a chance to do better.
Sebastian Vagt
European Affairs Manager
Head of FNF Security Hub