They do not yet exist. But already, thousands of citizens and experts are seeking to ban them. Ban what? Lethal autonomous weapon systems (Sala): war machines piloted by artificial intelligence (AI) and capable of opening fire on their own initiative. NGOs describe them as "killer robots", relentless "Terminators" intended to replace soldiers on the ground.

Listen to Sébastien Julian explain why artificial intelligence is not about to replace soldiers and conventional weapons. Well, not everywhere… (on SoundCloud).

The image is a caricature, but it now belongs less to science fiction than to the realm of the possible, and it has the merit of raising the issue bluntly. Several countries, including China and the United States, are investing heavily in military robots and AI, a race that could lead, in the future, to the emergence of weapon systems with an unparalleled level of autonomy.

An increasingly automated arsenal

Military equipment has already evolved considerably. "The functions of movement and target identification are already subject to extensive automation," notes Vincent Boulanin, a researcher at the Stockholm International Peace Research Institute (Sipri). The Super aEgis II automated turret, for example, can identify a human-sized shape 2 to 3 kilometres away, day or night. Installed at the border between the two Koreas, it detects explosives, and its manufacturer is working on software updates that will eventually allow it to tell friendly silhouettes from enemy ones.

Equally impressive, automated mini-tanks are currently being tested on both sides of the Atlantic. Equipped with missiles, they can carry heavy loads and automatically follow a squad of soldiers on the ground, or even move ahead of it as pathfinders… "These machines will make tanks obsolete in certain types of missions," says a spokesman for the Estonian manufacturer Milrem.

Drones, finally, are evolving too. They already know how to fly autonomously to an area, search for specific "signatures" and explode on them. Soon, smaller versions of these devices will move in swarms so as to saturate enemy defences. The Americans are already dropping packs of drones, called "Perdix", from fighter aircraft at altitude. And in the laboratory, researchers are staging mock battles between several AI-piloted swarms.

(Above: an American automated mini-tank in its test phase.)

"For the moment, all the weapon systems used in the field keep a human in the loop," says Vincent Boulanin. "The machine remains subordinate to the human. This is the official doctrine of the armed forces," confirms Patrick Bezombes, deputy director of the Centre interarmées de concepts, de doctrines et d'expérimentations (CICDE).

But behind the scenes, some manufacturers argue that the flesh-and-blood operator will one day have to go, because he is too… slow. "In the theatre of operations, the tempo is speeding up, which shortens decision cycles," concedes Frédéric Coste, senior research fellow at the Fondation pour la recherche stratégique (FRS). "Today, we are talking about hypersonic missiles that even current surface-to-air batteries cannot shoot down," confirms Jean-Christophe Noël, research associate at the security studies centre of the French Institute of International Relations (Ifri).


Under these conditions, entrusting defence to human beings alone will soon look like heresy: however well trained they are, their reaction time in the face of danger, five to ten seconds, is too slow! "This is why the Aegis naval defence system, for example, has an 'emergency' mode. To simplify: you press a button and the machine takes over," summarizes Jean-Christophe Noël.

An AI for defence… and attack

In the next few years, such systems will only grow more efficient thanks to a dose of AI. But AI will also be used in attacking systems. Several scenarios already exist: "One can imagine, for example, soldiers facing fleeting targets," explains Frédéric Coste. The case has already arisen in Afghanistan, where a 4×4 vehicle moved quickly from one village to another, leaving the armed forces barely a minute to detect and destroy it.

Jean-Christophe Noël, for his part, evokes a swarm scenario in which a squad of soldiers, backed up by robots, advances on the ground towards a specific objective. On the way, it comes under a surprise attack, whereupon the group leader dispatches the machines to eliminate the threat, giving them carte blanche. Without loss of life, the battalion can then continue its original mission.

Chilling? "The military will very probably consider such uses. In the future, robots will be able to analyse vast amounts of data, they will not tremble at the moment of firing, and the trajectories of their munitions can be optimized," warns the expert.

Discussions on autonomous weapons have been under way at the UN since 2013.

"One thing is certain: if one army deploys Sala on the ground, the whole world will follow suit, because AI will confer a decisive advantage," says one military officer. Recent tests have shown the computer's superiority over man: in October 2015, the US Air Force pitted several experienced pilots against an AI named Alpha in a simulator. The result? The machine thrashed its opponents. It seemed to guess their every move in advance!

Encouraged by these results, the major powers are investing billions of dollars in the field. But going from a simulator to fighting on the ground will not be easy, because Sala pose major ethical problems. Do we dare give a machine the right to kill a human? Will it be able to avoid blunders? "Without denying AI's real potential, we must remain lucid about its actual performance, not to say its limits. To date, artificial intelligence is still more artificial than intelligent," says Patrick Bezombes.

To train AI for the military, experts feed it short film sequences featuring people. The objective may be, for example, to detect a weapon hidden under a coat. "We repeat the experiment thousands of times, and each time we tell the machine whether it got it right or wrong. In the end, statistically, it knows that if there is an odd shape in a given place, there may be a problem," explains Frédéric Coste.
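The labelled-feedback loop Coste describes is, in essence, supervised learning. Here is a minimal sketch of the idea, with invented features ("bulge" and "irregularity" of a shape) and a toy perceptron standing in for the deep networks that real systems train on video:

```python
import random

# Toy illustration of the labelled-feedback loop described above.
# Features and data are hypothetical; label 1 means "odd shape present".
random.seed(0)

def make_sample():
    # Invented 2-feature example: bulge size and contour irregularity.
    label = random.randint(0, 1)
    bulge = random.gauss(2.0 if label else 0.5, 0.4)
    irregularity = random.gauss(1.5 if label else 0.3, 0.4)
    return (bulge, irregularity), label

# Perceptron: after each guess, the "teacher" says right or wrong,
# and the weights are nudged accordingly -- thousands of repetitions.
w = [0.0, 0.0]
b = 0.0
for _ in range(5000):
    (x1, x2), label = make_sample()
    guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    error = label - guess          # 0 if right, +1/-1 if wrong
    w[0] += 0.01 * error * x1
    w[1] += 0.01 * error * x2
    b += 0.01 * error

# Statistically, the machine now flags samples with "odd shape" features.
test = [make_sample() for _ in range(1000)]
accuracy = sum(
    ((w[0] * x1 + w[1] * x2 + b > 0) == bool(label))
    for (x1, x2), label in test
) / len(test)
print(f"accuracy: {accuracy:.0%}")
```

The success rate stays below 100% whenever the two classes overlap, which is exactly the statistical limit the experts describe below.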

The catch? This learning requires data that must be labelled, most often by hand! And contrary to popular belief, such data is in very short supply. So much so that China is reportedly about to create SMEs specializing in producing this valuable data!

(Above: a project for a two-legged defensive robot, developed by ECA.)

The fact remains that, for many experts, AI will never achieve a perfect view of the battlefield. "Its operation, based on statistics, can reach a success rate of 70% or 80%, depending in particular on the quality of the databases, but reaching 100% is illusory," says Raja Chatila, director of the Institute of Intelligent Systems and Robotics (Isir). Experts will never be able to foresee every case and teach it to a machine.

The dismal failure of an AI in China, revealed last November, illustrates the difficulty: the system firmly believed that a person was crossing the street at a red light, when it was in fact a photo printed on the side of a bus! Fortunately, it was not in the army… "These systems are not reliable. It would be criminal to deploy them," thunders Jean-Paul Laumond, roboticist and research director at the Laboratory for Analysis and Architecture of Systems (Laas) of the CNRS. But will the cries of alarm, like those of Stephen Hawking and Elon Musk from 2015 onwards, the petitions and the UN discussions, started in 2013, be enough to prevent the deployment of Sala? Not certain.

Decisions incomprehensible to humans

In his book Robots tueurs (Killer Robots), attack-helicopter pilot Brice Erbland already anticipates the programming of artificial ethics and of human virtues such as courage or intuition. These could be embedded in the software of a "stretcher-bearer machine" capable, on a battlefield, of bringing a wounded soldier back from the front. So be it. But AI poses another problem for researchers: the more it progresses, the less we understand it.

"In this kind of complex platform, there is often a mixture of different algorithms. It is therefore difficult to have a comprehensive understanding of how it operates," warns Frédéric Coste. Today, the machine is beginning to learn by itself. Tomorrow, it will decide to process data in new ways without informing humans, who will then be unable to understand the mechanisms at work! A killer AI impossible to understand? The battle for or against Sala is only beginning.


And what if, instead of holding a rifle, military AI devoted itself to more peaceful tasks? In the future, it could very well oversee vehicle maintenance by identifying failures in advance, an expert suggests. Another use: document management. Using keywords, the AI would select the most relevant files to brief team leaders. It would determine whether a memo should be classified as a defence secret or not. Better still, by analysing soldiers' faces, it could detect post-traumatic stress disorder.
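The keyword-based file selection imagined here can be sketched as a simple scoring function. The document names, contents and keywords below are invented for illustration; a real system would use more sophisticated text retrieval:

```python
# Hypothetical sketch of keyword-based briefing-file selection:
# score each document by how many briefing keywords it contains.

def score(doc: str, keywords: list[str]) -> int:
    # Count occurrences of each keyword in the document text.
    words = doc.lower().split()
    return sum(words.count(k.lower()) for k in keywords)

def select_best(docs: dict[str, str], keywords: list[str], top: int = 2) -> list[str]:
    # Rank document names by score, highest first, and keep the top few.
    ranked = sorted(docs, key=lambda name: score(docs[name], keywords), reverse=True)
    return ranked[:top]

docs = {
    "memo_a": "logistics report on vehicle maintenance and fuel supply",
    "memo_b": "patrol route threat assessment near the northern border",
    "memo_c": "threat briefing on border incidents and patrol schedules",
}
print(select_best(docs, ["threat", "patrol", "border"]))
```

The same scoring idea could, in principle, feed a classification rule (e.g. flag a memo as sensitive when certain keywords exceed a threshold), which is the kind of triage the paragraph above envisages.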
