AI in Wartime

Artificial intelligence is seeing growing use in modern warfare. What does that mean, and is it dangerous? AI, while faster than humans, is not necessarily safer or more ethical. The following, based on a report by Techxplore, delves into the role of AI in modern warfare.

With high-speed algorithms that process huge amounts of data to identify potential threats, AI can be useful for selecting targets. Experts warn, however, that its outputs are only probabilities that must be inspected by humans, as mistakes are inevitable. AI can also operate at the tactical level, as in the increasingly popular drone swarms, which will soon be able to communicate with one another and coordinate according to previously assigned objectives.

Lastly, at the strategic level, AI could produce models of battlefields and propose responses and courses of action. Alessandro Accorsi, Senior Analyst for Technology and Conflict, said: “Imagine a full-scale conflict between two countries, and AI coming up with strategies and military plans and responding in real time to real situations. The reaction time is significantly reduced. What a human can do in one hour, they can do it in a few seconds.”

However, amid a worldwide “arms race,” AI may be moving onto the battlefield before much of the world is fully aware of the potential consequences. People might take a machine’s suggestion as fact, without examining the data the machine used to reach that conclusion.

Accorsi claims the real “game changer” is happening right now, with Ukraine becoming a laboratory for the military use of AI. Since the Russian invasion in 2022, Ukraine has been developing and fielding AI solutions for tasks like geospatial intelligence, operations with unmanned systems, military training, and cyberwarfare. The war has become the first conflict in which both parties compete in and with AI.

According to Techxplore, earlier in 2024 researchers from four American institutes and universities published a study of five LLMs placed in simulated conflict situations, which showed a tendency “to develop an arms race dynamic, leading to larger conflicts and, in rare cases, to the deployment of nuclear weapons”.

Furthermore, efforts to regulate the field are complicated by major global powers determined to “win the military AI race.”

“There are debates about what needs to be done in the civil AI industry, but very little when it comes to the defense industry,” concluded Accorsi.