Scientists from China's People's Liberation Army (PLA) Strategic Support Force are reportedly training a military AI system to predict the behavior of potential human adversaries, as reported by the South China Morning Post.
According to Interesting Engineering, the researchers fed the AI large volumes of sensor data and frontline unit reports, in the form of descriptive text or images, which the military AI then relayed to commercial LLMs such as Baidu's Ernie and iFlyTek's Spark. The military AI then generates prompts for further discussion on tasks like combat simulations. The entire process is automated and requires no human involvement.
This marks the first instance of the Chinese military acknowledging its use of commercial large language models. The team aimed to enhance military AI by making it more humanlike and better at understanding commanders’ intentions, which is important since the unpredictable nature and adaptability of human adversaries can often fool machines.
In their paper describing the innovation, the team stated: “As the highest form of life, humans are not perfect in cognition and often have persistent beliefs, also known as biases. This can lead to situations of overestimating or underestimating threats on the battlefield. Machine-assisted human situational awareness has become an important development direction.”
Nevertheless, the team admitted that their setup is not foolproof, mainly because commercial LLMs are not designed for warfare and their predictions can be too general for the specific needs of a military commander. To address this, the team experimented with multimodal communication, using the military AI to generate a map that was then analyzed by iFlyTek's Spark, which improved the LLM's performance and produced practical analysis reports and predictions.
The news has nevertheless raised alarm, with many expressing concern about the potential uses of such technology and their repercussions.