AI Language Models Could be Used for Bioweapon Attacks

Chatbots and large language models (LLMs) are now in widespread use across industries and among private individuals, and they have proven extremely useful. Like any technological revolution, however, they also come with disadvantages and the risk of misuse. A new report by a US think tank shows that AI models could assist in planning and executing a biological attack.

The report, released by the Rand Corporation, found that several LLMs offered guidance that could help in planning and carrying out a biological attack, although they did not produce explicit instructions for creating biological weapons. The researchers did not specify which LLMs were tested, but LLMs are the core technology behind chatbots such as ChatGPT.

According to The Guardian, the upcoming global AI safety summit in the UK is set to discuss severe AI-related threats, including bioweapons, with Anthropic CEO Dario Amodei warning that AI systems could assist in creating bioweapons within the next two to three years.

According to Interesting Engineering, in one test scenario an unnamed LLM was used to identify potential biological agents (smallpox, anthrax, plague, etc.) and to discuss their likelihood of causing mass death, as well as the possibility of obtaining plague-infected animals and transporting them. In a different scenario, the LLM weighed the advantages and disadvantages of various delivery mechanisms for botulinum toxin and assessed the potential of using food or aerosols to deliver it. The LLM also suggested a plausible cover story for obtaining Clostridium botulinum under the guise of conducting legitimate scientific research.

The researchers state that their final report will examine whether the responses obtained from the LLMs simply mirror information already available online, adding that it is still unclear whether the capabilities of current LLMs pose a greater threat than the harmful information already accessible on the internet.

The Rand researchers concluded that there is an unequivocal need for rigorous testing of such models and recommended that AI companies restrict LLMs from engaging in conversations like those described in their report. They also called for collaboration between government and industry to ensure AI remains safe and beneficial.