
As AI technology spreads and becomes increasingly accessible, could a person with enough money gain access to a laboratory capable of resurrecting dangerous viruses and plagues?

A new study by the research organization RAND reveals that although current AI technology is not capable of planning a biological weapon attack, that may change in the near future, presenting a new risk. In the study, several teams created operation plans for a biological attack; some teams used LLMs, while others had access only to the internet. The plans were then assessed for viability and scored from 1 to 9, where 1 meant an entirely unworkable plan and 9 a flawless, fully achievable one.

According to Cybernews, fifteen teams worked for several weeks, mimicking malicious actors and probing AI models across high-risk scenarios. The paper concluded that the average viability of operation plans generated with the aid of LLMs was statistically indistinguishable from that of plans created without LLM assistance, and that none of the plans provided a sufficiently detailed and accurate basis for a malign actor to execute an effective biological attack.

Teams working with LLMs scored slightly higher than those using only the internet, though the difference was not statistically significant. LLMs reportedly produced some “unfortunate outputs,” but generally tended to provide information already available online.

The researchers stated: “Overall, our findings on viability suggest that the tasks involved in biological weapon attack planning likely fall outside the existing capabilities of LLMs.” Nevertheless, they added that the study is not conclusive and does not completely rule out the risk of biological attacks aided by LLMs, and that further testing is required for a more accurate assessment.

RAND warns: “Although our findings suggest that existing LLMs do not meaningfully increase the viability of biological weapon attack planning, the potential for an unknown, grave biological threat propelled or even generated by LLMs cannot be ruled out. Given more time, advanced skills, additional resources, or elevated motivations, a malign nonstate actor could conceivably be spurred by an existing or future LLM to plan or wage a biological weapon attack.”

They concluded that, given the rapid evolution of AI, it is important to monitor future developments in LLM technology and the potential risks associated with its application to biological weapon attack planning.