Cybersecurity authorities outlined the real-world risk in an AI safety governance document on Monday, describing the danger of losing “control over knowledge and capabilities of nuclear, biological, chemical and missile weapons”.
“In training, AI uses content-rich and wide-ranging [texts] and data, including fundamental theoretical knowledge related to nuclear, biological, chemical and missile weapons,” it said.
“Without sufficient management, extremist groups and terrorists may be able to acquire relevant knowledge and develop capabilities to design, manufacture, synthesise and use such weapons with the help of retrieval-augmented generation capabilities.”
Retrieval-augmented generation is an AI technique in which a model first retrieves relevant information, from the internet or an up-to-date knowledge base, and then uses that material to generate a text response.
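The retrieve-then-generate pattern described above can be sketched in a few lines. The toy knowledge base, the word-overlap scoring, and the `generate()` placeholder below are illustrative assumptions only; real systems retrieve from the live web or a vector database and pass the results to a large language model.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Everything here is a stand-in: a toy in-memory knowledge base,
# naive keyword-overlap retrieval, and a placeholder "generator".

KNOWLEDGE_BASE = [
    "Retrieval-augmented generation combines document retrieval with text generation.",
    "A knowledge base can be kept up to date independently of the model's training data.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query; return the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    """Placeholder for a language model: fold retrieved context into the reply."""
    return f"Q: {query}\nContext: {' '.join(context)}"

answer = generate(
    "What is retrieval-augmented generation?",
    retrieve("retrieval-augmented generation", KNOWLEDGE_BASE),
)
print(answer)
```

The point of the pattern is the two-step flow: retrieval supplies current, specific material that the model's training data may lack, and generation composes the answer around it.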
“This would render existing control systems ineffective and intensify threats to global and regional peace and security,” it said.