Chinese research institutions tied to the People's Liberation Army (PLA) have reportedly developed a military-focused AI tool using Meta's publicly available Llama model. This development, detailed in academic papers and reviewed by analysts, demonstrates how Meta’s AI technology is being adapted for potential military applications, despite the company's restrictions on such uses.
In a June paper, six researchers from three institutions, including two associated with the PLA’s Academy of Military Science, revealed that they used an early version of Meta’s Llama model as the foundation for a new AI tool, named "ChatBIT." The tool was designed specifically for gathering intelligence and providing operational decision-making support for military purposes. According to the researchers, ChatBIT outperformed some other AI models, reportedly reaching roughly 90% of the performance of OpenAI's GPT-4. However, the paper did not clarify how this performance was measured or whether the tool has been deployed in any real-world military operations.
This marks the first concrete evidence that Chinese military experts are exploring open-source large language models (LLMs) like Meta’s Llama for defense purposes, according to Sunny Cheung, an expert on China’s emerging technologies. The use of these models is significant because Meta, which released its first Llama model in February 2023, has promoted the open release of its AI models under specific guidelines, including a prohibition on military or espionage uses. Despite these restrictions, enforcing them is challenging because the models are publicly accessible, allowing anyone to download and adapt them.
Meta responded to this development, reaffirming its policies against the unauthorized use of its AI for military purposes. Molly Montgomery, Meta's Director of Public Policy, emphasized that any use of their models by the PLA is unauthorized and against their acceptable use policies. Although Meta implements measures to prevent misuse, the open nature of the technology limits the company’s ability to fully control how it is employed.
The Chinese researchers involved in developing ChatBIT include experts from the PLA’s Military Science Information Research Center, the Beijing Institute of Technology, and Minzu University. They envision future versions of ChatBIT being used not only for intelligence analysis but also for strategic planning, simulation training, and command decision-making.
This research comes amid increasing concerns in the U.S. about the security risks posed by open-source AI models. President Joe Biden's administration has been actively addressing these concerns, with the president signing an executive order in October 2023 to manage AI developments, highlighting the potential benefits but also noting significant risks. The U.S. government is also finalizing rules aimed at curbing American investment in Chinese technology sectors, especially those that could pose national security threats.
The Pentagon has acknowledged the double-edged nature of open-source AI, recognizing both its advantages and its risks. Meanwhile, China’s progress in AI development continues to raise alarms. The country has invested heavily in AI research, narrowing the technological gap with the U.S. Reports suggest that Chinese institutions have used Western-developed AI models, like Llama, for domestic security and military advancements, including in areas such as electronic warfare and intelligence-driven policing.
The growing collaboration between Chinese and Western scientists in AI research makes it increasingly difficult to prevent China from accessing cutting-edge technological developments, a reality acknowledged by several experts.