The Biden administration’s Commerce Department is taking steps to tighten restrictions on the export of artificial intelligence (AI) models, both open- and closed-source. The move aims to shield US interests and AI technology from countries such as Russia and China, and extends earlier measures that limited Chinese access to advanced computer chips.
According to Reuters, the initiative focuses on building protective barriers around the core software of large language models, such as the one that powers ChatGPT. Researchers in both the private sector and government have warned that US adversaries could use this technology for aggressive cyber attacks or the development of biological weapons. The Chinese Embassy, however, has opposed the initiative, calling it unilateral bullying and economic coercion.
One threat the US is particularly concerned about is the use of deepfakes as a disinformation weapon. Deepfakes are realistic yet fabricated videos created with AI tools. While such media has existed for several years, advances in generative AI have made it far easier for anyone, including rogue actors, to produce this content and use it to manipulate public opinion on sensitive issues, especially during election campaigns. Social media platforms like YouTube, Facebook, and Twitter have already implemented measures to combat deepfakes, but the tactics used to create and distribute them are constantly evolving. Even tools offered by companies like Microsoft and OpenAI can be exploited to create and spread disinformation.
A more significant concern is the potential for AI models to leak information that could aid the development of biological weapons. Researchers at the Rand Corporation and Gryphon Scientific have shown how large language models can generate expert knowledge, including at a doctoral level, that could help in the creation of viruses with pandemic potential. Such information falling into the wrong hands would pose a serious threat.
The Department of Homeland Security has also raised concerns about the use of AI in cyber attacks on critical infrastructure such as railways and pipelines. It believes AI could enable new tools for larger-scale, more complex cyber attacks executed at a faster pace. China, meanwhile, is reportedly developing AI software for malware attacks as well as technologies that could undermine other nations’ cyber defenses.
In February, Microsoft released a report identifying cyber groups affiliated with the military intelligence services of Russia, North Korea, and China, as well as Iran’s Revolutionary Guards, that were using large language models to enhance their hacking campaigns. In response, Microsoft banned state-backed cyber groups from using its AI products and services. More recently, a group of lawmakers proposed a bill to further regulate the export of AI models and keep them out of the hands of potential adversaries.
Experts in the field believe that while Washington aims to address the risks associated with AI, it must avoid overly burdensome regulation that could stifle innovation. They argue that overly strict rules would create an opening for foreign competitors and undercut beneficial applications in infrastructure, national security, and drug discovery. The goal is to strike a balance between fostering innovation and managing the risks posed by AI technology.