On October 24, 2024, the Biden administration released a National Security Memorandum (NSM) titled "Memorandum on Advancing the United States' Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence." Writing this memorandum was a stated requirement of the administration's October 2023 AI Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
As the lengthy title suggests, this document covers a diverse set of issues. At nearly 40 pages, it is by far the most comprehensive articulation yet of United States national security strategy and policy toward artificial intelligence (AI). A closely related companion document, the Framework to Advance AI Governance and Risk Management in National Security, was published on the same day.
Q1: What aspects of AI is the NSM focused on?
A1: For most of the past decade, the AI and national security policy community was focused on deep learning, a class of AI technology that has been booming since 2012. Deep learning has powered significant AI-enabled applications such as face recognition, voice recognition, autonomous systems, and recommendation engines, each of which has led to significant military and intelligence applications. The 2024 AI NSM, unlike the AI executive order, largely ignores the AI technologies developed and deployed in the 2012–2022 timeframe. Instead, the NSM is squarely concerned with frontier AI models, which exploded in significance after the release of ChatGPT by OpenAI in 2022.
While all frontier AI models—such as those that power OpenAI's ChatGPT, Anthropic's Claude, or Google's Gemini—continue to be based on deep learning approaches, frontier models differ from earlier deep learning models in that they are highly capable across a much more diverse set of applications. The previous generation of AI systems, especially those based on supervised deep learning, tended to be far more application-specific.
The AI NSM formally defines frontier models as "a general-purpose AI system near the cutting-edge of performance, as measured by widely accepted publicly available benchmarks, or similar assessments of reasoning, science, and overall capabilities." This is consistent with other recent Biden administration moves. Elizabeth Kelly, director of the U.S. AI Safety Institute (AISI), stated in an interview at CSIS that AISI is specifically focused on advanced AI and especially frontier models.
Section 1 of the NSM articulates why the Biden administration views frontier AI technology as such a pressing national security priority:
"Recent innovations have spurred not only an increase in AI use throughout society, but also a paradigm shift within the AI field . . . This trend is most evident with the rise of large language models, but it extends to a broader class of increasingly general-purpose and computationally intensive systems. The United States Government must urgently consider how this current AI paradigm specifically could transform the national security mission."
Q2: What is the historical precedent for a document like this?
A2: In his October 24 speech announcing the AI NSM, Jake Sullivan, the assistant to the president for national security affairs (APNSA), explicitly compared the current AI revolution to earlier transformative national security technologies such as nuclear and space. U.S. government officials told CSIS that some of the significant early U.S. national security strategy documents for those technologies served as a direct inspiration for the creation of the AI NSM. For example, NSC-68, published in 1950 at a critical moment in the early Cold War, recommended a massive buildup of nuclear and conventional arms in response to the Soviet Union's nuclear program. The analogy is imperfect since the AI NSM is not advocating a massive arms buildup, but the comparison does helpfully illustrate that the Biden administration views the AI NSM as a landmark document articulating a comprehensive strategy toward a transformative technology.
Q3: Who is the intended audience for this document?
A3: In June 2024, Maher Bitar, deputy assistant to the president and a leader on the White House National Security Council staff, noted that the AI NSM would be "speaking to many audiences at once, but there will be a portion that will remain classified as well."
Among the Biden administration's "many audiences," there are four key ones that it likely had in mind:
- Audience 1—U.S. federal agencies and their staff: The AI NSM sets out U.S. national security policy toward frontier AI and provides specific taskings for many different federal agencies in executing that policy. Providing policy clarity and taskings for federal agencies was the primary stated purpose of the AI NSM, as laid out in the October 2023 AI executive order.
- Audience 2—U.S. AI companies: As APNSA Sullivan said, "Private companies are leading the development of AI, not the government." For private industry, the AI NSM clarifies what the Biden administration sees as the proper roles of the public and private sectors in advancing U.S. national security interests, including what the U.S. government must do to support private sector AI leadership and what the government wants and needs from the private sector in the name of national security.
- Audience 3—U.S. allies: This is far from the first major policy move that the Biden administration has made at the intersection of AI and national security. For example, in October 2022, the Biden administration unveiled comprehensive new controls on exports to China of the semiconductor technologies that enable advanced AI systems. At the time, the written justification for the export control policy focused on the use of advanced AI chips by China's military, especially related to weapons of mass destruction. However, as a standalone measure, this justification struck some allies as incomplete for such a dramatic reversal of 25 years of U.S. trade and technology policy toward China. With the AI NSM, a larger and largely unspoken (at least publicly) justification for the policy—the critical strategic importance of frontier AI systems—is now clear. U.S. allies now have a canonical reference point for understanding why the United States sees leadership in frontier AI systems as essential and why it was willing to take extraordinary measures to preserve that leadership.
Moreover, U.S. allies and partners are trying to understand what role they can play in the U.S.-led AI ecosystem. For example, the United States and the United Arab Emirates (UAE) struck a deal related to building significant AI-related data centers and energy infrastructure in that country. The need for a government-to-government deal was motivated by a private-sector agreement between Microsoft and G42. Both deals have attracted controversy among some U.S. national security leaders, who questioned why the United States would encourage a strategic technology to spread abroad rather than remain in the United States. The NSM does not comment on the UAE deal specifically, but it does seek to reassure U.S. allies and partners that they stand to benefit from the U.S. strategy. Specifically, the NSM states, "The United States' network of allies and partners confers significant advantages over competitors. Consistent with the 2022 National Security Strategy or any successor strategies, the United States Government must invest in and proactively enable the co-development and co-deployment of AI capabilities with select allies and partners."
- Audience 4—U.S. adversaries and strategic competitors: Analysts in China and Russia will undoubtedly study the NSM closely. While China is never mentioned by name, it is by far the United States' most formidable competitor for global AI leadership. Much of the document details how the United States intends to outcompete China, but some provisions can be seen as complementing earlier and potential future diplomatic overtures. For example, the Framework to Advance AI Governance and Risk Management in National Security, which is a companion document to the AI NSM and frequently referenced in it, includes a prohibition on using AI to "Remove a human in the loop for actions critical to informing and executing decisions by the President to initiate or terminate nuclear weapons employment." This is a topic that was likely raised at the May 2024 U.S.-China AI safety meeting in Geneva.
Q4: What are the primary objectives of the NSM, and how does it pursue them?
A4: The NSM lays out three high-level policy objectives for the U.S. national security community. This section summarizes each objective and highlights some (far from all) of the key agency taskings tied to those objectives.
Objective 1—Maintain U.S. leadership in the development of advanced AI systems: The AI NSM outlines a series of measures designed to ensure the United States retains its position as the global leader of the AI ecosystem.
- AI Talent and Immigration: The AI NSM sees a larger and superior pool of AI talent as a critical U.S. strategic advantage. It focuses especially on actions to preserve and expand the United States' strength in attracting leading AI talent from around the world.
The document states, "It is the policy of the United States Government that advancing the lawful ability of noncitizens highly skilled in AI and related fields to enter and work in the United States constitutes a national security priority." To support this effort, the AI NSM directs federal agencies to "use all available legal authorities to assist in attracting and rapidly bringing to the United States individuals with relevant technical expertise who would improve United States competitiveness in AI and related fields, such as semiconductor design and production."
Additionally, the AI NSM directs the White House Council of Economic Advisers to conduct a study on the state of AI talent in the United States and abroad and a separate study on the "relative competitive advantage of the United States private sector AI ecosystem."
- Energy and Infrastructure: APNSA Sullivan stated, "One thing is for certain: If we don't rapidly build out this [energy and data center] infrastructure in the next few years, adding tens or even hundreds of gigawatts of clean power to the grid, we will risk falling behind." Total U.S. electrical generation capacity today is only about 1,250 gigawatts. Thus, if Sullivan's more bullish "hundreds of gigawatts" scenario occurs, AI could represent as much as 25 percent of total U.S. electricity consumption. Such a massive expansion of infrastructure in such a short period of time has not occurred in the United States in many decades.
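The "25 percent" figure above can be sanity-checked with back-of-envelope arithmetic. The sketch below is illustrative only: it assumes the bullish "hundreds of gigawatts" scenario means roughly 300 GW of new AI load (a value chosen for illustration, not one stated in the NSM), and it uses installed capacity as a rough proxy for consumption share.

```python
# Back-of-envelope check of the "25 percent" claim.
# Assumption (illustrative, not from the NSM): the bullish
# "hundreds of gigawatts" scenario is read as roughly 300 GW.
TOTAL_US_CAPACITY_GW = 1_250  # total U.S. generating capacity cited above
AI_BUILDOUT_GW = 300          # assumed bullish AI buildout

share = AI_BUILDOUT_GW / TOTAL_US_CAPACITY_GW
print(f"AI share of capacity: {share:.0%}")  # prints "AI share of capacity: 24%"
```

Under these assumptions the share comes out to roughly a quarter of today's capacity, consistent with the article's "as much as 25 percent" framing.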
Without new budgetary legislation from Congress, the executive branch cannot do much to increase the funding available for a massive AI infrastructure buildout. Unsurprisingly, the document is therefore focused on the parts of the challenge that are closer to executive branch competencies.
This includes identifying the obstacles to rapid construction and attempting to mitigate or eliminate them. The document directs the White House Chief of Staff, the Department of Energy (DOE), and other relevant agencies to "coordinate efforts to streamline permitting, approvals, and incentives for the construction of AI-enabling infrastructure, as well as surrounding assets supporting the resilient operation of this infrastructure, such as clean energy generation, power transmission lines, and high-capacity fiber data links."
The NSM's use of the word "coordinate" is a tacit acknowledgment that much of the critical authority for budgeting and reforming regulations lies outside the executive branch, with Congress and state and local governments. In this area, the White House can seek to lead and persuade, but it has limited ability to command. However, the White House clearly views this as a top political priority, as evidenced by the fact that the effort is tasked to the White House Chief of Staff, not merely the Department of Energy.
- Counterintelligence: White House leaders know that it would make little sense for the United States to spend tens or hundreds of billions of dollars developing frontier AI models if China can steal them in a comparatively inexpensive espionage campaign. The AI NSM thus extends U.S. counterintelligence activities to the key players in the U.S. AI industry. APNSA Sullivan said the inclusion of AI infrastructure and intellectual property among official counterintelligence priorities would mean "more resources and more personnel" devoted to countering adversaries' theft, espionage, and disruption.
In the recent past, some leading U.S. AI companies have been the victims of devastating cyberattacks, and not just from nation-states. For example, WIRED reported in 2022 that a group of cybercriminals breached Nvidia and stole "a significant amount of sensitive information about the designs of Nvidia graphics cards, source code for an Nvidia AI rendering system called DLSS, and the usernames and passwords of more than 71,000 Nvidia employees." Nvidia claims to have since significantly upgraded its cybersecurity, though it is far from clear that the U.S. AI industry as a whole is prepared. The AI NSM gets the U.S. national security community involved in protecting commercial AI companies and securing their sensitive technology.
Objective 2—Accelerate adoption of frontier AI systems across U.S. national security agencies: APNSA Sullivan opened his remarks by expressing confidence about the current state of U.S. leadership in artificial intelligence technology. However, he also expressed concern that this leadership was not being effectively harnessed for national security advantage, stating, "We could have the best team but lose because we didn't put it on the field." Accordingly, the AI NSM mandates that national security agencies "act decisively to enable the effective and responsible use of AI in furtherance of its national security mission." Some of the key actions include
- Directing agencies to reform their human capital and hiring practices to better attract and retain AI talent;
- Directing agencies to reform their acquisition and contracting practices to make it easier for private sector AI companies to contribute to the national security mission;
- Directing the Department of Defense (DOD) and the Intelligence Community (IC) to examine how current policies and procedures related to existing legal obligations (e.g., privacy and civil liberties) can be revised to "enable the effective and responsible use of AI"; and
- Directing federal agencies to examine how current policies and procedures related to cybersecurity can likewise be revised to accelerate adoption (without exacerbating cybersecurity risk).
While the White House deserves credit for correctly identifying the critical areas and tasking the agencies with tackling them, it should be noted that each of these areas has long been identified as a key barrier to the adoption of AI, and agencies have struggled in the past to meaningfully reform. With or without an AI NSM, this is not an easy undertaking.
Objective 3—Develop robust governance frameworks to support U.S. national security: While the international AI community uses the word "governance" to mean many different things, often including many of the ideas covered by AI safety, the AI NSM addresses governance primarily in terms of who has authority to make decisions regarding the use of AI and what processes they use to make such decisions. To this end, the NSM tasked agencies with a range of governance actions.
Two especially noteworthy efforts include requiring nearly all national security agencies to designate a chief AI officer and directing the creation of an AI National Security Coordination Group consisting of the chief AI officers of the Department of State, DOD, Department of Justice, DOE, Department of Homeland Security, Office of Management and Budget, Office of the Director of National Intelligence, Central Intelligence Agency, Defense Intelligence Agency, National Security Agency, and National Geospatial-Intelligence Agency.
The AI NSM also commits the United States to work with international partners and institutions—such as the G7, the Organisation for Economic Co-operation and Development, and the United Nations—to advance international AI governance.
Many elements of the NSM related to Objective 3 directly refer to the companion Framework to Advance AI Governance and Risk Management in National Security (which this article will henceforth refer to as the National Security AI Governance Framework). The Biden administration sees these documents as two parts of a whole when it comes to the U.S. national security strategy for AI. The governance framework will be addressed in more detail in Q6.
Q5: What does the AI NSM mean for the future of AI safety and security?
A5: The NSM has a significant focus on AI safety and security initiatives. At first glance, this may seem at odds with the NSM's previously stated goal of accelerating the adoption and use of AI systems. However, APNSA Sullivan said the following about this apparent contradiction:
"Ensuring safety and trustworthiness will actually enable us to move faster, not slow us down. Put simply, uncertainty breeds caution. When we lack confidence about safety and reliability, we're slower to experiment, to adopt, to use new capabilities—and we just can't afford to do that in today's strategic landscape."
In other words, the absence of clear and comprehensive safety approaches impedes the national security community's ability to rapidly adopt frontier AI tools. There is an important additional mechanism, which APNSA Sullivan did not mention, by which codifying AI safety, security, and governance procedures can help government agencies accelerate AI adoption: clear guidance on prohibited AI use cases and on procedures for getting approval for high-risk AI applications will help allay government employees' concerns that their AI adoption efforts might be breaking regulations of which they are not even aware. In large government bureaucracies, this career risk aversion is a common phenomenon.
The AI NSM's safety-related sections largely focus on the work of the U.S. AISI and thus provide the clearest articulation to date of the relationship between AI safety and U.S. national security interests. One senior Biden administration official went so far as to say that,
"The NSM serves as a formal charter for the AI Safety Institute in the Department of Commerce, which we have created to be the primary port of call for U.S. AI developers. They have already issued guidance on safe, secure, and trustworthy AI development and have secured voluntary agreements with companies to test new AI systems before they are released to the public."
In terms of specific taskings, the AI NSM does the following:
- Designates the U.S. AISI as the "primary point of contact" in government for private sector AI companies on AI testing and evaluation activities.
- Directs the AISI to, within 180 days, "pursue voluntary preliminary testing of at least two frontier AI models prior to their public deployment or release to evaluate capabilities that might pose a threat to national security."
- Directs the AISI to, within 180 days, "issue guidance for AI developers on how to test, evaluate, and manage risks to safety, security, and trustworthiness arising from dual-use foundation models."
- Directs the AISI to begin robust collaboration with the AI Security Center at the National Security Agency to "develop the capability to perform rapid systematic classified testing of AI models' capacity to detect, generate, and/or exacerbate offensive cyber threats."
Q6: What are the primary objectives of the National Security AI Governance Framework?
A6: The National Security AI Governance Framework outlines four key pillars that federal agencies must use as a starting point for their AI governance decisions. The document also includes further taskings for agencies not included in the NSM. The stated goal of this framework is to "support and enable the U.S. Government to continue taking active steps to uphold human rights, civil rights, civil liberties, privacy, and safety; ensure that AI is used in a manner consistent with the President's authority as commander-in-chief to decide when to order military operations in the nation's defense; and ensure that military use of AI capabilities is accountable, including through such use during military operations within a responsible human chain of command and control."
Regarding the last item, one government official told CSIS that the framework solidified the U.S. policy that nations are responsible for, and commanders are accountable for, the actions of their military and intelligence organizations—whether or not AI is playing an important role in those actions.
Of special note, while the AI NSM is focused on frontier AI systems, the National Security AI Governance Framework applies to all AI systems, which in the U.S. government context generally means systems based upon machine learning technology.
The framework's four pillars are listed below, along with a non-exhaustive description of key provisions:
- AI Use Restrictions: This section outlines applications of AI systems that are prohibited, in addition to defining what AI use cases qualify as "high-impact." With respect to autonomous and semiautonomous weapons systems, the document defers to Department of Defense Directive 3000.09.
- Minimum Risk Management Practices for High-Impact and Federal Personnel-Impacting AI Uses: This section lays out the minimum baseline safeguards agencies must put in place for high-impact AI uses. It includes a comprehensive list of risk management practices agencies must adhere to within 180 days of the framework's release.
- Cataloguing and Monitoring AI Use: This section sets the inventory, data management, and oversight requirements federal agencies must follow. It includes a lengthy set of chief AI officer skillsets and responsibilities. The AI NSM directed that "each covered agency shall have a Chief AI Officer who holds primary responsibility within that agency."
- Training and Accountability: This section tasks agencies with creating "standardized training requirements and guidelines" for officials who must interact with AI systems, in addition to updating their whistleblower protection policies for personnel who use AI systems in national security contexts.
One of the reasons for separating the National Security AI Governance Framework from the AI NSM is that the two documents have separate processes for being updated, with the AI Governance Framework being easier to amend.
Q7: Do the process requirements of the National Security AI Governance Framework apply universally?
A7: No. In addition to the framework's limited focus on "prohibited" and "high-risk" AI use cases, the framework also creates a new waiver process whereby the chief AI officer of any federal agency can authorize bypassing risk management practices in the event that they "would create an unacceptable impediment to critical agency operations or exceptionally grave damage to national security," among other conditions.
The framework includes measures to ensure that these waivers are not used excessively and without good reason. For example, chief AI officers cannot delegate waiver authority, must review all issued waivers annually, and must "report to agency leadership immediately, and to the Department Head and APNSA within three days, upon granting or revoking any waiver."
Despite these limitations on waiver usage, the existence of the waiver process in and of itself represents a significant elevation in power and authority for every chief AI officer throughout the U.S. national security community.
Q8: What might the upcoming U.S. presidential election mean for the implementation of the AI NSM?
A8: The NSM is a landmark national security strategy document and outlines an ambitious and comprehensive vision for AI's role in national security. However, the degree to which it is implemented will be affected by the election simply because—regardless of the electoral outcome—a new president will be in office when many of the tasking deadlines occur.
While there is solid reason to believe that a Kamala Harris administration would continue most of the Biden administration's major AI policy efforts, a second Donald Trump administration could represent a dramatic departure. For example, the Republican Party policy platform explicitly endorses repealing the Biden administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which mandated the creation of the NSM.
However, there are provisions of the AI NSM that are consistent with policy positions taken by former president Trump. In September 2024, for example, Trump said that he wanted to "quickly double our electrical capacity, which will be needed to compete with China and other countries on artificial intelligence." A hypothetical Trump administration may, therefore, seek to cancel only specific provisions of the AI NSM rather than the document as a whole.
Gregory C. Allen is the director of the Wadhwani AI Middle on the Middle for Strategic and Worldwide Research in Washington, D.C. Isaac Goldston is a analysis affiliate with the Wadhwani AI Middle.
The authors want to thank Samantha Gonzalez for her analysis assist.