
Sec.
9401. Definitions.

SUBCHAPTER I—NATIONAL ARTIFICIAL INTELLIGENCE INITIATIVE

9411. National Artificial Intelligence Initiative.
9412. National Artificial Intelligence Initiative Office.
9413. Coordination by Interagency Committee.
9414. National Artificial Intelligence Advisory Committee.
9415. National AI Research Resource Task Force.

SUBCHAPTER II—NATIONAL ARTIFICIAL INTELLIGENCE RESEARCH INSTITUTES

9431. National Artificial Intelligence Research Institutes.

SUBCHAPTER III—DEPARTMENT OF COMMERCE ARTIFICIAL INTELLIGENCE ACTIVITIES

9441. Stakeholder outreach.
9442. National Oceanic and Atmospheric Administration Artificial Intelligence Center.

SUBCHAPTER IV—NATIONAL SCIENCE FOUNDATION ARTIFICIAL INTELLIGENCE ACTIVITIES

9451. Artificial intelligence research and education.

SUBCHAPTER V—DEPARTMENT OF ENERGY ARTIFICIAL INTELLIGENCE RESEARCH PROGRAM

9461. Department of Energy artificial intelligence research program.
9462. Veterans’ health initiative.

TITLE 15—COMMERCE AND TRADE

CHAPTER 119—NATIONAL ARTIFICIAL INTELLIGENCE INITIATIVE

§ 9401. Definitions

In this chapter:

(1) Advisory Committee.—The term “Advisory Committee” means the National Artificial Intelligence Advisory Committee established under section 9414(a) of this title.

(2) Agency head.—The term “agency head” means the head of any Executive agency (as defined in section 105 of title 5).

(3) Artificial intelligence.—The term “artificial intelligence” means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—

(A) perceive real and virtual environments;

(B) abstract such perceptions into models through analysis in an automated manner; and

(C) use model inference to formulate options for information or action.
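The three-part structure of this definition (perceive, abstract into a model, infer options for action) can be illustrated with a minimal program. This sketch is purely illustrative and not part of the statute; every name in it (the environment readings, the running-average "model", the objective threshold) is a hypothetical example of a machine-based system that meets the definition.

```python
# Illustrative only: a trivial "machine-based system" matching the
# statutory three-part definition. All names here are hypothetical.

def perceive(environment):
    # (A) perceive a (virtual) environment: read raw observations
    return [reading for reading in environment]

def abstract(observations):
    # (B) abstract perceptions into a model via automated analysis:
    # here, the "model" is just the mean of the observations
    return sum(observations) / len(observations)

def infer(model, objective_threshold):
    # (C) use model inference to formulate an option for action,
    # given a human-defined objective (react when the mean is high)
    return "act" if model > objective_threshold else "wait"

environment = [3.0, 5.0, 4.0]            # a virtual environment
observations = perceive(environment)
model = abstract(observations)            # 4.0
decision = infer(model, objective_threshold=3.5)
print(decision)                           # "act", since 4.0 > 3.5
```

Even this toy pipeline exercises all three statutory elements, which is why the definition is often read as deliberately technology-neutral.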

(4) Community college.—The term “community college” means a public institution of higher education at which the highest degree that is predominantly awarded to students is an associate’s degree, including 2-year Tribal Colleges or Universities under section 1059c of title 20 and public 2-year State institutions of higher education.

(5) Initiative.—The term “Initiative” means the National Artificial Intelligence Initiative established under section 9411(a) of this title.

(6) Initiative Office.—The term “Initiative Office” means the National Artificial Intelligence Initiative Office established under section 9412(a) of this title.

(7) Institute.—The term “Institute” means an Artificial Intelligence Research Institute described in section 9431(b)(2) of this title.

(8) Institution of higher education.—The term “institution of higher education” has the meaning given the term in sections 1001 and 1002(c) of title 20.

(9) Interagency Committee.—The term “Interagency Committee” means the interagency committee established under section 9413(a) of this title.

(10) K-12 education.—The term “K-12 education” means elementary school and secondary school education provided by local educational agencies, as such agencies are defined in section 7801 of title 20.

(11) Machine learning.—The term “machine learning” means an application of artificial intelligence that is characterized by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.

(Pub. L. 116–283, div. E, § 5002, Jan. 1, 2021, 134 Stat. 4523.)
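The defining feature in paragraph (11) — improving on the basis of data rather than being explicitly programmed — can be illustrated with a minimal example. This sketch is not from the statute; it uses a one-parameter least-squares fit, a hypothetical stand-in for any system whose behavior is estimated from data.

```python
# Illustrative only: a minimal system that "learns from data" in the
# sense of definition (11), rather than being explicitly programmed.
# It estimates the slope w of a rule y = w * x from example pairs.

def fit_slope(xs, ys):
    # closed-form least squares for the one-parameter model y = w * x
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# training data produced by a rule (y = 2x) the program is never told
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = fit_slope(xs, ys)        # the learned parameter
print(w)                     # 2.0: recovered from data, not hard-coded
print(w * 5.0)               # 10.0: prediction for the unseen input x = 5
```

The contrast the definition draws is visible here: the rule `y = 2x` appears nowhere in the code, yet the system recovers and applies it from examples alone.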

Editorial Notes

References in Text

This chapter, referred to in text, was in the original “this division”, meaning div. E of Pub. L. 116–283, Jan. 1, 2021, 134 Stat. 4523, which is classified principally to this chapter. For complete classification of div. E to the Code, see Short Title note set out below and Tables.

Statutory Notes and Related Subsidiaries

Short Title

Pub. L. 116–283, div. E, § 5001, Jan. 1, 2021, 134 Stat. 4523, provided that: “This division [enacting this chapter and section 278h–1 of this title and amending sections 1862i and 1862n–1 of Title 42, The Public Health and Welfare] may be cited as the ‘National Artificial Intelligence Initiative Act of 2020’.”

Executive Documents

Ex. Ord. No. 14110. Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Ex. Ord. No. 14110, Oct. 30, 2023, 88 F.R. 75191, provided:

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:

Section 1. Purpose. Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society. My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.

In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built. I firmly believe that the power of our ideals; the foundations of our society; and the creativity, diversity, and decency of our people are the reasons that America thrived in past eras of rapid change. They are the reasons we will succeed again in this moment. We are more than capable of harnessing AI for justice, security, and opportunity for all.

Sec. 2. Policy and Principles. It is the policy of my Administration to advance and govern the development and use of AI in accordance with eight guiding principles and priorities.
When undertaking the actions set forth in this order, executive departments and agencies (agencies) shall, as appropriate and consistent with applicable law, adhere to these principles, while, as feasible, taking into account the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations:

(a) Artificial Intelligence must be safe and secure. Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use. It also requires addressing AI systems’ most pressing security risks—including with respect to biotechnology, cybersecurity, critical infrastructure, and other national security dangers—while navigating AI’s opacity and complexity. Testing and evaluations, including post-deployment performance monitoring, will help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner, and are compliant with applicable Federal laws and policies. Finally, my Administration will help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not. These actions will provide a vital foundation for an approach that addresses AI’s risks without unduly reducing its benefits.

(b) Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges. This effort requires investments in AI-related education, training, development, research, and capacity, while simultaneously tackling novel intellectual property (IP) questions and other problems to protect inventors and creators.
Across the Federal Government, my Administration will support programs to provide Americans the skills they need for the age of AI and attract the world’s AI talent to our shores—not just to study, but to stay—so that the companies and technologies of the future are made in America. The Federal Government will promote a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation. Doing so requires stopping unlawful collusion and addressing risks from dominant firms’ use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors, and it requires supporting a marketplace that harnesses the benefits of AI to provide new opportunities for small businesses, workers, and entrepreneurs.

(c) The responsible development and use of AI require a commitment to supporting American workers. As AI creates new jobs and industries, all workers need a seat at the table, including through collective bargaining, to ensure that they benefit from these opportunities. My Administration will seek to adapt job training and education to support a diverse workforce and help provide access to opportunities that AI creates. In the workplace itself, AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions. The critical next steps in AI development should be built on the views of workers, labor unions, educators, and employers to support responsible uses of AI that improve workers’ lives, positively augment human work, and help all people safely enjoy the gains and opportunities from technological innovation.

(d) Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights.
My Administration cannot—and will not—tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice. From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life. Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms. My Administration will build on the important steps that have already been taken—such as issuing the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and Executive Order 14091 of February 16, 2023 (Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government) [5 U.S.C. 601 note]—in seeking to ensure that AI complies with all Federal laws and to promote robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation. It is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse, including in the justice system and the Federal Government. Only then can Americans trust AI to advance civil rights, civil liberties, equity, and justice for all.

(e) The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected. Use of new technologies, such as AI, does not excuse organizations from their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change. The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI.
Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights. At the same time, my Administration will promote responsible uses of AI that protect consumers, raise the quality of goods and services, lower their prices, or expand selection and availability.

(f) Americans’ privacy and civil liberties must be protected as AI continues advancing. Artificial Intelligence is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires. Artificial Intelligence’s capabilities in these areas can increase the risk that personal data could be exploited and exposed. To combat this risk, the Federal Government will ensure that the collection, use, and retention of data is lawful, is secure, and mitigates privacy and confidentiality risks. Agencies shall use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the broader legal and societal risks—including the chilling of First Amendment rights—that result from the improper collection and use of people’s data.

(g) It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans. These efforts start with people, our Nation’s greatest asset. My Administration will take steps to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines—including technology, policy, managerial, procurement, regulatory, ethical, governance, and legal fields—and ease AI professionals’ path into the Federal Government to help harness and govern AI.
The Federal Government will work to ensure that all members of its workforce receive adequate training to understand the benefits, risks, and limitations of AI for their job functions, and to modernize Federal Government information technology infrastructure, remove bureaucratic obstacles, and ensure that safe and rights-respecting AI is adopted, deployed, and used.

(h) The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change. This leadership is not measured solely by the technological advancements our country makes. Effective leadership also means pioneering those systems and safeguards needed to deploy technology responsibly—and building and promoting those safeguards with the rest of the world. My Administration will engage with international allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges. The Federal Government will seek to promote responsible AI safety and security principles and actions with other nations, including our competitors, while leading key global conversations and collaborations to ensure that AI benefits the whole world, rather than exacerbating inequities, threatening human rights, and causing other harms.

Sec. 3. Definitions. For purposes of this order:

(a) The term “agency” means each agency described in 44 U.S.C. 3502(1), except for the independent regulatory agencies described in 44 U.S.C. 3502(5).

(b) The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3) [section 5002(3) of Pub. L. 116–283]: a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.
Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

(c) The term “AI model” means a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.

(d) The term “AI red-teaming” means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. Artificial Intelligence red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.

(e) The term “AI system” means any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.

(f) The term “commercially available information” means any information or data about an individual or group of individuals, including an individual’s or group of individuals’ device or location, that is made available or obtainable and sold, leased, or licensed to the general public or to governmental or non-governmental entities.

(g) The term “crime forecasting” means the use of analytical techniques to attempt to predict future crimes or crime-related information. It can include machine-generated predictions that use algorithms to analyze large volumes of data, as well as other forecasts that are generated without machines and based on statistics, such as historical crime statistics.
(h) The term “critical and emerging technologies” means those technologies listed in the February 2022 Critical and Emerging Technologies List Update issued by the National Science and Technology Council (NSTC), as amended by subsequent updates to the list issued by the NSTC.

(i) The term “critical infrastructure” has the meaning set forth in section 1016(e) of the USA PATRIOT Act of 2001, 42 U.S.C. 5195c(e).

(j) The term “differential-privacy guarantee” means protections that allow information about a group to be shared while provably limiting the improper access, use, or disclosure of personal information about particular entities.

(k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by: (i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons; (ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or (iii) permitting the evasion of human control or oversight through means of deception or obfuscation. Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.

(l) The term “Federal law enforcement agency” has the meaning set forth in section 21(a) of Executive Order 14074 of May 25, 2022 (Advancing Effective, Accountable Policing and Criminal Justice Practices To Enhance Public Trust and Public Safety) [34 U.S.C. 10101 note prec.].

(m) The term “floating-point operation” means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base.

(n) The term “foreign person” has the meaning set forth in section 5(c) of Executive Order 13984 of January 19, 2021 (Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities) [15 U.S.C. 7421 note].

(o) The terms “foreign reseller” and “foreign reseller of United States Infrastructure as a Service Products” mean a foreign person who has established an Infrastructure as a Service Account to provide Infrastructure as a Service Products subsequently, in whole or in part, to a third party.

(p) The term “generative AI” means the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.

(q) The terms “Infrastructure as a Service Product,” “United States Infrastructure as a Service Product,” “United States Infrastructure as a Service Provider,” and “Infrastructure as a Service Account” each have the respective meanings given to those terms in section 5 of Executive Order 13984.

(r) The term “integer operation” means any mathematical operation or assignment involving only integers, or whole numbers expressed without a decimal point.

(s) The term “Intelligence Community” has the meaning given to that term in section 3.5(h) of Executive Order 12333 of December 4, 1981 (United States Intelligence Activities) [50 U.S.C. 3001 note], as amended.

(t) The term “machine learning” means a set of techniques that can be used to train AI algorithms to improve performance at a task based on data.
(u) The term “model weight” means a numerical parameter within an AI model that helps determine the model’s outputs in response to inputs.

(v) The term “national security system” has the meaning set forth in 44 U.S.C. 3552(b)(6).

(w) The term “omics” means biomolecules, including nucleic acids, proteins, and metabolites, that make up a cell or cellular system.

(x) The term “Open RAN” means the Open Radio Access Network approach to telecommunications-network standardization adopted by the O-RAN Alliance, Third Generation Partnership Project, or any similar set of published open standards for multi-vendor network equipment interoperability.

(y) The term “personally identifiable information” has the meaning set forth in Office of Management and Budget (OMB) Circular No. A–130.

(z) The term “privacy-enhancing technology” means any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality. These technological means may include secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy, and synthetic-data-generation tools. This is also sometimes referred to as “privacy-preserving technology.”

(aa) The term “privacy impact assessment” has the meaning set forth in OMB Circular No. A–130.

(bb) The term “Sector Risk Management Agency” has the meaning set forth in 6 U.S.C. 650(23) [section 2200(23) of Pub. L. 107–296].

(cc) The term “self-healing network” means a telecommunications network that automatically diagnoses and addresses network issues to permit self-restoration.

(dd) The term “synthetic biology” means a field of science that involves redesigning organisms, or the biomolecules of organisms, at the genetic level to give them new characteristics.
Synthetic nucleic acids are a type of biomolecule redesigned through synthetic-biology methods.

(ee) The term “synthetic content” means information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI.

(ff) The term “testbed” means a facility or mechanism equipped for conducting rigorous, transparent, and replicable testing of tools and technologies, including AI and PETs, to help evaluate the functionality, usability, and performance of those tools or technologies.

(gg) The term “watermarking” means the act of embedding information, which is typically difficult to remove, into outputs created by AI—including into outputs such as photos, videos, audio clips, or text—for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.

Sec. 4. Ensuring the Safety and Security of AI Technology.

4.1. Developing Guidelines, Standards, and Best Practices for AI Safety and Security. (a) Within 270 days of the date of this order [Oct. 30, 2023], to help ensure the development of safe, secure, and trustworthy AI systems, the Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (NIST), in coordination with the Secretary of Energy, the Secretary of Homeland Security, and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall:

(i) Establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems, including: (A) developing a companion resource to the AI Risk Management Framework, NIST AI 100–1, for generative AI; (B) developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models; and (C) launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.

(ii) Establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems. These efforts shall include: (A) coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models; and (B) in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as to support the design, development, and deployment of associated PETs, consistent with section 9(b) of this order.
(b) Within 270 days of the date of this order, to understand and mitigate AI security risks, the Secretary of Energy, in coordination with the heads of other Sector Risk Management Agencies (SRMAs) as the Secretary of Energy may deem appropriate, shall develop and, to the extent permitted by law and available appropriations, implement a plan for developing the Department of Energy’s AI model evaluation tools and AI testbeds. The Secretary shall undertake this work using existing solutions where possible, and shall develop these tools and AI testbeds to be capable of assessing near-term extrapolations of AI systems’ capabilities. At a minimum, the Secretary shall develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards. The Secretary shall do this work solely for the purposes of guarding against these threats, and shall also develop model guardrails that reduce such risks. The Secretary shall, as appropriate, consult with private AI laboratories, academia, civil society, and third-party evaluators, and shall use existing solutions.

4.2. Ensuring Safe and Reliable AI. (a) Within 90 days of the date of this order, to ensure and verify the continuous availability of safe, reliable, and effective AI in accordance with the Defense Production Act [of 1950], as amended, 50 U.S.C. 4501 et seq., including for the national defense and the protection of critical infrastructure, the Secretary of Commerce shall require:

(i) Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding the following: (A) any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats; (B) the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; and (C) the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST pursuant to subsection 4.1(a)(ii) of this section, and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security.
Prior to the development of guidance on red-team testing standards by NIST pursuant to subsection 4.1(a)(ii) of this section, this description shall include the results of any red-team testing that the company has conducted relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives; and

(ii) Companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.

(b) The Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence, shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements of subsection 4.2(a) of this section.
Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements for:

(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and

(ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.

(c) Because I find that additional steps must be taken to deal with the national emergency related to significant malicious cyber-enabled activities declared in Executive Order 13694 of April 1, 2015 (Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities) [listed in a table under 50 U.S.C. 1701], as amended by Executive Order 13757 of December 28, 2016 (Taking Additional Steps to Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities), and further amended by Executive Order 13984, to address the use of United States Infrastructure as a Service (IaaS) Products by foreign malicious cyber actors, including to impose additional record-keeping obligations with respect to foreign transactions and to assist in the investigation of transactions involving foreign malicious cyber actors, I hereby direct the Secretary of Commerce, within 90 days of the date of this order, to:

(i) Propose regulations that require United States IaaS Providers to submit a report to the Secretary of Commerce when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a “training run”).
Such reports shall include, at a minimum, the identity of the foreign person and the existence of any training run of an AI model meeting the criteria set forth in this section, or other criteria defined by the Secretary in regulations, as well as any additional information identified by the Secretary. (ii) Include a requirement in the regulations proposed pursuant to subsection 4.2(c)(i) of this section that United States IaaS Providers prohibit any foreign reseller of their United States IaaS Product from providing those products unless such foreign reseller submits to the United States IaaS Provider a report, which the United States IaaS Provider must provide to the Secretary of Commerce, detailing each instance in which a foreign person transacts with the foreign reseller to use the United States IaaS Product to conduct a training run described in subsection 4.2(c)(i) of this section. Such reports shall include, at a minimum, the information specified in subsection 4.2(c)(i) of this section as well as any additional information identified by the Secretary. (iii) Determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate. Until the Secretary makes such a determination, a model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10^26 integer or floating-point operations and is trained on a computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10^20 integer or floating-point operations per second for training AI. 
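For concreteness, the interim reporting thresholds above (subsection 4.2(b) of the order, echoed as the interim dual-use criteria in subsection 4.2(c)(iii)) can be sketched as a simple check. The function and parameter names below are illustrative only and are not drawn from the order or any implementing regulation; only the numeric thresholds come from the text.

```python
# Illustrative check of the order's interim reporting thresholds.
# All identifiers are hypothetical; the numbers are from the order's text.

def model_must_report(training_flops: float, primarily_bio_sequence_data: bool) -> bool:
    """Subsection 4.2(b)(i): models trained with more than 10^26 operations,
    or more than 10^23 operations when trained primarily on biological
    sequence data."""
    if primarily_bio_sequence_data:
        return training_flops > 1e23
    return training_flops > 1e26

def cluster_must_report(colocated_single_datacenter: bool,
                        networking_gbit_s: float,
                        peak_flop_per_s: float) -> bool:
    """Subsection 4.2(b)(ii): machines co-located in a single datacenter,
    over 100 Gbit/s data center networking, and a theoretical maximum
    capacity of 10^20 operations per second."""
    return (colocated_single_datacenter
            and networking_gbit_s > 100
            and peak_flop_per_s >= 1e20)
```

Because the same 10^26 figure and cluster criteria reappear in subsection 4.2(c)(iii) as the interim definition of a model with potential malicious cyber capabilities, a single check of this shape serves both provisions until the Secretary defines the final technical conditions.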
(d) Within 180 days of the date of this order, pursuant to the finding set forth in subsection 4.2(c) of this section, the Secretary of Commerce shall propose regulations that require United States IaaS Providers to ensure that foreign resellers of United States IaaS Products verify the identity of any foreign person that obtains an IaaS account (account) from the foreign reseller. These regulations shall, at a minimum: (i) Set forth the minimum standards that a United States IaaS Provider must require of foreign resellers of its United States IaaS Products to verify the identity of a foreign person who opens an account or maintains an existing account with a foreign reseller, including: (A) the types of documentation and procedures that foreign resellers of United States IaaS Products must require to verify the identity of any foreign person acting as a lessee or sub-lessee of these products or services; (B) records that foreign resellers of United States IaaS Products must securely maintain regarding a foreign person that obtains an account, including information establishing: (1) the identity of such foreign person, including name and address; (2) the means and source of payment (including any associated financial institution and other identifiers such as credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier); (3) the electronic mail address and telephonic contact information used to verify a foreign person’s identity; and (4) the Internet Protocol addresses used for access or administration and the date and time of each such access or administrative action related to ongoing verification of such foreign person’s ownership of such an account; and (C) methods that foreign resellers of United States IaaS Products must implement to limit all third-party access to the information described in this subsection, except insofar as such access is otherwise consistent with this order and 
allowed under applicable law; (ii) Take into consideration the types of accounts maintained by foreign resellers of United States IaaS Products, methods of opening an account, and types of identifying information available to accomplish the objectives of identifying foreign malicious cyber actors using any such products and avoiding the imposition of an undue burden on such resellers; and (iii) Provide that the Secretary of Commerce, in accordance with such standards and procedures as the Secretary may delineate and in consultation with the Secretary of Defense, the Attorney General, the Secretary of Homeland Security, and the Director of National Intelligence, ma

SUBCHAPTER I— NATIONAL ARTIFICIAL INTELLIGENCE INITIATIVE

§ 9411.

National Artificial Intelligence Initiative

(a)

Establishment; purposes The President shall establish and implement an initiative to be known as the “National Artificial Intelligence Initiative”. The purposes of the Initiative shall be to—

(1) ensure continued United States leadership in artificial intelligence research and development;

(2) lead the world in the development and use of trustworthy artificial intelligence systems in the public and private sectors;

(3) prepare the present and future United States workforce for the integration of artificial intelligence systems across all sectors of the economy and society; and

(4) coordinate ongoing artificial intelligence research, development, and demonstration activities among the civilian agencies, the Department of Defense and the Intelligence Community to ensure that each informs the work of the others.

(b)

Initiative activities In carrying out the Initiative, the President, acting through the Initiative Office, the Interagency Committee, and agency heads as the President considers appropriate, shall carry out activities that include the following:

(1) Sustained and consistent support for artificial intelligence research and development through grants, cooperative agreements, testbeds, and access to data and computing resources.

(2) Support for K-12 education and postsecondary educational programs, including workforce training and career and technical education programs, and informal education programs to prepare the American workforce and the general public to be able to create, use, and interact with artificial intelligence systems.

(3) Support for interdisciplinary research, education, and workforce training programs for students and researchers that promote learning in the methods and systems used in artificial intelligence and foster interdisciplinary perspectives and collaborations among subject matter experts in relevant fields, including computer science, mathematics, statistics, engineering, social sciences, health, psychology, behavioral science, ethics, security, legal scholarship, and other disciplines that will be necessary to advance artificial intelligence research and development responsibly.

(4) Interagency planning and coordination of Federal artificial intelligence research, development, demonstration, standards engagement, and other activities under the Initiative, as appropriate.

(5) Outreach to diverse stakeholders, including citizen groups, industry, and civil rights and disability rights organizations, to ensure public input is taken into account in the activities of the Initiative.

(6) Leveraging existing Federal investments to advance objectives of the Initiative.

(7) Support for a network of interdisciplinary artificial intelligence research institutes, as described in section 9431 of this title.

(8) Support opportunities for international cooperation with strategic allies, as appropriate, on the research and development, assessment, and resources for trustworthy artificial intelligence systems.

(c)

Limitation The Initiative shall not impact sources and methods, as determined by the Director of National Intelligence.

(d)

Rules of construction Nothing in this chapter shall be construed as—

(1) modifying any authority or responsibility, including any operational authority or responsibility of any head of a Federal department or agency, with respect to intelligence or the intelligence community, as those terms are defined in section 3003 of title 50;

(2) authorizing the Initiative, or anyone associated with its derivative efforts, to approve, interfere with, direct, or to conduct an intelligence activity, resource, or operation; or

(3) authorizing the Initiative, or anyone associated with its derivative efforts, to modify the classification of intelligence information.

(e)

Sunset The Initiative established in this chapter shall terminate on the date that is 10 years after January 1, 2021 . ( Pub. L. 116–283, div. E, title LI, § 5101 , Jan. 1, 2021 , 134 Stat. 4524 .)

Editorial Notes

References in Text

This chapter, referred to in subsecs. (d) and (e), was in the original “this division”, meaning div. E of Pub. L. 116–283 , Jan. 1, 2021 , 134 Stat. 4523 , which is classified principally to this chapter. For complete classification of div. E to the Code, see Short Title note set out under section 9401 of this title and Tables. 50 U.S.C. 3003 , referred to in subsec. (d)(1), was so in the original, but probably should have been a reference to section 3 of the National Security Act of 1947, act July 26, 1947, ch. 343 , which is classified to section 3003 of Title 50 , War and National Defense.

§ 9412.

National Artificial Intelligence Initiative Office

(a)

In general The Director of the Office of Science and Technology Policy shall establish or designate, and appoint a director of, an office to be known as the “National Artificial Intelligence Initiative Office” to carry out the responsibilities described in subsection (b) with respect to the Initiative. The Initiative Office shall have sufficient staff to carry out such responsibilities, including staff detailed from the Federal departments and agencies described in section 9413(c) of this title , as appropriate.

(b)

Responsibilities The Director of the Initiative Office shall—

(1) provide technical and administrative support to the Interagency Committee and the Advisory Committee;

(2) serve as the point of contact on Federal artificial intelligence activities for Federal departments and agencies, industry, academia, nonprofit organizations, professional societies, State governments, and such other persons as the Initiative Office considers appropriate to exchange technical and programmatic information;

(3) conduct regular public outreach to diverse stakeholders, including civil rights and disability rights organizations; and

(4) promote access to the technologies, innovations, best practices, and expertise derived from Initiative activities to agency missions and systems across the Federal Government.

(c)

Funding estimate The Director of the Office of Science and Technology Policy, in coordination with each participating Federal department and agency, as appropriate, shall develop and annually update an estimate of the funds necessary to carry out the activities of the Initiative Coordination Office and submit such estimate with an agreed summary of contributions from each agency to Congress as part of the President’s annual budget request to Congress. ( Pub. L. 116–283, div. E, title LI, § 5102 , Jan. 1, 2021 , 134 Stat. 4526 .)

§ 9413.

Coordination by Interagency Committee

(a)

Interagency Committee The Director of the Office of Science and Technology Policy, acting through the National Science and Technology Council, shall establish or designate an Interagency Committee to coordinate Federal programs and activities in support of the Initiative.

(b)

Co-chairs The Interagency Committee shall be co-chaired by the Director of the Office of Science and Technology Policy and, on an annual rotating basis, a representative from the Department of Commerce, the National Science Foundation, or the Department of Energy, as selected by the Director of the Office of Science and Technology Policy.

(c)

Agency participation The Committee shall include representatives from Federal agencies as considered appropriate by determination and agreement of the Director of the Office of Science and Technology Policy and the head of the affected agency.

(d)

Responsibilities The Interagency Committee shall—

(1) provide for interagency coordination of Federal artificial intelligence research, development, and demonstration activities and education and workforce training activities and programs of Federal departments and agencies undertaken pursuant to the Initiative;

(2) not later than 2 years after January 1, 2021 , develop a strategic plan for artificial intelligence (to be updated not less than every 3 years) that establishes goals, priorities, and metrics for guiding and evaluating how the agencies carrying out the Initiative will—

(A)

determine and prioritize areas of artificial intelligence research, development, and demonstration requiring Federal Government leadership and investment;

(B)

support long-term funding for interdisciplinary artificial intelligence research, development, demonstration, and education;

(C)

support research and other activities on ethical, legal, environmental, safety, security, bias, and other appropriate societal issues related to artificial intelligence;

(D)

provide or facilitate the availability of curated, standardized, secure, representative, aggregate, and privacy-protected data sets for artificial intelligence research and development;

(E)

provide or facilitate the necessary computing, networking, and data facilities for artificial intelligence research and development;

(F)

support and coordinate Federal education and workforce training activities related to artificial intelligence; and

(G)

support and coordinate the network of artificial intelligence research institutes described in section 9431 of this title;

(3) as part of the President’s annual budget request to Congress, propose an annually coordinated interagency budget for the Initiative to the Office of Management and Budget that is intended to ensure that the balance of funding across the Initiative is sufficient to meet the goals and priorities established for the Initiative; and

(4) in carrying out this section, take into consideration the recommendations of the Advisory Committee, existing reports on related topics, and the views of academic, State, industry, and other appropriate groups.

(e)

Annual report For each fiscal year beginning with fiscal year 2022, not later than 90 days after submission of the President’s annual budget request for such fiscal year, the Interagency Committee shall prepare and submit to the Committee on Science, Space, and Technology, the Committee on Energy and Commerce, the Committee on Transportation and Infrastructure, the Committee on Armed Services, the House Permanent Select Committee on Intelligence, the Committee on the Judiciary, and the Committee on Appropriations of the House of Representatives and the Committee on Commerce, Science, and Transportation, the Committee on Health, Education, Labor, and Pensions, the Committee on Energy and Natural Resources, the Committee on Homeland Security and Governmental Affairs, the Committee on Armed Services, the Senate Select Committee on Intelligence, the Committee on the Judiciary, and the Committee on Appropriations of the Senate a report that includes a summarized budget in support of the Initiative for such fiscal year and the preceding fiscal year, including a disaggregation of spending and a description of any Institutes established under section 9431 of this title for the Department of Commerce, the Department of Defense, the Department of Energy, the Department of Agriculture, the Department of Health and Human Services, and the National Science Foundation. ( Pub. L. 116–283, div. E, title LI, § 5103 , Jan. 1, 2021 , 134 Stat. 4526 .)

§ 9414.

National Artificial Intelligence Advisory Committee

(a)

In general The Secretary of Commerce shall, in consultation with the Director of the Office of Science and Technology Policy, the Secretary of Defense, the Secretary of Energy, the Secretary of State, the Attorney General, and the Director of National Intelligence establish an advisory committee to be known as the “National Artificial Intelligence Advisory Committee”.

(b)

Qualifications The Advisory Committee shall consist of members, appointed by the Secretary of Commerce, who are representing broad and interdisciplinary expertise and perspectives, including from academic institutions, companies across diverse sectors, nonprofit and civil society entities, including civil rights and disability rights organizations, and Federal laboratories, who are representing geographic diversity, and who are qualified to provide advice and information on science and technology research, development, ethics, standards, education, technology transfer, commercial application, security, and economic competitiveness related to artificial intelligence.

(c)

Membership consideration In selecting the members of the Advisory Committee, the Secretary of Commerce shall seek and give consideration to recommendations from Congress, industry, nonprofit organizations, the scientific community (including the National Academies of Sciences, Engineering, and Medicine, scientific professional societies, and academic institutions), the defense and law enforcement communities, and other appropriate organizations.

(d)

Duties The Advisory Committee shall advise the President and the Initiative Office on matters related to the Initiative, including recommendations related to—

(1) the current state of United States competitiveness and leadership in artificial intelligence, including the scope and scale of United States investments in artificial intelligence research and development in the international context;

(2) the progress made in implementing the Initiative, including a review of the degree to which the Initiative has achieved the goals according to the metrics established by the Interagency Committee under section 9413(d)(2) of this title;

(3) the state of the science around artificial intelligence, including progress toward artificial general intelligence;

(4) issues related to artificial intelligence and the United States workforce, including matters relating to the potential for using artificial intelligence for workforce training, the possible consequences of technological displacement, and supporting workforce training opportunities for occupations that lead to economic self-sufficiency for individuals with barriers to employment and historically underrepresented populations, including minorities, Indians (as defined in section 5304 of title 25);

(5) how to leverage the resources of the initiative to streamline and enhance operations in various areas of government operations, including health care, cybersecurity, infrastructure, and disaster recovery;

(6) the need to update the Initiative;

(7) the balance of activities and funding across the Initiative;

(8) whether the strategic plan developed or updated by the Interagency Committee established under section 9413(d)(2) of this title is helping to maintain United States leadership in artificial intelligence;

(9) the management, coordination, and activities of the Initiative;

(10) whether ethical, legal, safety, security, and other appropriate societal issues are adequately addressed by the Initiative;

(11) opportunities for international cooperation with strategic allies on artificial intelligence research activities, standards development, and the compatibility of international regulations;

(12) accountability and legal rights, including matters relating to oversight of artificial intelligence systems using regulatory and nonregulatory approaches, the responsibility for any violations of existing laws by an artificial intelligence system, and ways to balance advancing innovation while protecting individual rights; and

(13) how artificial intelligence can enhance opportunities for diverse geographic regions of the United States, including urban, Tribal, and rural communities.

(e)

Subcommittee on artificial intelligence and law enforcement

(1) Establishment The chairperson of the Advisory Committee shall establish a subcommittee on matters relating to the development of artificial intelligence relating to law enforcement matters.

(2) Advice The subcommittee shall provide advice to the President on matters relating to the development of artificial intelligence relating to law enforcement, including advice on the following:

(A)

Bias, including whether the use of facial recognition by government authorities, including law enforcement agencies, is taking into account ethical considerations and addressing whether such use should be subject to additional oversight, controls, and limitations.

(B)

Security of data, including law enforcement’s access to data and the security parameters for that data.

(C)

Adoptability, including methods to allow the United States Government and industry to take advantage of artificial intelligence systems for security or law enforcement purposes while at the same time ensuring the potential abuse of such technologies is sufficiently mitigated.

(D)

Legal standards, including those designed to ensure the use of artificial intelligence systems are consistent with the privacy rights, civil rights and civil liberties, and disability rights issues raised by the use of these technologies.

(f)

Reports Not later than 1 year after January 1, 2021 , and not less frequently than once every 3 years thereafter, the Advisory Committee shall submit to the President, the Committee on Science, Space, and Technology, the Committee on Energy and Commerce, the House Permanent Select Committee on Intelligence, the Committee on the Judiciary, and the Committee on Armed Services of the House of Representatives, and the Committee on Commerce, Science, and Transportation, the Senate Select Committee on Intelligence, the Committee on Homeland Security and Governmental Affairs, the Committee on the Judiciary, and the Committee on Armed Services of the Senate, a report on the Advisory Committee’s findings and recommendations under subsection (d) and subsection (e).

(g)

Travel expenses of non-Federal members Non-Federal members of the Advisory Committee, while attending meetings of the Advisory Committee or while otherwise serving at the request of the head of the Advisory Committee away from their homes or regular places of business, may be allowed travel expenses, including per diem in lieu of subsistence, as authorized by section 5703 of title 5 for individuals in the Government serving without pay. Nothing in this subsection shall be construed to prohibit members of the Advisory Committee who are officers or employees of the United States from being allowed travel expenses, including per diem in lieu of subsistence, in accordance with existing law.

(h)

FACA exemption The Secretary of Commerce shall charter the Advisory Committee in accordance with the Federal Advisory Committee Act (5 U.S.C. App.), except that the Advisory Committee shall be exempt from section 14 of such Act. ( Pub. L. 116–283, div. E, title LI, § 5104 , Jan. 1, 2021 , 134 Stat. 4528 .)

Editorial Notes

References in Text

25 U.S.C. 5304 , referred to in subsec. (d)(4), was so in the original, but probably should have been a reference to section 4 of the Indian Self-Determination and Education Assistance Act, Pub. L. 93–638 , which is classified to section 5304 of Title 25 , Indians. The Federal Advisory Committee Act, referred to in subsec. (h), is Pub. L. 92–463 , Oct. 6, 1972 , 86 Stat. 770 , which was set out in the Appendix to Title 5, Government Organization and Employees, and was substantially repealed and restated in chapter 10 (§ 1001 et seq.) of Title 5 by Pub. L. 117–286 , §§ 3(a), 7, Dec. 27, 2022 , 136 Stat. 4197 , 4361. Section 14 of the Act was repealed and restated as section 1013 of Title 5 . For disposition of sections of the Act into chapter 10 of Title 5, see Disposition Table preceding section 101 of Title 5 .

§ 9415.

National AI Research Resource Task Force

(a)

Establishment of Task Force

(1) Establishment

(A)

In general The Director of the National Science Foundation, in coordination with the Office of Science and Technology Policy, shall establish a task force—

(i)

to investigate the feasibility and advisability of establishing and sustaining a National Artificial Intelligence Research Resource; and

(ii)

to propose a roadmap detailing how such resource should be established and sustained.

(B)

Designation The task force established by subparagraph (A) shall be known as the “National Artificial Intelligence Research Resource Task Force” (in this section referred to as the “Task Force”).

(2) Membership

(A)

Composition The Task Force shall be composed of 12 members selected by the co-chairpersons of the Task Force from among technical experts in artificial intelligence or related subjects, of whom—

(i)

4 shall be representatives from the Interagency Committee established in section 9413 of this title , including the co-chairpersons of the Task Force;

(ii)

4 shall be representatives from institutions of higher education; and

(iii)

4 shall be representatives from private organizations.

(B)

Appointment Not later than 120 days after enactment of this Act, the co-chairpersons of the Task Force shall appoint members to the Task Force pursuant to subparagraph (A).

(C)

Term of appointment Members of the Task Force shall be appointed for the life of the Task Force.

(D)

Vacancy Any vacancy occurring in the membership of the Task Force shall be filled in the same manner in which the original appointment was made.

(E)

Co-chairpersons The Director of the Office of Science and Technology Policy and the Director of the National Science Foundation, or their designees, shall be the co-chairpersons of the Task Force. If the role of the Director of the National Science Foundation is vacant, the Chair of the National Science Board shall act as a co-chairperson of the Task Force.

(F)

Expenses for non-Federal Members

(i)

Except as provided in clause (ii), non-Federal Members of the Task Force shall not receive compensation for their participation on the Task Force.

(ii)

Non-Federal Members of the Task Force shall be allowed travel expenses, including per diem in lieu of subsistence, at rates authorized for employees under subchapter I of chapter 57 of title 5, while away from their homes or regular places of business in the performance of services for the Task Force.

(b)

Roadmap and implementation plan

(1) In general The Task Force shall develop a coordinated roadmap and implementation plan for creating and sustaining a National Artificial Intelligence Research Resource.

(2) Contents The roadmap and plan required by paragraph (1) shall include the following:

(A)

Goals for establishment and sustainment of a National Artificial Intelligence Research Resource and metrics for success.

(B)

A plan for ownership and administration of the National Artificial Intelligence Research Resource, including—

(i)

an appropriate agency or organization responsible for the implementation, deployment, and administration of the Resource; and

(ii)

a governance structure for the Resource, including oversight and decision-making authorities.

(C)

A model for governance and oversight to establish strategic direction, make programmatic decisions, and manage the allocation of resources.

(D)

Capabilities required to create and maintain a shared computing infrastructure to facilitate access to computing resources for researchers across the country, including scalability, secured access control, resident data engineering and curation expertise, provision of curated data sets, compute resources, educational tools and services, and a user interface portal.

(E)

An assessment of, and recommended solutions to, barriers to the dissemination and use of high-quality government data sets as part of the National Artificial Intelligence Research Resource.

(F)

An assessment of security requirements associated with the National Artificial Intelligence Research Resource and its research and a recommendation for a framework for the management of access controls.

(G)

An assessment of privacy and civil rights and civil liberties requirements associated with the National Artificial Intelligence Research Resource and its research.

(H)

A plan for sustaining the Resource, including through Federal funding and partnerships with the private sector.

(I)

Parameters for the establishment and sustainment of the National Artificial Intelligence Research Resource, including agency roles and responsibilities and milestones to implement the Resource.

(c)

Consultations In conducting its duties required under subsection (b), the Task Force shall consult with the following:

(1) The National Science Foundation.

(2) The Office of Science and Technology Policy.

(3) The National Academies of Sciences, Engineering, and Medicine.

(4) The National Institute of Standards and Technology.

(5) The Director of National Intelligence.

(6) The Department of Energy.

(7) The Department of Defense.

(8) The General Services Administration.

(9) The Department of Justice.

(10) The Department of Homeland Security.

(11) The Department of Health and Human Services.

(12) Private industry.

(13) Institutions of higher education.

(14) Civil and disabilities rights organizations.

(15) Such other persons as the Task Force considers appropriate.

(d)

Staff Staff of the Task Force shall comprise detailees with expertise in artificial intelligence or related fields from the Office of Science and Technology Policy, the National Science Foundation, or any other agency the co-chairs deem appropriate, with the consent of the head of the agency.

(e)

Task Force reports

(1) Initial report Not later than 12 months after the date on which all of the appointments have been made under subsection (a)(2)(B), the Task Force shall submit to Congress and the President an interim report containing the findings, conclusions, and recommendations of the Task Force. The report shall include specific recommendations regarding steps the Task Force believes necessary for the establishment and sustainment of a National Artificial Intelligence Research Resource.

(2) Final report Not later than 6 months after the submittal of the interim report under paragraph (1), the Task Force shall submit to Congress and the President a final report containing the findings, conclusions, and recommendations of the Task Force, including the specific recommendations required by subsection (b).

(f)

Termination

(1) In general The Task Force shall terminate 90 days after the date on which it submits the final report under subsection (e)(2).

(2) Records Upon termination of the Task Force, all of its records shall become the records of the National Archives and Records Administration.
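Read together, subsections (a)(2)(B), (e), and (f) fix a complete statutory clock for the Task Force: appointments within 120 days of the January 1, 2021 enactment, an interim report within 12 months of the last appointment, a final report within 6 months of the interim report, and termination 90 days after the final report. As a worked example, the dates below assume every deadline runs to its statutory maximum; the month-arithmetic helper is illustrative only.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Naive month arithmetic; safe here because every computed date
    # falls on the 1st of a month.
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

enactment = date(2021, 1, 1)                        # Pub. L. 116-283 approved
appointments_due = enactment + timedelta(days=120)  # subsec. (a)(2)(B)
interim_report = add_months(appointments_due, 12)   # subsec. (e)(1)
final_report = add_months(interim_report, 6)        # subsec. (e)(2)
termination = final_report + timedelta(days=90)     # subsec. (f)(1)
```

On these maximal assumptions the appointments are due May 1, 2021, and the Task Force would terminate on January 30, 2023; earlier action at any step shifts every later date forward correspondingly.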

(g)

Definitions In this section:

(1) National Artificial Intelligence Research Resource and Resource The terms “National Artificial Intelligence Research Resource” and “Resource” mean a system that provides researchers and students across scientific fields and disciplines with access to compute resources, co-located with publicly-available, artificial intelligence-ready government and non-government data sets and a research environment with appropriate educational tools and user support.

(2) Ownership The term “ownership” means responsibility and accountability for the implementation, deployment, and ongoing development of the National Artificial Intelligence Research Resource, and for providing staff support to that effort. ( Pub. L. 116–283, div. E, title LI, § 5106 , Jan. 1, 2021 , 134 Stat. 4531 .)

Editorial Notes

References in Text

Enactment of this Act, referred to in subsec. (a)(2)(B), means the enactment of Pub. L. 116–283 , which was approved Jan. 1, 2021 .

SUBCHAPTER II— NATIONAL ARTIFICIAL INTELLIGENCE RESEARCH INSTITUTES

§ 9431.

National Artificial Intelligence Research Institutes

(a)

In general Subject to the availability of funds appropriated for this purpose, the Director of the National Science Foundation shall establish a program to award financial assistance for the planning, establishment, and support of a network of Institutes (as described in subsection (b)(2)) in accordance with this section.

(b)

Financial assistance to establish and support National Artificial Intelligence Research Institutes

(1) In general Subject to the availability of funds appropriated for this purpose, the Secretary of Energy, the Secretary of Commerce, the Director of the National Science Foundation, and every other agency head may award financial assistance to an eligible entity, or consortia thereof, as determined by an agency head, to establish and support an Institute.

(2) Artificial intelligence institutes An Institute described in this subsection is an artificial intelligence research institute that—

(A)

is focused on—

(i)

a particular economic or social sector, including health, education, manufacturing, agriculture, security, energy, and environment, and includes a component that addresses the ethical, societal, safety, and security implications relevant to the application of artificial intelligence in that sector; or

(ii)

a cross-cutting challenge for artificial intelligence systems, including trustworthiness, or foundational science;

(B)

requires partnership among public and private organizations, including, as appropriate, Federal agencies, institutions of higher education, including community colleges, nonprofit research organizations, Federal laboratories, State, local, and Tribal governments, industry, including startup companies, and civil society organizations, including civil rights and disability rights organizations (or consortia thereof);

(C)

has the potential to create an innovation ecosystem, or enhance existing ecosystems, to translate Institute research into applications and products, as appropriate to the topic of each Institute;

(D)

supports interdisciplinary research and development across multiple institutions of higher education and organizations;

(E)

supports interdisciplinary education activities, including curriculum development, research experiences, and faculty professional development across undergraduate, graduate, and professional academic programs; and

(F)

supports workforce development in artificial intelligence related disciplines in the United States, including increasing participation of historically underrepresented communities.

(3) Use of funds Financial assistance awarded under paragraph (1) may be used by an Institute for—

(A)

managing and making available to researchers accessible, curated, standardized, secure, and privacy protected data sets from the public and private sectors for the purposes of training and testing artificial intelligence systems and for research using artificial intelligence systems, pursuant to subsections (c), (e), and (f) of section 5301 of this division;

(B)

developing and managing testbeds for artificial intelligence systems, including sector-specific test beds, designed to enable users to evaluate artificial intelligence systems prior to deployment;

(C)

conducting research and education activities involving artificial intelligence systems to solve challenges with social, economic, health, scientific, and national security implications;

(D)

providing or brokering access to computing resources, networking, and data facilities for artificial intelligence research and development relevant to the Institute’s research goals;

(E)

providing technical assistance to users, including software engineering support, for artificial intelligence research and development relevant to the Institute’s research goals;

(F)

engaging in outreach and engagement to broaden participation in artificial intelligence research and the artificial intelligence workforce; and

(G)

such other activities that an agency head, whose agency’s missions contribute to or are affected by artificial intelligence, considers consistent with the purposes described in

(4) Duration

(A)

Initial periods An award of financial assistance under paragraph (1) shall be awarded for an initial period of 5 years.

(B)

Extension An established Institute may apply for, and the agency head may grant, extended funding for periods of 5 years on a merit-reviewed basis using the merit review criteria of the sponsoring agency.

(5) Application for financial assistance A person seeking financial assistance under paragraph (1) shall submit to an agency head an application at such time, in such manner, and containing such information as the agency head may require.

(6) Competitive, merit review In awarding financial assistance under paragraph (1), the agency head shall—

(A)

use a competitive, merit review process that includes peer review by a diverse group of individuals with relevant expertise from both the private and public sectors; and

(B)

ensure the focus areas of the Institute do not substantially and unnecessarily duplicate the efforts of any other Institute.

(7) Collaboration

(A)

In general In awarding financial assistance under paragraph (1), an agency head may collaborate with Federal departments and agencies whose missions contribute to or are affected by artificial intelligence systems.

(B)

Coordinating network The Director of the National Science Foundation shall establish a network of Institutes receiving financial assistance under this subsection, to be known as the “Artificial Intelligence Leadership Network”, to coordinate cross-cutting research and other activities carried out by the Institutes.

(8) Limitation No funds authorized in this subchapter shall be awarded to Institutes outside of the United States. All awardees and subawardees for such Institute shall be based in the United States, in addition to any other eligibility criteria as established by each agency head. ( Pub. L. 116–283, div. E, title LII, § 5201 , Jan. 1, 2021 , 134 Stat. 4534 .)

Editorial Notes

References in Text

Section 5301 of this division, referred to in subsec. (b)(3)(A), means section 5301 of div. E of Pub. L. 116–283 , Jan. 1, 2021 , 134 Stat. 4536 .

SUBCHAPTER III— DEPARTMENT OF COMMERCE ARTIFICIAL INTELLIGENCE ACTIVITIES

§ 9441.

Stakeholder outreach In carrying out the activities under section 278h–1 of this title as amended by title III of this Act,1 1 See References in Text note below. the Director shall—

(1) solicit input from university researchers, private sector experts, relevant Federal agencies, Federal laboratories, State, Tribal, and local governments, civil society groups, and other relevant stakeholders;

(2) solicit input from experts in relevant fields of social science, technology ethics, and law; and

(3) provide opportunity for public comment on guidelines and best practices developed as part of the Initiative, as appropriate. ( Pub. L. 116–283, div. E, title LIII, § 5302 , Jan. 1, 2021 , 134 Stat. 4539 .)

Editorial Notes

References in Text

Section 278h–1 of this title as amended by title III of this Act, referred to in text, probably means section 278h–1 of this title as added by title LIII of Pub. L. 116–283, div. E , Jan. 1, 2021 , 134 Stat. 4536 .

§ 9442.

National Oceanic and Atmospheric Administration Artificial Intelligence Center

(a)

In general The Administrator of the National Oceanic and Atmospheric Administration (hereafter referred to as “the Administrator”) shall establish a Center for Artificial Intelligence (hereafter referred to as “the Center”).

(b)

Center goals The goals of the Center shall be to—

(1) coordinate and facilitate the scientific and technological efforts related to artificial intelligence across the National Oceanic and Atmospheric Administration; and

(2) expand external partnerships, and build workforce proficiency to effectively transition artificial intelligence research and applications to operations.

(c)

Comprehensive program Through the Center, the Administrator shall implement a comprehensive program to improve the use of artificial intelligence systems across the agency in support of the mission of the National Oceanic and Atmospheric Administration.

(d)

Center priorities The priorities of the Center shall be to—

(1) coordinate and facilitate artificial intelligence research and innovation, tools, systems, and capabilities across the National Oceanic and Atmospheric Administration;

(2) establish data standards and develop and maintain a central repository for agency-wide artificial intelligence applications;

(3) accelerate the transition of artificial intelligence research to applications in support of the mission of the National Oceanic and Atmospheric Administration;

(4) develop and conduct training for the workforce of the National Oceanic and Atmospheric Administration related to artificial intelligence research and application of artificial intelligence for such agency;

(5) facilitate partnerships between the National Oceanic and Atmospheric Administration and other public sector organizations, private sector organizations, and institutions of higher education for research, personnel exchange, and workforce development with respect to artificial intelligence systems; and

(6) make data of the National Oceanic and Atmospheric Administration accessible, available, and ready for artificial intelligence applications.

(e)

Stakeholder engagement In carrying out the activities authorized in this section, the Administrator shall—

(1) collaborate with a diverse set of stakeholders including private sector entities and institutions of higher education;

(2) leverage the collective body of research on artificial intelligence and machine learning; and

(3) engage with relevant Federal agencies, research communities, and potential users of data and methods made available through the Center.

(f)

Authorization of appropriations There are authorized to be appropriated to the Administrator to carry out this section $10,000,000 for fiscal year 2021.

(g)

Protection of national security interests

(1) In general Notwithstanding any other provision of this section, the Administrator, in consultation with the Secretary of Defense as appropriate, may withhold models or data used by the Center if the Administrator determines doing so to be necessary to protect the national security interests of the United States.

(2) Rule of construction Nothing in this section shall be construed to supersede any other provision of law governing the protection of the national security interests of the United States. ( Pub. L. 116–283, div. E, title LIII, § 5303 , Jan. 1, 2021 , 134 Stat. 4539 .)

SUBCHAPTER IV— NATIONAL SCIENCE FOUNDATION ARTIFICIAL INTELLIGENCE ACTIVITIES

§ 9451.

Artificial intelligence research and education

(a)

In general The Director of the National Science Foundation shall fund research and education activities in artificial intelligence systems and related fields, including competitive awards or grants to institutions of higher education or eligible nonprofit organizations (or consortia thereof).

(b)

Uses of funds In carrying out the activities under subsection (a), the Director of the National Science Foundation shall—

(1) support research, including interdisciplinary research, on artificial intelligence systems and related areas, including fields and research areas that will contribute to the development and deployment of trustworthy artificial intelligence systems, and fields and research areas that address the application of artificial intelligence systems to scientific discovery and societal challenges;

(2) use the existing programs of the National Science Foundation, in collaboration with other Federal departments and agencies, as appropriate to—

(A)

improve the teaching and learning of topics related to artificial intelligence systems in K-12 education and postsecondary educational programs, including workforce training and career and technical education programs, undergraduate and graduate education programs, and in informal settings; and

(B)

increase participation in artificial intelligence related fields, including by individuals identified in sections 1885a and 1885b of title 42;

(3) support partnerships among institutions of higher education, Federal laboratories, nonprofit organizations, State, local, and Tribal governments, industry, and potential users of artificial intelligence systems that facilitate collaborative research, personnel exchanges, and workforce development and identify emerging research needs with respect to artificial intelligence systems;

(4) ensure adequate access to research and education infrastructure with respect to artificial intelligence systems, which may include the development of new computing resources and partnership with the private sector for the provision of cloud-based computing services;

(5) conduct prize competitions, as appropriate, pursuant to

(6) coordinate research efforts funded through existing programs across the directorates of the National Science Foundation;

(7) provide guidance on data sharing by grantees to public and private sector organizations consistent with the standards and guidelines developed under section 5301 of this division; and

(8) evaluate opportunities for international collaboration with strategic allies on artificial intelligence research and development.

(c)

Engineering support In general, the Director shall permit applicants to include in their proposed budgets funding for software engineering support to assist with the proposed research.

(d)

Ethics

(1) Sense of Congress It is the sense of Congress that—

(A)

a number of emerging areas of research, including artificial intelligence, have potential ethical, social, safety, and security risks that might be apparent as early as the basic research stage;

(B)

the incorporation of ethical, social, safety, and security considerations into the research design and review process for Federal awards may help mitigate potential harms before they happen;

(C)

the National Science Foundation’s agreement with the National Academies of Sciences, Engineering, and Medicine to conduct a study and make recommendations with respect to governance of research in computing and computing technologies is a positive step toward accomplishing this goal; and

(D)

the National Science Foundation should continue to work with stakeholders to understand and adopt policies that promote best practices for governance of research in emerging technologies at every stage of research.

(2) Report on ethics statements No later than 6 months after publication of the study described in paragraph (1)(C), the Director shall report to Congress on options for requiring an ethics or risk statement as part of all or a subset of applications for research funding to the National Science Foundation.

(e)

Education

(1) In general The Director of the National Science Foundation shall award grants for artificial intelligence education research, development and related activities to support K-12 and postsecondary education programs and activities, including workforce training and career and technical education programs and activities, undergraduate, graduate, and postdoctoral education, and informal education programs and activities that—

(A)

support the development of a diverse workforce pipeline for science and technology with respect to artificial intelligence systems;

(B)

increase awareness of potential ethical, social, safety, and security risks of artificial intelligence systems;

(C)

promote curriculum development for teaching topics related to artificial intelligence, including in the field of technology ethics;

(D)

support efforts to achieve equitable access to K-12 artificial intelligence education in diverse geographic areas and for populations historically underrepresented in science, engineering, and artificial intelligence fields; and

(E)

promote the widespread understanding of artificial intelligence principles and methods to create an educated workforce and general public able to use products enabled by artificial intelligence systems and adapt to future societal and economic changes caused by artificial intelligence systems.

(2) Artificial intelligence faculty fellowships

(A)

Faculty recruitment fellowships

(i)

The Director of the National Science Foundation shall establish a program to award grants to eligible institutions of higher education to recruit and retain tenure-track or tenured faculty in artificial intelligence and related fields.

(ii)

An institution of higher education shall use grant funds provided under clause (i) for the purposes of—

(I)

recruiting new tenure-track or tenured faculty members that conduct research and teaching in artificial intelligence and related fields and research areas, including technology ethics; and

(II)

paying salary and benefits for the academic year of newly recruited tenure-track or tenured faculty members for a duration of up to three years.

(iii)

For purposes of this subparagraph, an eligible institution of higher education is—

(I)

a Historically Black College and University (within the meaning of the term “part B institution” under section 1061 of title 20 ), Tribal College or University, or other minority-serving institution, as defined in section 1067q(a) of title 20 ;

(II)

an institution classified under the Carnegie Classification of Institutions of Higher Education as a doctorate-granting university with a high level of research activity; or

(III)

an institution located in a State jurisdiction eligible to participate in the National Science Foundation’s Established Program to Stimulate Competitive Research.

(B)

Faculty technology ethics fellowships

(i)

The Director of the National Science Foundation shall establish a program to award fellowships to tenure-track and tenured faculty in social and behavioral sciences, ethics, law, and related fields to develop new research projects and partnerships in technology ethics.

(ii)

The purposes of such fellowships are to enable researchers in social and behavioral sciences, ethics, law, and related fields to establish new research and education partnerships with researchers in artificial intelligence and related fields; learn new techniques and acquire systematic knowledge in artificial intelligence and related fields; and mentor and advise graduate students and postdocs pursuing research in technology ethics.

(iii)

A fellowship may include salary and benefits for up to one academic year, expenses to support coursework or equivalent training in artificial intelligence systems, and additional such expenses that the Director deems appropriate.

(C)

Omitted

(3) Update to advanced technological education program

(A)

Omitted

(B)

Artificial intelligence centers of excellence The Director of the National Science Foundation shall establish national centers of scientific and technical education to advance education and workforce development in areas related to artificial intelligence pursuant to section 1862i of title 42 . Activities of such centers may include—

(i)

the development, dissemination, and evaluation of curriculum and other educational tools and methods in artificial intelligence related fields and research areas, including technology ethics;

(ii)

the development and evaluation of artificial intelligence related certifications for 2-year programs; and

(iii)

interdisciplinary science and engineering research in employment-based adult learning and career retraining related to artificial intelligence fields.

(f)

National Science Foundation pilot program of grants for research in rapidly evolving, high priority topics

(1) Pilot program required The Director of the National Science Foundation shall establish a pilot program to assess the feasibility and advisability of awarding grants for the conduct of research in rapidly evolving, high priority topics using funding mechanisms that require brief project descriptions and internal merit review, and that may include accelerated external review.

(2) Duration

(A)

In general The Director shall carry out the pilot program required by paragraph (1) during the 5-year period beginning on January 1, 2021.

(B)

Assessment and continuation authority After the period set forth in paragraph (2)(A)—

(i)

the Director shall assess the pilot program; and

(ii)

if the Director determines that it is both feasible and advisable to do so, the Director may continue the pilot program.

(3) Grants In carrying out the pilot program, the Director shall award grants for the conduct of research in topics selected by the Director in accordance with paragraph (4).

(4) Topic selection The Director shall select topics for research under the pilot program in accordance with the following:

(A)

The Director shall select artificial intelligence as the initial topic for the pilot program.

(B)

The Director may select additional topics that the Director determines are—

(i)

rapidly evolving; and

(ii)

of high importance to the economy and security of the United States.

(g)

Authorization of appropriations There are authorized to be appropriated to the National Science Foundation to carry out this section—

(1) $868,000,000 for fiscal year 2021;

(2) $911,400,000 for fiscal year 2022;

(3) $956,970,000 for fiscal year 2023;

(4) $1,004,820,000 for fiscal year 2024; and

(5) $1,055,060,000 for fiscal year 2025. ( Pub. L. 116–283, div. E, title LIV, § 5401 , Jan. 1, 2021 , 134 Stat. 4540 .)

Editorial Notes

References in Text

Sections 1885a and 1885b of title 42, referred to in subsec. (b)(2)(B), were in the original sections 33 and 34 of the Science and Engineering Equal Opportunity Act and were translated as meaning sections 33 and 34 of the Science and Engineering Equal Opportunities Act to reflect the probable intent of Congress. Section 5301 of this division, referred to in subsec. (b)(7), means section 5301 of div. E of Pub. L. 116–283 , Jan. 1, 2021 , 134 Stat. 4536 .

Codification

Section is comprised of section 5401 of Pub. L. 116–283 . Subsec. (e)(2)(C) of section 5401 of Pub. L. 116–283 amended section 1862n–1 of Title 42 , The Public Health and Welfare. Subsec. (e)(3)(A) of section 5401 of Pub. L. 116–283 amended section 1862i of Title 42 .

SUBCHAPTER V— DEPARTMENT OF ENERGY ARTIFICIAL INTELLIGENCE RESEARCH PROGRAM

§ 9461.

Department of Energy artificial intelligence research program

(a)

In general The Secretary shall carry out a cross-cutting research and development program to advance artificial intelligence tools, systems, capabilities, and workforce needs and to improve the reliability of artificial intelligence methods and solutions relevant to the mission of the Department. In carrying out this program, the Secretary shall coordinate across all relevant offices and programs at the Department, including the Office of Science, the Office of Energy Efficiency and Renewable Energy, the Office of Nuclear Energy, the Office of Fossil Energy, the Office of Electricity, the Office of Cybersecurity, Energy Security, and Emergency Response, the Advanced Research Projects Agency-Energy, and any other relevant office determined by the Secretary.

(b)

Research areas In carrying out the program under subsection (a), the Secretary shall award financial assistance to eligible entities to carry out research projects on topics including—

(1) the application of artificial intelligence systems to improve large-scale simulations of natural and other phenomena;

(2) the study of applied mathematics, computer science, and statistics, including foundations of methods and systems of artificial intelligence, causal and statistical inference, and the development of algorithms for artificial intelligence systems;

(3) the analysis of existing large-scale datasets from science and engineering experiments and simulations, including energy simulations and other priorities at the Department as determined by the Secretary using artificial intelligence tools and techniques;

(4) the development of operation and control systems that enhance automated, intelligent decisionmaking capabilities;

(5) the development of advanced computing hardware and computer architecture tailored to artificial intelligence systems, including the codesign of networks and computational hardware;

(6) the development of standardized datasets for emerging artificial intelligence research fields and applications, including methods for addressing data scarcity; and

(7) the development of trustworthy artificial intelligence systems, including—

(A)

algorithmic explainability;

(B)

analytical methods for identifying and mitigating bias in artificial intelligence systems; and

(C)

safety and robustness, including assurance, verification, validation, security, and control.

(c)

Technology transfer In carrying out the program under subsection (a), the Secretary shall support technology transfer of artificial intelligence systems for the benefit of society and United States economic competitiveness.

(d)

Facility use and upgrades In carrying out the program under subsection (a), the Secretary shall—

(1) make available high-performance computing infrastructure at national laboratories;

(2) make any upgrades necessary to enhance the use of existing computing facilities for artificial intelligence systems, including upgrades to hardware;

(3) establish new computing capabilities necessary to manage data and conduct high performance computing that enables the use of artificial intelligence systems; and

(4) maintain and improve, as needed, networking infrastructure, data input and output mechanisms, and data analysis, storage, and service capabilities.

(e)

Report on ethics statements Not later than 6 months after publication of the study described in section 9451(d)(1)(C) of this title , the Secretary shall report to Congress on options for requiring an ethics or risk statement as part of all or a subset of applications for research activities funded by the Department of Energy and performed at Department of Energy national laboratories and user facilities.

(f)

Risk management The Secretary shall review agency policies for risk management in artificial intelligence related projects and issue as necessary policies and principles that are consistent with the framework developed under section 278h–1(c) of this title (as added by section 5301 of this division).

(g)

Data privacy and sharing The Secretary shall review agency policies for data sharing with other public and private sector organizations and issue as necessary policies and principles that are consistent with the standards and guidelines submitted under section 278h–1(e) of this title (as added by section 5301 of this division). In addition, the Secretary shall establish a streamlined mechanism for approving research projects or partnerships that require sharing sensitive public or private data with the Department.

(h)

Partnerships with other Federal agencies The Secretary may request, accept, and provide funds from other Federal departments and agencies, State, United States territory, local, or Tribal government agencies, private sector for-profit entities, and nonprofit entities, to be available to the extent provided by appropriations Acts, to support a research project or partnership carried out under this section. The Secretary may not give any special consideration to any agency or entity in return for a donation.

(i)

Stakeholder engagement In carrying out the activities authorized in this section, the Secretary shall—

(1) collaborate with a range of stakeholders including small businesses, institutes of higher education, industry, and the National Laboratories;

(2) leverage the collective body of knowledge from existing artificial intelligence and machine learning research; and

(3) engage with other Federal agencies, research communities, and potential users of information produced under this section.

(j)

Definitions In this section:

(1) Secretary The term “Secretary” means the Secretary of Energy.

(2) Department The term “Department” means the Department of Energy.

(3) National laboratory The term “national laboratory” has the meaning given such term in section 15801 of title 42 .

(4) Eligible entities The term “eligible entities” means—

(A)

an institution of higher education;

(B)

a National Laboratory;

(C)

a Federal research agency;

(D)

a State research agency;

(E)

a nonprofit research organization;

(F)

a private sector entity; or

(G)

a consortium of 2 or more entities described in subparagraphs (A) through (F).

(k)

Authorization of appropriations There are authorized to be appropriated to the Department to carry out this section—

(1) $200,000,000 for fiscal year 2021;

(2) $214,000,000 for fiscal year 2022;

(3) $228,980,000 for fiscal year 2023;

(4) $245,000,000 for fiscal year 2024; and

(5) $262,160,000 for fiscal year 2025. ( Pub. L. 116–283, div. E, title LV, § 5501 , Jan. 1, 2021 , 134 Stat. 4545 .)

Editorial Notes

References in Text

Section 5301 of this division, referred to in subsecs. (f) and (g), means section 5301 of div. E of Pub. L. 116–283 , Jan. 1, 2021 , 134 Stat. 4536 .

§ 9462.

Veterans’ health initiative

(a)

Purposes The purposes of this section are to advance Department of Energy expertise in artificial intelligence and high-performance computing in order to improve health outcomes for veteran populations by—

(1) supporting basic research through the application of artificial intelligence, high-performance computing, modeling and simulation, machine learning, and large-scale data analytics to identify and solve outcome-defined challenges in the health sciences;

(2) maximizing the impact of the Department of Veterans Affairs’ health and genomics data housed at the National Laboratories, as well as data from other sources, on science, innovation, and health care outcomes through the use and advancement of artificial intelligence and high-performance computing capabilities of the Department;

(3) promoting collaborative research through the establishment of partnerships to improve data sharing between Federal agencies, National Laboratories, institutions of higher education, and nonprofit institutions;

(4) establishing multiple scientific computing user facilities to house and provision available data to foster transformational outcomes; and

(5) driving the development of technology to improve artificial intelligence, high-performance computing, and networking relevant to mission applications of the Department, including modeling, simulation, machine learning, and advanced data analytics.

(b)

Veterans health research and development

(1) In general The Secretary of Energy (in this section referred to as the “Secretary”) shall establish and carry out a research program in artificial intelligence and high-performance computing, focused on the development of tools to solve large-scale data analytics and management challenges associated with veterans’ healthcare, and to support the efforts of the Department of Veterans Affairs to identify potential health risks and challenges utilizing data on long-term healthcare, health risks, and genomic data collected from veteran populations. The Secretary shall carry out this program through a competitive, merit-reviewed process, and consider applications from National Laboratories, institutions of higher education, multi-institutional collaborations, and other appropriate entities.

(2) Program components.—In carrying out the program established under paragraph (1), the Secretary may—

(A) conduct basic research in modeling and simulation, machine learning, large-scale data analytics, and predictive analysis in order to develop novel or optimized algorithms for prediction of disease treatment and recovery;

(B) develop methods to accommodate large data sets with variable quality and scale, and to provide insight and models for complex systems;

(C) develop new approaches and maximize the use of algorithms developed through artificial intelligence, machine learning, data analytics, natural language processing, modeling and simulation, and develop new algorithms suitable for high-performance computing systems and large biomedical data sets;

(D) advance existing and construct new data enclaves capable of securely storing data sets provided by the Department of Veterans Affairs, Department of Defense, and other sources; and

(E) promote collaboration and data sharing between National Laboratories, research entities, and user facilities of the Department by providing the necessary access and secure data transfer capabilities.

(3) Coordination.—In carrying out the program established under paragraph (1), the Secretary is authorized—

(A) to enter into memoranda of understanding in order to carry out reimbursable agreements with the Department of Veterans Affairs and other entities in order to maximize the effectiveness of Department research and development to improve veterans’ healthcare;

(B) to consult with the Department of Veterans Affairs and other Federal agencies as appropriate; and

(C) to ensure that data storage meets all privacy and security requirements established by the Department of Veterans Affairs, and that access to data is provided in accordance with relevant Department of Veterans Affairs data access policies, including informed consent.

(4) Report.—Not later than 2 years after December 27, 2020, the Secretary shall submit to the Committee on Energy and Natural Resources and the Committee on Veterans’ Affairs of the Senate, and the Committee on Science, Space, and Technology and the Committee on Veterans’ Affairs of the House of Representatives, a report detailing the effectiveness of—

(A) the interagency coordination between each Federal agency involved in the research program carried out under this subsection;

(B) collaborative research achievements of the program; and

(C) potential opportunities to expand the technical capabilities of the Department.

(5) Funding.—There is authorized to be appropriated to the Secretary of Veterans Affairs to carry out this subsection $27,000,000 for fiscal year 2021.

(c) Interagency collaboration

(1) In general.—The Secretary is authorized to carry out research, development, and demonstration activities to develop tools to apply to big data that enable Federal agencies, institutions of higher education, nonprofit research organizations, and industry to better leverage the capabilities of the Department to solve complex, big data challenges. The Secretary shall carry out these activities through a competitive, merit-reviewed process, and consider applications from National Laboratories, institutions of higher education, multi-institutional collaborations, and other appropriate entities.

(2) Activities.—In carrying out the research, development, and demonstration activities authorized under paragraph (1), the Secretary may—

(A) utilize all available mechanisms to prevent duplication and coordinate research efforts across the Department;

(B) establish multiple user facilities to serve as data enclaves capable of securely storing data sets created by Federal agencies, institutions of higher education, nonprofit organizations, or industry at National Laboratories; and

(C) promote collaboration and data sharing between National Laboratories, research entities, and user facilities of the Department by providing the necessary access and secure data transfer capabilities.

(3) Report.—Not later than 2 years after December 27, 2020, the Secretary shall submit to the Committee on Energy and Natural Resources of the Senate and the Committee on Science, Space, and Technology of the House of Representatives a report evaluating the effectiveness of the activities authorized under paragraph (1).

(4) Funding.—There are authorized to be appropriated to the Secretary to carry out this subsection $15,000,000 for each of fiscal years 2021 through 2025.

(d) Definition.—In this section, the term “National Laboratory” has the meaning given such term in section 15801(3) of title 42.

(Pub. L. 116–260, div. Z, title IX, § 9008, Dec. 27, 2020, 134 Stat. 2600.)

Editorial Notes

Codification

Section was formerly classified to section 5544 of this title. Section was enacted as part of the Energy Act of 2020, and not as part of the National Artificial Intelligence Initiative Act of 2020, which comprises this chapter.