What you need to know about Biden's AI Executive Order to advise your clients.
Practice tips for lawyers, and an outline-style list of highlights from Biden's Executive Order of October 30, 2023, regarding AI.
President Biden’s massive Executive Order issued October 30, 2023 (the Order) is all about AI. Although it’s not the first executive order regarding AI, it is by far the broadest and most sweeping. It’s also the most aspirational.
More than anything, this Order is a signal to the U.S. and the global community that this Administration is aware of the potential for good and evil posed by AI, and is taking steps to address the risks and take advantage of the benefits. This should have come from the Legislative Branch, not the Executive Branch. But the Legislative Branch has taken no action on AI, even though generative AI has been widely available for almost a year.
For the typical lawyer, this Order probably won’t have a huge impact on your life or your practice, but here are a few practice tips.
PRACTICE TIPS:
If you are advising clients about using AI in their business or organization, you should refer them to the NIST AI Risk Management Framework. (See footnote 1.) Check back for our series of articles on the NIST AI framework.
If your clients are adopting AI in their business, they should understand the AI model they are using, and how it was trained.
If your clients are developing a large AI Model, they will be required to report how it is trained, and results of testing.
If your client is acquiring or developing a large-scale computing cluster they must notify the government.
If you use AI in your criminal practice, monitor the report that the Attorney General will create in the next year - this report will assess and recommend changes to the criminal justice system related to the use of AI.
The Order starts out with eight Guiding Principles:
1. Artificial Intelligence must be Safe and Secure.
2. The US needs to adopt and adapt to the use of AI. There must be fair, open and competitive adoption of AI. The US will work to stop unlawful collusion and address risks posed by “dominant firms.”
3. AI should be used fairly, taking into consideration the needs and desires of workers. And the US will assist in cultivating competent AI workers.
4. AI must be used to advance equity, and not to exacerbate existing biases.
5. Consumer protections will remain in place.
6. Privacy and Civil Rights concerns are especially important in light of AI’s capability to reconstitute once-private information, so we need to use Privacy Enhancing Technologies (PETs).
7. The Federal Government will manage risks from its use of AI, and will seek to attract AI workers and specialists, and train workers to be able to use AI.
8. The U.S. will continue to lead the way to promote the use of safe and responsible AI.
These principles are followed by a list of definitions. Of note is the definition of Artificial Intelligence.
“AI has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
Section 3(b). This is a broad definition of AI, which indicates the Administration’s desire to have these rules, reports, and requirements apply broadly.
Much of the Order sets deadlines for various agencies to create reports and implement safeguards, and it establishes new bodies such as the Artificial Intelligence Safety and Security Board and the White House AI Council.
Highlights:
Here are highlights of the Order's sections to give you an overview, with sections referenced for your convenience.
1. Definitions regarding AI. Section 3
2. Developing Guidelines, Standards, and Best Practices for AI Safety and Security by:
a. Making an AI companion to NIST AI 100-1 for generative AI. (See footnote 1)
b. Developing guidelines for appropriate AI use; Section 4.1
c. Developing guidelines for red-teaming testing in order to ensure safe, secure, and trustworthy systems. Section 4.1(iii)
d. Ensuring safe and reliable AI in regard to dual-use foundation models (models with tens of billions of parameters) by:
i. Requiring reports and records by companies showing:
1. Physical and cybersecurity protections taken to assure the integrity of their systems against “sophisticated threats;”
2. Ownership and possession of model weights of dual-use foundation models, and physical and cybersecurity protections for those model weights; Section 4.2a(i)(B)
3. Compliance with the NIST AI Risk Management Framework, and the results of red-team testing. (See footnote 1.) Section 4.2a(i)(A-C)
4. Acquisition of large computer clusters, or systems with large computing capabilities. Section 4.2b
5. Requiring reports on, and vetting of, foreign companies purchasing Infrastructure as a Service (IaaS) from U.S. providers; Section 4.2(c)
6. Maintaining records of resellers of U.S. IaaS products – U.S. IaaS resellers will be required to verify the identity of any foreign person who obtains an IaaS account; Section 4.2(c)
ii. Requiring DHS to report regarding risk assessments; Section 4.3a(i)
iii. Requiring all relevant federal agency heads to review AI risks to critical infrastructure and establish guidelines; Section 4.3a(iii, iv)
iv. Establishing the Artificial Intelligence Safety and Security Board; Section 4.3a(v)
v. Requiring the Department of Defense and the Department of Homeland Security to report on cybersecurity vulnerabilities and fixes; Section 4.3b
vi. Requiring DHS and the Department of Energy to evaluate the threat of AI being used to enhance chemical, biological, radiological, and nuclear (CBRN) threats; Section 4.4a
vii. Creating a framework regarding the potential use and misuse of genetic research; Section 4.4b (including updating records, Section 4.5b)
viii. Adapting existing tools and practices related to AI-created content, such as with labeling or watermarks; Section 4.5a
ix. Preventing AI-created child pornography and AI-created pornography depicting actual people; Section 4.5a
x. Assessing the risks, benefits and implications of widely-available model weights and propose policy and regulatory recommendations; Section 4.6
xi. Promoting the safe use of Federal Data to train AI models by making data more accessible and formatted for use in data systems; Section 4.7
xii. Requiring a National Security Memorandum that will guide and coordinate relevant national security agencies in their use and oversight of AI
3. Emphasizes the Administration’s desire to attract foreign AI talent to the U.S. by such means as:
a. Streamlining the visa process and petitions and applications; Section 5.1a(i)
b. Revising the H-1B program; Section 5.1d(ii)
c. Updating the occupations that qualify for H-1B visas to include AI and other STEM-related occupations; Section 5.1e
4. Promotes Innovation by:
a. Creating 4 new National AI Research Institutes, in addition to the 25 that are currently funded by the US Government; Section 5.2a(iii).
b. Establishing a pilot program to train 500 AI researchers by 2025; Section 5.2b
c. Issuing guidance to the U.S. Patent and Trademark Office; Section 5.2c(ii)
d. Considering the Copyright Office’s study regarding potential changes and/or executive actions on AI copyright issues, such as the scope of protection for AI-generated works and the treatment of copyrighted works used to train current AI models; Section 5.2c(iii)
e. Developing a training, analysis, and evaluation program to mitigate AI-related risks, including collecting reports of AI-related IP theft and sharing information between agencies such as the FBI and U.S. Customs and Border Protection; Section 5.2d
f. Issuing guidance for law enforcement professionals; Section 5.2d(iv)
g. Working with Health and Human Services (HHS) to protect personal health-related information; HHS will prioritize grants and awards to support responsible AI development in the health sector; Section 5.2e
h. Improving the quality of veterans’ healthcare by hosting two three-month AI Tech Sprint competitions, which will include mentorship, expert feedback, potential contract opportunities, and technical support; Section 5.2f(i)
i. Improving our resilience to climate change. Section 5.2g
5. Promotes Competition, especially in regard to semiconductor chips—necessary for AI computing—and encourages small businesses to join the market. Section 5.3
6. Supporting Workers Section 6.0
a. Developing best practices; Section 6.0a(ii)
b. Monitoring job displacement risks and career opportunities; Section 6.0b
c. Tracking compensation to workers affected by AI; Section 6.0b(iii)
d. Fostering diversity; Section 6.0c
7. Advancing Equity and Civil Rights
a. Implementing and enforcing current Federal laws regarding civil rights, civil liberties, and discrimination as they relate to AI; Section 7.1a
b. Promoting equity and nondiscrimination in the criminal justice system, including in forecasting, predictive policing, and prison management tools; Section 7.1b
c. Advancing the presence of AI experts in law enforcement; Section 7c
d. Protecting civil rights in regard to government benefits and programs; Section 7.2, including:
i. agricultural public benefits Section 7.2(b)(ii);
ii. housing and real estate transactions Section 7.2(c)
iii. tenant screening Section 7.2c(i)
iv. people with disabilities Section 7.3d
8. Protecting Consumers, Patients, Passengers, and Students Section 8
a. Encouraging regulatory agencies to use their full range of authorities to protect consumers from fraud, discrimination, and threats to privacy; Section 8a
b. Developing a strategic plan for the use of AI in healthcare; Section 8b
c. Protecting Personally Identifiable Information, including in health and human services, with attention to information protection, complaints, clinical errors, bias, and discrimination; Section 8b
d. Developing a strategy for regulating the use of AI in drug development; CITE
e. Promoting safe and responsible development and use of AI in transportation, including autonomous mobility ecosystems; Section 8c
f. Ensuring fair and nondiscriminatory use of AI in education; Section 8d
g. Encouraging the FCC to consider how AI will affect efficiency in communications, including using AI to combat unwanted robocalls and robotexts; Section 8e
9. Protecting Privacy by evaluating data brokers, and the general use of Commercially Available Information; Section 9
10. Advancing the Federal Government’s use of AI (Section 10) by
a. coordinating use of AI across Federal Government agencies,
b. identifying appropriate uses of, and innovation with, AI,
c. determining minimum risk-management practices,
d. developing a watermark or label to identify output from generative AI
e. determining training and public reporting on AI
f. limiting specific uses of generative AI
g. increasing agency investment in AI
h. increasing AI talent in Government Section 10.2
i. training government employees in the use of AI Section 10.2(g)
j. addressing gaps in AI talent in national defense Section 10.2(h)
11. Addressing global use of AI, including AI risk management, standard terminology, global standards, and a report regarding cross-border risks of AI; Section 10.3
12. Creating a White House AI Council. Section 11
Of all these, my personal favorite might be seeing what progress AI models can make in stopping robocalls and robotexts. Yes please!
Footnote 1: https://www.nist.gov/itl/ai-risk-management-framework