Executive Summary
The recent budget cuts under the Trump administration, coupled with the rescission of Biden’s AI executive order, mark a significant shift in AI regulation in the United States. These policy changes prioritize rapid AI innovation and reduced regulatory oversight, creating both opportunities and risks for businesses and legal professionals.
For AI companies, consumers, and users, this marks a shift in emphasis from safe and ethical development and use toward accelerated product development, increased investment opportunities, and a competitive edge over regions with stricter AI governance, such as the EU and China. However, these advantages come with notable legal challenges, including heightened liability risks, ethical concerns related to bias and misinformation, cybersecurity vulnerabilities, and regulatory uncertainty in global markets.
Introduction
The ground beneath AI in the U.S. shifted seismically last month. On January 23, 2025, three days after being sworn in as president of the United States, President Trump revoked President Biden’s Executive Order on AI and issued his own executive order, titled "Removing Barriers to American Leadership in Artificial Intelligence," aiming to eliminate perceived obstacles to AI innovation and promote a more flexible regulatory environment, according to the Fact Sheet issued by the White House.
Trump’s actions marked a sharp change in focus from Biden’s. Trump’s creation of the Department of Government Efficiency (“DOGE”) has resulted in sweeping budget cuts. There is a great deal of speculation that DOGE will significantly reduce funding at key agencies such as the National Institute of Standards and Technology (NIST) and the U.S.’s newly formed AI Safety Institute (US AISI), which is housed under NIST. Stakeholders across the AI ecosystem face a new regulatory reality. This shift from the previous administration's more structured approach to a far less regulated framework creates both opportunities and challenges for AI developers, implementers, and users.
This article provides a structured risk-benefit analysis to help legal professionals guide their clients based on their business priorities. Additionally, it explores the geopolitical implications of U.S. AI policy shifts and offers practical strategies for lawyers advising clients in AI-related industries. Ultimately, while deregulation may benefit short-term AI growth, it demands strategic legal navigation to address evolving risks and global compliance challenges.
Safety versus Innovation
The Previous Regulatory Framework
President Biden took a measured approach to AI development and governance; some might say an overly cautious one. Biden’s executive orders on AI centered on knowledge-gathering and groundwork. For example, Biden's Executive Order 14110 created the US AI Safety Institute. The US AISI was designed to develop testing methodologies, advise policymakers, and coordinate with international counterparts on AI safety. This framework emphasized risk assessment and mitigation for advanced AI systems, particularly frontier models with significant capabilities. At the time of the administration change, US AISI was in the midst of establishing comprehensive oversight frameworks to address potential risks associated with AI, including ethical considerations and bias mitigation. It now faces an uncertain future.
Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023)
This Executive Order intended to balance innovation with responsible AI deployment. It required AI developers to test and assess AI models for risks before deployment, particularly in high-stakes areas like national security, healthcare, and finance. It also emphasized privacy protection, directing agencies to develop new safeguards against AI-enabled surveillance and data misuse. To combat AI-driven discrimination, the order reinforced civil rights protections, ensuring AI systems do not perpetuate biases or unfair outcomes in hiring, lending, and law enforcement. Additionally, it sought to protect workers from AI displacement, promote fair competition in AI markets, and prioritize federal AI research and development to maintain the U.S.'s technological edge. It also encouraged international collaboration on AI standards and governance.
Executive Order 14141: Advancing United States Leadership in Artificial Intelligence Infrastructure (January 14, 2025)
Executive Order 14141 built on the foundation of EO 14110, emphasizing domestic AI infrastructure to enhance national security, economic competitiveness, and energy sustainability. The Order attempted to create sustainable AI by mandating that AI infrastructure be powered by clean energy sources, including nuclear, geothermal, wind, and solar power, and called for upgrades to the electric grid to handle AI’s growing energy demands. Recognizing AI’s transformative economic potential, EO 14141 prioritized workforce development, requiring companies to adhere to high labor standards while creating new AI-related jobs. Additionally, the Order attempted to secure domestic AI infrastructure while promoting international AI cooperation with allied nations.
The Current Deregulatory Approach
President Trump's administration has implemented several changes to the existing framework. Trump rescinded former President Biden's executive order on AI safety, asserting that it imposed unnecessary government control and hindered innovation. His administration’s new directive prioritizes innovation over safety and regulation.
Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence (January 23, 2025)
President Trump’s Executive Order 14179 revokes Biden’s Executive Order 14110 and eliminates AI policies that were perceived as restrictive to innovation. The order prioritizes U.S. global AI dominance by removing ideological biases, regulatory barriers, and government-imposed constraints on AI development. It directs the creation of an AI Action Plan within 180 days to enhance economic competitiveness, national security, and technological leadership. The order also mandates a comprehensive review of AI-related regulations, requiring agencies to suspend, revise, or rescind any rules that hinder AI growth. Additionally, it orders the revision of the Office of Management and Budget’s guidance on AI oversight, shifting policy toward market-driven AI expansion. By promoting free-market innovation over regulatory oversight, EO 14179 seeks to position the U.S. as the global AI leader, reducing government intervention and accelerating AI deployment.
In addition, as DOGE sweeps through government-funded programs slashing what it considers waste, it may target agencies like NIST. NIST is the nerdy governmental agency that creates standards for AI safety guidelines, risk management frameworks, bias mitigation strategies, and cybersecurity protocols. Although the standards it creates are voluntary, losing its frameworks and its testing of AI tools could hamper the quality of AI tools in the U.S.
Benefits of Reduced Regulation of AI
Deregulation allows for accelerated AI innovation. As companies face fewer regulatory barriers, they can develop AI applications more rapidly. The ability to expedite market entry is particularly beneficial for AI startups and enterprises seeking first-mover advantages in emerging technologies.
Another advantage is the potential for increased investment and market growth. A more relaxed regulatory environment generally attracts venture capital and industry investment, as businesses can operate with reduced compliance costs. This deregulated landscape may boost profitability and foster a more dynamic AI sector in the U.S. That could be a boon to the American AI industry in light of the introduction of DeepSeek, an open-source AI tool that is competitive with general-purpose AI tools like ChatGPT, Gemini, and Claude.
Meanwhile, this deregulation could give the U.S. a competitive edge over Europe and China, where AI regulation is tightening. Unlike the EU AI Act and China’s AI restrictions, the U.S.’s new business-friendly stance on AI provides companies with more operational flexibility, particularly in industries such as finance, healthcare, and defense.
Risks and Legal Challenges
While deregulation presents growth opportunities, it also introduces AI safety risks and liability exposure. Without oversight from institutions like the U.S. AI Safety Institute, companies may face increased litigation risk over AI failures. The lack of mandatory safety assessments raises concerns about AI hallucinations, misinformation, and bias. Absent careful oversight by users and AI professionals, less-regulated AI tools could generate more legal disputes and reputational damage.
Cybersecurity vulnerabilities also become a critical challenge in a deregulated environment. For example, AI-driven automation in hacking and fraud increases corporate liability, while the lack of enforceable AI security standards raises national security risks.
Finally, companies operating globally must navigate both regulatory uncertainty in the U.S. and international compliance risks. AI firms expanding into the EU or China may encounter legal hurdles if their models fail to meet strict foreign regulatory standards. Additionally, the absence of U.S. federal AI safety rules could delay cross-border AI trade and partnerships, creating further business uncertainty.
The Rising Role of State Laws in AI Regulation
The retreat from federal oversight of AI creates a regulatory vacuum that states are poised to fill, following established patterns in privacy, data security, and consumer protection law. This shift to state-driven regulation represents a significant change in the compliance landscape that legal practitioners must prepare their clients to navigate. State legislatures are likely to accelerate their existing regulatory initiatives, creating a patchwork of requirements similar to what occurred following the absence of comprehensive federal privacy legislation. California, with its technological and economic prominence, will likely set de facto national standards through legislation building upon its existing consumer privacy framework. Other states with significant technology sectors—including New York, Massachusetts, Washington, and Colorado—are already developing their own approaches to algorithmic accountability, automated decision-making, and AI transparency requirements.
According to the International Association of Privacy Professionals (IAPP), there were more than thirty (30) private-sector AI bills active in various states in the U.S. as of February 24, 2025. That number does not include government-only or sector-specific AI legislation. We will no doubt see an increase in that number before the end of the year.
Increased Cost of Compliance for Companies
Nonuniform state-level AI governance creates substantial compliance challenges for companies operating in interstate commerce. It's rare for a business to be completely outside of interstate commerce: modern supply chains and financial systems mean even local businesses engage in some interstate activity through banking, shipping, or online transactions. In our internet-driven economy, lawyers advising such companies and litigating AI-related issues must be aware of all relevant state AI and data privacy laws and regulations. Companies facing potentially conflicting requirements across jurisdictions will see higher compliance costs and increased litigation risk as plaintiffs' attorneys leverage state-specific statutory protections. The absence of federal preemption means that companies cannot rely on a single compliance framework; instead, they must address the most stringent requirements across all operating jurisdictions or develop state-specific protocols, both of which are resource-intensive approaches.
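To make the two approaches concrete, here is a minimal sketch of how a compliance team might encode per-state obligations in software and collapse them into a single most-stringent baseline. Every field, state entry, and threshold below is hypothetical and purely illustrative; none is drawn from any actual state statute:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StateAIRule:
    """Hypothetical summary of one state's AI obligations (illustrative only)."""
    state: str
    requires_impact_assessment: bool   # pre-deployment risk/impact assessment
    requires_disclosure: bool          # consumer-facing AI disclosure
    record_retention_days: int         # minimum audit-record retention

# Illustrative entries only -- not actual statutory requirements.
RULES = [
    StateAIRule("CA", requires_impact_assessment=True,  requires_disclosure=True,  record_retention_days=730),
    StateAIRule("CO", requires_impact_assessment=True,  requires_disclosure=False, record_retention_days=365),
    StateAIRule("TX", requires_impact_assessment=False, requires_disclosure=True,  record_retention_days=180),
]

def strictest_baseline(rules):
    """Collapse per-state rules into the single most stringent compliance baseline."""
    return StateAIRule(
        state="ALL",
        requires_impact_assessment=any(r.requires_impact_assessment for r in rules),
        requires_disclosure=any(r.requires_disclosure for r in rules),
        record_retention_days=max(r.record_retention_days for r in rules),
    )

print(strictest_baseline(RULES))
```

The alternative approach, state-specific protocols, would branch on the state field rather than collapsing everything to one baseline; either way, the cost of tracking and legal review grows with each jurisdiction added.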
Increased Cost of Development for AI Companies
For AI developers and users with a national footprint, this regulatory fragmentation may actually prove more burdensome than a single federal framework would have been. Staying current with every state's AI laws will demand more attention from law firms advising businesses. Similarly, differences in state AI laws will shape litigation strategy and outcomes in AI-related lawsuits.
In addition, industry-specific state regulations present additional complexity. Financial services, healthcare, employment, and education sectors will likely see targeted AI governance at the state level, with requirements tailored to sector-specific risks. Financial services AI applications may face scrutiny from state banking departments, while healthcare AI will intersect with existing state patient protection laws. Employment-related AI will increasingly fall under state labor departments and fair employment practices agencies, particularly for hiring, promotion, and termination decisions.
The Fallout for Lawyers
For legal practitioners, this evolving landscape demands a more sophisticated approach to compliance counseling. Monitoring state legislative developments becomes essential, as does coordinating AI governance with existing data privacy compliance. This will fuel an emerging field of AI ethics specialists. Strategic decisions about where to base AI operations may take on greater importance, potentially leading to concentration in states with more predictable or business-friendly regulatory approaches. Additionally, courts will increasingly shape AI governance through case law interpreting state consumer protection, privacy, and anti-discrimination statutes in the AI context.
The federalist approach to AI regulation may ultimately produce innovative and effective governance models through state experimentation. Still, in the meantime, it creates significant transitional challenges for organizations deploying AI technologies across multiple jurisdictions. Legal counsel must prepare clients for this more complex and fragmented regulatory environment, where compliance with a diminished federal framework represents only the beginning of their governance obligations.
Geopolitical Consequences
With the UK delaying its AI regulation plans to align more closely with the U.S., the lack of a federal AI safety agenda could lead to broader international repercussions. The U.S.’s withdrawal from structured AI governance efforts, such as the Paris AI declaration, may create fragmented global standards, making it harder to establish international AI safety norms.
The EU AI Act subjects high-risk AI systems to transparency and accountability requirements that U.S. companies may not meet under Trump’s significantly reduced regulation of AI. U.S. firms could lose access to the European market unless they develop separate AI models or adhere to the stricter EU rules.
Guidance for Legal Practitioners
Whether viewed as an opportunity or a challenge, AI’s changing regulatory landscape demands strategic legal navigation. Legal professionals should take a client-centered approach to AI regulation, tailoring advice based on business priorities. For clients prioritizing rapid innovation and market access, lawyers should focus on intellectual property protection, particularly AI-generated content and trade secrets, and help navigate evolving contractual liability clauses for AI-related products and services.
For clients emphasizing long-term risk management and compliance, legal counsel should develop internal AI risk management policies aligned with international best practices. Encouraging third-party audits and voluntary AI safety assessments can help businesses maintain credibility with regulators and consumers while mitigating legal exposure.
Best Practices
Best practices for lawyers navigating the shift in AI regulation:
Assess Client Priorities: Determine whether a client values rapid AI deployment or prioritizes risk management and regulatory compliance.
Advise Clients on Applicable State AI Regulations: Maintain an understanding of all applicable state AI laws and regulations.
Advise on AI Safety and Liability: Encourage voluntary AI safety assessments and internal risk management frameworks to mitigate liability and reputational harm. Document all safety and risk management efforts (a minimal sketch of such a record follows this list).
Draft Comprehensive AI Contracts: Develop liability clauses, user agreements, and ethical AI usage policies to protect client interests.
Encourage Ethical AI Practices: Recommend transparency and bias mitigation strategies to reduce reputational and legal risks. Encourage clients to document their ethical and bias mitigation strategies.
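As an illustration of the documentation advice above, the sketch below shows one way a client might keep a timestamped, append-only record of each safety assessment or bias-mitigation step. The field names, file format, and example values are all hypothetical; they are not drawn from any statute, regulation, or framework:

```python
import json
from datetime import datetime, timezone

def log_ai_risk_action(log_path, system_name, action, findings, reviewer):
    """Append one timestamped risk-management entry to a JSON-lines audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,   # which AI tool or model was reviewed
        "action": action,        # e.g., "bias audit" or "safety assessment"
        "findings": findings,    # summary of results and mitigations taken
        "reviewer": reviewer,    # who performed or signed off on the review
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: documenting a voluntary bias audit (all values hypothetical).
log_ai_risk_action(
    "ai_risk_log.jsonl",
    system_name="resume-screening-model-v2",
    action="bias audit",
    findings="Disparate-impact check passed; thresholds re-tuned and re-tested.",
    reviewer="Outside auditor (hypothetical)",
)
```

An append-only log like this is deliberately simple: entries are never edited after the fact, which makes the record easier to defend if a regulator or opposing counsel later questions when a given safety step occurred.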
Conclusion
The potential budget cuts to NIST and the uncertain future of the US AI Safety Institute mark a turning point for AI regulation and innovation in the U.S. While deregulation offers short-term business incentives, legal professionals must guide their clients through heightened legal risks and evolving global standards.
The shifting AI regulatory environment presents both challenges and opportunities for various stakeholders. Businesses poised to capitalize on relaxed regulations must weigh their growth ambitions against the risks associated with minimal oversight, including potential legal liabilities and reputational damage. On the other hand, firms operating in global markets must be cautious about conflicting regulatory expectations and ensure compliance with international AI governance frameworks.
For legal professionals, the task at hand is to help clients navigate this evolving landscape by providing strategic risk assessments and proactive compliance measures. Regardless of their clients’ approach to business and level of risk tolerance, lawyers must be well-versed in both domestic and international AI policies to offer comprehensive guidance. As AI continues to advance at an unprecedented pace, legal counsel will play a pivotal role in shaping responsible AI governance while ensuring that businesses remain competitive and compliant. Ultimately, AI regulation in the U.S. is still in flux, and future policy shifts could further alter the landscape. Legal professionals can help bridge the gap between innovation and responsible AI development.
© 2025 Amy Swaner. All Rights Reserved. May use with attribution and link to article.