Navigating the AI Legal Landscape: A Look at Where We are Now
What Types of Laws are States Passing in Regard to AI?
Image by DALL·E in response to the prompt “create an image for this article”
In Baltimore, Maryland, an AI-generated voice clip of a high school principal making racist and antisemitic comments alarmed students, parents, and teachers. The backlash against the principal was so fierce that he was placed on leave, with police guarding his home. According to an article by ABC News, a vengeful athletic director created the voice clip using AI and then spread it to teachers and social media. Police detectives enlisted the aid of AI experts to unravel the recording and expose the crime. And that was the work of a single amateur, angry at being accused of possibly misusing school funds. Now imagine that type of misuse of AI, at scale.
As artificial intelligence (AI) continues to transform industries and daily life, effective AI-related laws are crucial for governing its use. In recognition of this, many states are rushing to get some type of AI-related law passed. And none too soon.
We have already seen a few instances of AI misuse intended to sway political outcomes. For example, some Democratic voters in New Hampshire received robocalls of what sounded like President Biden telling people not to vote in the primary election. The FCC recently ruled that robocalls using AI-generated voices are illegal under existing federal law, and hopefully this will have some noticeable impact on the misuse of AI in upcoming political battles. But the federal government is not yet ready to pass a more detailed, comprehensive AI law, so individual states have begun to pass their own. This article considers the most important elements of an AI-related state law, how some of the most recently passed state laws measure up to those elements, and finally how these state laws compare to international laws and proposed laws.
Essential Elements of AI-Related Laws
No two people are likely to agree on what the ideal law governing AI looks like. And an AI-related law that can actually be passed by federal or state governments in the US is not likely to contain every desirable element. But here are some of the most important elements of AI laws.
Transparency
Ensuring accountability and trust in AI applications by making their operations understandable to the public is vital. Transparency requirements address the "black box" nature of AI systems, mitigating the risk of opaque decision-making processes that could harm users or erode trust in AI technologies.
Privacy Protections
Privacy protections are critical because AI systems can process vast amounts of personal information. These safeguards help prevent privacy breaches and unauthorized data exploitation, and they are paramount to maintaining individual privacy and security in the digital age.
Bias and Discrimination Mitigation
It is becoming a common adage that AI models are only as good as the datasets they were trained on. And since no dataset to date has been perfect, no AI model to date is perfect. Yet that is no excuse for allowing imperfect AI models to pass on, or exacerbate, existing biases. Fairness and equity in AI decisions are essential, particularly in areas like employment, housing, insurance purchasing, and law enforcement. Regulations to mitigate bias ensure that AI systems do not perpetuate existing societal biases encoded in their training data, promoting fairness and preventing discrimination. This in turn should help ensure compliance with civil rights and anti-discrimination laws.
Ethical Guidelines and Standards
Aligning AI development and usage with societal values and ethical norms addresses ethical dilemmas and conflicts, such as surveillance overreach or lethal autonomous weapons. Perhaps lethal autonomous weapons are not our greatest daily concern, thankfully. But ethical guidelines and standards are nonetheless important elements, and they would apply to far more common issues such as governmental or private surveillance, the capture of our facial and physical images, the monitoring of speed and driving, and even the use of predictive analytics to estimate the likelihood of criminal recidivism. Although ethics is likely to be more abstract and difficult to legislate than other areas of an AI law, laws regulating it are essential to sustainable AI use. Further, ethical guidelines would help developers and users navigate the complex moral landscapes in which AI operates.
Advisory Bodies
AI capabilities, and generative AI in particular, are advancing at an astonishing rate. Advisory bodies could provide ongoing oversight and expert advice on AI technologies and their implications. These entities could also craft regulations with significantly more agility, so that rules evolve with technological advancements and laws remain relevant and effective.
Enforcement Mechanisms
Effective enforcement mechanisms ensure compliance with AI laws through penalties or corrective actions. A law without any enforcement mechanism is a suggestion rather than a rule. Monitoring and enforcement capabilities are necessary to deter violations and provide means for redress, thereby giving the law real effect.
Public Engagement and Education
In a recent Pew Research survey, AI use was higher among Americans than at this time last year, but unsurprisingly many do not trust AI tools, especially in relation to political races. Informing and involving the public in AI policymaking promotes democratic participation and enhances public understanding and acceptance of AI technologies. This helps to prevent misinformation and fear, fostering a more informed public discourse about AI.
Collaboration with Industry and Academia
Legislative collaboration with industry brings practical, technologically informed insight to lawmaking, helping ensure regulations that foster innovation while protecting the public interest. This balance is crucial for the continued growth and integration of AI technologies into society. Likewise, academia can provide multidisciplinary insight, deep expertise, and research capabilities, among other things.
The Status of Our States’ AI-Related Laws
Many states have recently passed laws that begin to address AI-related concerns. None of these laws to date satisfies all of the criteria above. The current state laws can be broadly sorted into four (4) distinct categories:
1. Advisory Bodies,
2. Misuse / Criminal Misuse of AI,
3. Anti-Discrimination and Anti-Bias Laws, and
4. Consumer Protections.
1. Advisory and Oversight Frameworks
Several states have established advisory bodies to evaluate the impacts of AI and recommend future legislative measures. These bodies are typically composed of experts from various fields who guide states in forming balanced, informed AI policies. These are also likely the most easily understood, and most easily passed, of the AI laws, since they are informative rather than prohibitive or prescriptive.
For example, Oregon’s recently passed HB 4153[1] establishes a Task Force on Artificial Intelligence to standardize AI terminology and guide legislative efforts. The diverse composition of the task force includes members from academia, government, and consumer advocacy groups. Likewise, in House Bill 2060, Texas formed an AI Advisory Council that oversees the ethical deployment of AI technologies within government, ensuring they are used to benefit public services without compromising ethical standards.
These advisory frameworks are crucial for states to adapt regulations as AI technologies evolve, ensuring that laws remain relevant and effective. But they are only a starting place, not an ending place.
2. Protections Against Misuse and Criminal Misuse of AI
To combat the potential misuse of AI technologies, especially in replicating individuals' likenesses without consent, several states have enacted laws specifically targeting the unauthorized use of AI-enhanced or AI-created personal attributes such as voice and image. These laws are the rough statutory equivalent of torts like the right of publicity.
One of the most specific misuse-prevention laws was passed by Tennessee. The state’s aptly named law, clearly a nod to the “King,” is the Ensuring Likeness Voice and Image Security (ELVIS) Act, which protects musicians and artists from unauthorized AI-generated replicas of their voices, safeguarding their personal and professional identities.
3. Regulations to Prevent Bias and Discrimination
With AI systems increasingly involved in decision-making processes, states are implementing regulations to prevent AI from perpetuating bias or discrimination, ensuring fairness and transparency. Ever since Amazon’s experimental AI recruiting tool was shown to be biased in favor of men and against women, we have been more aware of the potential for even nuanced bias in AI decision-making. To combat this, New York (A. 9314) proposed legislation that focuses on automated employment decision tools (AEDTs), regulating their use in hiring to prevent discrimination and protect personal data integrity.
Likewise, two New Jersey bills, A. 3854 and A. 3911, regulate the use of AEDTs in hiring and AI-enabled video interviews, respectively, requiring bias audits, transparency, and informed consent to prevent discrimination and ensure fairness in employment practices. These regulations underscore the importance of ethical AI use, particularly in areas such as employment, where biased algorithms can have significant impacts on people's lives.
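Neither New Jersey bill prescribes a particular audit methodology, but the arithmetic at the core of the most commonly cited audit metric is simple. The sketch below is a minimal illustration rather than anything the bills require: it computes the "adverse impact ratio" associated with the EEOC's four-fifths rule, and all function names and data in it are hypothetical.

```python
# Minimal sketch of one metric a bias audit might report: the adverse
# impact ratio from the EEOC's four-fifths rule. The NJ bills do not
# prescribe this metric; names and data here are hypothetical.
from collections import Counter

def selection_rates(candidates):
    """Selection rate per group, from (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        selected[group] += was_selected
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact_ratios(candidates):
    """Each group's selection rate divided by the highest group's rate.

    Under the four-fifths rule, a ratio below 0.8 is a red flag.
    """
    rates = selection_rates(candidates)
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Example: a screening tool advances 60% of group A but only 30% of group B.
candidates = ([("A", True)] * 60 + [("A", False)] * 40
              + [("B", True)] * 30 + [("B", False)] * 70)
print(adverse_impact_ratios(candidates))
# {'A': 1.0, 'B': 0.5} -> group B falls well below the 0.8 threshold
```

In practice a statutory audit would report more than one metric, and would have to grapple with harder questions the arithmetic hides, such as how a candidate's group membership is determined in the first place.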
These laws highlight a proactive approach to curbing AI's potential harms, particularly in sensitive areas like employment and personal identity. Similarly, Virginia's HB 747 requires developers and deployers of AI systems to conduct impact assessments and maintain transparency about the capabilities and limitations of their AI applications. Although bias testing is simply good practice, and a sensible way to reduce liability even absent a mandate, this is certainly a valuable and necessary category of law for states to consider.
Some states have enacted or proposed laws addressing AI applications within specific industries, reflecting unique local priorities or prevalent industries. For example, the New York bill (A. 9314) mentioned above sets standards for transparency and accountability specifically in hiring practices, with the aim of ensuring that AI tools do not result in discriminatory outcomes.
And the two New Jersey bills (A. 3854 and A. 3911) mentioned above not only address the use of AEDTs but also require that employers using AEDTs inform candidates of that use, allow candidates to have their interview videos deleted afterward, and post regular public bias audits of the AI systems.
These targeted regulations are indicative of a nuanced approach to AI legislation, where specific uses are regulated to address the distinct challenges and risks posed by AI in different contexts.
4. Consumer-Facing AI Rules and Regulations, and Data Privacy
Some states have directed their focus toward consumer protections. For example, Colorado is considering a bill, SB 24-205, that would require those using AI technology to notify consumers that they are interacting with AI. Such notice matters because it could affect the way consumers' data is used, stored, and monetized, among other things.
California was the first state to pass a law aimed at protecting consumers from being deceived by AI. Its 2018 Bolstering Online Transparency Act (BOT Act) allows businesses and individuals to avoid liability for deceptive "bot" usage by posting a clear, conspicuous disclosure reasonably designed to inform users that they are interacting with a bot. However, this only applies if the bot user is trying to deceive a potential buyer.
The standout frontrunner of these recent forays into state-passed AI laws is Utah. The state's two newly minted laws, SB 149 and SB 131, are aimed at consumer protection. SB 149 requires companies to adequately disclose their use of generative AI (GenAI) to consumers or face liability; it also creates an Office of Artificial Intelligence Policy to administer a statewide AI program. SB 131 requires disclosure of the use of AI, or what it calls "synthetic media." These laws reflect Utah's proactive and structured approach to addressing the challenges and opportunities presented by AI technologies. The state's legislation not only aims to harness the benefits of AI but also safeguards against potential risks, making it a notable example in the evolving landscape of AI governance.
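What would compliance with a disclosure law like the BOT Act or Utah's SB 149 look like inside a product? Neither statute prescribes particular wording or placement, so the sketch below is only an assumption-laden illustration: a hypothetical chatbot wrapper that surfaces a clear notice at the start of the interaction rather than burying it in the terms of service.

```python
# Hypothetical illustration of the up-front disclosure that laws like
# California's BOT Act and Utah's SB 149 contemplate. Neither statute
# prescribes this wording or mechanism; both are assumptions here.
AI_DISCLOSURE = ("You are chatting with generative artificial "
                 "intelligence, not a human representative.")

def generate_ai_reply(user_message: str) -> str:
    # Stand-in for a real GenAI call; hypothetical placeholder only.
    return f"Thanks for your message: {user_message!r}"

def respond_to_consumer(user_message: str, first_turn: bool) -> str:
    """Prepend a clear, conspicuous disclosure on the first turn."""
    reply = generate_ai_reply(user_message)
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

print(respond_to_consumer("When does my policy renew?", first_turn=True))
```

The design point the statutes share is placement: a disclosure "reasonably designed to inform," as the BOT Act phrases it, has to be visible where the interaction actually happens, not merely available somewhere on the site.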
In addition to these laws, several states have either passed or are considering laws to prevent the public from being misled by political campaigns using AI. For example, New Hampshire passed a law prohibiting deceptive and fraudulent deepfakes, whether in print, visual, or audio format. Oregon passed a similar law.
How These Current State AI Laws Compare to the EU’s AI-Related Laws
The AI laws in various US states generally emphasize consumer protection, transparency, and the mitigation of bias, with diverse approaches and focuses reflecting the unique priorities and concerns of each state. When compared to international AI laws, especially those developing in the European Union (EU), we observe a more unified and stringent approach internationally, particularly under the proposed EU Artificial Intelligence Act.
The EU's framework is characterized by comprehensive regulations that classify AI systems based on risk levels, enforce stringent compliance requirements for high-risk applications, and set out broad obligations across transparency, accountability, and ethical considerations. The AI Act focuses on ensuring that AI systems operate transparently, are free of biases, and respect fundamental rights, aligning closely with principles of human oversight and data protection standards like the GDPR. (EUR-Lex; World Economic Forum; MIT Technology Review)
European regulations also emphasize the need for AI systems to provide clear explanations of their functioning and decisions, which is intended to foster trust and understanding among users. For high-risk AI, such as those involved in critical infrastructure, healthcare, or policing, the EU mandates rigorous assessments and conformity with strict operational standards before deployment.
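To make the tiered structure concrete, here is a toy sketch of how the AI Act's risk tiers sort example systems into obligations. The four tier names reflect the Act's widely reported structure; the example systems and the simplified obligations attached to them are illustrative assumptions, not the Act's actual legal tests.

```python
# Toy illustration of the EU AI Act's risk-tier structure. The four
# tiers are widely reported; the example systems and obligations are
# simplified assumptions for illustration, not legal definitions.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., social scoring by governments)"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency duties (e.g., disclose chatbots, deepfakes)"
    MINIMAL = "no new obligations"

# Hypothetical, simplified mapping of example systems to tiers.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "resume-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

The contrast with the US approach is visible even in this toy: the EU starts from a single classification scheme and derives obligations from it, while the state laws described above each regulate one slice of the problem at a time.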
In contrast, US state laws often tackle specific issues such as privacy concerns in consumer data handling, transparency in law enforcement's use of technology, or fairness in employment practices driven by AI. For instance, California has been a leader in privacy and data protection with the CCPA, while other states like New York focus on fairness in AI-driven employment decisions. However, these state-level initiatives in the US can lack the uniformity and comprehensive scope observed in the EU's approach.
Comparison by Topic
1. Transparency and Accountability
International: The European Union’s AI regulation proposals strongly emphasize transparency, especially for high-risk AI systems. They require clear information on AI system capabilities and decision-making processes.
US States: States like California and Utah have begun to implement transparency measures, particularly in consumer protection, but generally US state laws are less prescriptive than EU regulations regarding transparency requirements across all AI applications.
2. Privacy Protections
International: The EU's General Data Protection Regulation (GDPR) sets a high standard for data privacy, including specific provisions on automated decision-making and extensive rights for individuals to control their data.
US States: The California Consumer Privacy Act (CCPA) is similar in spirit to the GDPR, offering robust data protection rights. However, other states vary significantly in the level of data privacy protection, often not reaching the comprehensive coverage found in the GDPR.
3. Bias and Discrimination Mitigation
International: The EU guidelines and upcoming regulations stress the importance of unbiased AI, requiring thorough testing for bias and measures to mitigate it before deployment.
US States: New York and New Jersey focus on specific applications like employment, requiring bias audits for AI tools used in hiring. However, there is less consistency across states in comprehensive anti-bias measures.
4. Ethical Guidelines and Standards
International: Organizations like the Organization for Economic Co-operation and Development (OECD) and UNESCO have developed detailed ethical guidelines for AI that many countries have endorsed, focusing on fairness, transparency, and accountability. OECD is an international organization of 38 member countries that provides a platform where governments can work together to share experiences and seek solutions to common problems. This lends itself well to considering broad and unified AI policies.
US States: While some states have advisory bodies to consider ethical standards (Texas, North Dakota, and West Virginia among them), there is no uniform approach to embedding broad ethical guidelines into state law. If the U.S. federal government were to create a platform similar to the OECD's, it would facilitate states working together to share experiences and expertise.[2]
5. Regulatory and Advisory Bodies
International: Many countries have established national AI advisory bodies (e.g., the UK’s Frontier AI Taskforce, which succeeded the UK’s AI Council) that play a significant role in shaping AI policy and recommendations.
US States: States like Texas and Oregon have developed advisory councils, but their scope and influence can vary, and not all states have such bodies.
6. Public Engagement
International: Various countries have engaged in public consultations to shape their national AI strategies.
US States: Public engagement is less emphasized in state AI laws, though some states include stakeholder engagement in the process of developing AI regulations.
7. Industry Collaboration
International: International regulations often include mechanisms for industry input and compliance, as seen in the EU’s Coordinated Plan on AI.
US States: Industry collaboration exists at the state level in the US, with states like California actively involving tech companies in legislative processes, but it is not uniform. To date, one of the best platforms for coordination is the National Conference of State Legislatures.
Overall, the EU's regulations are set to establish a broad legal framework that could serve as a benchmark for global AI policies, potentially influencing how the US and other regions approach the regulation of AI technologies. This suggests a movement towards more harmonized global standards, where thorough risk assessments, transparency, and accountability become central tenets of AI regulation worldwide.
Although the Biden Administration has issued two (2) executive orders, they are more aspirational, asking individual federal administrative bodies to issue statements on their AI use. Previous administrations have likewise taken AI-related actions, but to date we still lack a comprehensive law, or set of AI-related laws, to regulate AI and protect consumers.
Conclusion
As AI continues to evolve, state legislatures are actively working to ensure that its integration into society is both beneficial and regulated. By establishing advisory councils, enacting protections against misuse, and addressing potential biases, states are laying a foundation for the ethical development and application of AI technologies. This state-by-state legislative approach not only reflects local priorities and concerns but also contributes to a broader national dialogue on the responsible governance of AI. States would almost certainly benefit, however, from a federally directed platform to facilitate this state action. As technology progresses, the interplay between these state laws and emerging federal guidelines will be crucial in shaping a balanced, equitable, and innovative future for AI in America.
[1] Passed March 4, 2024.
[2] The National AI Advisory Committee exists but is “tasked with advising the President and the National AI Initiative Office on topics related to AI.”