The rapid rise in interest in – and adoption of – artificial intelligence (“AI”) technology, including generative AI, has resulted in global demands for regulation and corresponding legislation. Microsoft, for one, has pushed for the development of “new law and regulations for highly capable AI foundation models” and the creation of a new agency in the United States to implement those new rules, as well as the establishment of licensing requirements in order for entities to operate the most powerful AI models. At the same time, Sam Altman, the CEO of ChatGPT-developer OpenAI, has called for “the creation of an agency that issues licenses for the development of large-scale A.I. models, safety regulations, and tests that A.I. models must pass before being released to the public,” among other things.
The United States has “trailed the globe on regulations in privacy, speech, and protections for children,” the New York Times reported recently in connection with calls for AI regulation. The paper’s Cecilia Kang noted that the U.S. is “also behind on A.I. regulations” given that “lawmakers in the European Union are set to introduce rules for the technology later this year” in the form of the Regulation on Artificial Intelligence (better known as the “AI Act”). Meanwhile, China currently has “the most comprehensive suite of AI regulations in the world, including its newly released draft measures for managing generative AI.”
What the U.S. has done to date is release non-binding guidance in the form of the AI Risk Management Framework, the second draft of which was released by the National Institute of Standards and Technology in August 2022. Intended for voluntary use, the AI Risk Management Framework aims to enable companies to “address risks in the design, development, use, and evaluation of AI products, services, and systems” in light of the “rapidly evolving” AI research and development standards landscape.
Shortly thereafter, in October 2022, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights, at the center of which are five principles intended to minimize potential harm from AI systems. (Those five principles are: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.)
Despite such lags in regulation, a growing number of new AI-focused bills coming from lawmakers at the federal and state levels are worth keeping an eye on. With that in mind, here is a running list of key domestic legislation that industry occupants should be aware of – and we will continue to track substantive developments for each and update accordingly …
Apr. 18, 2024 – Future of Artificial Intelligence Innovation Act of 2024
Bill: Future of Artificial Intelligence Innovation Act of 2024 (S. 4178)
Introduced: Apr. 18, 2024
Introduced by/Sponsors: Sens. Maria Cantwell (D-Wash.), Todd Young (R-Ind.), John Hickenlooper (D-Colo.), and Marsha Blackburn (R-Tenn.)
Snapshot: The bill aims to set the foundation for continued U.S. leadership in the development of AI and emerging technologies, responding to “longstanding calls from researchers by incorporating key cybersecurity recommendations, such as the development of international standards, metrics, and AI testbeds; increased collaboration between the public-private sector and governments both domestically and abroad; and enhanced information sharing to drive secure AI research and development.”
Apr. 9, 2024 – Generative AI Copyright Disclosure Act
Bill: Generative AI Copyright Disclosure Act (H.R. 7913)
Introduced: Apr. 9, 2024
Introduced by/Sponsors: Rep. Adam Schiff (D-CA)
Snapshot: The bill would require a notice to be submitted to the Register of Copyrights prior to the release of a new generative AI system with regard to all copyrighted works used in building or altering the training dataset for that system. The bill’s requirements would also apply retroactively to previously released generative AI systems.
Mar. 21, 2024 – Protecting Consumers From Deceptive AI Act
Bill: Protecting Consumers From Deceptive AI Act (H.R. 7766)
Introduced: Mar. 21, 2024
Introduced by/Sponsors: Rep. Anna G. Eshoo (D-CA)
Snapshot: The bill requires the National Institute of Standards and Technology (NIST) to establish task forces to facilitate and develop technical standards and guidelines for identifying and labeling AI-generated content. It also requires generative artificial intelligence (GAI) developers and online content platforms to provide disclosures on AI-generated content.
Mar. 19, 2024 – AI CONSENT Act
Bill: AI CONSENT Act (S. 3975)
Introduced: Mar. 19, 2024
Introduced by/Sponsors: Sen. Peter Welch (D-VT)
Snapshot: The bill would require online platforms to obtain consumers’ express informed consent before using their personal data to train artificial intelligence (AI) models. Failure to do so would be considered a deceptive or unfair practice, subject to Federal Trade Commission (FTC) enforcement. The bill also directs the FTC to study the efficacy of data de-identification given advancements in AI tools.
Mar. 6, 2024 – Protect Victims of Digital Exploitation and Manipulation Act of 2024
Bill: Protect Victims of Digital Exploitation and Manipulation Act of 2024 (H.R. 7567)
Introduced: Mar. 6, 2024
Introduced by/Sponsors: Rep. Nancy Mace (R-SC)
Snapshot: The bill would amend Title 18 of the United States Code to prohibit the production or distribution of non-consensual deepfake pornography.
Feb. 2, 2024 – R U REAL Act
Bill: Restrictions on Utilizing Realistic Electronic Artificial Language (“R U REAL”) Act (H.R. 7120)
Introduced: Feb. 2, 2024
Introduced by/Sponsors: Rep. Janice D. Schakowsky (D-IL)
Snapshot: The bill would give the Federal Trade Commission about three months to add a new mandate to telemarketing rules to require telemarketers to disclose if they are using artificial intelligence to mimic a human at the beginning of any call or text message.
Feb. 1, 2024 – Artificial Intelligence Environmental Impacts Act of 2024
Bill: Artificial Intelligence Environmental Impacts Act of 2024 (H.R. 7197)
Introduced: Feb. 1, 2024
Introduced by/Sponsors: Rep. Anna G. Eshoo (D-CA)
Snapshot: The bill would direct the National Institute of Standards and Technology (NIST) to develop standards to measure and report the full range of artificial intelligence’s (AI) environmental impacts, as well as create a voluntary framework for AI developers to report environmental impacts.
Jan. 10, 2024 – No AI FRAUD Act
Bill: No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act (H.R. 6943)
Introduced: Jan. 10, 2024
Introduced by/Sponsors: Reps. María Elvira Salazar (R-FL), Madeleine Dean (D-PA), Nathaniel Moran (R-TX), Joe Morelle (D-NY), and Rob Wittman (R-VA)
Snapshot: The bill establishes a federal framework to protect Americans’ individual right to their likeness and voice against AI-generated fakes and forgeries. The No AI FRAUD Act establishes a federal solution with baseline protections for all Americans by: (1) Reaffirming that everyone’s likeness and voice is protected, giving individuals the right to control the use of their identifying characteristics; (2) Empowering individuals to enforce this right against those who facilitate, create, and spread AI frauds without their permission; and (3) Balancing the rights against First Amendment protections to safeguard speech and innovation.
Jan. 10, 2024 – Ensuring Likeness Voice and Image Security (ELVIS) Act
Bill: Ensuring Likeness Voice and Image Security (ELVIS) Act
Introduced: Jan. 10, 2024
Introduced by/Sponsors: Tennessee Governor Bill Lee
Snapshot: The bill would update Tennessee’s Protection of Personal Rights law to include protections for songwriters, performers, and music industry professionals’ voices from the misuse of AI. The ELVIS Act aims to prevent the production and distribution of audio-visual and sound recordings featuring unauthorized AI-generated replica vocals of an individual without said individual’s consent. Tennessee law currently protects an individual’s image, photograph, and likeness from being exploited without consent, but the protection does not extend to an individual’s voice.
Status: Mar. 21, 2024 – The ELVIS Act was signed into law by Tennessee Gov. Bill Lee.
Jan. 10, 2024 – Federal Artificial Intelligence Risk Management Act
Bill: Federal Artificial Intelligence Risk Management Act (H.R. 6936)
Introduced: Jan. 10, 2024
Introduced by/Sponsors: Reps. Ted Lieu (D-CA), Zach Nunn (R-IA), Don Beyer (D-VA), and Marcus Molinaro (R-NY)
Snapshot: The bill would require U.S. federal agencies and vendors to follow the AI risk management guidelines put forth by the National Institute of Standards and Technology (NIST), including by incorporating the NIST framework into their AI management efforts to help limit the risks that could be associated with AI technology.
What the sponsors are saying: “As AI continues to develop rapidly, we need a coordinated government response to ensure the technology is used responsibly and that individuals are protected,” said Rep. Lieu. “The AI Risk Management Framework developed by NIST is a great starting point for agencies and vendors to analyze the risks associated with AI and to mitigate those risks. These guidelines have already been used by a number of public and private sector organizations, and there is no reason why they shouldn’t be applied to the federal government, as well.”
2023
Dec. 22, 2023 – AI Foundation Model Transparency Act of 2023
Bill: AI Foundation Model Transparency Act of 2023 (H.R. 6881)
Introduced: Dec. 22, 2023
Introduced by/Sponsors: Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA)
Snapshot: The bill would direct the Federal Trade Commission (FTC), in consultation with the National Institute of Standards and Technology (NIST) and the Office of Science and Technology Policy (OSTP), to set standards for what information high-impact foundation models must provide to the FTC and what information they must make available to the public. Information identified for increased transparency would include training data used, how the model is trained, and whether user data is collected in inference.
What the sponsors are saying: “AI offers incredible possibilities for our country, but it also presents peril. Transparency into how AI models are trained and what data is used to train them is critical for consumers and policy makers,” said Eshoo. “The AI Foundation Model Transparency Act directs the Federal Trade Commission and NIST to establish standards for data sharing by foundation model deployers. This critical legislation will provide necessary information and empower consumers to make well informed decisions when they interact with AI. It will also provide the FTC critical information for it to continue to protect consumers in an AI-enabled world.”
Dec. 15, 2023 – Artificial Intelligence Literacy Act of 2023
Bill: Artificial Intelligence Literacy Act of 2023 (H.R. 6791)
Introduced: Dec. 15, 2023
Introduced by/Sponsors: Reps. Lisa Blunt Rochester (D-Del.) and Larry Bucshon, M.D. (R-Ind.)
Snapshot: The bill would codify AI literacy as a key component of digital literacy and create opportunities to incorporate AI literacy into existing programs.
What the sponsors are saying: “It’s no secret that the use of artificial intelligence has skyrocketed over the past few years, playing a key role in the ways we learn, work, and interact with one another. Like any emerging technology, AI presents us with incredible opportunities along with unique challenges,” said Rep. Blunt Rochester. “That’s why I’m proud to introduce the bipartisan AI Literacy Act with my colleague, Rep. Bucshon. By ensuring that AI literacy is at the heart of our digital literacy program, we’re ensuring that we can not only mitigate the risk of AI, but seize the opportunity it creates to help improve the way we learn and the way we work.”
Dec. 12, 2023 – Eliminating Bias in Algorithmic Systems Act of 2023
Bill: Eliminating Bias in Algorithmic Systems Act of 2023 (S. 3478)
Introduced: Dec. 12, 2023
Introduced by/Sponsors: Sen. Edward J. Markey (D-Mass.)
Snapshot: The bill would ensure that every federal agency that uses, funds, or oversees artificial intelligence (AI) has an office of civil rights focused on combatting AI bias and discrimination, among other harms. The legislation would also require every civil rights office to report their efforts to Congress and provide recommendations for congressional action.
What the sponsors are saying: “From housing to health care to national security, algorithms are making consequential decisions, diagnoses, recommendations, and predictions that can significantly alter our lives,” said Senator Markey. “As AI supercharges these algorithms, the federal government must protect the marginalized communities that have already been facing the greatest consequences from Big Tech’s reckless actions. The Eliminating BIAS Act will ensure that the government has the proper tools, resources, and personnel to protect these communities and mitigate AI’s dangerous effects, while providing Congress with critical information to address algorithmic harms.”