With Congress opting not to enact a moratorium on state artificial intelligence (AI) legislation, states continue to enact laws governing the development and use of AI systems. Through 2025, states have debated both broad and narrow AI governance frameworks and have passed a few into law, imposing new AI compliance requirements on companies that develop or deploy AI technologies. Businesses should assess these new laws to determine whether the requirements apply to their activities and what their compliance duties may be.
California Adds to Its AI Regulation
As states test different types of AI governance frameworks that may apply across sectors, California remains an active player in advancing AI legislation.
On September 29, 2025, California Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (SB 53), the first law in the country to regulate “large frontier developers” (those with annual gross revenue exceeding $500 million) of “frontier models,” defined to include foundation models trained using a specified amount of computing power.
The law, effective January 1, 2026, requires such developers to disclose their risk management protocols and to publish a transparency report covering certain details of regulated models. SB 53 also imposes reporting requirements for certain “critical safety incidents” and establishes whistleblower protections for employees who report safety concerns.
California’s AI Transparency Act, passed last year, was amended by AB 853 during the state’s 2025 legislative session. AB 853 delayed the effective date of the AI Transparency Act to August 2, 2026.
Additionally, starting January 1, 2027, the AI Transparency Act will require “large online platforms” (platforms with over 2 million unique monthly users in a 12-month period) to detect whether certain “provenance data” is embedded into or attached to content distributed on the platform, among other requirements. “Provenance data” means data embedded into digital content or its metadata to verify the content’s authenticity, origin, or history of modification.
The amendment also imposes new requirements on manufacturers of photo, audio, or video “capture” devices, such as cameras, mobile phones, and voice recorders, requiring them to include a “latent disclosure” in captured content that conveys certain information.
California also enacted SB 243, which requires operators of “companion chatbots,” AI systems “capable of meeting a user’s social needs,” to disclose that a person is interacting with an AI system if a reasonable person would be misled into believing that they are interacting with a human. The law also requires operators to prevent such companion chatbots from engaging with users unless the operator maintains a protocol for preventing the production of content relating to self-harm, including providing a notification that refers the user to crisis service providers if the user expresses suicidal ideation or an interest in self-harm. The law contains an exception for chatbots used only for customer service.
Other States Advance AI Governance Frameworks
Colorado and Utah were first movers in establishing AI governance regimes in the United States. Those states, along with Texas, have continued to consider AI proposals.
Colorado delayed the effective date of the Colorado AI Act by six months, to June 30, 2026, by passing SB 25B-004 during an August special session. Earlier drafts of the bill proposed several major revisions to the law, but those revisions did not survive the legislative process.
The Texas Responsible Artificial Intelligence Governance Act, HB 149 (TRAIGA), prohibits intentionally developing or deploying AI systems to incite or encourage a person to harm themselves or others or to engage in criminal activity; to infringe on constitutional rights; to engage in unlawful discrimination against a protected class in violation of state or federal law; or to produce deepfake or child pornography. The enacted version of TRAIGA replaced an earlier proposal that resembled the Colorado AI Act. TRAIGA takes effect January 1, 2026.
Utah SB 226 amended the Utah Artificial Intelligence Policy Act (UAIPA) to make it less restrictive. Under the amendment, an entity that causes generative AI to interact with an individual during a consumer transaction must disclose, upon the individual’s request, that the individual is interacting with generative AI and not a human.
In regulated occupations, individuals providing services must disclose the use of generative AI only in “high-risk” interactions. SB 226 also provides a safe harbor for persons who disclose, at the outset of and throughout the interaction, that an individual is interacting with AI. Separately, Utah’s SB 332 extended UAIPA’s sunset date from May 1, 2025, to July 1, 2027.
As states continue to adopt AI laws, the regulatory landscape remains fragmented for entities seeking to develop or deploy AI tools. It is critical to consider how any new AI tools will fit within your business’s compliance strategy.
If you have questions about whether (or how) an AI tool could impact your risk profile, or have other questions about data strategy, AI, or legislative tracking, please reach out to Venable’s Privacy & Data Security Group for assistance.