Utah scales back reach of generative AI consumer protection law
The Utah legislature amended its Artificial Intelligence Policy Act to focus on “high-risk” generative AI interactions and to establish a new safe harbor.
Background
In March 2024, Utah became the first state to enact legislation specifically targeting generative AI (Gen AI). The Utah Artificial Intelligence Policy Act (UAIPA), which took effect on May 1, 2024, imposed certain disclosure obligations on entities using Gen AI tools to interact with their customers and confirmed that those entities cannot avoid liability under Utah consumer protection law because the violative statement or act derived from Gen AI. The UAIPA also sought to encourage AI innovation by establishing the Office of Artificial Intelligence Policy to administer an AI Learning Laboratory Program, consult with businesses and other stakeholders about potential regulatory proposals, and grant temporary “regulatory mitigations” to entities developing AI systems.
In late March 2025, Utah enacted further legislation amending various aspects of the UAIPA (SB 226, SB 332). The amendments, which will become effective on May 7, 2025, extend the UAIPA’s repeal date from May 2025 to July 2027 and clarify that the UAIPA does not displace any other remedy or right authorized under state or federal law. The most impactful amendments, however, narrow the scope of the UAIPA’s disclosure obligations and establish a new enforcement safe harbor.
Disclosure obligations
The UAIPA requires entities to provide clear and conspicuous disclosure that a consumer is interacting with Gen AI (rather than a human) if asked or prompted by the consumer. The amendments clarify that the predicate for that disclosure obligation is “a clear and unambiguous request” to determine whether the interaction is with a human or with artificial intelligence.
As originally enacted, the UAIPA also required all businesses in regulated occupations (those requiring a state license or certification to practice) to prominently disclose the use of Gen AI at the outset of any consumer interaction. Utah Senator Kirk Cullimore, who sponsored both the UAIPA and the recent amendments, noted that the statutory language swept too broadly, encompassing a range of businesses that likely are not using AI in a way that poses significant harm to consumers. As amended, the UAIPA requires prominent disclosure at the outset of an interaction only if the use of Gen AI constitutes a “high-risk” AI interaction. A “high-risk” AI interaction is defined as an interaction involving (i) the collection of “sensitive personal information,” such as financial, health or biometric data; and (ii) the provision of personalized recommendations or advice “that could reasonably be relied upon to make significant personal decisions,” including financial, legal, medical or mental health advice. The amendments also narrowed the definition of an “AI system” by adding a requirement that the system be “designed to simulate human conversation.” The modified definition excludes, for example, routine appointment confirmations and reminders that may be AI-generated and delivered via text.
To counterbalance the reduced scope of the mandatory disclosure obligations, the UAIPA amendments establish a new enforcement safe harbor where there is clear and conspicuous disclosure that the consumer is interacting with Gen AI (rather than a human) both at the outset of any consumer interaction and throughout the interaction. Absent the safe harbor, Utah’s Division of Consumer Protection may impose administrative fines of $2,500 per violation and commence state court proceedings to seek injunctions, disgorgement, and other relief. Violations of court orders are subject to additional fines and penalties. The UAIPA does not provide a private right of action.
Regulatory sandbox
The UAIPA amendments do not affect the Office of Artificial Intelligence Policy or the AI Learning Laboratory Program. As noted above, the Office is empowered to grant temporary “regulatory mitigations” to entities developing AI systems that meet certain eligibility requirements, including technical expertise, financial resources, and a plan to minimize testing risks. Among other things, a regulatory mitigation agreement provides for a cure period before penalties are assessed and for reduced civil fines during the agreement’s term.
Key takeaways
The new, risk-based approach to the UAIPA’s mandatory disclosure regime and the enforcement safe harbor for companies that voluntarily incorporate prominent disclosures into their Gen AI tools establish helpful precedents for state legislatures considering regulation of AI-powered chatbots. Hawaii, Idaho, Illinois and Massachusetts, for example, are considering bills that would require chatbot providers to provide prominent disclosures if a reasonable user might mistakenly believe they are interacting with a human. Moreover, the UAIPA’s continued focus on promoting innovation is not only consistent with the Trump administration’s federal AI policy to date (see our prior update) but also mitigates the concern about stifling growth in the early stages of a transformative industry, a concern that contributed to vetoes of AI legislation in California and, most recently, in Virginia (see our prior update). Given the volume and variety of recently introduced AI bills, companies developing or deploying AI technology should continue to closely monitor activity at both the state and federal levels, as well as internationally, for emerging trends.