Three States Just Passed AI Laws — What to Know

March 23, 2026 · Martin Bowling

Washington, Utah, and Florida just redrew the AI rulebook. Does your business comply?

March 2026 was the most active month for state AI legislation since California’s SB 243 was signed into law last year. In a span of two weeks, Washington sent two AI bills to the governor’s desk, Utah passed nine AI-related measures before adjourning, and Florida’s AI Bill of Rights cleared the Senate before dying in the House.

If you operate an AI chatbot, use automated customer tools, or deploy any form of generative AI in your business, these laws set the compliance floor for the next two years. Here’s what each state did and what it means for you.

What each state passed

Washington: disclosure and chatbot safety

Washington’s legislature closed its 2026 session by passing two major AI bills, both now awaiting Governor Bob Ferguson’s signature.

HB 1170 — AI content disclosure. Modeled after California’s AI Transparency Act, this bill requires AI developers to embed provenance markers — watermarks and metadata — in AI-generated images, audio, and video. Providers must also offer users a detection tool so anyone can check whether content was AI-generated. The bill passed the legislature on March 12 and takes effect January 1, 2028.

HB 2225 — companion chatbot safety. This bill targets AI chatbots that sustain ongoing relationships with users. Operators must disclose that the chatbot is AI at the start of every interaction and repeat that disclosure every three hours — or every hour for minors. The bill bans manipulative engagement techniques like mimicking romantic relationships, discouraging breaks, or soliciting gifts. It also mandates safeguards for detecting suicidal ideation. HB 2225 passed the legislature on March 11 and takes effect January 1, 2027.

Both bills carry enforcement under Washington’s Consumer Protection Act, which includes a private right of action — meaning individual users can sue, not just the attorney general.

Utah: nine bills covering schools, deepfakes, and chatbots

Utah sent nine AI-related bills to Governor Spencer Cox before the session ended on March 6. The most significant for businesses:

HB 438 — Companion Chatbot Safety Act. Passed the House 68-1. Requires companion chatbot operators to comply with Utah’s consumer data privacy law, restricts advertising by chatbot operators, and creates specific obligations for chatbots directed at minors.

Digital Content Provenance Standards Act. Addresses deepfakes by requiring platforms to disclose provenance data on AI-generated content. With deepfakes becoming harder to distinguish from real media, this targets both political misinformation and commercial fraud.

HB 273 — The Balance Act. Focuses on AI in schools, requiring education agencies to create model policies for AI use in classrooms and expanding computer science standards to include AI literacy.

Utah has been building toward this since its Artificial Intelligence Policy Act took effect in May 2024 — one of the first state-level AI regulatory frameworks in the country. The 2026 session expanded the scope significantly.

One notable setback: the White House sent a letter opposing a separate Utah AI developer regulation bill, calling it “unfixable.” That bill stalled, but the other nine advanced.

Florida: strong Senate vote, dead in the House

Florida’s story is the most dramatic. Senator Tom Leek’s AI Bill of Rights (SB 482) passed the Senate 35-2 on March 5. Governor DeSantis backed it publicly, warning of an “age of darkness and deceit” without AI guardrails.

The bill would have:

  • Given parents the right to control children’s AI interactions
  • Required chatbots to remind users they are not human
  • Banned companion chatbots from engaging minors without parental consent
  • Prohibited AI platforms from selling non-deidentified user data
  • Required political ad AI disclosure
  • Authorized the attorney general to investigate violations

Despite near-unanimous Senate support, the House never took it up. Speaker Daniel Perez argued AI regulation belongs at the federal level — aligning with President Trump’s executive order discouraging state-level AI laws. SB 482 died in messages on March 13.

AI disclosure rules — what businesses must tell customers

The clearest trend across all three states: if your business uses AI to interact with customers, you must disclose it. Here’s what the requirements look like in practice.

Washington (HB 2225):

  • Notify users the chatbot is AI at the start of every session
  • Repeat the notification every three hours (every hour for minors)
  • No manipulative engagement techniques
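Washington’s re-disclosure cadence is simple enough to sketch in code. The sketch below is illustrative, not a compliance implementation: the function name, the `Session`-style parameters, and the exact interval handling are assumptions; only the three-hour and one-hour intervals come from the bill as described above.

```python
from datetime import datetime, timedelta

# HB 2225-style re-disclosure intervals (per the bill as summarized above).
DISCLOSURE_INTERVAL_ADULT = timedelta(hours=3)
DISCLOSURE_INTERVAL_MINOR = timedelta(hours=1)

def needs_redisclosure(last_disclosed: datetime, now: datetime, is_minor: bool) -> bool:
    """Return True if the 'I am an AI' notice is due again for this session."""
    interval = DISCLOSURE_INTERVAL_MINOR if is_minor else DISCLOSURE_INTERVAL_ADULT
    return now - last_disclosed >= interval
```

A chatbot backend would call a check like this before each response and, when it returns True, prepend the AI disclosure and reset the timestamp.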

Utah (HB 438):

  • Comply with consumer data privacy obligations
  • Restrict advertising by chatbot operators, with specific obligations for chatbots directed at minors
  • Follow companion chatbot safety standards

California (SB 243, already law):

  • Disclose AI status clearly and conspicuously
  • Remind minors every three hours
  • Penalties up to $1,000 per violation

For most small businesses, compliance starts with a simple step: make sure your chatbot or automated assistant identifies itself as AI in its opening message. If you use a managed platform like Hollr, check that disclosure features are enabled. If you built something custom using a chatbot creator, you need to add disclosure language yourself.

Chatbot safety protocols — new requirements for AI chat

Beyond disclosure, the new laws add safety requirements that go further than what most small businesses have considered.

Self-harm detection. Washington’s HB 2225 and Oregon’s SB 1546 require chatbot operators to detect users expressing suicidal ideation and connect them to crisis resources. If your chatbot handles sensitive conversations — especially in healthcare, counseling, or youth services — you need a protocol for this.

Anti-manipulation rules. Washington specifically bans chatbots from mimicking romantic partnerships, encouraging users to isolate from family or friends, discouraging breaks from conversation, or soliciting gifts framed as necessary to maintain the AI relationship. These rules target companion chatbots, but the broad definition in the bill could catch chatbots that recognize users between sessions.

Minor protections. Every state in this wave included provisions for minors. Age-gating, parental consent, and restrictions on sexually explicit content are becoming standard requirements. If your business could interact with anyone under 18, plan for it.

Content provenance. Washington’s HB 1170 and Utah’s Digital Content Provenance Standards Act require AI-generated images, audio, and video to carry embedded provenance markers. If you generate marketing content or social media visuals with AI tools, keep this on your radar for 2027-2028 when these rules take effect.

The federal wildcard

All of this is happening against a backdrop of federal uncertainty. President Trump signed an executive order discouraging state-level AI regulation, and the White House actively opposed Utah’s more aggressive developer regulation bill. In March, the Secretary of Commerce was directed to identify state AI laws the administration considers “burdensome.”

Meanwhile, 36 state attorneys general are pushing back against federal preemption. Congress is also advancing its own legislation — the KIDS Act recently cleared committee with federal chatbot safety requirements.

The practical takeaway: don’t wait for federal clarity. States are moving. If you do business in Washington, Utah, California, or Oregon, these rules apply to you regardless of what happens at the federal level.

How to prepare your business for AI compliance

You don’t need a legal team to start. These four steps cover the basics.

1. Audit your AI-facing tools

List every customer-facing tool that uses AI: chatbots, intake forms, automated responders, AI-generated content. For each one, check whether it clearly identifies itself as AI-powered.

2. Enable disclosure in your chatbot platform

If you use a managed chatbot service, look for built-in disclosure and compliance settings. Turn them on. A clear opening message like “Hi, I’m an AI assistant for [Your Business]. How can I help?” satisfies most current state requirements.
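For a custom build, the disclosure can be as small as a templated opening message. This is a hypothetical sketch, not legal language: the business name is a placeholder, and exact wording requirements vary by state.

```python
def opening_message(business_name: str) -> str:
    """Build a clear, conspicuous AI disclosure for the start of every session."""
    # The phrasing mirrors the example in the text; adapt it to your brand voice,
    # but keep the AI identification explicit and up front.
    return f"Hi, I'm an AI assistant for {business_name}. How can I help?"
```

Whatever wording you choose, send it before the user's first message is answered so the disclosure is unmissable.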

3. Review your minor protections

If your business could interact with anyone under 18, check whether your chatbot has age-gating, parental consent flows, or content restrictions. These are becoming non-negotiable in the states leading AI legislation.

4. Track which states you’re exposed to

If you serve customers online, you may be subject to laws in states where your customers live — not just where you’re located. Washington’s HB 2225 applies to operators who make chatbots available to Washington users, regardless of where the operator is based.

What to watch next

  • Governor Ferguson’s decision on Washington’s HB 1170 and HB 2225 — expected within 20 days of delivery
  • Governor Cox’s action on Utah’s nine AI bills
  • Oregon’s SB 1546 — its private right of action makes it the most consequential chatbot bill for small businesses, as we covered in our state chatbot laws roundup
  • Federal preemption developments — any Commerce Department action on state AI laws

The pattern is clear: state AI regulation is accelerating, not slowing down. Many of the 78 chatbot bills across 27 states we tracked earlier this month are now becoming law. Basic AI disclosure and safety protocols are shifting from best practice to legal requirement.

If you need help making your AI tools compliant or want to deploy chatbot and intake solutions with disclosure and safety built in, talk to our consulting team.

AI Tools Industry News Small Business