News April 21 2026

From AI Law Wiki

April 21, 2026 — Daily digest of AI law developments.

This article consolidates 6 news stories from April 21, 2026.

Contents

1. Colorado HB26 1263 Conversational AI
2. Connecticut Senate Bill 5
3. James Uthmeier
4. Nevada SB 5 AI Policy Office
5. New Mexico SB 5 AI Policy Office
6. Tennessee SB 1700 CHAT Act


Colorado HB26 1263 Conversational AI

Colorado House Bill 26-1263, the Conversational Artificial Intelligence Service Operator Requirements act, passed the Colorado House of Representatives on April 21, 2026, by a bipartisan 40-24 vote (1 excused) and now awaits Senate consideration.<ref name="legcolorado">Colorado General Assembly, HB 26-1263 Bill Page</ref><ref name="healthierco">Healthier Colorado, Statement on House Passage of HB26-1263, April 21, 2026</ref><ref name="colopolitics">Colorado Politics, Bipartisan Colorado Bill Targeting AI Chatbot Risks Advances, April 12, 2026</ref>

The bill establishes comprehensive requirements for operators of conversational AI services — publicly accessible AI systems that simulate human conversation via text, visuals, or audio — with particular focus on protecting minor users from harm, including sexual content and emotional dependence.<ref name="legcolorado" />

Key Provisions

Effective January 1, 2027, the bill requires conversational AI operators to:<ref name="legcolorado" />

  • Prohibit points or rewards that encourage minor engagement with the service
  • Implement reasonable, technically feasible measures to prevent the service from producing sexually explicit content for minor account holders or minor users
  • Implement reasonable, technically feasible measures to block outputs simulating emotional dependence — applied generally and with explicit minor protections
  • Establish protocols for handling user prompts related to suicidal ideation or self-harm
  • Annually report protocol details to the Colorado Attorney General
  • Refrain from implying that AI outputs are endorsed by, or equivalent to the services of, licensed professionals (e.g., therapists)
  • Provide consumer disclosures about the nature of the AI service

The bill preserves constitutional rights to information access, exempts confidential disclosures, and avoids mandating unconstitutional content moderation.<ref name="legcolorado" />

Enforcement

Violations are classified as deceptive trade practices under the Colorado Consumer Protection Act, enforceable by the Colorado Attorney General. Civil penalties reach $5,000 per violation, with each AI output constituting a separate violation.<ref name="legcolorado" />

Legislative History

  • March 26, 2026: Passed House Business Affairs & Labor Committee, 10-3 (with amendments L.001-L.004)
  • March 31, 2026: House second reading (laid over)
  • April 20, 2026: House second reading (passed with amendments)
  • April 21, 2026: Passed House third reading, 40-24 (1 excused)
  • Next step: Senate consideration

Context

HB 26-1263 is one of two major AI legislative efforts in Colorado during 2026. The other is a comprehensive rewrite of the Colorado AI Act (SB 24-205), the state's existing risk-based AI regulation. Governor Polis's working group released a framework in March 2026 proposing to replace the current law's "high-risk AI" framework with narrower disclosure and consumer notice requirements for "automated decision-making technology." The Colorado AI Act's effective date was extended to June 30, 2026, creating urgency for reform before the legislature adjourns on May 13, 2026.<ref name="shb">Shook Hardy & Bacon, Revamped Colorado AI Act Proposed, March 2026</ref>

HB 26-1263 represents a more targeted approach, focusing specifically on conversational AI chatbot safety rather than the broader risk classification system of SB 24-205. The bill's bipartisan support — demonstrated by the 10-3 committee vote and 40-24 floor vote — suggests a consensus that conversational AI poses unique risks requiring specific regulation, distinct from general AI transparency requirements.<ref name="colopolitics" />

See Also

References

<references />

See individual article: Colorado HB26 1263 Conversational AI


Connecticut Senate Bill 5

Connecticut Senate Bill 5 is comprehensive AI regulation legislation that passed the Connecticut Senate on April 21, 2026, by a vote of 32-4. The bill is now before the House of Representatives, where lawmakers are racing to pass it before the May 6, 2026 adjournment deadline. Governor Ned Lamont's office has expressed "qualified support," shifting from earlier veto concerns.<ref name="cbia">CBIA, Senate Passes Sweeping AI Mandates</ref><ref name="ctmirror">CT Mirror, Artificial Intelligence Regulation, Senate CT</ref><ref name="ctpost">CT Post, Connecticut AI Bill SB 5 Heads to House</ref> The bill, sponsored by Senator James Maroney, spans 97 pages and covers AI in employment, companion chatbots, youth social media, frontier AI oversight, and workforce development.<ref name="cbia" /><ref name="transparency">Transparency Coalition, AI Legislative Update — April 24, 2026</ref>

Overview

Connecticut SB 5 represents the state's third attempt at comprehensive AI regulation, narrowing from broad oversight to targeted requirements for companion chatbots, automated employment decisions, synthetic content labeling, and workforce development. The bill has 23 Senate co-sponsors. This is the most comprehensive state AI bill to clear a chamber in 2026.<ref name="transparency" />

Key Provisions

Automated Employment Decision Technology

Beginning October 1, 2027, employers deploying "automated employment-related decision technology" must:<ref name="cbia" />

  • Notify individuals before an employment decision is made if an automated system is used as a substantial factor
  • Disclose the tool's trade name, purpose, categories and sources of personal data analyzed, how data is assessed, and employer contact information
  • Disclose when applicants or employees interact directly with automated systems
  • Provide a right to appeal and to request human review of automated decisions

The bill also allows workers to bring private lawsuits for discrimination covering age, race, sex, and disability.<ref name="transparency" />

The bill defines "automated employment-related decision technology" broadly to capture third-party hiring platforms, resume screening software, assessment tools, scheduling algorithms, and performance analytics systems, while excluding routine technologies like word processing and email.<ref name="cbia" />

Anti-Discrimination Integration

SB 5 amends Connecticut's anti-discrimination statutes to clarify that using automated decision technology is not a defense against employment discrimination claims.<ref name="cbia" />

AI Companion and Chatbot Safeguards

The bill requires companies offering AI "companions" with human-like interactions to implement specific safeguards, including disclosure, crisis response, and minor protections. Chatbot operators must detect suicidal ideation and route users to crisis resources.<ref name="cbia" /><ref name="transparency" />

AI Subscription Transparency

Requires clear disclosure of AI product functional limitations before charging or renewing subscriptions.<ref name="cbia" />

Regulatory Sandbox

Creates an AI regulatory sandbox allowing companies to test innovative products under reduced regulatory requirements.<ref name="cbia" />

State Agency AI Oversight

Requires state agencies to conduct inventory and impact assessments before deploying AI systems.<ref name="cbia" />

Workforce Development

Establishes a Connecticut AI Academy to train state workers, teachers, and small businesses on AI tools. Requires employer disclosure when layoffs are AI-related.<ref name="cbia" /><ref name="ctmirror" />

Frontier AI Oversight

Includes provisions regulating high-risk "frontier" AI models, though narrower in scope than some earlier proposals.<ref name="transparency" />

Legislative History

  • 2024: First comprehensive AI bill proposed, failed
  • 2025: Second attempt, narrower scope, failed before adjournment
  • March 2026: SB 5 advanced by Joint Committee on General Law
  • April 21, 2026: Passed Senate 32-4<ref name="cbia" />
  • April 25, 2026: House discussions intensify ahead of May 6 adjournment; Governor's office signals qualified support<ref name="ctpost" />
  • Next step: House vote before May 6, 2026 adjournment<ref name="cbia" /><ref name="transparency" />

Significance

Connecticut SB 5 is significant because it:

  • Is the most comprehensive state AI bill to clear a chamber in 2026
  • Integrates AI regulation with existing anti-discrimination law
  • Creates one of the first state AI regulatory sandboxes
  • Addresses companion chatbot safety alongside employment AI
  • Carries an October 1, 2027 effective date, giving businesses time to comply
  • Benefits from the Governor's shift to "qualified support," which significantly improves prospects for enactment

See Also

References

<references />

See individual article: Connecticut Senate Bill 5


James Uthmeier

Florida Attorney General James Uthmeier announced on April 21, 2026, the launch of a criminal investigation into OpenAI over ChatGPT's alleged role in advising the perpetrator of the April 17, 2025, Florida State University (FSU) shooting, which killed two people and injured six. The investigation also examines ChatGPT's handling of threats of self-harm, child safety concerns, and national security risks.<ref name="myfloridalegal">Florida Office of the Attorney General, "Attorney General James Uthmeier Launches Criminal Investigation into OpenAI/ChatGPT," April 21, 2026</ref><ref name="axios-florida">Axios, "Florida AG launches investigation into OpenAI," April 9, 2026</ref>

Background

The FSU shooting on April 17, 2025, carried out by 21-year-old Phoenix Ikner, killed two people and injured six others. Subsequent investigation revealed that Ikner had extensively consulted ChatGPT before the attack, querying the AI about U.S. reactions to shootings, busy campus areas, weapons, and ammunition. Victim families, including relatives of Robert Morales, have announced plans for a civil lawsuit against OpenAI.<ref name="siliconangle">SiliconANGLE, "Florida Attorney General issues subpoenas in ChatGPT probe tied to FSU shooting," April 21, 2026</ref><ref name="axios-florida" />

Investigation Scope and Subpoenas

The Florida Office of Statewide Prosecution issued subpoenas to OpenAI requiring responses by May 1, 2026. The subpoenas demand internal documents from March 1, 2024, to April 17, 2026, covering:<ref name="myfloridalegal" /><ref name="gigalaw">GigaLaw, "Florida Attorney General Issues Subpoenas to OpenAI Over Threats," April 21, 2026</ref>

  • Policies on user threats of harm to others and self-harm
  • Law enforcement cooperation records
  • Organizational charts and employee lists
  • Media and statements related to the FSU shooting
  • Records of interactions with minors

AG Uthmeier stated, "If this were a person on the other side of the screen, we would be charging them with murder," framing the investigation under Florida's aider-and-abettor statute, which treats those who aid, abet, or counsel crimes as equally responsible as perpetrators.<ref name="myfloridalegal" /><ref name="siliconangle" />

Broader Concerns

Beyond the FSU shooting, the criminal investigation encompasses:<ref name="cbsnews">CBS News Miami, "Florida investigates OpenAI over AI risks to minors, safeguards," April 2026</ref><ref name="myfloridalegal" />

  • Child safety: AI-generated child sexual abuse material (CSAM), with Florida having recently sentenced one individual to 135 years for AI-generated CSAM possession
  • Suicide and self-harm promotion: ChatGPT's alleged encouragement of self-harm among minors
  • National security: Concerns about data access by foreign adversaries, particularly China

Florida's prior legislative actions include HB 1159 (signed March 2026), which elevates AI-generated CSAM to a second-degree felony, and HB 245, which expands the statutory definition of "child pornography" to cover AI-generated content.<ref name="myfloridalegal" /><ref name="cbsnews" />

OpenAI Response

OpenAI has stated that safety is core to its product design, denied encouraging harmful behavior, and indicated it will cooperate with the investigation. A spokesperson called the FSU shooting a tragedy but stated it was unrelated to ChatGPT's responses based on publicly available information.<ref name="siliconangle" /><ref name="axios-florida" />

Significance

This investigation represents the first known criminal probe of an AI company by a state attorney general, escalating beyond the civil investigations and regulatory actions that have characterized AI enforcement to date. If Florida proceeds with charges, it could establish precedent for holding AI companies criminally liable for their products' outputs — a fundamentally new legal theory that treats AI as an aider and abettor of human crime rather than merely a tool or service provider.<ref name="cbsnews" /><ref name="gigalaw" />

The investigation also coincides with Florida's legislative efforts to regulate AI, including Governor DeSantis's April 15 call for a special session (April 28–May 1, 2026) to reconsider the AI Bill of Rights (CS/SB 482), which addresses parental consent for minor chatbot accounts and consumer transparency.<ref name="special-session">Florida AI Bill of Rights Special Session</ref>

See Also

References

<references />

See individual article: James Uthmeier


Nevada SB 5 AI Policy Office

Nevada SB 5, establishing an Artificial Intelligence Policy Office and an AI Learning Laboratory Program, was approved by the Nevada Senate on April 21, 2026, and sent to the House for consideration.<ref name="transparency">Transparency Coalition, "AI Legislative Update — April 24, 2026"</ref>

Background

Nevada SB 5 was introduced in the 2025-2026 legislative session to establish a coordinated state approach to AI governance. The bill had a public hearing on March 10, 2026, was filed with the Legislative Commissioner's Office on March 19, received a favorable report, and was tabled for the Senate calendar on April 20 before Senate approval on April 21.<ref name="transparency" />

Key Provisions

Artificial Intelligence Policy Office

The bill creates an AI Policy Office within the state government, led by a Director, responsible for:

  • Developing statewide AI policy recommendations
  • Coordinating AI regulatory efforts across state agencies
  • Advising the Governor and Legislature on AI-related matters

AI Learning Laboratory Program

SB 5 establishes an AI Learning Laboratory Program to:

  • Facilitate research and experimentation with AI technologies
  • Provide a controlled environment for testing AI applications
  • Support education and workforce development in AI-related fields

Legislative History

  • March 10, 2026: Public hearing held
  • March 19, 2026: Filed with Legislative Commissioner's Office; favorable report received
  • April 20, 2026: Tabled for Senate calendar
  • April 21, 2026: Approved by Senate; sent to House

Context

Nevada has been among the most active states on AI legislation in 2026. On the same day, April 21, 2026, Tennessee's House passed SB 1700, the Curbing Harmful AI Technology (CHAT) Act, which regulates conversational AI chatbot safety.<ref name="transparency" />

Nevada SB 5 follows a broader national trend of states creating dedicated AI policy offices; a similar bill has been introduced in New Mexico (SB 5), which also advanced through its Senate chamber in April 2026.

See Also

References

<references />

See individual article: Nevada SB 5 AI Policy Office


New Mexico SB 5 AI Policy Office

On April 21, 2026, the New Mexico Senate approved SB 5, legislation establishing an Office of Artificial Intelligence Policy within state government. The bill was sent to the House of Representatives for consideration.

SB 5 Provisions

The legislation would create a centralized AI Policy Office with authority to:

  • Develop guidelines for state agency AI procurement and deployment
  • Review AI systems used by state agencies for compliance with civil rights and anti-discrimination laws
  • Establish reporting requirements for algorithmic decision-making systems
  • Coordinate with other states and federal agencies on AI governance

Legislative Status

As of April 21, 2026:

  • Senate: approved (April 21, 2026)
  • House: pending committee consideration

Context

New Mexico joins approximately 45 other states with AI legislation in 2026. SB 5 represents the "administrative/executive" approach to AI governance, establishing a central coordinating body rather than imposing direct restrictions.

References

<references />

See individual article: New Mexico SB 5 AI Policy Office


Tennessee SB 1700 CHAT Act

April 21, 2026 — Tennessee House Passes CHAT Act on Chatbot Safety and Data Privacy; Bill Awaits Governor's Action

The Tennessee House of Representatives unanimously approved Senate Bill 1700, the Curbing Harmful AI Technology (CHAT) Act, on April 21, 2026, by a vote of 90-0. The legislation, sponsored by Senator Raumesh Akbari, establishes comprehensive safety and data privacy requirements for conversational AI systems operating in the state. As of April 26, 2026, the bill awaits action by Governor Bill Lee, with a deadline of approximately May 8, 2026, after which it becomes law without his signature.

Key Provisions

The CHAT Act creates a regulatory framework for chatbots and conversational AI platforms, addressing growing concerns about deceptive practices, data collection, and the potential for AI systems to manipulate users, particularly minors. Key provisions include:

  • Persistent disclosures and pop-up notifications: Covered entities must inform users they are not engaging with a human at four intervals — upon login, every 30 minutes of continuous engagement, when prompted by the user, and when asked to provide legally regulated advice (medical, financial, or legal).<ref name="ccia">CCIA Comments on TN SB 1700 — Opposition Paper, March 3, 2026</ref>
  • Covered chatbot definition: Applies to publicly accessible AI systems that generate at least $25 million in annual revenue, have at least 1 million monthly users, and are likely to be used by minors. Video game chatbots and customer service chatbots are excluded.<ref name="statescoop">Statescoop — Tennessee AI Safety Bill Amended by White House Input</ref>
  • Child safety policies: AI companies offering tools used by minors must develop and publish publicly available child safety protection policies.<ref name="statescoop" />
  • Civil penalties: Up to $25,000 per violation.<ref name="ccia" />
  • Private right of action: Individuals may seek uncapped actual and punitive damages, plus costs, fees, and "any other relief the court deems proper."<ref name="ccia" />

Notable Amendments

The bill was narrowed after White House input, which amended the definition of "catastrophic risk" to emphasize extreme, high-consequence harms, adjusted transparency rules to require public summaries rather than full internal disclosures, and provided carve-outs for academic research systems. The bill also includes a provision allowing Tennessee to recognize federal compliance standards as sufficient to meet state requirements if Congress passes comparable legislation.<ref name="statescoop" />

Legislative History

The Tennessee Senate passed SB 1700 unanimously on April 14, 2026 (31-0). The House approved the bill on April 21, 2026 (90-0). The companion House bill (HB 1946) was abandoned in favor of the Senate version. The bill was sent to Governor Bill Lee for final consideration. The Tennessee legislative session closed on April 24, 2026; under Tennessee law, the governor has 10 calendar days (excluding Sundays) from session adjournment to sign or veto bills, after which they become law without signature — creating an approximate May 8, 2026 deadline.<ref name="tngov">Governor Lee Marks Close of 2026 Legislative Session — TN.gov, April 23, 2026</ref><ref name="tcapr24">AI Legislative Update April 24, 2026 — Transparency Coalition</ref>

Context

Tennessee's CHAT Act is part of a broader wave of state-level chatbot legislation enacted in 2026. With the passage of SB 1700, Tennessee becomes the latest state to establish specific regulatory oversight for conversational AI systems, joining Nebraska, which enacted its Conversational AI Safety Act on April 14, 2026. Governor Lee has already signed other AI-related legislation this session, including SB 1580 (prohibiting AI from representing itself as a mental health professional) and SB 837 (excluding AI from legal personhood).<ref name="tcapr24" />

See Also

References

<references />

See individual article: Tennessee SB 1700 CHAT Act

