News April 21 2026
April 21, 2026 — Daily digest of AI law developments.
This article consolidates 6 news stories from April 21, 2026.
Contents
1. Colorado HB26 1263 Conversational AI
2. Connecticut Senate Bill 5
3. James Uthmeier
4. Nevada SB 5 AI Policy Office
5. New Mexico SB 5 AI Policy Office
6. Tennessee SB 1700 CHAT Act
Colorado HB26 1263 Conversational AI
Colorado House Bill 26-1263, the Conversational Artificial Intelligence Service Operator Requirements act, passed the Colorado House of Representatives on April 21, 2026, by a bipartisan 40-24 vote (1 excused) and now awaits Senate consideration.[1][2][3]
The bill establishes comprehensive requirements for operators of conversational AI services — publicly accessible AI systems that simulate human conversation via text, visuals, or audio — with particular focus on protecting minor users from harm, including sexual content and emotional dependence.[1]
Key Provisions
Effective January 1, 2027, the bill requires conversational AI operators to:[1]
- Prohibit points or rewards that encourage minor engagement with the service
- Implement reasonable, technically feasible measures to prevent the service from producing sexually explicit content for minor account holders or minor users
- Implement reasonable, technically feasible measures to block outputs simulating emotional dependence — applied generally and with explicit minor protections
- Establish protocols for handling user prompts related to suicidal ideation or self-harm
- Annually report protocol details to the Colorado Attorney General
- Not imply that AI outputs are endorsed by or equivalent to licensed professionals (e.g., therapists)
- Provide consumer disclosures about the nature of the AI service
The bill preserves constitutional rights to information access, exempts confidential disclosures, and avoids mandating unconstitutional content moderation.[1]
Enforcement
Violations are classified as deceptive trade practices under the Colorado Consumer Protection Act, enforceable by the Colorado Attorney General. Civil penalties reach $5,000 per violation, with each AI output constituting a separate violation.[1]
Legislative History
| Date | Action | Vote |
|---|---|---|
| March 26, 2026 | House Business Affairs & Labor Committee | Passed 10-3 (with amendments L.001-L.004) |
| March 31, 2026 | House second reading (laid over) | — |
| April 20, 2026 | House second reading (passed with amendments) | — |
| April 21, 2026 | House third reading | Passed 40-24 (1 excused) |
| Awaiting | Senate consideration | — |
Context
HB 26-1263 is one of two major AI legislative efforts in Colorado during 2026. The other is a comprehensive rewrite of the Colorado AI Act (SB 24-205), the state's existing risk-based AI regulation. Governor Polis's working group released a framework in March 2026 proposing to replace the current law's "high-risk AI" framework with narrower disclosure and consumer notice requirements for "automated decision-making technology." The Colorado AI Act's effective date was extended to June 30, 2026, creating urgency for reform before the legislature adjourns on May 13, 2026.[4]
HB 26-1263 represents a more targeted approach, focusing specifically on conversational AI chatbot safety rather than the broader risk classification system of SB 24-205. The bill's bipartisan support — demonstrated by the 10-3 committee vote and 40-24 floor vote — suggests a consensus that conversational AI poses unique risks requiring specific regulation, distinct from general AI transparency requirements.[3]
See Also
- Colorado AI Act Working Group Revision (March 2026)
- State AI Legislation Week of April 24, 2026
- Nineteen States Pass AI Laws in 2026
References
1. Colorado General Assembly, HB 26-1263 Bill Page
2. Healthier Colorado, "Statement on House Passage of HB26-1263," April 21, 2026
3. Colorado Politics, "Bipartisan Colorado Bill Targeting AI Chatbot Risks Advances," April 12, 2026
4. Shook Hardy & Bacon, "Revamped Colorado AI Act Proposed," March 2026
See individual article: Colorado HB26 1263 Conversational AI
Connecticut Senate Bill 5
Connecticut Senate Bill 5 is comprehensive AI regulation legislation that passed the Connecticut Senate on April 21, 2026, by a vote of 32-4. The bill is now before the House of Representatives, where lawmakers are racing to pass it before the May 6, 2026 adjournment deadline. Governor Ned Lamont's office has expressed "qualified support," shifting from earlier veto concerns.[1][2][3] The bill, sponsored by Senator James Maroney, spans 97 pages and covers AI in employment, companion chatbots, youth social media, frontier AI oversight, and workforce development.[1][4]
Overview
Connecticut SB 5 represents the state's third attempt at comprehensive AI regulation, narrowing from broad oversight to targeted requirements for companion chatbots, automated employment decisions, synthetic content labeling, and workforce development. The bill has 23 Senate co-sponsors. This is the most comprehensive state AI bill to clear a chamber in 2026.[4]
Key Provisions
Automated Employment Decision Technology
Beginning October 1, 2027, employers deploying "automated employment-related decision technology" must:[1]
- Notify individuals before an employment decision is made if an automated system is used as a substantial factor
- Disclose the tool's trade name, purpose, categories and sources of personal data analyzed, how data is assessed, and employer contact information
- Disclose when applicants or employees interact directly with automated systems
- Provide a right to appeal and request human review of automated decisions
SB 5 also creates a private right of action, allowing workers to sue for discrimination (covering age, race, sex, disability).[4]
The bill defines "automated employment-related decision technology" broadly to capture third-party hiring platforms, resume screening software, assessment tools, scheduling algorithms, and performance analytics systems, while excluding routine technologies like word processing and email.[1]
Anti-Discrimination Integration
SB 5 amends Connecticut's anti-discrimination statutes to clarify that using automated decision technology is not a defense against employment discrimination claims.[1]
AI Companion and Chatbot Safeguards
The bill requires companies offering AI "companions" with human-like interactions to implement specific safeguards, including disclosure, crisis response, and minor protections. Chatbot operators must detect suicidal ideation and route users to crisis resources.[1][4]
AI Subscription Transparency
Requires clear disclosure of AI product functional limitations before charging or renewing subscriptions.[1]
Regulatory Sandbox
Creates an AI regulatory sandbox allowing companies to test innovative products under reduced regulatory requirements.[1]
State Agency AI Oversight
Requires state agencies to conduct inventory and impact assessments before deploying AI systems.[1]
Workforce Development
Establishes a Connecticut AI Academy to train state workers, teachers, and small businesses on AI tools. Requires employer disclosure when layoffs are AI-related.[1][2]
Frontier AI Oversight
Includes provisions regulating high-risk "frontier" AI models, though these are narrower than some earlier proposals.[4]
Legislative History
- 2024: First comprehensive AI bill proposed, failed
- 2025: Second attempt, narrower scope, failed before adjournment
- March 2026: SB 5 advanced by Joint Committee on General Law
- April 21, 2026: Passed Senate 32-4[1]
- April 25, 2026: House discussions intensify ahead of May 6 adjournment; Governor's office signals qualified support[3]
- Next step: House vote before May 6, 2026 adjournment[1][4]
Significance
Connecticut SB 5 is significant because it:
- Is the most comprehensive state AI bill to clear a chamber in 2026
- Integrates AI regulation with existing anti-discrimination law
- Creates one of the first state AI regulatory sandboxes
- Addresses companion chatbot safety alongside employment AI
- Carries a late 2027 effective date (October 1, 2027), giving businesses time to comply
- Benefits from the Governor's shift to "qualified support," which improves its prospects for enactment
See Also
- Oregon Signs AI Companion Safety Act
- California AI Bills Advance to Appropriations
- Nevada Passes CHAT Act
References
1. CBIA, "Senate Passes Sweeping AI Mandates"
2. CT Mirror, "Artificial Intelligence Regulation, Senate CT"
3. CT Post, "Connecticut AI Bill SB 5 Heads to House"
4. Transparency Coalition, "AI Legislative Update — April 24, 2026"
See individual article: Connecticut Senate Bill 5
James Uthmeier
Florida Attorney General James Uthmeier announced on April 21, 2026, the launch of a criminal investigation into OpenAI over ChatGPT's alleged role in advising the perpetrator of the April 17, 2025, Florida State University (FSU) shooting, which killed two people and injured six. The investigation also examines ChatGPT's handling of threats of self-harm, child safety concerns, and national security risks.[1][2]
Background
The FSU shooting on April 17, 2025, carried out by 21-year-old Phoenix Ikner, killed two people and injured six others. Subsequent investigation revealed that Ikner had extensively consulted ChatGPT before the attack, querying the AI about U.S. reactions to shootings, busy campus areas, weapons, and ammunition. Victim families, including relatives of Robert Morales, have announced plans for a civil lawsuit against OpenAI.[3][2]
Investigation Scope and Subpoenas
The Florida Office of Statewide Prosecution issued subpoenas to OpenAI requiring responses by May 1, 2026. The subpoenas demand internal documents from March 1, 2024, to April 17, 2026, covering:[1][4]
- Policies on user threats of harm to others and self-harm
- Law enforcement cooperation records
- Organizational charts and employee lists
- Media and statements related to the FSU shooting
- Records of interactions with minors
AG Uthmeier stated, "If this were a person on the other side of the screen, we would be charging them with murder," framing the investigation under Florida's aider-and-abettor statute, which treats those who aid, abet, or counsel crimes as equally responsible as perpetrators.[1][3]
Broader Concerns
Beyond the FSU shooting, the criminal investigation encompasses:[5][1]
- Child safety: AI-generated child sexual abuse material (CSAM), with Florida having recently sentenced one individual to 135 years for AI-generated CSAM possession
- Suicide and self-harm promotion: ChatGPT's alleged encouragement of self-harm among minors
- National security: Concerns about data access by foreign adversaries, particularly China
Florida's prior legislative actions include HB 1159 (signed March 2026), elevating AI-generated CSAM to a second-degree felony, and HB 245 expanding "child pornography" definitions to cover AI-generated content.[1][5]
OpenAI Response
OpenAI has stated that safety is core to its product design, denied encouraging harmful behavior, and indicated it will cooperate with the investigation. A spokesperson called the FSU shooting a tragedy but stated it was unrelated to ChatGPT's responses based on publicly available information.[3][2]
Significance
This investigation represents the first known criminal probe of an AI company by a state attorney general, escalating beyond the civil investigations and regulatory actions that have characterized AI enforcement to date. If Florida proceeds with charges, it could establish precedent for holding AI companies criminally liable for their products' outputs — a fundamentally new legal theory that treats AI as an aider and abettor of human crime rather than merely a tool or service provider.[5][4]
The investigation also coincides with Florida's legislative efforts to regulate AI, including Governor DeSantis's April 15 call for a special session (April 28–May 1, 2026) to reconsider the AI Bill of Rights (CS/SB 482), which addresses parental consent for minor chatbot accounts and consumer transparency.[6]
See Also
- Doe v X.AI Corp — Class action alleging xAI's Grok generated CSAM deepfakes of minors
- Florida AI Bill of Rights Special Session
- Cases — Active AI litigation tracker
- Legislation — AI legislation tracker
References
1. Florida Office of the Attorney General, "Attorney General James Uthmeier Launches Criminal Investigation into OpenAI/ChatGPT," April 21, 2026
2. Axios, "Florida AG launches investigation into OpenAI," April 9, 2026
3. SiliconANGLE, "Florida Attorney General issues subpoenas in ChatGPT probe tied to FSU shooting," April 21, 2026
4. GigaLaw, "Florida Attorney General Issues Subpoenas to OpenAI Over Threats," April 21, 2026
5. CBS News Miami, "Florida investigates OpenAI over AI risks to minors, safeguards," April 2026
6. Florida AI Bill of Rights Special Session
See individual article: James Uthmeier
Nevada SB 5 AI Policy Office
Nevada SB 5, establishing an Artificial Intelligence Policy Office and an AI Learning Laboratory Program, was approved by the Nevada Senate on April 21, 2026, and sent to the House for consideration.[1]
Background
Nevada SB 5 was introduced in the 2025-2026 legislative session to establish a coordinated state approach to AI governance. The bill had a public hearing on March 10, 2026, was filed with the Legislative Commissioner's Office on March 19, received a favorable report, and was tabled for the Senate calendar on April 20 before Senate approval on April 21.[1]
Key Provisions
Artificial Intelligence Policy Office
The bill creates an AI Policy Office within the state government, led by a Director, responsible for:
- Developing statewide AI policy recommendations
- Coordinating AI regulatory efforts across state agencies
- Advising the Governor and Legislature on AI-related matters
AI Learning Laboratory Program
SB 5 establishes an AI Learning Laboratory Program to:
- Facilitate research and experimentation with AI technologies
- Provide a controlled environment for testing AI applications
- Support education and workforce development in AI-related fields
Legislative History
- March 10, 2026: Public hearing held
- March 19, 2026: Filed with Legislative Commissioner's Office; favorable report received
- April 20, 2026: Tabled for Senate calendar
- April 21, 2026: Approved by Senate; sent to House
Context
Nevada has been among the most active states on AI legislation in 2026. On the same day the Senate approved SB 5, Tennessee passed SB 1700, the Curbing Harmful AI Technology (CHAT) Act, which regulates conversational AI chatbot safety.[1]
Nevada SB 5 follows a broader national trend of states creating dedicated AI policy offices; a similar bill, New Mexico SB 5, also advanced through its Senate in April 2026.
See individual article: Nevada SB 5 AI Policy Office
New Mexico SB 5 AI Policy Office
On April 21, 2026, the New Mexico Senate approved SB 5, legislation establishing an Office of Artificial Intelligence Policy within state government. The bill was sent to the House of Representatives for consideration.
SB 5 Provisions
The legislation would create a centralized AI Policy Office with authority to:
- Develop guidelines for state agency AI procurement and deployment
- Review AI systems used by state agencies for compliance with civil rights and anti-discrimination laws
- Establish reporting requirements for algorithmic decision-making systems
- Coordinate with other states and federal agencies on AI governance
Legislative Status
As of April 21, 2026:
- Senate: approved
- House: pending committee consideration
Context
New Mexico joins approximately 45 other states with AI legislation in 2026. SB 5 represents the "administrative/executive" approach to AI governance, establishing a central coordinating body rather than imposing direct restrictions.
See individual article: New Mexico SB 5 AI Policy Office
Tennessee SB 1700 CHAT Act
April 21, 2026 — Tennessee House Passes CHAT Act on Chatbot Safety and Data Privacy; Bill Awaits Governor's Action
The Tennessee House of Representatives approved Senate Bill 1700, the Curbing Harmful AI Technology (CHAT) Act, on April 21, 2026, by a unanimous 90-0 vote. The legislation, sponsored by Senator Raumesh Akbari, establishes comprehensive safety and data privacy requirements for conversational AI systems operating in the state. As of April 26, 2026, the bill awaits action by Governor Bill Lee, with a deadline of approximately May 8, 2026, after which it becomes law without his signature.
Key Provisions
The CHAT Act creates a regulatory framework for chatbots and conversational AI platforms, addressing growing concerns about deceptive practices, data collection, and the potential for AI systems to manipulate users, particularly minors. Key provisions include:
- Persistent disclosures and pop-up notifications: Covered entities must inform users they are not engaging with a human at four intervals — upon login, every 30 minutes of continuous engagement, when prompted by the user, and when asked to provide legally regulated advice (medical, financial, or legal).[1]
- Covered chatbot definition: Applies to publicly accessible AI systems that generate at least $25 million in annual revenue, have at least 1 million monthly users, and could likely be used by minors. Video game chatbots and customer service chatbots are excluded.[2]
- Child safety policies: AI companies offering tools used by minors must develop and publish publicly available child safety protection policies.[2]
- Civil penalties: Up to $25,000 per violation.[1]
- Private right of action: Individuals may seek uncapped actual and punitive damages, plus costs, fees, and "any other relief the court deems proper."[1]
Notable Amendments
The bill was narrowed after White House input, which amended the definition of "catastrophic risk" to emphasize extreme, high-consequence harms, adjusted transparency rules to require public summaries rather than full internal disclosures, and provided carve-outs for academic research systems. The bill also includes a provision allowing Tennessee to recognize federal compliance standards as sufficient to meet state requirements if Congress passes comparable legislation.[2]
Legislative History
The Tennessee Senate passed SB 1700 unanimously on April 14, 2026 (31-0). The House approved the bill on April 21, 2026 (90-0). The companion House bill (HB 1946) was abandoned in favor of the Senate version. The bill was sent to Governor Bill Lee for final consideration. The Tennessee legislative session closed on April 24, 2026; under Tennessee law, the governor has 10 calendar days (excluding Sundays) from session adjournment to sign or veto bills, after which they become law without signature — creating an approximate May 8, 2026 deadline.[3][4]
Context
Tennessee's CHAT Act is part of a broader wave of state-level chatbot legislation enacted in 2026. With the passage of SB 1700, Tennessee becomes the latest state to establish specific regulatory oversight for conversational AI systems, joining Nebraska, which enacted its Conversational AI Safety Act on April 14, 2026. Governor Lee has already signed other AI-related legislation this session, including SB 1580 (prohibiting AI from representing itself as a mental health professional) and SB 837 (excluding AI from legal personhood).[4]
See Also
- Nebraska LB 1185 — Conversational AI Safety Act
- Tennessee SB 837 — AI Personhood Bill
- Tennessee SB 1580 — Healthcare AI Bill
- Idaho S 1297 — Conversational AI Safety Act
References
1. CCIA, "Comments on TN SB 1700 — Opposition Paper," March 3, 2026
2. Statescoop, "Tennessee AI Safety Bill Amended by White House Input"
3. TN.gov, "Governor Lee Marks Close of 2026 Legislative Session," April 23, 2026
4. Transparency Coalition, "AI Legislative Update," April 24, 2026
See individual article: Tennessee SB 1700 CHAT Act