News: March 13, 2026
March 13, 2026 — Daily digest of AI law developments.
This article consolidates 3 news stories from March 13, 2026.
Contents
1. California AI Bills March
2. Encyclopaedia Britannica v OpenAI
3. Washington State AI Bills
California AI Bills March
California AI Legislation — Multiple AI-related bills advanced in the California legislature in March 2026, covering disclosure, advertising, workplace, and deepfake protections.[1][2]
Key Bills
SB 1000 — AI Disclosure and Provenance
SB 1000 requires disclosure and provenance tracking for AI-generated content, ensuring users can identify when content was produced by AI systems.[1]
SB 1050 — AI in Advertising
SB 1050 regulates the use of AI in commercial advertising, requiring disclosure when AI-generated personas or content are used in marketing.[1]
AB 1898 — Workplace AI Notice
AB 1898 requires employers to provide advance notice to employees when AI systems are used in workplace decision-making, including hiring, performance evaluation, and termination decisions.[1][2]
AB 1883 — Workplace AI Surveillance
AB 1883 places restrictions on AI-powered workplace surveillance and monitoring systems, establishing employee consent requirements.[1][2]
SB 928 — CSU Employee AI Protections
SB 928 extends AI-related protections to California State University employees, governing how AI systems can be used in academic employment decisions.[1]
SB 1015 — AI Deepfakes in Minor Extortion
SB 1015 criminalizes the use of AI-generated deepfakes in extortion targeting minors, addressing the growing threat of synthetic intimate imagery used for blackmail.[1]
Context
California's legislative activity comes despite the Commerce Department's March 11 assessment specifically calling out two California laws — SB 53 (Transparency in Frontier AI Act) and AB 2013 (Generative AI Training Data Transparency Act) — as conflicting with national policy.[3] The state continues to advance new AI bills even as federal preemption efforts intensify.[2][3]
See Also
- Commerce Dept Assessment of State AI Laws
- White House National Policy Framework for AI
- Washington State AI Bills
References
See individual article: California AI Bills March
Encyclopaedia Britannica v OpenAI
Encyclopaedia Britannica and Merriam-Webster Sue OpenAI for Copyright and Trademark Infringement
On March 13, 2026, Encyclopaedia Britannica, Inc. and its subsidiary Merriam-Webster, Inc. filed a copyright and trademark infringement lawsuit against OpenAI in the U.S. District Court for the Southern District of New York, alleging that OpenAI scraped nearly 100,000 Britannica articles and Merriam-Webster dictionary entries without authorization to train ChatGPT.[1][2]
Claims
The complaint asserts two primary claims:[1]
- Copyright infringement: OpenAI copied and used nearly 100,000 Britannica encyclopedia articles and Merriam-Webster dictionary entries — including definitions, etymologies, and usage examples — as training data for ChatGPT without authorization. The scale and commercial nature of the copying challenge any fair use defense.
- Trademark dilution: ChatGPT generates outputs that mimic the style and language of Britannica and Merriam-Webster, potentially misleading users about the source and damaging the brands' reputations for accuracy and reliability. Notably, ChatGPT has been known to generate content falsely attributed to Britannica and Merriam-Webster, compounding the harm.[3]
The plaintiffs seek unspecified monetary damages and an injunction to block OpenAI's alleged ongoing infringement.[2]
Significance
The lawsuit is notable for several reasons:[3]
- It involves reference works (encyclopedias and dictionaries) rather than the creative works (books, music, video) that dominate most AI copyright cases, testing whether factual reference materials receive similar copyright protection against AI training.
- The trademark dilution claim is novel in the AI copyright context, adding a brand-protection dimension not present in most AI training lawsuits.
- It joins over 90 active copyright suits against U.S. AI companies, most filed in the S.D.N.Y.
The case also highlights the issue of AI hallucinations attributed to authoritative sources: when ChatGPT generates content and falsely attributes it to Britannica or Merriam-Webster, it damages those brands' reputations for accuracy — a unique harm that distinguishes reference publishers from other copyright plaintiffs.[3]
Procedural Status
The case was filed on March 13, 2026, and remains in early stages as of April 2026, with no reported motions, responses, or rulings.[1]
See Also
- Encyclopaedia Britannica v OpenAI Inc — Case page with detailed information
- Gracenote Media Services v OpenAI — Nielsen subsidiary sues OpenAI over copyrighted metadata database
- Kadrey v Meta Platforms Inc — Authors v. Meta over book training data
- Cases — Active AI litigation tracker
References
1. Courthouse News: Complaint PDF, Encyclopaedia Britannica v. OpenAI, March 13, 2026
2. Washington Times, "Encyclopaedia Britannica, Merriam-Webster Sue OpenAI for Massive Copyright Infringement," March 17, 2026
3. AI Automation Global, "Britannica & Merriam-Webster Sue OpenAI for Copyright Infringement," March 2026
See individual article: Encyclopaedia Britannica v OpenAI
Washington State AI Bills
Washington State AI Legislation — Governor Bob Ferguson signed four significant AI bills into law in March 2026, making Washington one of the most active states in regulating artificial intelligence.[1][2][3]
Bills Signed Into Law
HB 1170 — AI Content Provenance and Disclosure
HB 1170, signed by Governor Ferguson on March 25, 2026, requires large AI providers (those with over 1 million monthly users and annual revenues exceeding $500 million) to disclose when content is AI-generated or substantially modified.[3][4] Key provisions include:
- Manifest disclosures: Providers must offer users the option to apply clear, conspicuous, difficult-to-remove labels identifying AI-generated or AI-modified content[4]
- Latent disclosures: Watermarks or metadata must be embedded in AI-generated or materially altered images, video, and audio[3][4]
- Free detection tools: Covered providers must provide public AI detection tools[4]
- Material alteration standard: Significant changes require disclosure, but minor edits like resizing, cropping, or color adjustments do not[4]
- Effective date: February 1, 2027[4]
HB 2225 — AI Companion Chatbot Safety
HB 2225, signed on March 24, 2026, regulates AI companion chatbots — defined as AI systems with natural language interfaces that provide adaptive, human-like responses, exhibit anthropomorphic features, and sustain relationships across multiple interactions.[5][6] Key requirements include:
- Operators must clearly disclose that the chatbot is artificial and not human[5]
- Disclosures must appear at the start of interaction, with reminders every three hours for adults and every hour for minors[5]
- For minors: prohibition on sexually explicit content, mandatory "take a break" prompts, restrictions on engagement-maximizing tactics, and bans on emotional manipulation[6]
- The definition excludes bots used solely for business operations, customer service, or technical assistance that do not sustain relationships, as well as video game bots and standalone voice assistants[5]
- Effective date: January 1, 2027[5]
SB 5395 — AI in Health Insurance Prior Authorization
SB 5395 increases restrictions on the use of AI in prior authorizations by health insurance carriers, requiring human oversight and preventing coverage denials based solely on algorithmic determinations.[1][7]
SSB 5886 — AI Deepfakes and Personality Rights
SSB 5886, signed on March 16, 2026, amends Washington's Personality Rights Law to address the use of a person's "forged digital likeness" without consent, expanding existing property rights law to cover AI-created realistic but deceptive audio and video.[8] The law takes effect on June 11, 2026.[8]
Context
Washington's legislative action is part of a broader wave of state-level AI regulation in early 2026, even as the federal government moves toward preempting state laws.[7] These bills passed the same week the Commerce Department identified multiple state AI laws as conflicting with national policy.[9]
Washington's HB 1170 parallels Oregon's SB 1546 and California's SB 1000 in establishing AI content provenance requirements, while HB 2225 mirrors companion chatbot safety laws in California (SB 243) and Oregon (SB 1546).[6]
See Also
- Oregon SB 1546 (AI Companion Law)
- California SB 1000 (AI Content Provenance)
- Commerce Dept Assessment of State AI Laws
- White House National Policy Framework for AI
References
1. Transparency Coalition, "AI Legislative Update: March 13, 2026"
2. KUOW, "Washington passes new AI laws to crack down on misinformation, protect minors," April 3, 2026
3. OPB, "Washington passes AI laws dealing with misinformation and protecting minors," March 25, 2026
4. Troutman Pepper, "Analyzing Utah and Washington's New AI Provenance Laws," April 2026
5. Hunton, "Washington State Enacts Law Regulating AI Companion Chatbots With Private Right of Action," March 2026
6. Mayer Brown, "Washington and Oregon Regulate AI Companions: Key Compliance Changes," April 2026
7. Tech Policy Press, "March 2026 US Tech Policy Roundup"
8. JD Supra, "Washington State Expands Personality Rights Law to Address AI Deepfakes," 2026
9. Baker Botts, "March 2026 Federal Deadlines That Will Reshape the AI Regulatory Landscape"
See individual article: Washington State AI Bills