News April 06 2026
April 6, 2026 — Daily digest of AI law developments.
This article consolidates 8 news stories from April 6, 2026.
Contents
1. Anthropic 1.5B Settlement
2. FBI IC3 AI Cybercrime Report
3. GEMA v Suno
4. Georgia AI Bills Governor April
5. Indie Artists v Google Lyria
6. Kadrey v Meta Fourth Amended Complaint April
7. Warner Music Group
8. Penguin Random House v OpenAI
Anthropic 1.5B Settlement
April 6, 2026 — Anthropic has agreed to a $1.5 billion settlement in Bartz et al. v. Anthropic PBC, Case No. 3:24-cv-05417-AMO (N.D. Cal.), resolving claims by authors whose books were downloaded from the shadow libraries LibGen and PiLiMi for AI training.[1][2][3]
Background
The lead plaintiffs are authors Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber, who alleged that Anthropic's Claude AI assistant was trained on pirated copies of books from the "shadow library" websites LibGen and PiLiMi.[1][2] The settlement followed a partial summary judgment ruling favoring plaintiffs on the pirated-book claims, which found that downloading books from shadow libraries was not fair use.[1]
Details
The $1.5 billion fund covers approximately 500,000 works at roughly $3,000 per work, with additional payments if more works are added.[1][2] Payments are structured in installments: $300M by October 2, 2025; $300M post-final approval; $450M by September 25, 2026; and $450M by September 25, 2027.[1][3] The settlement covers only past ingestion claims up to August 25, 2025; it does not release claims for future training, AI outputs, or commercial model use from these datasets, and includes dataset destruction provisions.[1][4]
Timeline
- Preliminary approval: September 25, 2025 (Judge William Alsup)[2]
- Claimants deadline: March 30, 2026 (440,490 of 482,460 eligible works claimed — 91.3% claim rate)[5][6]
- Opt-outs: <0.5% of Works List; 41 objections filed[6]
- Objections unsealed: April 2026 (Dkt. Nos. 544, 596, 598, 600, 601, 602, and subsequently Dkt. 630, 640, 641)[7]
- Judge reassignment: Case randomly reassigned to Judge Araceli Martínez-Olguín following Judge Alsup's move to inactive status[5]
- Final approval hearing: Rescheduled to May 14, 2026 at 2:00 PM PT (originally April 23, 2026)[6]
- Distributions expected from June 2026[2]
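The headline figures above are internally consistent; a minimal Python sketch verifying the arithmetic (every number is taken directly from the article, none is new data):

```python
# Installment schedule reported for the Anthropic settlement fund.
installments = [300e6, 300e6, 450e6, 450e6]  # Oct 2025, post-approval, Sep 2026, Sep 2027
fund = sum(installments)
assert fund == 1.5e9  # installments sum to the $1.5 billion fund

# Approximate per-work payout: fund divided by ~500,000 covered works.
per_work = fund / 500_000
print(f"${per_work:,.0f} per work")  # → $3,000 per work

# Claim rate from the March 30, 2026 claims deadline figures.
claimed, eligible = 440_490, 482_460
print(f"{claimed / eligible:.1%} claim rate")  # → 91.3% claim rate
```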
Objections
Class objections include:[7][8]
- Publisher favoritism: Objector Professor Bishop (Dkt. 630) argues the settlement systematically favors publishers over authors — publishers could claim roughly 50% or more of the total settlement through royalty presumptions, despite the suit being author-led[7]
- Foreign work exclusion: Bishop also challenges the exclusion of over 2 million foreign and non-U.S.-registered works, arguing this improperly narrows the class[7]
- Late notice: Objector Esquivel received notice of the settlement on approximately March 3, 2026 — nearly a month after the February 9 opt-out deadline had expired[7]
- Group registration undercounting (Dkt. 641): Multiple books registered under a single copyright registration are treated as one "claimable work," dramatically undercompensating prolific authors (e.g., 40+ books counted as a single work)[7]
- Dangerous precedent (Dkt. 640): The settlement allows AI companies to settle mass piracy at a discounted rate — cheaper than licensing — setting a harmful precedent for future AI copyright disputes[7]
- Class counsel conflicts: References Judge Alsup's December 2025 concerns about undisclosed fee-sharing arrangements between class counsel[8]
- Inadequate compensation: Base awards of approximately $3,000 per work before deductions for attorneys' fees and costs, with a tiered system that pays more for books deemed "important" (nonfiction) than for fiction[8]
Objectors may participate in the May 14, 2026 fairness hearing via Zoom.[9]
Broader context
Anthropic was the first major AI company to settle one of the foundational AI training copyright cases. Class counsel seeks approximately $319M in fees and costs (21.26% of the fund), with named plaintiffs requesting $50K each.[1][10]
Related cases
- Bartz v Anthropic PBC — Full case page with procedural history and current status
- Kadrey v Meta Platforms Inc
References
See individual article: Anthropic 1.5B Settlement
FBI IC3 AI Cybercrime Report
The FBI's Internet Crime Complaint Center (IC3) released its 2025 Annual Report on April 6, 2026, documenting for the first time AI-enabled cybercrime as a dedicated category. The report logged 22,364 AI-related complaints with $893 million in associated losses, marking a watershed moment in the recognition of AI as a tool for fraud and cybercrime.[1][2]
Overall Cybercrime Trends
The 2025 IC3 report documented over 1 million total complaints with $20.877 billion in losses, representing a 26% increase from 2024. Cyber-enabled fraud dominated, comprising approximately 85% of losses ($17.7 billion from approximately 453,000 complaints). Investment fraud (often cryptocurrency-related) accounted for $8.648 billion, and business email compromise (BEC) accounted for $3.046 billion.[1][3]
Ransomware complaints reached 3,611 with $32.32 million in losses, a 259% increase from 2024, with 63 new variants identified.[1][3]
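The loss shares reported above can be cross-checked from the article's own totals (a quick sanity calculation, using only figures cited in this section):

```python
# 2025 IC3 totals as reported above.
total_losses = 20.877e9   # total reported losses
fraud_losses = 17.7e9     # cyber-enabled fraud losses
ai_losses = 893e6         # AI-related complaint losses

# "Approximately 85%" of losses from cyber-enabled fraud checks out.
print(f"cyber-enabled fraud share: {fraud_losses / total_losses:.1%}")
# → cyber-enabled fraud share: 84.8%

# AI-related losses are a small but now separately tracked slice of the total.
print(f"AI-related share of total losses: {ai_losses / total_losses:.1%}")
# → AI-related share of total losses: 4.3%
```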
AI-Specific Findings
This is the first IC3 report in its 25-year history to include a dedicated AI section, underscoring AI's shift from an edge case to a core fraud driver.[2][4]
AI-Enabled Fraud Types
The report documents several categories of AI-enabled fraud:
- Business Email Compromise (BEC) with AI component: Over $30 million in losses, as AI generates context-specific, high-quality emails for impersonation and sustained scams[1][3]
- Impersonation scams: AI creates realistic identities, tones, and scenarios (e.g., impersonating government officials, executives, or vendors) for trust-building in investment fraud and social engineering[2][1]
- Investment and sustained fraud: AI adapts messaging over time to build credibility in high-loss schemes such as cryptocurrency scams[1]
- AI-generated phishing emails: AI enhances email quality, tone-matching, and sustained impersonation, lowering barriers for threat actors[2]
- Synthetic video and voice cloning: AI-generated deepfake video content and cloned voices used in fraud schemes[1][4]
Underreporting Concerns
The report acknowledges that AI-related losses are likely significantly underreported, as victims often fail to recognize AI involvement in fraud schemes. The true scale of AI-enabled cybercrime may be substantially larger than the documented $893 million.[2][1]
Regulatory Implications
The report's dedicated AI section has significant implications for AI regulation:
- It provides empirical data supporting regulatory efforts targeting AI misuse in BEC, impersonation, and scalable fraud
- It signals the need for victim awareness campaigns and AI-detection tools, as AI obscures its own involvement
- International cooperation is highlighted, with FBI-CBI operations yielding 175 arrests[1]
- The data supports policies requiring AI watermarking, provenance tracking, and detection mechanisms in AI-generated content
References
1. FBI IC3, "2025 Internet Crime Report," April 6, 2026
2. Abnormal Security, "AI Cybercrime Soars: Key Takeaways from the FBI IC3 2025 Report," April 2026
3. McDonald Hopkins, "The Sobering Truth of the FBI 2025 IC3 Report," April 2026
4. WaterISAC, "FBI IC3 Releases 2025 Internet Crime Report," April 2026
See individual article: FBI IC3 AI Cybercrime Report
GEMA v Suno
April 6, 2026 — A German court is expected to issue a ruling on June 12, 2026, in GEMA v. Suno, a landmark case at the Munich Regional Court I concerning whether Suno, Inc. infringed copyrights by using protected sound recordings to train its generative AI music model without licenses.[1][2][3]
Background
GEMA, Germany's music collecting society representing over 100,000 members and two million rightsholders worldwide, sued US-based AI music generator Suno on January 21, 2025, alleging unauthorized use, storage, and reproduction of copyrighted song recordings to train its AI tool.[1][3] GEMA claims Suno's outputs are "misleadingly similar" to originals in melody, harmony, and rhythm, constituting unauthorized reproduction and making available to the public under German copyright law.[1][3]
Proceedings
The oral hearing was held March 9, 2026, at the Munich Regional Court I.[1][3] The hearing concluded without a ruling; Suno must respond in writing by April 7, 2026.[3] Suno argues no infringement occurred, as outputs are not recognizable copies but mathematical patterns derived from training data.[3]
Significance
The GEMA case is closely watched internationally because it could set precedent for AI music training liability in Europe.[2] Unlike U.S. fair use doctrine, German copyright law has no general fair use exception, potentially making AI companies more vulnerable to infringement claims in European jurisdictions.[2][1] This follows GEMA's separate win against OpenAI (Munich Regional Court, case no. 42 O 14139/24, ruled November 11, 2025), where the court found unauthorized reproduction of song lyrics in GPT-4/4o models.[4][5]
Related cases
- UMG Recordings, Inc. v. Suno, Inc. — U.S. litigation by major record labels
- UMG Recordings, Inc. v. Uncharted Labs, Inc. — U.S. case against Udio parent company
References
See individual article: GEMA v Suno
Georgia AI Bills Governor April
April 2026 — Georgia Sends Two AI Bills to Governor, Covering Chatbot Safety and Healthcare AI Decisions
Georgia's legislature adjourned on April 6, 2026, after approving two AI-related bills and sending them to Governor Brian Kemp for signature. As of April 26, 2026, neither bill has been signed by the governor. The bills address chatbot disclosure and child safety, and AI in healthcare insurance decisions.[1]
SB 540: Chatbot Disclosure and Child Safety
SB 540 is a chatbot disclosure and child safety bill requiring that users be notified they are interacting with an AI chatbot, steps to limit certain actions by minors, provision of privacy tools, and protocols for responding to suicidal ideation or self-harm. The bill was passed by the Senate on March 6, approved by the House on March 25, and the Senate agreed to the House reconciliation version on March 27.[1]
Sponsored by Senator Anavitarte and others, the bill aligns Georgia with a growing national wave of chatbot safety legislation, including measures enacted in Idaho (Conversational AI Safety Act, S 1297), Nebraska (Conversational AI Safety Act, LB 1185), Oregon (SB 1546), and Tennessee (CHAT Act, SB 1700).[1][2]
SB 444: AI in Healthcare Insurance
SB 444 prohibits healthcare insurance coverage decisions from being based solely on AI systems or software tools. The bill was adopted by the Senate on February 11, passed by the House on March 19, and the Senate agreed to House amendments on March 25.[1]
Sponsored by Senator Kirkpatrick and others, the bill reflects a broader trend of states restricting AI's role in healthcare coverage determinations, similar to Alabama's SB 63 (signed April 17, 2026) and Idaho's HB 542 (signed April 2, 2026).[1][2]
SR 789: AI Study Committee
The Georgia Senate also approved SR 789, a Senate Resolution creating a Senate Study Committee on the Impact of Artificial Intelligence. The resolution received full Senate approval on March 31. As a Senate Resolution, it does not require gubernatorial action and takes effect upon passage.[1]
Significance
Georgia's 2026 AI legislative session reflects the growing focus among Southern states on regulating AI in both consumer-facing and healthcare contexts. Alongside Alabama, Tennessee, and Mississippi, Georgia is part of a regional trend of states with Republican-controlled legislatures advancing AI safety measures — particularly chatbot safety and healthcare AI restrictions.[1][2]
See Also
- Idaho Enacts Conversational AI Safety Act
- Nebraska Enacts Conversational AI Safety Act
- Tennessee Passes CHAT Act
- Alabama Signs SB 63 on Healthcare AI
References
See individual article: Georgia AI Bills Governor April
Indie Artists v Google Lyria
April 6, 2026 — A coalition of independent musicians has filed a lawsuit against Google in the U.S. District Court for the Northern District of Illinois, alleging that the tech giant's Lyria 3 AI music model was trained on over 44 million copyrighted clips (280,000 hours) from YouTube without proper compensation or consent.[1][2][3]
Overview
The 118-page complaint, filed March 6, 2026, in the U.S. District Court for the Northern District of Illinois (Case No. 1:26-cv-02582), claims Google DeepMind copied millions of copyrighted sound recordings, musical compositions, and lyrics to develop Lyria 3.[1][2] The model launched publicly on February 18, 2026, via the Gemini chatbot app for over 750 million monthly users and generates up to 30-second audio clips with vocals and lyrics from text prompts or uploaded images and videos.[1]
Background
Plaintiffs argue Google had the resources to license the rights lawfully but chose not to, gaining unfair leverage over artists, labels, and publishers.[1] They cite Google's earlier MusicLM project (completed in 2023), which Google withheld from release over legal risks, including concerns about cultural appropriation.[1] Unlike the major-label lawsuits against Suno and Udio, this case focuses specifically on independent artists, who historically have had less leverage in licensing negotiations with large platforms.[3]
Context
This lawsuit joins a growing roster of AI music copyright cases, including the major label cases against Suno and Udio, and comes as courts are beginning to grapple with foundational questions about whether training AI models on copyrighted works constitutes fair use.[1][3]
See Also
- Kogon v. Google LLC — Case page for this litigation
- Cases — Active AI litigation tracker
References
See individual article: Indie Artists v Google Lyria
Kadrey v Meta Fourth Amended Complaint April
April 6, 2026 — Judge Allows Fourth Amended Complaint in Kadrey v. Meta, Adding Contributory Infringement Claim Over Torrenting
Judge Vince Chhabria of the U.S. District Court for the Northern District of California reluctantly granted plaintiffs' motion to file a fourth amended complaint in Kadrey et al. v. Meta Platforms, Inc. (Case No. 3:23-cv-03417), adding a contributory copyright infringement claim related to Meta's alleged torrenting of copyrighted books from shadow libraries.[1]
The new claim targets Meta's alleged distribution of copyrighted works through torrenting—a practice distinct from the direct infringement claims for training AI models on pirated books that were already resolved on fair use grounds in the court's June 2025 partial summary judgment ruling. Plaintiffs allege Meta knowingly induced or materially contributed to third-party infringement by downloading and uploading their copyrighted books via BitTorrent from shadow libraries including Library Genesis and Z-Library.[1][2]
Despite granting the motion, Judge Chhabria strongly rebuked plaintiffs' counsel at Boies Schiller, describing a "pattern" of blaming Meta for case delays and noting that plaintiffs could have added these claims as early as November 2024. The court described the delay as "inexcusable" but ultimately prioritized the interests of absent class members, finding no prejudice to Meta given the ongoing related litigation and shared discovery schedules with the Entrepreneur Media v. Meta case.[1]
The court also denied class discovery until named plaintiffs survive summary judgment on both the existing distribution claims and the new contributory infringement claims, a significant limitation on the plaintiffs' ability to pursue class-wide relief.[1]
Background
The Kadrey v. Meta case was filed in July 2023 by authors including Richard Kadrey, Sarah Silverman, and Christopher Golden, alleging Meta used pirated books to train its LLaMA AI models. In June 2025, Judge Chhabria granted Meta's motion for partial summary judgment on fair use grounds for the direct infringement (training) claims, but distribution claims remained unresolved.[3]
Significance
This ruling marks an important shift in the case's trajectory, expanding the litigation beyond the direct training infringement claims (already decided on fair use) to include contributory infringement based on distribution activities. The contributory infringement theory could have broader implications for AI companies' liability when their training data acquisition practices involve distributing copyrighted works through peer-to-peer networks.
See Also
- Kadrey v Meta Platforms Inc — Full case page
- Carreyrou v Anthropic PBC — Related opt-out copyright litigation against multiple AI companies
- Cases — Active AI litigation tracker
References
See individual article: Kadrey v Meta Fourth Amended Complaint April
Warner Music Group
Correction: An earlier version of this article inaccurately stated that all three major labels settled with both Suno and Udio. In fact, only Warner Music Group settled with Suno; UMG and Sony's lawsuits against Suno remain active. UMG settled with Udio but Sony's lawsuit against Udio also continues.
April 6, 2026 — Warner Music Group has settled its copyright infringement lawsuits against both AI music companies Suno and Udio, transitioning to licensed partnerships. Universal Music Group settled only with Udio, while its lawsuit against Suno continues. Sony Music Entertainment has not settled with either company, and both of its lawsuits remain active.[1][2][3][4]
Settlement Status (as of April 2026)
| Label | Suno Status | Udio Status |
|---|---|---|
| Warner Music Group | Settled (Nov 2025) | Settled (Nov 2025) |
| Universal Music Group | Active lawsuit (ongoing discovery; settlement talks stalled over fees and equity) | Settled (Oct 2025) |
| Sony Music | Active lawsuit (stalled talks; seeking Warner-Suno deal terms in discovery) | Active lawsuit |
Background
The lawsuits were filed by the major record labels in 2024, alleging that Suno and Udio trained their generative AI music systems on copyrighted sound recordings without authorization.[1][3]
Settlement Details
- WMG-Suno (November 25, 2025): Warner Music and Suno announced a "first-of-its-kind" partnership settling the lawsuit, providing compensation and protections for artists and songwriters, including control over names, images, likenesses, voices, and compositions in AI music.[1][2] Suno retired its unlicensed models and launched new licensed models using the Warner catalog; free users can only play and share, while downloads require payment.[1] WMG also sold Songkick, a concert-discovery platform, to Suno.[2]
- WMG-Udio (November 19-24, 2025): WMG settled its lawsuit and entered a licensing deal for a "next-generation" AI music platform launching in 2026, with Udio becoming a "walled garden" with no downloads or exports allowed.[1][2]
- UMG-Udio (October 2025): Universal Music Group settled its lawsuit against Udio and signed a licensing deal for an AI music platform set to launch in 2026.[3]
Active Litigation
- UMG v. Suno: Universal's lawsuit against Suno remains active in U.S. District Court (Massachusetts), with ongoing discovery. A protective order was filed in December 2025. Settlement talks have stalled over licensing fees for training data and reported demands for equity stakes in Suno.[4][5]
- Sony v. Suno and Sony v. Udio: Both lawsuits continue. Sony is seeking details of the Warner-Suno settlement terms through discovery.[4][5]
Significance
The partial settlements mark a split in the music industry's approach to AI, with Warner Music embracing licensing partnerships while UMG and Sony continue pursuing litigation to establish stronger precedent and compensation terms. The ongoing UMG and Sony cases against Suno may produce the first judicial rulings on whether training AI on copyrighted sound recordings constitutes fair use — a question the Bartz v. Anthropic summary judgment addressed only for books and text.[3][4]
Suno and Udio continue to face separate litigation from independent artists and international disputes, including the pending GEMA v. Suno case in Germany.[2]
Key dates
- June 24, 2024 — UMG Recordings, Inc. v. Suno, Inc. filed[3]
- October 2025 — UMG settles with Udio[3]
- November 19-25, 2025 — Warner Music settles with Udio and Suno[1]
- December 2025 — UMG v. Suno: protective order filed; Sony continues separate litigation[4]
- April 2026 — UMG/Sony settlement talks with Suno stalled; both pursue discovery, including Warner-Suno deal terms[4][5]
- June 12, 2026 — GEMA v. Suno ruling expected (Germany)[2]
Related pages
- UMG Recordings Inc v Suno Inc
- UMG Recordings, Inc. v. Uncharted Labs, Inc.
- April 2026 — UMG and Sony Hit Settlement Impasse With Suno
References
1. TechCrunch, "Warner Music Signs Deal With AI Music Startup Suno, Settles Lawsuit," November 25, 2025
2. Music Business Worldwide, "Warner Music Group Settles With Suno, Strikes First-of-its-Kind Deal," November 2025
3. Courthouse News, "AI Song Generator Startups Suno and Udio Angered the Music Industry," 2025
4. Digital Music News, "Suno Universal Music Lawsuit: Where Things Stand," April 24, 2026
5. HappyCapyGuide, "Suno UMG Sony Licensing Stalemate," 2026
See individual article: Warner Music Group
Penguin Random House v OpenAI
April 6, 2026 — Penguin Random House Germany has filed a copyright infringement lawsuit against OpenAI's Ireland-based European subsidiary in a German court, alleging that ChatGPT reproduced content virtually indistinguishable from the Coconut the Little Dragon children's book series by Ingo Siegner.[1][2][3]
Overview
PRH claims that when prompted to create a similar story, ChatGPT generated text and images "virtually indistinguishable from the original," due to OpenAI's "memorization" of training data.[1][2] The case was filed in Germany, likely at the Munich Regional Court, following a November 2025 ruling by that court that ChatGPT violated German copyright by reproducing musicians' songs.[1][3]
Context
OpenAI has faced a steady accumulation of copyright lawsuits from publishers, authors, and media organizations. Prior to this case, OpenAI was already defending itself against suits from Encyclopedia Britannica, Merriam-Webster, and a group of authors whose case survived a motion to dismiss.[4][5]
References
See individual article: Penguin Random House v OpenAI