News April 13 2026

From AI Law Wiki

Latest revision as of 02:34, 28 April 2026

April 13, 2026 — Daily digest of AI law developments.

This article consolidates 4 news stories from April 13, 2026.

Contents

  1. California SB 1000 AI Content Provenance
  2. Department of Education AI Education Rule
  3. Disney v MiniMax Motions Dismiss
  4. Maryland AI Bills Governor April


California SB 1000 AI Content Provenance

California SB 1000, which modifies existing law on AI disclosure and content provenance data for generative AI systems, was approved by the California Senate Committee on Privacy, Digital Technologies, and Consumer Protection on April 13, 2026, and sent to the Senate Appropriations Committee with a hearing scheduled for April 27, 2026.[1][2]

Background

SB 1000 builds on California's existing AI Transparency Act (Bus. & Prof. Code § 22757 et seq.) enacted in 2024, which requires generative AI systems to provide visible and latent disclosures on AI-generated content. The bill strengthens and expands these requirements, addressing gaps identified during the initial implementation period.[1][3]

Key Provisions

The bill requires clear and conspicuous disclosures identifying content as AI-generated, tailored to the medium (images, video, audio). Specific provisions include:[2][1]

  • Generative AI system providers must offer users options to add visible indicators (labels) on AI-generated images, videos, or audio, and must embed latent (imperceptible but machine-detectable) disclosures in the content
  • By January 1, 2027, platforms hosting generative AI systems may not make available systems that lack these disclosure mechanisms
  • Large online platforms must detect compliant provenance data (per standards-body specifications) in distributed content and provide user interfaces to reveal if content was AI-generated, substantially altered by generative AI, or captured by devices with provenance capabilities
  • From January 1, 2028, capture device makers (cameras, phones) must embed latent disclosures by default and offer opt-in visible disclosures

Legislative History

  • February 18, 2026: Referred to the Senate Committee on Privacy, Digital Technologies, and Consumer Protection[1]
  • March–April 2026: Committee consideration[1]
  • April 13, 2026: Approved by Senate Privacy, Digital Technologies, and Consumer Protection Committee[1]
  • April 13, 2026: Sent to Senate Appropriations Committee; hearing scheduled for April 27, 2026[1]

Context

California SB 1000 is part of a broader state effort to combat AI-generated misinformation and deepfakes through content provenance and transparency requirements. The bill follows similar disclosure requirements in California's SB 942 (AI Transparency Act) and complements the federal push for content provenance standards. Other states including Nevada have also advanced AI disclosure and provenance legislation in 2026.[1]

References

See individual article: California SB 1000 AI Content Provenance


Department of Education AI Education Rule

The Department of Education AI in Education Final Rule, published in the Federal Register on April 13, 2026, establishes supplemental priorities and definitions for discretionary grant competitions that promote the integration of artificial intelligence in education.[1]

The final rule takes effect on May 13, 2026 and gives preference to grant applications that incorporate AI in education.[2][1]

K-12 Priorities

For elementary and secondary education, the final rule prioritizes programs that:[3][1]

  • Expand age-appropriate AI and computer science education offerings
  • Embed AI and computer science lessons into teacher preparation programs
  • Provide professional development for educators to integrate AI into their subject areas
  • Offer dual-enrollment credit opportunities for high schoolers to earn college credits or industry credentials in AI
  • Use AI to support K-12 services, including early intervention and special education, for students with disabilities and their families
  • Use AI technology to improve program outcomes and operational efficiency

Higher Education Priorities

For postsecondary education, the priorities encompass integrating AI literacy into teaching practices, expanding AI and computer science education in institutions of higher education, and supporting professional development for postsecondary educators.[4]

AI Literacy Definition

The final rule broadened its definition of AI literacy to include ethics, critical thinking, and the societal impacts of AI, while maintaining flexibility for local implementation.[2] The Department emphasized that AI tools should support students with disabilities and underserved populations through universal design principles.[1]

Regulatory Approach

The Department of Education declined to impose new federal mandates on privacy, security, and implementation, stating these decisions are best handled at the state and local levels.[2] It also declined to establish national standards for age-appropriate AI instruction, though the rule was revised to emphasize tailoring AI use to student age groups and investing in teacher training.[1]

The Department reinforced that existing federal laws already apply to AI use in education and declined to add new compliance mandates, but revised the policy to highlight ethical considerations and responsible deployment.[2]

Context

This final rule follows a proposed rule published on July 21, 2025, and reflects the integration of public comments received during the comment period.[1] It is part of a broader federal effort to address AI in education, alongside separate guidance from the Department on AI use in schools.[4][3]

References

See individual article: Department of Education AI Education Rule


Disney v MiniMax Motions Dismiss

On April 13, 2026, defendants in the copyright infringement lawsuit Disney Enterprises, Inc. et al. v. MiniMax et al.[1] filed motions to dismiss, raising questions about personal jurisdiction, copyright registration of characters, and secondary liability for AI-generated content. A hearing on the motions is scheduled for May 29, 2026.[2][3]

Background

Disney and 11 other plaintiffs filed the lawsuit on September 16, 2025, alleging that Hailuo AI, a service operated under the MiniMax brand name by Chinese entities including Shanghai Xiyu Jizhi Technology (SXJT) and Nanonoble, "pirates and plunders Plaintiffs' copyrighted works on a massive scale" by generating infringing images and videos of copyrighted characters.[4]

The complaint alleged that users could submit text prompts requesting images of characters like Darth Vader, Spider-Man, the Simpsons, Batman, the Joker, and Superman, and Hailuo AI would generate high-quality, downloadable infringing content.[4] Disney sought statutory damages of up to $150,000 per infringed work and permanent injunctive relief.[4]

Motions to Dismiss

MiniMax and SXJT Motion (12(b)(2))

MiniMax and SXJT filed a motion to dismiss for lack of personal jurisdiction, arguing that MiniMax is not a legal entity but merely a brand name, and therefore the court cannot exercise jurisdiction over it. Regarding SXJT, the motion contends that the court lacks personal jurisdiction because SXJT is a Chinese company that has not directed activities to the United States, with any U.S. contacts stemming from Nanonoble rather than SXJT itself.[2][5]

Nanonoble Motion (12(b)(6))

Nanonoble filed a motion to dismiss for failure to state a claim, raising four distinct arguments:[2][3][6]

1. Copyright Registration and Protectability: Nanonoble argues that Disney has not demonstrated it registered copyrights on individual characters as opposed to the works containing them, and that Disney may be unable to copyright characters under Ninth Circuit law.[2]

2. Extraterritorial Training: Nanonoble contends that any copying related to direct infringement did not occur in the United States because the AI models were trained in China, placing the conduct outside the reach of the U.S. Copyright Act under the prohibition on extraterritorial application established in Subafilms, Ltd. v. MGM-Pathe Communications Co.[6][2]

3. No Direct Infringement from Plaintiff-Generated Outputs: Nanonoble argues that Disney's own generation of 52 allegedly infringing videos via Hailuo AI does not constitute infringement, citing the principle that "a copyright owner cannot infringe its own copyright" (Richmond v. Weiner, 353 F.2d 41 (9th Cir. 1965)).[6][2]

4. No Secondary (Contributory/Induced) Infringement: Nanonoble argues that Disney's contributory infringement claims fail under the Cox Communications standard: there was no tailoring of the service to infringement or affirmative inducement of users, and Disney did not plausibly allege encouragement of infringement.[6][2][7]

Nanonoble acknowledged that Disney also provided organic evidence of third-party infringement, including subscriber-posted Instagram videos and third-party posts across Reddit, TikTok, and YouTube.[2]

Procedural Schedule

  • April 24, 2026: Filing deadline for motions to dismiss[6][2]
  • April 29, 2026: Opening briefs due for status conference[2]
  • May 4, 2026: Opposition briefs due[2]
  • May 5, 2026: Joint dispute chart due[2]
  • May 12, 2026: In-person status conference before Magistrate Judge Wang[2]
  • May 29, 2026: Hearing on motions to dismiss[7][1]

Significance

The case is one of the first major copyright infringement lawsuits by Hollywood studios against a Chinese AI company, raising novel questions about:

  • Whether AI-generated character images constitute direct copyright infringement
  • Whether training AI models on copyrighted works outside the United States falls within U.S. copyright law (extraterritoriality)
  • Whether platform operators can be held secondarily liable for user-generated AI content, particularly after the Supreme Court's Cox Communications decision
  • Whether copyright holders can generate their own infringing content using the defendant's tool and then sue for it
  • Personal jurisdiction over Chinese AI companies whose products are used in the U.S. but whose entities and training occur abroad

Case Information

  • Court: U.S. District Court for the Central District of California (Case No. 2:25-cv-08768)
  • Judge: Blumenfeld, Jr.
  • Plaintiffs: Disney Enterprises, Inc., Lucasfilm Ltd., Twentieth Century Fox Film Corp., Warner Bros. Entertainment Inc., DC Comics Inc., and others
  • Defendants: MiniMax (brand name), Hailuo AI, Shanghai Xiyu Jizhi Technology (SXJT), Nanonoble

References

See individual article: Disney v MiniMax Motions Dismiss


Maryland AI Bills Governor April

April 13, 2026 — Maryland Sends Four AI-Related Bills to Governor, Covering Deepfakes, Education, Dynamic Pricing, and Election Integrity

Maryland lawmakers adjourned their 2026 legislative session on April 13 after sending four AI-related bills to Governor Wes Moore for signature. As of April 26, 2026, none of the four bills have been signed by the governor. Governor Moore held his first 2026 bill signing ceremony on April 14, but the AI bills were not included. Maryland has a 30-day window for gubernatorial action following adjournment. The bills address dynamic pricing, deepfake protections, AI in education, and deepfakes in political campaigns.[1]

HB 895: AI Dynamic Pricing

HB 895 would prohibit food retailers and third-party delivery service providers from using consumer personal data to set prices for consumer goods or services. The bill addresses growing concern over AI-driven dynamic pricing algorithms that charge different prices based on individual consumer data. It passed the House on March 21 and the Senate on April 10, with final concurrence approved on April 11, and now awaits the governor's signature.[1]

SB 8: Deepfake Protection

SB 8 is a deepfake protection bill addressing the use of artificial intelligence to create non-consensual synthetic media. The Senate passed it 45-0 on third reading on March 19; the House approved it on April 10, and the Senate concurred the same day. It is now with the governor.[1]

Sponsored by Senator Hester and others, the bill adds protections against AI-generated deepfakes to Maryland's existing legal framework.[1]

SB 720: AI in Education

SB 720 would require the Maryland State Department of Education to provide guidance on artificial intelligence to local school systems. The bill recognizes the need for educational institutions to develop coherent policies for AI use in classrooms and administrative functions. It passed the Senate on March 20 and the House on April 13, with final reconciliation passage the same day, and is now with the governor.[1]

SB 141: Deepfakes in Political Campaigns

SB 141 deals with deepfakes in political campaign materials, prohibiting the distribution of deceptive synthetic media in election contexts. The Senate approved it 44-0 on February 12, the House passed it on April 10, and reconciliation was completed on April 14. It is now with the governor.[1]

Sponsored by Senator Hester, the bill adds to a growing wave of state legislation addressing AI-generated content in elections, following similar laws enacted in states including Oregon, Idaho, and Utah.[1][2]

Significance

Maryland's four AI bills reflect a broader trend in state legislatures in 2026, where lawmakers are addressing AI risks across multiple domains simultaneously. The state joins a growing list of jurisdictions that have enacted or advanced AI legislation in 2026, including Alabama (SB 63 on healthcare AI), Idaho (four AI laws effective July 1), Nebraska (Conversational AI Safety Act), Tennessee (CHAT Act and AI personhood bill), and Utah (nine AI bills signed).[1][2]


References

See individual article: Maryland AI Bills Governor April

