News: March 11, 2026
March 11, 2026 — Daily digest of AI law developments.
This article consolidates 2 news stories from March 11, 2026.
Contents
1. Department of Commerce Assessment of State AI Laws
2. FTC AI Policy Statement
Department of Commerce Assessment of State AI Laws
The Department of Commerce Assessment of State AI Laws was published on March 11, 2026, identifying several state-level AI laws as "onerous" and in conflict with national AI policy.[1][2]
Background
The assessment was mandated by the December 2025 Executive Order on AI, which directed the Commerce Department to evaluate whether state AI laws were creating undue burdens on AI development and interstate commerce.[1][2]
Laws Identified as Onerous
The assessment specifically called out the following state laws:[1]
- Colorado AI Act (effective June 30, 2026) — imposes obligations on developers and deployers of high-risk AI systems[1]
- California Transparency in Frontier AI Act (SB 53) — requires transparency for frontier AI model development[1]
- California Generative AI Training Data Transparency Act (AB 2013) — mandates disclosure of training data used in generative AI[1]
- New York RAISE Act (signed December 19, 2025) — imposes safety and transparency requirements on AI systems[1]
Key Findings
- These state laws may violate the First Amendment by requiring AI output alterations or mandatory disclosures[1]
- They create an inconsistent patchwork of regulations burdening interstate commerce[2]
- Federal preemption is warranted to maintain national competitiveness in AI development[2]
Significance
This assessment is a key piece of the federal government's strategy to preempt state AI regulation.[2] It provides the factual basis for the DOJ AI Litigation Task Force's court challenges and the White House National AI Legislative Framework's call for broad federal preemption.[1][2]
References
See individual article: Department of Commerce Assessment of State AI Laws
FTC AI Policy Statement
The FTC AI Policy Statement was issued on March 11, 2026, by the Federal Trade Commission, applying Section 5 of the FTC Act (unfair or deceptive acts or practices) to artificial intelligence models.[1][2]
Background
The policy statement addresses the FTC's authority to take enforcement action against AI developers and deployers whose models produce outputs that are unfair or deceptive to consumers.[1] It was issued in coordination with the Department of Commerce's assessment of state AI laws and the broader Trump administration effort to establish federal preemption over state AI regulation.[1][2]
Key Provisions
- Applies Section 5 of the FTC Act to AI model outputs, including generative AI[1]
- Addresses preemption of state laws that mandate alterations to AI outputs[1]
- Positions the FTC as the primary federal enforcer for AI-related consumer protection[2]
- Signals that requiring AI developers to modify model outputs could conflict with federal policy[1][2]
Significance
The statement establishes the FTC's jurisdictional claim over AI-related consumer harms at the federal level.[2] It also supports the administration's preemption argument by suggesting that state laws requiring changes to AI outputs may be preempted by federal authority.[1][2]
See Also
- DOJ AI Litigation Task Force
- White House National AI Legislative Framework
- Department of Commerce Assessment of State AI Laws
References
See individual article: FTC AI Policy Statement