<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://ailawwiki.com/index.php?action=history&amp;feed=atom&amp;title=News_March_26_2026</id>
	<title>News March 26 2026 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://ailawwiki.com/index.php?action=history&amp;feed=atom&amp;title=News_March_26_2026"/>
	<link rel="alternate" type="text/html" href="https://ailawwiki.com/index.php?title=News_March_26_2026&amp;action=history"/>
	<updated>2026-05-01T08:08:10Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.40.1</generator>
	<entry>
		<id>https://ailawwiki.com/index.php?title=News_March_26_2026&amp;diff=118&amp;oldid=prev</id>
		<title>AILawWikiAdmin: Migration export</title>
		<link rel="alternate" type="text/html" href="https://ailawwiki.com/index.php?title=News_March_26_2026&amp;diff=118&amp;oldid=prev"/>
		<updated>2026-04-28T02:34:16Z</updated>

		<summary type="html">&lt;p&gt;Migration export&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;March 26, 2026&amp;#039;&amp;#039;&amp;#039; — Daily digest of AI law developments.&lt;br /&gt;
&lt;br /&gt;
This article consolidates 4 news stories from March 26, 2026.&lt;br /&gt;
&lt;br /&gt;
== Contents ==&lt;br /&gt;
&lt;br /&gt;
1. AI Foundation Model Transparency Act&lt;br /&gt;
2. Anthropic Preliminary Injunction Trump AI Safety&lt;br /&gt;
3. EU Parliament Digital Omnibus AI Act&lt;br /&gt;
4. GUARDRAILS Act Repeal AI Moratorium&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== AI Foundation Model Transparency Act ==&lt;br /&gt;
&lt;br /&gt;
On March 26, 2026, a bipartisan group of U.S. lawmakers introduced &amp;#039;&amp;#039;&amp;#039;H.R. 8094&amp;#039;&amp;#039;&amp;#039;, the &amp;#039;&amp;#039;&amp;#039;AI Foundation Model Transparency Act (AI FMTA)&amp;#039;&amp;#039;&amp;#039;, marking the first federal legislation focused specifically on AI transparency rather than direct regulation.&amp;lt;ref&amp;gt;[https://www.congress.gov/bill/119th-congress/house-bill/8094 Congress.gov: H.R. 8094 - AI Foundation Model Transparency Act]&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://pluralpolicy.com/blog/the-ai-governance-watch-april-2026-nineteen-new-ai-bills-passed-into-law/ Plural Policy: AI Governance Watch, April 2026]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Purpose ==&lt;br /&gt;
&lt;br /&gt;
The AI FMTA would require developers of large AI models (such as those powering ChatGPT and Claude) to publicly disclose:&lt;br /&gt;
&lt;br /&gt;
* Training methods and data sources used&lt;br /&gt;
* Intended capabilities and limitations of the model&lt;br /&gt;
* Known risks and safety evaluations&lt;br /&gt;
* Performance benchmarks and evaluation practices&lt;br /&gt;
&lt;br /&gt;
The legislation aims to increase transparency in AI development without imposing direct regulatory restrictions on model capabilities or deployment.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
&lt;br /&gt;
Covered entities (developers of large foundation models) would be required to:&lt;br /&gt;
&lt;br /&gt;
# Publish transparency reports detailing training data composition&lt;br /&gt;
# Disclose energy consumption and environmental impact of training&lt;br /&gt;
# Provide information on red teaming and safety testing conducted&lt;br /&gt;
# Report on known limitations and failure modes&lt;br /&gt;
&lt;br /&gt;
== Significance ==&lt;br /&gt;
&lt;br /&gt;
The AI FMTA represents a shift toward transparency-focused AI governance at the federal level. Unlike the sector-specific approaches proposed in other legislation, this bill targets the foundational layer of AI systems.&lt;br /&gt;
&lt;br /&gt;
The bipartisan sponsorship suggests growing consensus around &amp;quot;sunlight&amp;quot; as a regulatory tool for AI governance, potentially serving as a template for other jurisdictions.&lt;br /&gt;
&lt;br /&gt;
== Status ==&lt;br /&gt;
&lt;br /&gt;
As of April 2026, H.R. 8094 has been introduced and referred to committee. No markup or floor vote has been scheduled.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
&lt;br /&gt;
* [[News TRUMP-AMERICA-AI-Act-2026]]&lt;br /&gt;
* [[News White-House-National-Policy-Framework-AI-2026]]&lt;br /&gt;
* [[Legislation]]&lt;br /&gt;
* [[Federal Legislation]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;See individual article: [[News AI-Foundation-Model-Transparency-Act-2026|AI Foundation Model Transparency Act]]&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== Anthropic Preliminary Injunction Trump AI Safety ==&lt;br /&gt;
&lt;br /&gt;
A federal judge granted &amp;#039;&amp;#039;&amp;#039;Anthropic PBC&amp;#039;&amp;#039;&amp;#039; a preliminary injunction on &amp;#039;&amp;#039;&amp;#039;March 26, 2026&amp;#039;&amp;#039;&amp;#039;, blocking the Trump administration&amp;#039;s government-wide ban on the company&amp;#039;s technology and the Pentagon&amp;#039;s designation of Anthropic as a supply-chain risk to national security. U.S. District Judge &amp;#039;&amp;#039;&amp;#039;Rita F. Lin&amp;#039;&amp;#039;&amp;#039; of the Northern District of California found that the government&amp;#039;s actions appeared designed to punish Anthropic for refusing to remove AI safety restrictions from its Claude model, rather than addressing genuine security concerns.&amp;lt;ref name=&amp;quot;clearinghouse&amp;quot;&amp;gt;[https://clearinghouse.net/case/47876/ Clearinghouse, Anthropic PBC v. U.S. Department of War]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;bloomberg&amp;quot;&amp;gt;[https://assets.bwbx.io/documents/users/iqjWHBFdfxIU/rYKRX7EU4j5U/v0 Bloomberg Law, Preliminary Injunction Order, March 26, 2026]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;cand&amp;quot;&amp;gt;[https://cand.uscourts.gov/cases-e-filing/cases/326-cv-01996/ U.S. District Court, N.D. Cal., Case 3:26-cv-01996]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
&lt;br /&gt;
Anthropic filed suit on &amp;#039;&amp;#039;&amp;#039;March 9, 2026&amp;#039;&amp;#039;&amp;#039; (&amp;#039;&amp;#039;&amp;#039;Case No. 3:26-cv-01996-RFL&amp;#039;&amp;#039;&amp;#039;) after the Department of War, under Secretary Pete Hegseth, designated Anthropic as a supply-chain risk under &amp;#039;&amp;#039;&amp;#039;41 U.S.C. § 4713&amp;#039;&amp;#039;&amp;#039; and &amp;#039;&amp;#039;&amp;#039;10 U.S.C. § 3252&amp;#039;&amp;#039;&amp;#039;. The designation followed Anthropic&amp;#039;s public refusal to remove two usage restrictions from its Claude AI system: prohibitions on &amp;#039;&amp;#039;&amp;#039;lethal autonomous warfare&amp;#039;&amp;#039;&amp;#039; and &amp;#039;&amp;#039;&amp;#039;mass surveillance of Americans&amp;#039;&amp;#039;&amp;#039;. The government-wide directive would have barred all federal agencies from using Anthropic&amp;#039;s products or services.&amp;lt;ref name=&amp;quot;clearinghouse&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;catiobrief&amp;quot;&amp;gt;[https://www.cato.org/sites/cato.org/files/2026-03/(27-1)%20Brief.pdf Cato Institute, Amicus Brief in Support of Anthropic, March 2026]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== The Injunction ==&lt;br /&gt;
&lt;br /&gt;
Judge Lin granted the preliminary injunction on March 26, 2026, making several key findings:&amp;lt;ref name=&amp;quot;bloomberg&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Anthropic demonstrated a &amp;#039;&amp;#039;&amp;#039;high likelihood of success&amp;#039;&amp;#039;&amp;#039; on its First Amendment retaliation claim&lt;br /&gt;
* The company was likely to succeed on its Fifth Amendment due process claim&lt;br /&gt;
* The government &amp;#039;&amp;#039;&amp;#039;failed to prove&amp;#039;&amp;#039;&amp;#039; that Anthropic&amp;#039;s conduct qualified as a supply-chain risk&lt;br /&gt;
* The designation appeared &amp;#039;&amp;#039;&amp;#039;pretextual&amp;#039;&amp;#039;&amp;#039; — motivated by Anthropic&amp;#039;s protected speech rather than genuine national security concerns&lt;br /&gt;
&lt;br /&gt;
Judge Lin granted a seven-day administrative stay to allow the government to seek an emergency stay from the Court of Appeals. The General Services Administration (GSA) restored Anthropic&amp;#039;s status by early April 2026.&amp;lt;ref name=&amp;quot;gsa&amp;quot;&amp;gt;[https://www.gsa.gov/about-us/newsroom/news-releases/gsa-issues-statement-on-anthropic-preliminary-injunction-04032026 GSA Statement on Anthropic Preliminary Injunction, April 3, 2026]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Amicus Support ==&lt;br /&gt;
&lt;br /&gt;
The case attracted broad amicus support from the Cato Institute, Electronic Frontier Foundation, Society for the Rule of Law (representing former national security officials), Yale Law School&amp;#039;s Rule of Law Clinic, and industry trade associations including the Information Technology Industry Council (ITI). The briefs uniformly argued that the government&amp;#039;s retaliation against AI safety restrictions violates constitutional principles and undermines national security by politicizing procurement.&amp;lt;ref name=&amp;quot;catiobrief&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;sfrl&amp;quot;&amp;gt;[https://societyfortheruleoflaw.org/amicus-brief-anthropic-dow/ Society for the Rule of Law, Amicus Brief in Support of Anthropic]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;yale&amp;quot;&amp;gt;[https://law.yale.edu/sites/default/files/documents/Admin/Documents/News/anthropic-rolc-amicus-brief-filed.pdf Yale Law School, Amicus Brief in Support of Anthropic]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Broader Significance ==&lt;br /&gt;
&lt;br /&gt;
The case represents one of the first major legal confrontations between an AI company and the U.S. government over AI safety guardrails. A ruling against Anthropic could have chilled AI companies from implementing safety restrictions that conflict with government preferences, while the injunction preserves Anthropic&amp;#039;s ability to set terms for its own products. The case remains ongoing, with a separate D.C. Circuit challenge to the 41 U.S.C. § 4713 designation also pending.&amp;lt;ref name=&amp;quot;clearinghouse&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
&lt;br /&gt;
* [[Anthropic PBC v U.S. Department of War]] — Full case page&lt;br /&gt;
* [[Bartz v Anthropic PBC]] — Copyright litigation involving Anthropic&lt;br /&gt;
* [[Cases]] — Active AI litigation tracker&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;See individual article: [[News Anthropic-Preliminary-Injunction-Trump-AI-Safety-2026|Anthropic Preliminary Injunction Trump AI Safety]]&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== EU Parliament Digital Omnibus AI Act ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;March 26, 2026 — European Parliament Adopts Position on Digital Omnibus AI Act Amendments&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The European Parliament plenary adopted its negotiating position on proposed Digital Omnibus amendments to the EU AI Act on March 26, 2026, advancing to trilogue negotiations with the Council of the European Union (which adopted its own mandate on March 13, 2026) and the European Commission.&amp;lt;ref name=&amp;quot;globalpolicy&amp;quot;&amp;gt;[https://www.globalpolicywatch.com/2026/03/meps-adopt-joint-position-on-proposed-digital-omnibus-on-ai/ Global Policy Watch, &amp;quot;MEPs Adopt Joint Position on Proposed Digital Omnibus on AI&amp;quot;]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;parliament-text&amp;quot;&amp;gt;[https://www.europarl.europa.eu/doceo/document/TA-10-2026-0098_EN.html European Parliament, &amp;quot;TA-10-2026-0098_EN&amp;quot;]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;medialaws&amp;quot;&amp;gt;[https://www.medialaws.eu/the-eu-parliament-plenary-adopts-text-to-amend-the-digital-omnibus-on-ai-ahead-of-council-negotiations/ MediaLaws, &amp;quot;The EU Parliament Plenary Adopts Text to Amend the Digital Omnibus on AI&amp;quot;]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Amendments ==&lt;br /&gt;
&lt;br /&gt;
The Parliament&amp;#039;s position includes several significant changes to the Commission&amp;#039;s original Omnibus proposal:&amp;lt;ref name=&amp;quot;globalpolicy&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Fixed application dates&amp;#039;&amp;#039;&amp;#039;: Replaces the Commission&amp;#039;s flexible backstop dates with fixed dates, eliminating regulatory uncertainty about when rules take effect. This aligns with the Council position.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Ban on non-consensual AI-generated intimate imagery&amp;#039;&amp;#039;&amp;#039;: Introduces a prohibition under Article 5 on AI systems generating realistic sexually explicit images or videos of identifiable individuals without consent. The Parliament&amp;#039;s ban is broader than the Council&amp;#039;s, which limited the prohibition to systems lacking safeguards. The measure also explicitly bans &amp;quot;nudifier apps.&amp;quot;&amp;lt;ref name=&amp;quot;globalpolicy&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;medialaws&amp;quot; /&amp;gt;&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Shortened transparency grace period&amp;#039;&amp;#039;&amp;#039;: Reduces the compliance period for Article 50(2) marking obligations for AI systems placed on the market before August 2, 2026 from six months (to February 2, 2027) to just three months (to November 2, 2026).&amp;lt;ref name=&amp;quot;globalpolicy&amp;quot; /&amp;gt;&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Reinstated registration requirements&amp;#039;&amp;#039;&amp;#039;: Retains EU database registration for self-assessed non-high-risk AI systems under Article 6(3), rejecting the Commission&amp;#039;s proposed removal and aligning with the Council.&amp;lt;ref name=&amp;quot;globalpolicy&amp;quot; /&amp;gt;&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Stricter data processing thresholds&amp;#039;&amp;#039;&amp;#039;: Restores the &amp;quot;strict necessity&amp;quot; standard for processing special categories of personal data in bias detection, limited to high-risk systems with exceptional extensions requiring necessity, proportionality, and links to health, safety, fundamental rights, or discrimination.&amp;lt;ref name=&amp;quot;globalpolicy&amp;quot; /&amp;gt;&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Annex I restructuring&amp;#039;&amp;#039;&amp;#039;: Deletes Section A and moves New Legislative Framework legislation to Section B, altering the AI Act&amp;#039;s interaction with sectoral product rules.&amp;lt;ref name=&amp;quot;globalpolicy&amp;quot; /&amp;gt;&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;AI Office enhancements&amp;#039;&amp;#039;&amp;#039;: Strengthens supervision over general-purpose AI models with resourcing requirements, while preserving national authority competence through exceptions. The Parliament&amp;#039;s position adds detailed GPAI/DSA integration provisions.&amp;lt;ref name=&amp;quot;globalpolicy&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Alignment With Council Position ==&lt;br /&gt;
&lt;br /&gt;
Parliament and Council show broad alignment on rolling back Commission simplifications, including fixed dates, registration requirements, data processing thresholds, non-consensual imagery bans, and AI Office oversight. Key distinctions include the Parliament&amp;#039;s shorter transparency grace period, explicit AI Office resourcing mandates, and Annex I restructuring.&amp;lt;ref name=&amp;quot;globalpolicy&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;medialaws&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Timing ==&lt;br /&gt;
&lt;br /&gt;
The amendments were fast-tracked due to time pressures before the AI Act&amp;#039;s general application date of August 2, 2026. Trilogue negotiations are expected to proceed quickly given the broad alignment between Parliament and Council positions.&amp;lt;ref name=&amp;quot;timeline&amp;quot;&amp;gt;[https://artificialintelligenceact.eu/implementation-timeline/ ArtificialIntelligenceAct.eu, &amp;quot;Implementation Timeline&amp;quot;]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;medialaws&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Significance ==&lt;br /&gt;
&lt;br /&gt;
The Parliament&amp;#039;s position represents a significant pushback against the Commission&amp;#039;s simplification agenda, restoring many compliance obligations that the Omnibus proposal sought to streamline. The shortened transparency grace period and reinstated registration requirements will require companies to accelerate compliance timelines ahead of the August 2026 enforcement deadline. The ban on non-consensual AI-generated intimate imagery marks one of the most explicit legislative responses to deepfake technology to date.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
&lt;br /&gt;
* [[News EU-AI-Act-Implementation-Milestones-2026|EU AI Act Implementation Milestones]]&lt;br /&gt;
* [[Legislation]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;See individual article: [[News EU-Parliament-Digital-Omnibus-AI-Act-2026|EU Parliament Digital Omnibus AI Act]]&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== GUARDRAILS Act Repeal AI Moratorium ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;March 26, 2026 — Rep. Beyer Introduces GUARDRAILS Act to Repeal Trump AI Moratorium and Restore State Regulation Authority&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On March 26, 2026, Representative Don Beyer (D-VA) introduced the &amp;#039;&amp;#039;&amp;#039;GUARDRAILS Act&amp;#039;&amp;#039;&amp;#039; (&amp;#039;&amp;#039;&amp;#039;G&amp;#039;&amp;#039;&amp;#039;uaranteeing and &amp;#039;&amp;#039;&amp;#039;U&amp;#039;&amp;#039;&amp;#039;pholding &amp;#039;&amp;#039;&amp;#039;A&amp;#039;&amp;#039;&amp;#039;mericans&amp;#039; &amp;#039;&amp;#039;&amp;#039;R&amp;#039;&amp;#039;&amp;#039;ight to &amp;#039;&amp;#039;&amp;#039;D&amp;#039;&amp;#039;&amp;#039;ecide &amp;#039;&amp;#039;&amp;#039;R&amp;#039;&amp;#039;&amp;#039;esponsible &amp;#039;&amp;#039;&amp;#039;A&amp;#039;&amp;#039;&amp;#039;I &amp;#039;&amp;#039;&amp;#039;L&amp;#039;&amp;#039;&amp;#039;aws and &amp;#039;&amp;#039;&amp;#039;S&amp;#039;&amp;#039;&amp;#039;tandards Act), legislation that would repeal President Trump&amp;#039;s December 11, 2025 Executive Order 14365 imposing a moratorium on state-level AI regulation.&amp;lt;ref name=&amp;quot;beyer&amp;quot;&amp;gt;[https://beyer.house.gov/news/documentsingle.aspx?DocumentID=9009 Rep. Beyer Press Release: Beyer and Colleagues Introduce Bill to Repeal White House AI Moratorium]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
&lt;br /&gt;
Executive Order 14365, titled &amp;quot;Removing Barriers to American Leadership in Artificial Intelligence,&amp;quot; was signed by President Trump on December 11, 2025, and established a framework prioritizing federal preemption of state AI laws. The order directed federal agencies to review state regulations that could impede AI development and created an AI Litigation Task Force, which was announced on January 9, 2026.&amp;lt;ref name=&amp;quot;whitehouse&amp;quot;&amp;gt;[https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/ White House: Executive Order on Removing Barriers to American Leadership in AI]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The GUARDRAILS Act was introduced in direct response, seeking to nullify the executive order, prohibit federal funds from implementing it, and restore states&amp;#039; authority to enact their own AI regulations covering safety, privacy, and bias concerns.&amp;lt;ref name=&amp;quot;beyer&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cosponsors ==&lt;br /&gt;
&lt;br /&gt;
The bill&amp;#039;s cosponsors include Reps. Doris Matsui (D-CA), Ted Lieu (D-CA), Sara Jacobs (D-CA), and April McClain Delaney (D-MD). Rep. Beyer serves as co-chair of the Congressional Artificial Intelligence Caucus.&amp;lt;ref name=&amp;quot;beyer&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;quiver&amp;quot;&amp;gt;[https://www.quiverquant.com/news/Press+Release:+Beyer+and+Colleagues+Introduce+Bill+to+Repeal+White+House+AI+Moratorium Quiver Quant: Beyer and Colleagues Introduce Bill to Repeal White House AI Moratorium]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Legislative Context ==&lt;br /&gt;
&lt;br /&gt;
The GUARDRAILS Act was introduced the same week as the White House&amp;#039;s March 20, 2026 release of its National Policy Framework for Artificial Intelligence, which recommended federal preemption across seven pillars. The bill positions itself as the legislative counterweight to both the executive order and the proposed [[News Blackburn-Trump-America-AI-Act-Momentum-April-2026|Trump America AI Act]] championed by Senator Marsha Blackburn (R-TN).&amp;lt;ref name=&amp;quot;alston&amp;quot;&amp;gt;[https://www.alston.com/en/insights/publications/2026/04/ai-quarterly-april-2026 Alston &amp;amp; Bird: AI Quarterly Update - April 2026]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Status ==&lt;br /&gt;
&lt;br /&gt;
As of April 26, 2026, the bill has been introduced and referred to committee. No hearings, votes, or further legislative action have been reported.&amp;lt;ref name=&amp;quot;beyer_bill&amp;quot;&amp;gt;[https://beyer.house.gov/uploadedfiles/the_guardrails_act.pdf Full Text of the GUARDRAILS Act]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;See individual article: [[News GUARDRAILS-Act-Repeal-AI-Moratorium-2026|GUARDRAILS Act Repeal AI Moratorium]]&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== Categories ==&lt;br /&gt;
&lt;br /&gt;
[[Category:Congress]]&lt;br /&gt;
[[Category:Deepfakes]]&lt;br /&gt;
[[Category:EU AI Act]]&lt;br /&gt;
[[Category:European Union]]&lt;br /&gt;
[[Category:Executive Branch]]&lt;br /&gt;
[[Category:Federal Legislation]]&lt;br /&gt;
[[Category:Federal Preemption]]&lt;br /&gt;
[[Category:Federal Regulation]]&lt;br /&gt;
[[Category:International]]&lt;br /&gt;
[[Category:Transparency]]&lt;br /&gt;
[[Category:Daily News]]&lt;/div&gt;</summary>
		<author><name>AILawWikiAdmin</name></author>
	</entry>
	<entry>
		<id>https://ailawwiki.com/index.php?title=News_March_26_2026&amp;diff=292&amp;oldid=prev</id>
		<title>AILawWikiAdmin: Migration export</title>
		<link rel="alternate" type="text/html" href="https://ailawwiki.com/index.php?title=News_March_26_2026&amp;diff=292&amp;oldid=prev"/>
		<updated>2026-04-28T02:34:16Z</updated>

		<summary type="html">&lt;p&gt;Migration export&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 02:34, 28 April 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-notice&quot; lang=&quot;en&quot;&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>AILawWikiAdmin</name></author>
	</entry>
</feed>