<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://ailawwiki.com/index.php?action=history&amp;feed=atom&amp;title=News_January_01_2026</id>
	<title>News January 01 2026 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://ailawwiki.com/index.php?action=history&amp;feed=atom&amp;title=News_January_01_2026"/>
	<link rel="alternate" type="text/html" href="https://ailawwiki.com/index.php?title=News_January_01_2026&amp;action=history"/>
	<updated>2026-04-29T14:41:59Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.40.1</generator>
	<entry>
		<id>https://ailawwiki.com/index.php?title=News_January_01_2026&amp;diff=104&amp;oldid=prev</id>
		<title>AILawWikiAdmin: Migration export</title>
		<link rel="alternate" type="text/html" href="https://ailawwiki.com/index.php?title=News_January_01_2026&amp;diff=104&amp;oldid=prev"/>
		<updated>2026-04-28T02:34:16Z</updated>

		<summary type="html">&lt;p&gt;Migration export&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;January 1, 2026&amp;#039;&amp;#039;&amp;#039; — Daily digest of AI law developments.&lt;br /&gt;
&lt;br /&gt;
This article consolidates two news stories from January 1, 2026.&lt;br /&gt;
&lt;br /&gt;
== Contents ==&lt;br /&gt;
&lt;br /&gt;
1. Asia Pacific AI Law Developments&lt;br /&gt;
2. xAI v Bonta California AI Transparency Ruling&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== Asia Pacific AI Law Developments ==&lt;br /&gt;
&lt;br /&gt;
Several major Asia-Pacific jurisdictions have enacted or advanced significant AI legislation in early 2026, establishing new regulatory frameworks for transparency, risk assessment, and human oversight.&amp;lt;ref name=&amp;quot;onetrust&amp;quot;&amp;gt;[https://www.onetrust.com/blog/where-ai-regulation-is-heading-in-2026-a-global-outlook/ OneTrust, &amp;quot;Where AI Regulation Is Heading in 2026: A Global Outlook&amp;quot;]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== South Korea: Basic AI Act ==&lt;br /&gt;
&lt;br /&gt;
South Korea&amp;#039;s &amp;#039;&amp;#039;&amp;#039;Basic AI Act&amp;#039;&amp;#039;&amp;#039; entered into force on January 1, 2026.&amp;lt;ref name=&amp;quot;onetrust&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;iapp&amp;quot;&amp;gt;[https://iapp.org/resources/article/global-legislative-predictions IAPP, &amp;quot;Global Legislative Predictions&amp;quot;]&amp;lt;/ref&amp;gt; Key features:&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Extraterritorial Application:&amp;#039;&amp;#039;&amp;#039; Applies to AI systems affecting Korean users regardless of where the provider is located&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Transparency Requirements:&amp;#039;&amp;#039;&amp;#039; Mandates disclosure when AI is used in decision-making&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Risk Assessments:&amp;#039;&amp;#039;&amp;#039; Requires impact assessments for high-impact and large-scale AI systems&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Human Oversight:&amp;#039;&amp;#039;&amp;#039; Mandates documentation and human oversight for specified categories of AI&amp;lt;ref name=&amp;quot;onetrust&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;oxford&amp;quot;&amp;gt;[https://oxfordinsights.com/wp-content/uploads/2026/01/Government-AI-Readiness-Report-2025-1.pdf Oxford Insights, &amp;quot;Government AI Readiness Report 2025&amp;quot;]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== China: Cybersecurity Law Amendments ==&lt;br /&gt;
&lt;br /&gt;
Amendments to China&amp;#039;s &amp;#039;&amp;#039;&amp;#039;Cybersecurity Law&amp;#039;&amp;#039;&amp;#039; took effect on January 1, 2026, marking the first time AI governance provisions have been incorporated into a major Chinese national law.&amp;lt;ref name=&amp;quot;iapp&amp;quot; /&amp;gt; These amendments build on China&amp;#039;s existing body of AI-specific regulations covering algorithms, deepfakes, generative AI, labeling, and ethics reviews.&lt;br /&gt;
&lt;br /&gt;
China had previously enacted detailed regulations including the &amp;#039;&amp;#039;&amp;#039;Algorithmic Recommendation Provisions&amp;#039;&amp;#039;&amp;#039; (effective March 2022), &amp;#039;&amp;#039;&amp;#039;Deep Synthesis Provisions&amp;#039;&amp;#039;&amp;#039; (effective January 2023), and &amp;#039;&amp;#039;&amp;#039;Generative AI Measures&amp;#039;&amp;#039;&amp;#039; (effective August 2023). The Cybersecurity Law amendments integrate these governance principles into the broader national legal framework.&lt;br /&gt;
&lt;br /&gt;
== Japan: APPI Amendments and IP Code ==&lt;br /&gt;
&lt;br /&gt;
Two significant developments in Japan&amp;#039;s AI policy landscape occurred in early 2026:&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;January 9, 2026:&amp;#039;&amp;#039;&amp;#039; The Personal Information Protection Commission published the &amp;#039;&amp;#039;&amp;#039;Draft Policy for Amendments to the APPI&amp;#039;&amp;#039;&amp;#039; (Act on the Protection of Personal Information), proposing conditions under which consent may be unnecessary for third-party provision of personal data or acquisition of sensitive public information used in statistical processing, including AI development.&amp;lt;ref name=&amp;quot;araki&amp;quot;&amp;gt;[https://arakiplaw.com/en/insight/2665/ Araki Law, &amp;quot;Draft Policy for Amendments to the APPI&amp;quot; (January 2026)]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;December 26, 2025 — January 26, 2026:&amp;#039;&amp;#039;&amp;#039; The Cabinet Office released a &amp;#039;&amp;#039;&amp;#039;Draft Principles Code&amp;#039;&amp;#039;&amp;#039; on intellectual property protection and transparency for generative AI, with public comments open through January 26, 2026. The code is positioned as soft law without binding force, outlining principles for generative AI developers and providers.&amp;lt;ref name=&amp;quot;araki&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Vietnam: Digital Technology Law ==&lt;br /&gt;
&lt;br /&gt;
Vietnam&amp;#039;s &amp;#039;&amp;#039;&amp;#039;Law on Digital Technology&amp;#039;&amp;#039;&amp;#039; introduced AI provisions effective in 2026, including requirements for labeling, transparency, and prohibitions linked to human rights and public order.&amp;lt;ref name=&amp;quot;onetrust&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;sumsub&amp;quot;&amp;gt;[https://sumsub.com/blog/comprehensive-guide-to-ai-laws-and-regulations-worldwide/ Sumsub, &amp;quot;Comprehensive Guide to AI Laws and Regulations Worldwide&amp;quot;]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Regional Context ==&lt;br /&gt;
&lt;br /&gt;
These developments reflect a broader trend across the Asia-Pacific region toward AI-specific legislation, contrasting with the EU&amp;#039;s comprehensive approach and the US&amp;#039;s patchwork of state laws. South Korea&amp;#039;s extraterritorial provisions are particularly significant for international AI companies operating in the Korean market.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;See individual article: [[News Asia-Pacific-AI-Law-Developments-2026|Asia Pacific AI Law Developments]]&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== xAI v Bonta California AI Transparency Ruling ==&lt;br /&gt;
&lt;br /&gt;
Elon Musk&amp;#039;s xAI filed a federal lawsuit against California Attorney General Rob Bonta, challenging California Assembly Bill 2013 (the AI Training Data Transparency law), which took effect January 1, 2026.&amp;lt;ref name=&amp;quot;fisher&amp;quot;&amp;gt;[https://www.fisherphillips.com/en/insights/insights/court-upholds-california-ai-transparency-law Court Upholds California AI Transparency Law Against xAI Challenge - Fisher Phillips]&amp;lt;/ref&amp;gt; The law requires developers of generative AI systems to post a high-level summary of their training datasets, including whether they contain personal data or copyrighted content.&amp;lt;ref name=&amp;quot;fisher&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On March 5, 2026, U.S. District Judge Jesus Bernal in the Central District of California denied xAI&amp;#039;s motion for a preliminary injunction, finding the company had failed to show it was likely to succeed on the merits.&amp;lt;ref name=&amp;quot;iapp-xai&amp;quot;&amp;gt;[https://iapp.org/news/a/xai-v-bonta-a-constitutional-clash-for-training-data-transparency xAI v. Bonta: A Constitutional Clash for Training Data Transparency - IAPP]&amp;lt;/ref&amp;gt; The ruling allows California to continue enforcing the transparency requirements while the litigation proceeds.&lt;br /&gt;
&lt;br /&gt;
xAI argued the law constituted a &amp;quot;trade-secrets-destroying disclosure regime&amp;quot; that would force the company to reveal proprietary, uniquely curated datasets that represent valuable trade secrets under both California and federal law.&amp;lt;ref name=&amp;quot;joneswalker&amp;quot;&amp;gt;[https://www.joneswalker.com/en/insights/blogs/ai-law-blog/when-courts-become-the-regulator-the-xai-decision-and-what-californias-ai-trans When Courts Become the Regulator: The xAI Decision and California&amp;#039;s AI Transparency Law - Jones Walker]&amp;lt;/ref&amp;gt; The company also contended the law compelled speech in violation of the First Amendment and was unconstitutionally vague in defining terms like &amp;quot;dataset&amp;quot; and &amp;quot;data point.&amp;quot;&amp;lt;ref name=&amp;quot;joneswalker&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Judge Bernal rejected these arguments. On trade secrets, he acknowledged that training datasets could qualify as protected trade secrets but found xAI&amp;#039;s pleadings too general and hypothetical, lacking specific evidence that its datasets differed from competitors&amp;#039;.&amp;lt;ref name=&amp;quot;fisher&amp;quot; /&amp;gt; The court noted that OpenAI and Anthropic had already complied with the disclosure requirements without apparent difficulty.&amp;lt;ref name=&amp;quot;fisher&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The court also dismissed the vagueness claim, pointing out that xAI itself had used the term &amp;quot;dataset&amp;quot; clearly and consistently in its complaint.&amp;lt;ref name=&amp;quot;fisher&amp;quot; /&amp;gt; On the First Amendment challenge, Judge Bernal held that xAI had not shown a constitutional violation at this stage, reasoning that the disclosure requirement would likely survive intermediate scrutiny under the Central Hudson test because it advances substantial government interests without being excessively burdensome.&amp;lt;ref name=&amp;quot;reason&amp;quot;&amp;gt;[https://reason.com/volokh/2026/03/10/california-ai-model-training-disclosure-law-likely-doesnt-violate-first-amendment/ California AI Model Training Disclosure Law Likely Doesn&amp;#039;t Violate First Amendment - Reason/Volokh Conspiracy]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ruling represents a significant early victory for state-level AI regulation and establishes that transparency requirements for AI training data can survive constitutional challenges based on trade secrets and free speech.&amp;lt;ref name=&amp;quot;joneswalker&amp;quot; /&amp;gt; The case continues to proceed on the merits, but xAI must comply with the disclosure requirements during litigation.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;See individual article: [[News xAI-v-Bonta-California-AI-Transparency-Ruling-2026|xAI v Bonta California AI Transparency Ruling]]&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== Categories ==&lt;br /&gt;
&lt;br /&gt;
[[Category:California]]&lt;br /&gt;
[[Category:Cases Against xAI]]&lt;br /&gt;
[[Category:China]]&lt;br /&gt;
[[Category:Data Privacy]]&lt;br /&gt;
[[Category:Federal Regulation]]&lt;br /&gt;
[[Category:International]]&lt;br /&gt;
[[Category:Japan]]&lt;br /&gt;
[[Category:South Korea]]&lt;br /&gt;
[[Category:Transparency]]&lt;br /&gt;
[[Category:Vietnam]]&lt;br /&gt;
[[Category:Daily News]]&lt;/div&gt;</summary>
		<author><name>AILawWikiAdmin</name></author>
	</entry>
	<entry>
		<id>https://ailawwiki.com/index.php?title=News_January_01_2026&amp;diff=278&amp;oldid=prev</id>
		<title>AILawWikiAdmin: Migration export</title>
		<link rel="alternate" type="text/html" href="https://ailawwiki.com/index.php?title=News_January_01_2026&amp;diff=278&amp;oldid=prev"/>
		<updated>2026-04-28T02:34:16Z</updated>

		<summary type="html">&lt;p&gt;Migration export&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 02:34, 28 April 2026&lt;/td&gt;
				&lt;/tr&gt;
&lt;!-- diff cache key mediawiki:diff::1.12:old-104:rev-278 --&gt;
&lt;/table&gt;</summary>
		<author><name>AILawWikiAdmin</name></author>
	</entry>
</feed>