News-May-07-2026
Revision as of 17:11, 7 May 2026
May 7, 2026 — Daily digest of AI law developments.
Contents
1. DOJ Files Formal Complaint in Intervention Against Colorado AI Law
2. Shivon Zilis Testifies in Musk v. Altman Trial
3. Anthropic Signs Compute Deal With SpaceX
4. Meta Asks Judge to Overturn Landmark Social Media Addiction Verdict
5. Google Chrome Silently Installs 4GB Gemini Nano AI Model Without Consent
DOJ Files Formal Complaint in Intervention Against Colorado AI Law
The U.S. Department of Justice filed a formal Complaint in Intervention on May 5, 2026, in xAI Corp v. Weiser, escalating the federal government's opposition to Colorado's SB 24-205 (the Colorado AI Act). The filing goes beyond the DOJ's earlier April 24 Statement of Interest, making the United States a formal party to the case. The DOJ argues that Colorado's algorithmic discrimination provisions violate the Equal Protection Clause and conflict with federal AI policy. Separately, a federal judge on April 27 paused enforcement of SB 24-205 pending further proceedings.[1][2]
See full article: May 5, 2026 — DOJ Files Complaint in Intervention Against Colorado AI Law
Shivon Zilis Testifies in Musk v. Altman Trial
Shivon Zilis, a former OpenAI board member and the mother of four of Elon Musk's children, testified for several hours on May 6, 2026, in the ongoing Musk v. Altman trial in Oakland. Zilis pushed back on OpenAI's characterization of her relationship with Musk as a secret allegiance, testifying that the relationship did not influence her duties as a board member. She confirmed that Musk had offered her the opportunity to conceive children using his sperm. Zilis left the OpenAI board in 2023 after Musk started xAI. Her testimony came on the same day that former OpenAI CTO Mira Murati testified that Sam Altman "sowed chaos and distrust" among senior leadership.[3][4][5]
See full article: May 6, 2026 — Musk v. Altman Trial Day 6: Zilis and Murati Testify
Anthropic Signs Compute Deal With SpaceX
Anthropic announced on May 7, 2026, that it has signed an agreement with SpaceX to access all of the compute capacity at Colossus 1, one of the world's largest AI supercomputers. The deal provides Anthropic with over 300 MW of capacity and more than 220,000 Nvidia GPUs, substantially expanding the company's AI inference capabilities. In conjunction with the deal, Anthropic doubled Claude Code's five-hour rate limits for paid plans and removed peak-hours reductions for Pro and Max plans. The partnership comes amid Anthropic's ongoing dispute with the U.S. government over a supply-chain risk designation, and raises questions about the competitive dynamics between Musk's xAI/SpaceX and Anthropic, which has positioned itself as an AI safety-focused alternative.[6][7][8]
Meta Asks Judge to Overturn Landmark Social Media Addiction Verdict
Meta Platforms asked a Los Angeles judge on May 6, 2026, to throw out the landmark social media addiction verdict or order a new trial, arguing that Section 230 of the Communications Decency Act shields it from liability and that the jury's finding was based on the content the plaintiff viewed rather than platform design features. The March 2026 jury verdict had found Meta liable for $4.2 million and Google for $1.8 million for negligently designing platforms that harmed a young woman's mental health. Google has also asked the court to set aside the verdict and plans to appeal. The case is the first bellwether trial in a coordinated proceeding involving thousands of similar lawsuits, and the Section 230 defense could reach the Ninth Circuit and potentially the Supreme Court.[9][10]
See full article: May 6, 2026 — Meta Asks Judge to Overturn Social Media Addiction Verdict
Google Chrome Silently Installs 4GB Gemini Nano AI Model Without Consent
Privacy researchers revealed that Google Chrome has been silently downloading a 4 GB AI model (Gemini Nano) to hundreds of millions of users' devices without consent, notice, or a functional opt-out. Forensic analysis by privacy researcher Alexander Hanff confirmed that newly created Chrome profiles received the model within 14 minutes, with zero human interaction, and that Chrome automatically re-downloads the model if it is deleted. Chrome 147's "AI Mode" omnibox pill creates a misleading impression that queries are processed on-device, when in fact AI Mode sends all queries to Google's cloud servers. The installation may violate the EU GDPR and ePrivacy Directive, and a March 2025 German court ruling established precedent that browser-installed code requires informed user consent.[11][12][13]
See full article: May 7, 2026 — Google Chrome Silently Installs 4GB Gemini Nano Without Consent
References
1. DOJ: Justice Department Intervenes in xAI Lawsuit (https://www.justice.gov/opa/pr/justice-department-intervenes-xai-lawsuit-challenging-colorados-algorithmic-discrimination)
2. Baker Tilly: DOJ Intervenes in Lawsuit Challenging Colorado AI Law (https://btlaw.com/en/insights/alerts/2026/doj-intervenes-in-lawsuit-challenging-colorados-algorithmic-discrimination-law)
3. BBC: Former OpenAI board member says Elon Musk offered her sperm donations
4. Yahoo: Mother of four of Musk's children, Shivon Zilis, takes the stand
5. Wired: How Shivon Zilis Operated as Elon Musk's OpenAI Insider
6. Anthropic: Higher usage limits for Claude and a compute deal with SpaceX (https://www.anthropic.com/news/higher-limits-spacex)
7. CNBC: Anthropic, SpaceX announce compute deal, includes space data center capacity (https://www.cnbc.com/2026/05/06/anthropic-spacex-data-center-capacity.html)
8. Wired: Anthropic Gets in Bed With SpaceX as the AI Race Turns Weird (https://www.wired.com/story/anthropic-spacex-compute-deal-colossus/)
9. Yahoo Finance: Meta asks California judge to throw out landmark social media addiction verdict (https://finance.yahoo.com/sectors/technology/articles/meta-asks-california-judge-throw-191444251.html)
10. Channel News Asia: Meta asks judge to throw out landmark social media addiction verdict (https://www.channelnewsasia.com/business/meta-asks-california-judge-throw-out-landmark-social-media-addiction-verdict-6104981)
11. That Privacy Guy: Google Chrome silently installs a 4 GB AI model on your device without consent (https://www.thatprivacyguy.com/blog/chrome-silent-nano-install/)
12. 9to5Google: Google Chrome 4GB AI storage, Gemini Nano details (https://9to5google.com/2026/05/06/google-chrome-4gb-storage-ai-details/)
13. CNET: Google Chrome Might Have Installed an AI Model Onto Your Device (https://www.cnet.com/tech/services-and-software/chrome-installing-4gb-ai-model-gemini-nano/)