2. White House Expands AI Model Vetting; Considers Security Executive Orders
3. Brockman Testifies Musk Sought OpenAI Control to Fund Mars City
4. Murati Testifies Altman Sowed Chaos and Distrust at OpenAI
5. Canadian Privacy Regulators Find OpenAI Violated Law
6. California Bar Proposes First AI-Specific Ethics Rules
7. Eighth Circuit Strikes Down FCC Broadband Anti-Discrimination Rule
----
== White House Expands AI Model Vetting; Considers Security Executive Orders ==
The Commerce Department's Center for AI Standards and Innovation (CAISI) announced agreements with Google DeepMind, Microsoft, and xAI on May 5, 2026, to conduct pre-deployment evaluations of their frontier AI models for national security risks. The agreements expand CAISI's testing program beyond its 2024 partnerships with OpenAI and Anthropic, and come as the White House weighs executive orders to create an AI working group and bar companies from "interfering" with the government's use of AI models. The latter proposal follows the Defense Department's standoff with Anthropic over military use of Claude.<ref name="cnbc-caisi">[https://www.cnbc.com/2026/05/05/ai-oversight-trump-google-microsoft-xai.html CNBC: Trump admin moves further into AI oversight, will test Google, Microsoft and xAI models]</ref><ref name="engadget-caisi">[https://www.engadget.com/2164870/google-microsoft-and-xai-agree-to-provide-us-government-with-early-ai-model-access/ Engadget: Google, Microsoft and xAI agree to provide US government with early AI model access]</ref>
''See full article: [[News-CAISI-Google-Microsoft-xAI-May-2026|May 5, 2026 — White House Expands AI Model Vetting and Considers Security Executive Orders]]''
---- | ---- | ||
---- | |||
== Murati Testifies Altman Sowed Chaos and Distrust at OpenAI ==
Former OpenAI Chief Technology Officer Mira Murati testified via video deposition on May 6, 2026 that CEO Sam Altman sowed distrust among top executives and created persistent chaos at the company. Murati, who briefly served as interim CEO after Altman's November 2023 ouster, told the court that Altman was "saying one thing to one person and completely the opposite to another person" and described his leadership as "creating chaos." She warned that "OpenAI was at catastrophic risk of falling apart" during the leadership turmoil. The testimony came during the second week of the ''Musk v. Altman'' trial, where Musk seeks $150 billion in damages over OpenAI's for-profit conversion.<ref name="reuters-murati">[https://www.reuters.com/legal/litigation/openai-trial-former-technology-chief-says-altman-sowed-chaos-distrust-among-top-2026-05-06/ Reuters: In OpenAI trial, former technology chief says Altman sowed 'chaos,' distrust among top executives]</ref>
''See full article: [[News-Musk-v-Altman-Trial-Day-6-2026|May 6, 2026 — Murati Testifies Altman Sowed Chaos and Distrust at OpenAI]]''
----
== Canadian Privacy Regulators Find OpenAI Violated Law ==
Canadian federal and provincial privacy commissioners found on May 6, 2026 that OpenAI did not respect Canadian privacy laws when training ChatGPT, collecting personal information, including health conditions, political views, and data about children, without adequate safeguards. Privacy Commissioner Philippe Dufresne said OpenAI launched ChatGPT "without having fully addressed known privacy issues." The investigation, conducted by the federal commissioner and counterparts in Quebec, British Columbia, and Alberta, was initiated in 2023 following a complaint about unlawful collection and disclosure of personal information. OpenAI disagreed with the findings but agreed to implement further privacy measures.<ref name="cbc">[https://www.cbc.ca/news/politics/privacy-investigation-chatgpt-open-ai-9.7188538 CBC: OpenAI didn't respect Canadian privacy law when it trained ChatGPT]</ref>
''See full article: [[News-Canada-OpenAI-Privacy-Investigation-2026|May 6, 2026 — Canadian Privacy Regulators Find OpenAI Violated Law]]''
----
== California Bar Proposes First AI-Specific Ethics Rules ==
The State Bar of California proposed the first AI-specific amendments to the California Rules of Professional Conduct on May 6, 2026, addressing competence, client communication, confidentiality, candor toward tribunals, and supervision of AI use by lawyers. The six proposals, introduced by the Standing Committee on Professional Responsibility and Conduct (COPRAC), require lawyers to independently verify AI-generated output, inform clients when AI materially affects representation, and ensure cited authorities are not fabricated by AI tools. The public comment period closed May 5.<ref name="aba">[https://www.abajournal.com/news/article/california-bar-proposes-first-ai-specific-changes-to-ethics-rules ABA Journal: State Bar of California proposes first AI-specific changes to ethics rules]</ref>
''See full article: [[News-California-Bar-AI-Ethics-Rules-2026|May 6, 2026 — California Bar Proposes First AI-Specific Ethics Rules]]''
----
== Eighth Circuit Strikes Down FCC Broadband Anti-Discrimination Rule ==
The US Court of Appeals for the Eighth Circuit on May 6, 2026 unanimously struck down the FCC's rules prohibiting discrimination in broadband access, ruling that the agency exceeded its statutory authority. The three-judge panel held that Congress did not authorize disparate-impact liability under the Infrastructure Investment and Jobs Act, and that the FCC improperly extended the rules beyond broadband providers to contractors and infrastructure owners. FCC Chairman Brendan Carr welcomed the decision, while Public Knowledge criticized the ruling for eliminating protections addressing documented disparities in broadband access for lower-income communities.<ref name="arst">[https://arstechnica.com/tech-policy/2026/05/court-strikes-down-fcc-anti-discrimination-rule-opposed-by-internet-providers/ Ars Technica: Court strikes down FCC anti-discrimination rule opposed by Internet providers]</ref>
''See full article: [[News-FCC-Digital-Discrimination-Rule-Struck-Down-2026|May 6, 2026 — Eighth Circuit Strikes Down FCC Broadband Anti-Discrimination Rule]]''
== References ==
[[Category:Executive Branch]]
[[Category:Cases Against Anthropic]]
[[Category:Corporate Governance]]
[[Category:Data Privacy]]
[[Category:International]]
[[Category:Canada]]
[[Category:Cases Against OpenAI]]