News-Canada-OpenAI-Privacy-Investigation-2026

May 6, 2026 — Canadian federal and provincial privacy regulators concluded a joint investigation finding that OpenAI violated Canadian privacy laws in training ChatGPT, collecting personal information, including health conditions, political views, and data about children, without adequate safeguards.[1]

The investigation, initiated in 2023 following a complaint about unlawful collection and disclosure of personal information, was conducted by the federal Privacy Commissioner of Canada alongside counterparts in Quebec, British Columbia, and Alberta. Privacy Commissioner Philippe Dufresne said that OpenAI launched ChatGPT "without having fully addressed known privacy issues," exposing Canadians to potential risks including breaches and discrimination.[1]

Dufresne criticized OpenAI's lack of accountability, citing statements from company leadership acknowledging that it had rushed to launch despite known privacy concerns. The investigation also revealed connections to the February 2026 fatal shooting in Tumbler Ridge, British Columbia, where the shooter's ChatGPT account had been banned for "disturbing content" that included violent scenario planning. Despite approximately 12 OpenAI employees urging the company to notify Canadian law enforcement, no action was taken.[1]

OpenAI disputed the findings, asserting that it complied with privacy laws "in most respects," but it has since taken steps to improve privacy protections and agreed to implement further measures. Dufresne said the case reinforces the need to modernize Canada's privacy laws for the era of AI deployment.[1]

The finding comes as seven lawsuits pending in California accuse OpenAI and CEO Sam Altman of negligence related to the Tumbler Ridge shooting.

See also: Tumbler Ridge Families Sue OpenAI, Canada School Shooting and ChatGPT

References