Canada's privacy commissioners say OpenAI breached federal and provincial data laws

Engadget

Key Points

  • Privacy Commissioner Philippe Dufresne finds OpenAI non‑compliant with Canadian federal and provincial privacy laws.
  • Investigation cites massive personal data collection without consent or adequate safeguards.
  • Users lack ability to access, correct, or delete personal information used to train ChatGPT.
  • OpenAI commits to new user notices, stronger data‑filtering tools, and improved data‑export processes.
  • Company will protect retired datasets and test safeguards for minor relatives of public figures.
  • Scrutiny intensified after OpenAI’s handling of a warning about the Tumbler Ridge shooter.
  • Regulators will monitor OpenAI’s compliance and issue follow‑up reports.

Canada’s privacy commissioner, Philippe Dufresne, concluded that OpenAI failed to comply with the country’s federal and provincial privacy statutes while training its AI models. The investigation found the company collected massive amounts of personal data without adequate safeguards or consent, and that users have no way to correct or delete that information. OpenAI has pledged a series of remedial steps, including new user notices, stronger data‑filtering tools and tighter protections for retired datasets. The findings come amid heightened scrutiny over the firm’s handling of a warning about the alleged shooter in the February 2026 Tumbler Ridge attack.

Canada’s privacy commissioner, Philippe Dufresne, announced that OpenAI violated both federal and provincial privacy laws during the development of its artificial‑intelligence models. The conclusion follows a joint investigation with privacy regulators in Alberta, Quebec and British Columbia, which identified a pattern of data‑collection practices that ran afoul of the Personal Information Protection and Electronic Documents Act (PIPEDA) and comparable provincial statutes.

Commissioners said OpenAI gathered “vast amounts of personal information without adequate safeguards” and failed to obtain consent before using that data for model training. While ChatGPT displays a disclaimer that interactions may be used for training, the regulators pointed out that OpenAI also relied on third‑party datasets—scraped or purchased from the public internet—that contain personal details many individuals never knew were being harvested.

Another point of contention is the lack of user control. The commissioners noted that ChatGPT users cannot access, correct, or delete the personal data that may have been incorporated into the system’s knowledge base. They also criticized OpenAI’s “lackluster attempts” to acknowledge and correct inaccurate responses generated by the model.

OpenAI’s pledged reforms

OpenAI, which the commissioners described as “open and responsive,” has agreed to a slate of corrective actions. The company has already retired earlier model versions that the investigation deemed non‑compliant. It now employs a filtering tool designed to detect and mask personal identifiers—such as names and phone numbers—in publicly accessible internet data and licensed datasets used for training.

Within three months, OpenAI will add a new notice to the signed‑out version of ChatGPT warning users that their chats may be used for training and advising against sharing sensitive information. Within six months, the firm will simplify its data‑export tools and clarify how users can challenge the accuracy of the information ChatGPT provides. The company also pledged to confirm to the privacy commissioners that retired datasets are protected from future development use.

Additional safeguards include testing protective measures for minor relatives of public figures, ensuring the model denies requests to disclose their personal details. These steps aim to address the commissioners’ concerns about inadvertent exposure of private data.

The privacy probe, opened in 2023, gained renewed urgency after the February 2026 mass shooting in Tumbler Ridge, British Columbia. OpenAI had flagged the alleged shooter’s account in 2025 for containing violent threats but did not forward the warning to Canadian law‑enforcement agencies. Regulators subsequently demanded stronger safety protocols, and OpenAI agreed to collaborate more closely with law‑enforcement and health agencies moving forward.

While the commissioners acknowledged OpenAI’s cooperation, they emphasized that compliance with privacy legislation will remain a “continuous obligation.” The regulators plan to monitor the company’s implementation of the agreed‑upon measures and will issue follow‑up reports as needed.

#OpenAI #Canada #privacy #PIPEDA #privacy commissioners #ChatGPT #data protection #AI regulation #mass shooting #Tumbler Ridge
Generated with News Factory - Source: Engadget