OpenAI Faces GDPR Complaint Over Alleged Data Protection Violations
Privacy organization Noyb has lodged a complaint against OpenAI, alleging that its product ChatGPT violates the European Union's (EU) data protection rules. Noyb claims that ChatGPT disseminates inaccurate information about individuals, contrary to the EU's General Data Protection Regulation (GDPR). The GDPR stipulates that personal data must be accurate and that individuals must have full access to the data held about them.
Noyb, founded by the renowned privacy lawyer and activist Max Schrems, asserts that ChatGPT provided a false date of birth for a prominent public figure. When Noyb requested access to and erasure of the data concerning this individual, OpenAI denied the request.
According to Noyb, the GDPR requires that information about an individual be accurate and that the individual be able to learn its source. OpenAI, however, says it is unable to correct inaccuracies in its ChatGPT model. The company also cannot disclose where the information originated and does not know what data ChatGPT stores about specific individuals.
Noyb argues that OpenAI is aware of the problem but appears to disregard it. OpenAI's position amounts to the notion that inaccurate output may be tolerable when, say, a student uses ChatGPT for homework. Noyb counters that this is unacceptable when the output concerns real individuals, because EU law requires personal data to be accurate.
Noyb highlights that AI models are prone to producing false information, a phenomenon known as hallucination, and it therefore questions the technical process OpenAI uses to generate answers about people. OpenAI's justification is that, despite training its model on extensive datasets, it cannot guarantee the factual accuracy of the answers provided to users.
Maartje de Graaf, Noyb's data protection lawyer, asserts that all technologies must adhere to the law rather than circumvent it. She emphasizes that if a tool cannot produce accurate results about individuals, it cannot be used to generate information about them. De Graaf also suggests that companies have not yet developed the technical means to make chatbots comply with EU law in this respect.
Generative AI tools are under close scrutiny from European privacy regulators. In 2023, the Italian Data Protection Authority temporarily restricted ChatGPT's processing of personal data over privacy concerns. The outcome of the case against OpenAI remains uncertain, but Noyb claims that OpenAI is not even pretending to comply with EU law.