OpenAI aims to remove private data from its models “where feasible”


Europe is discovering ChatGPT’s privacy risks, some of which are deeply embedded in how the models work. Can they be eliminated? OpenAI is making its first concessions.

OpenAI has published a safety statement on its website, in parallel with Italy’s GDPR enforcement push. One section concerns privacy, an area where European data protection authorities have criticized the company. On the one hand, training data can contain personal information; on the other, users enter personal information into the ChatGPT interface when drafting personal documents such as emails.

OpenAI aims to remove private data “where possible”

OpenAI now acknowledges that its training data contains personal information from the public Internet. But the models are meant to learn about the world, not about individuals. According to the statement, OpenAI does not use this data to sell services, advertise, or build profiles of people.

At least the first point is debatable: ChatGPT works, at least indirectly, on the basis of personal information, and OpenAI sells it as a service with a monthly usage fee. OpenAI’s own privacy policy also notes that personal data entered by users can be used to further develop its services.


“We use data to make our models more helpful for people. ChatGPT, for instance, improves by further training on the conversations people have with it,” OpenAI writes.

In the future, OpenAI intends to remove personal data from the training dataset “where feasible”. It also wants to fine-tune its models so that they reject queries about private individuals, and to allow users to have their personal data deleted from OpenAI’s “systems” on request. These steps are meant to minimize the possibility of models generating responses that contain personal data about individuals.

OpenAI account gets age verification and individual security standards in models

Another of the Italian DPA’s criticisms is the lack of age verification during account creation for ChatGPT, which allows children under 13 to access the service. OpenAI is currently “looking into” verification technologies, according to the statement.

In its terms of service, OpenAI states that the minimum age for use is 18, or 13 and older with parental permission. Without access restrictions, however, this measure is ineffective – and since ChatGPT is an excellent homework tool, children in particular are likely to be drawn to the service.

OpenAI says it has made “significant effort” to ensure that its models don’t produce content harmful to children, and is working with the nonprofit Khan Academy on an AI classroom assistant to help both students and teachers. In the future, developers will be able to implement even higher safety standards than OpenAI provides by default.

