In a company blog post on Tuesday, 25 April, OpenAI announced a new privacy feature allowing users to disable chat history in ChatGPT. When the feature is switched off, conversations will not be used to train or improve OpenAI's AI models and will not appear in the history sidebar. Such conversations will be retained for 30 days, reviewed only when needed to monitor for abuse, and then permanently deleted. An export feature has also been announced, enabling users to obtain copies of their conversations and data.
The news comes the day after an AFP report in Barron's revealed that a German regional data protection authority had launched an investigation into OpenAI's compliance with the EU General Data Protection Regulation (GDPR). In a statement, the Commissioner for the northern German state of Schleswig-Holstein said: "We want to know if a data protection impact assessment has been carried out and if the data protection risks are under control."
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news all in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 4,250 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.