Study ranks data protection risks of LLMs
25/06/2025 | Incogni
New research by Incogni examines the growing privacy and data protection challenges posed by generative AI (Gen AI) and large language models (LLMs). The study highlights that the risks of unauthorised data sharing, data misuse, and personal data exposure have outpaced regulatory oversight. To help users better understand and compare these risks, Incogni has developed an 11-criteria framework to assess and rank the privacy risks associated with different LLMs.
Key findings indicate that Mistral AI's Le Chat is the least privacy-invasive platform, with ChatGPT and Grok close behind. The report finds these platforms provide the highest levels of data collection transparency and the simplest user opt-outs. In contrast, Meta AI was found to be the most privacy-invasive, followed by Gemini (Google) and Copilot (Microsoft).
In addition, Gemini, DeepSeek, Pi AI, and Meta AI reportedly do not allow users to opt out of having their prompts used for model training. ChatGPT emerged as the most transparent regarding prompt usage and features a clear privacy policy.

What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, and more than 6,250 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.