Paper explores how to preserve privacy within AI foundation models
Published: 08/04/2026
Stanford University
A new research paper from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) warns that foundation models present unprecedented privacy risks, significantly more complex than those of traditional artificial intelligence (AI) systems. These risks span the entire life cycle of a model: the mass scraping of personally identifiable information for training, the way models memorise and regurgitate sensitive information, and the intimate data users unknowingly disclose during interactions.
The paper also identifies specific technical threats, such as data poisoning, prompt injection, and model inversion, which allow attackers to bypass safeguards. Current regulatory frameworks, including the EU General Data Protection Regulation (GDPR), are described as fundamentally incompatible with how these models are built. Furthermore, neither the UK, the US, nor the EU has yet implemented comprehensive rules capable of changing developer behaviour.
To address these gaps, the HAI researchers urge policymakers to establish clearer guardrails. Proposed governance mechanisms include mandating the removal of personal information from training pipelines, enhancing model transparency, and enforcing privacy-by-design principles. Without such intervention, the report concludes that the public remains reliant on the voluntary actions of developers to protect their personal data.
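To make one of the proposed guardrails concrete, below is a minimal, illustrative sketch of what removing personal information from a training pipeline can look like in practice. This is not code from the HAI paper; the regex patterns and the `redact_pii` helper are assumptions for illustration, and production pipelines typically combine far broader rule sets with trained named-entity recognisers.

```python
import re

# Hypothetical patterns for illustration only; real PII filtering needs
# far more coverage (names, addresses, IDs) and usually an NER model.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders before training."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
print(redact_pii(sample))
# The email and phone number are replaced with [EMAIL] and [PHONE].
```

Note that the personal name "Jane" survives this pass, which illustrates why the paper treats regex-style scrubbing as insufficient on its own and argues for privacy-by-design across the whole pipeline rather than a single filtering step.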
Training Announcement: The BCS Foundation Certificate in AI examines the challenges and risks associated with AI projects, such as those related to privacy, transparency and potential biases in algorithms that could lead to unintended consequences. Explore the role of data, effective risk management strategies, compliance requirements, and ongoing governance of the AI lifecycle and become a certified AI Governance professional. Find out more.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource that helps DPOs and other professionals with privacy or data protection responsibilities stay informed of industry news, all in one place. Each entry is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications; more than 3,250 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.