The Case for Comprehensive AI Liability in Europe

30/01/2026 | Tech Policy Press

In an op-ed for Tech Policy Press, MEP Sergey Lagodinsky and his assistant Francesco Vogelezang argue that Europe must urgently address conversational liability for artificial intelligence (AI). They highlight a significant legal vacuum: AI systems can simulate human relationships without their developers being held accountable for the resulting real-world harms.

The article cites harrowing examples of these risks, including the generation of millions of non-consensual sexual images by Grok and other AI tools, and the tragic suicide of 17-year-old Adam Raine in April 2025. In the latter case, OpenAI's GPT-4o reportedly empathised with the teenager's suicidal thoughts and discussed methods of self-harm under the guise of character development. While the case has prompted a landmark US negligence and wrongful death lawsuit, it also exposes critical blind spots in European law regarding responsibility for AI-enabled systems that mimic human bonds.

The authors describe these interactions as "half-synthetic relationships", in which AI employs techniques such as persistent memory and anthropomorphic empathy to create an illusion of human connection. Yet while these systems simulate care, their providers currently bear no legal responsibility for any damage they cause. To close this gap, the authors propose an "algorithmic duty of care" requiring providers to anticipate and prevent foreseeable harms.

Recommended actions include reintroducing the AI Liability Directive, classifying generative AI models as Very Large Online Search Engines under the Digital Services Act (DSA), and addressing manipulative design in the upcoming Digital Fairness Act. Most importantly, the authors call for the codification of conversational liability to establish clear principles of responsibility for developers and deployers, especially when dealing with vulnerable users such as minors or people in crisis. They argue that, despite current trends toward deregulation, these protections are essential to prevent further human tragedy.



