EU Commission launches investigation into X over non-consensual sexual imagery

26/01/2026 | European Commission

The European Commission has launched a new formal investigation into X under the Digital Services Act (DSA) to determine whether the platform has sufficiently mitigated the risks associated with its Grok AI chatbot. Specifically, this latest investigation focuses on the recent dissemination of manipulated, non-consensual sexual images and potential child sexual abuse material (CSAM). The investigation will examine whether X fulfilled its obligations to assess systemic risks related to gender-based violence and mental well-being before deploying Grok. In addition, the Commission has extended its existing December 2023 proceedings to include X’s recently announced transition to a Grok-based recommender system.

The Commission intends to gather evidence through interviews, inspections, and information requests. If infringements are proven, the regulator may adopt non-compliance decisions or impose interim measures. This formal action centralises enforcement at the EU level, relieving member state Digital Services Coordinators of their supervisory role over these specific suspected infringements while the priority investigation proceeds.

In an interview with Euronews, Henna Virkkunen, Executive Vice President of the European Commission for Technological Sovereignty, Security and Democracy, clarified that online service providers "have to have practices in place to make sure illegal content is not spread online." She added that the Commission wants to understand how "X has been assessing and mitigating the risks," and that "Grok is now more and more integrated into X services, so it’s important to look at how those risks are being taken care of." 

During a European Parliament Committee on Civil Liberties, Justice and Home Affairs (LIBE) hearing on 26 January, Commission officials faced questions about how the investigation into X will proceed and whether apps that enable the sharing of non-consensual sexual images are allowed under current law. 


Training Announcement: The BCS Foundation Certificate in AI examines the challenges and risks associated with AI projects, such as those related to privacy, transparency and potential biases in algorithms that could lead to unintended consequences. Explore the role of data, effective risk management strategies, compliance requirements, and ongoing governance of the AI lifecycle, and become a certified AI Governance professional. Find out more.


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, from which more than 6,250 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.