Ofcom launches deepfake imagery investigation into X under OSA
12/01/2026 | Ofcom
The furore over the non-consensual sexualised images generated by the Grok AI chatbot on X has continued after the platform confirmed that image generation and editing had been restricted to paying subscribers.
In a statement on Friday, Liz Kendall, Secretary of State for Science, Innovation and Technology, said: "Sexually manipulating images of women and children is despicable and abhorrent. It is an insult and totally unacceptable for Grok to still allow this if you're willing to pay for it. I expect Ofcom to use the full legal powers Parliament has given them."
Kendall went on to call for the UK's communications regulator, Ofcom, to set out its next steps "in days not weeks."
Downing Street is also unhappy with the change to restrict access to paying users only. Speaking to The Guardian, a spokesperson said: "The move simply turns an AI feature that allows the creation of unlawful images into a premium service."
In response to the comments, Ofcom confirmed it would accelerate its investigation into X. Over the weekend, Elon Musk responded, accusing the UK government of seeking to suppress free speech.
The standoff continued to escalate on Monday after Peter Kyle, Secretary of State for Business and Trade, told Sky News: "Let me be really clear... X is not doing enough to keep its customers safe online." Kyle added that the government would fully support any action Ofcom takes against X, including a possible ban in the UK.
Ofcom has also confirmed it has opened a formal investigation into X under the Online Safety Act 2023 (OSA) to determine whether the platform has complied with its duties to protect users from illegal content and children from harmful content.
The investigation will establish whether X has taken the steps required under the OSA to:
- Assess the risk of UK citizens encountering illegal content.
- Take appropriate measures to prevent individuals in the UK from accessing 'priority' illegal content, including non-consensual intimate images and child sexual abuse material (CSAM).
- Swiftly remove illegal content as soon as the platform becomes aware of it.
- Ensure the protection of users' privacy and comply with relevant UK data protection laws.
- Evaluate the potential risks its service poses to UK children, and conduct an updated risk assessment before implementing any significant changes to the service.
- Implement effective age verification methods to protect UK children from viewing pornography.
Meanwhile, the deepfake case involving Grok is also making waves across the Atlantic. Three US senators have written to Google and Apple asking them to remove Grok and X apps from their respective app stores, arguing that the spread of non-consensual sexualised images violates app store rules.