Meta and YouTube found guilty of designing addictive platforms that harm children

Published: 25/03/2026 | The Guardian

A Los Angeles jury has found Meta and YouTube liable for the negligent design of addictive platforms that caused significant harm to a young user. In the first lawsuit of its kind to go to trial, jurors ruled that the tech companies failed to provide adequate warnings regarding the potential dangers of their products. The plaintiff, a 20-year-old woman identified as KGM, was awarded $6 million in damages, with Meta ordered to pay 70% of the total and YouTube the remainder.

During the six-week trial, KGM testified that she became addicted to YouTube at age six and Instagram at nine, leading to depression, self-harm, and diagnoses of body dysmorphic disorder and social phobia. Her legal team argued that features such as infinite scrolling and video autoplay were engineered to prevent children from putting their phones down, comparing the companies' conduct to the tobacco industry's historical denial of product harms.

The jury reached a 10-2 verdict, finding that the companies' negligence was a substantial factor in the plaintiff's injuries. 

The verdict follows a separate ruling in New Mexico, where Meta was ordered to pay $375 million for misleading consumers about platform safety. Together, these cases mark the first instances of Meta being held legally responsible for the impact of its product design on the well-being of young people.

While it's too early to predict the outcome of the inevitable appeals, the doors to social media accountability may finally be opening. 

In a statement responding to the news, James Baker, Platform Power Programme Manager at Big Brother Watch, said: "Social media giants have pushed engagement-driven designs that keep us on their platforms so they can harvest our data for advertising revenues. This landmark verdict is an important step in acknowledging that Big Tech's harmful business models are creating toxic online spaces.

"The UK Government should take note. Forcing age ID checks for internet access will not solve the problem, because it does nothing to tackle the root structural causes of harm: advertising-driven business models built on surveillance, profiling and maximising engagement. Age-gates threaten freedom of expression, undermine privacy, and create new cybersecurity risks by requiring people to hand over ever more sensitive personal data. The real solution is to confront the business models driving these harms, not to restrict access to the open internet."



What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, and more than 3,250 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.