A Los Angeles jury ruled against Meta (formerly Facebook) and Alphabet Inc. (Google) on March 25, finding the companies liable for designing addictive social media platforms that harmed young users. The verdict awarded $6 million in damages, with Meta ordered to pay $4.2 million and Alphabet $1.8 million.
The case centered on a 20-year-old woman, identified as K.G.M., who alleged that YouTube and Instagram's design features contributed to her addiction, depression, and suicidal thoughts during her teenage years. The jury concluded that both companies failed to warn users about potential harms and were negligent in their product design.
Both companies plan to appeal the decision. In a statement, Meta argued that the ruling oversimplifies the issue of teen mental health and that many young people rely on digital communities for support. Google disputed the verdict, arguing that YouTube is a streaming platform, not a social media site, and that the case misrepresented its purpose.
The ruling could set a precedent for future lawsuits targeting tech companies over product design and user safety. It also raises questions about how platforms handle harmful content, particularly when algorithms drive engagement and monetization.
Meta's internal research, which had previously highlighted risks associated with its platforms, became a key piece of evidence in the trial. The company has faced scrutiny in recent years over suppressing such research. Meanwhile, newer AI companies like OpenAI and Anthropic are grappling with whether to continue funding research into the impacts of their own technologies.
The case did not directly address Section 230, which typically shields platforms from liability over user-generated content. Instead, it focused on the companies' design choices, which could influence how platforms approach algorithmic content moderation in the future.