In a seismic shift for internet law, two recent jury verdicts against Meta and Google have handed plaintiffs’ attorneys something they have sought for decades: a proven blueprint for holding social media companies accountable in court. The rulings—a $375 million verdict in New Mexico and a $6 million award in California—mark the first time juries have accepted legal theories that directly blame platform design, rather than user content, for causing real-world harm. As one attorney put it, these cases have created nothing less than “a road map to beating the big tech companies,” signaling a potential turning point in how courts treat platform liability.
The Verdicts That Changed Everything
The twin decisions emerged from separate courtrooms within days of each other. Yet they share a common thread: both juries rejected the long-standing argument that Section 230 of the Communications Decency Act shields platforms from accountability for harms stemming from their own design choices.
In New Mexico, a jury awarded $375 million against Meta Platforms, finding the company liable for design features that contributed to child exploitation risks on its platforms. Days later, a Los Angeles jury delivered a $6 million award against both Meta and Google (owner of YouTube) in a case brought by a young woman identified as Kaley, who claimed the platforms’ addictive design left her with body dysmorphia, depression, and suicidal thoughts.
Crucially, jurors in the California case also found that both Meta and YouTube had acted with malice, oppression, and fraud, tacking on $3 million in punitive damages—a finding that suggests juries are receptive to arguments that tech companies knowingly prioritised profits over user safety.
A ‘Big Tobacco’ Moment for Silicon Valley
Legal experts and advocates have been quick to draw comparisons to another industry that once seemed untouchable: tobacco.
“This is a game-changing moment for social media,” said Dr Mary Franks, a law professor at George Washington University, following the verdicts. “The era of impunity is over”. The analogy is apt: just as internal tobacco industry documents eventually revealed decades of knowing deception about the health risks of cigarettes, both trials introduced jurors to internal Meta emails and documents discussing the engagement impacts of platform features on young users.
Dr Rob Nicholls of the University of Sydney echoed the sentiment, noting that “this landmark verdict, along with many other similar lawsuits against social media companies, signals a shift in how courts view platform design as a set of choices that can carry real legal and social consequences”.
The Legal Strategy That Worked
For years, plaintiffs attempting to sue social media companies ran into the formidable barrier of Section 230, the 1996 law providing that no provider of an interactive computer service “shall be treated as the publisher or speaker of any information provided by another information content provider”. The statute was instrumental in allowing the internet to flourish, but critics on both the left and right argue it has been stretched far beyond its original intent.
What made these recent victories different was a shift in legal strategy. Rather than suing over content posted by other users—which Section 230 clearly protects—plaintiffs targeted the algorithms and design features of the platforms themselves, framing their claims as challenges to the product rather than to the speech it carries. That reframing gives future litigants, and the lawmakers watching these cases, a template for pursuing platform accountability that does not run headlong into Section 230.
“The content could not possibly have the catastrophic real-world impact it does without Facebook’s manipulation-by-design,” argued attorneys in a related case brought by the family of a Charleston church shooting victim. While the Supreme Court recently declined that specific case, the legal theory it advanced—distinguishing between user-generated content and platform-generated architecture—has now found success before juries.
Challenges to the ‘Section 230 Shield’ Multiply
The recent verdicts arrive amid a broader assault on tech industry legal protections. Just last week, Senator Josh Hawley (R-Mo.) filed an amicus brief urging the Supreme Court to hear Doe v. X Corp., a case in which the Ninth Circuit held that Section 230 shielded X (formerly Twitter) from liability even for knowingly hosting child sexual abuse material.
“Lower courts have distorted Section 230 far beyond its original purpose,” Hawley argued, “extending what was intended as a narrow protection for good-faith content moderation into a blanket shield for social media platforms that knowingly host criminal material”.
The Supreme Court has thus far declined to take up major Section 230 cases this term, including appeals from far-right activist Laura Loomer and the family of a Charleston church shooting victim. But Justice Clarence Thomas has repeatedly signalled his desire to revisit the statute, writing in 2021 that “we will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms”.
What Comes Next: More Trials, More Challenges
The legal landscape is poised for further upheaval. The first federal trials in the multidistrict litigation against social media companies are scheduled for June 15 (brought by a school district) and August 6 (brought by a state attorney general) in the Northern District of California. Meanwhile, Meta and Google have already announced plans to appeal the $6 million California verdict, setting up potential circuit court clashes that could eventually reach the Supreme Court.
For U.S. readers, the implications extend far beyond the courtroom. The verdicts have given ammunition to state and federal lawmakers seeking to regulate tech platforms, with California’s Age-Appropriate Design Code recently surviving a major Ninth Circuit challenge that allowed parts of the law to take effect. As the BBC noted, the outcomes of these cases “could be the beginning of the end of the social media era as we know it”.
Plaintiff’s attorney W. Mark Lanier, who secured the California verdict, described the decision as a “huge precedent”. For the tech industry, it is a warning: the legal theories that once protected platforms from accountability are showing cracks, and plaintiffs now have a proven roadmap for litigation that targets the design choices at the heart of social media’s business model. As more cases move toward trial, the question is no longer whether tech companies can be sued—it is how many more verdicts it will take before the industry fundamentally changes the way it builds products for young users.