Meta’s court losses spell trouble for AI research, consumer safety

Meta CEO Mark Zuckerberg leaves the Federal Courthouse in downtown Los Angeles after defending the company in a major social media addiction case on February 19, 2026 in Los Angeles, United States.
Jon Putman | Anadolu | Getty Images
More than a decade ago, Meta — then known as Facebook — hired social science researchers to analyze how social networking services affected users. It was a way to show that the company and its peers were serious about understanding the benefits and potential risks of their innovations.
But as Meta’s court losses this week show, that research can become a liability. Former Facebook executive Brian Boland, who testified in both trials, one in New Mexico and the other in Los Angeles, said damning findings from Meta’s internal studies and documents contradict the way the company has presented itself to the public. Juries in both cases found that Meta failed to adequately police its platforms and endangered children.
Mark Zuckerberg’s company began cracking down on its research teams a few years ago after former Facebook employee Frances Haugen became a prominent whistleblower. Newer tech companies like OpenAI and Anthropic then invested heavily in researchers, tasking them with studying the impact of modern AI on users and publishing their findings.
As AI draws growing scrutiny for its harmful effects on some users, these companies must decide whether it is in their best interest to keep funding that research or to shut it down.
“There was a period of time where there were teams that were created within the company that could start looking at things, and for a short period of time, there were absolutely elite researchers who were looking at what was going on in these products a little more freely than I understand today,” Boland said in an interview.
Meta’s two defeats this week focused on different cases, but they had a common theme: The company did not publicly share what it knew about the harms of its products.
Jurors had to evaluate millions of pages of corporate documents, including executive emails, presentations and internal investigations conducted by Meta staff. The documents included internal surveys showing that an alarming percentage of young users on Instagram had been subjected to unwanted sexual advances. There was also a study, which Meta eventually shut down, suggesting that people who limited their Facebook use experienced less depression and anxiety.
Plaintiffs’ attorneys did not rely solely on internal research to make their claims in the lawsuits, but the studies helped strengthen their case that Meta was liable. Meta’s defense teams argued that some of the research was outdated, taken out of context and misleading, offering a flawed view of how the company operates and approaches safety.
‘Both sides of the story’
“The jury should hear both sides of the story and hear a reasonably fair presentation of the facts and make a decision based on what they see,” Boland said. “And both juries came back with clear verdicts on very different cases.”
Meta and Google’s YouTube, which is also a defendant in the LA case, said they would appeal.
Lisa Strohman, a psychologist and attorney who served as in-house expert counsel in the New Mexico case, said leaders at Meta and in the tech industry may have thought they could use internal research to their advantage to gain favor with the public.
“I think what they didn’t account for were the parents and family members,” Strohman said. “And I think what they didn’t realize was that these people weren’t going to be bought.”
As the research began to become public, the public relations victory executives had hoped for began to backfire. The most damaging moment for Meta came in 2021, when Haugen, a former Facebook product manager turned whistleblower, leaked a trove of documents suggesting the company knew about the potential harms of its products.
Former Facebook employee Frances Haugen speaks at a hearing of the Energy and Commerce Committee Communications and Technology Subcommittee on Capitol Hill on December 1, 2021 in Washington, DC.
Brendan Smialowski | AFP | Getty Images
“Haugen’s disclosures were a significant turning point globally—not just for the companies themselves, but for researchers, policymakers, and the broader public,” said Kate Blocker, director of research and programs at the nonprofit Children and Screens: Digital Media and Child Development Institute.
The leaks also prompted major changes at Meta and across the tech industry, which began to cut back research that could be seen as working against companies’ interests. Many teams investigating alleged harms and related issues were disbanded, CNBC previously reported.
Some companies have also begun removing certain tools and features of their services that third-party researchers use to examine their platforms.
“Companies may now view ongoing research as a liability, but independent, third-party research should continue to be supported,” Blocker said.
Sacha Haworth, chief executive of the Tech Oversight Project, said much of the internal research used in this week’s trials did not include new disclosures and that many of the documents had already been released by other whistleblowers. Haworth said what the trial exhibits include are “emails, words, screenshots, internal marketing presentations, notes” that provide necessary context.
As the tech industry moves aggressively toward AI, companies like Meta, OpenAI and Google are prioritizing products over research and safety. It’s a trend that worries Blocker, who noted that “as with social media before, there is limited public visibility into what AI companies are doing with their products.”
“AI companies seem to mostly study the models themselves (model behavior, model interpretability, and alignment), but there is a significant gap in research on the impact of chatbots and digital assistants on child development,” Blocker said. “AI companies have a chance to avoid repeating the mistakes of the past; we urgently need to establish systems of transparency and access that share with the public what these companies know about their platforms and support greater independent evaluation.”
WATCH: Regulatory pressure mounts following landmark ruling on social media.