A decisive ruling against Meta and Google in a closely watched trial over social media addiction may expand platforms' liability for hateful content.
The case focused on a 20-year-old California woman, identified as K.G.M., who alleged the platforms fueled her addictive use as a minor and contributed to her depression and suicidal thoughts through their engagement-driven design.
The companies have denied wrongdoing, pointing to their safety tools and parental controls.
“We respectfully disagree with these verdicts and will appeal. Reducing something as complex as teen mental health to a single cause risks leaving the many, broader issues teens face today unaddressed and overlooks the fact that many teens rely on digital communities to connect and find belonging. We remain committed to building safe, supportive environments for young people and will defend our record vigorously,” Meta said in a statement provided to Fox News Digital.
José Castañeda, a spokesperson for Google, told FOX Business the company disagreed with the verdict and planned to appeal. He also stated that “the case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site.”
META VOWS TO ‘AGGRESSIVELY’ FIGHT AFTER LANDMARK VERDICTS FIND TECH GIANT LIABLE FOR ADDICTING KIDS
The case notably sidestepped Section 230, which shields platforms from liability for the content of user posts. Instead, the lawsuit targeted the companies' product design choices, which could have broader implications for how platforms handle hateful content, especially when it is monetized.
StopAntisemitism founder and executive director Liora Rez called the ruling "monumental" and said advocacy groups have been warning big tech that "the algorithm is affecting people negatively," both through its addictive properties and by "promoting hatred."
“We kind of went from the platforms aren’t doing enough to remove antisemitism, for instance, to now platforms are specifically designing systems that actively spread and, most importantly, monetize and incentivize those to spread this hateful content,” Rez told Fox News Digital.
StopAntisemitism is a watchdog organization dedicated to exposing groups and individuals that push antisemitism.
Social media platforms often have policies barring certain kinds of content, particularly those that promote hatred or violence. However, influencers have started using code words to get around censorship, such as saying “unalive” instead of “kill.”
UNDER OATH, META’S ZUCKERBERG SHOWED WHY BIG TECH CAN’T POLICE ITSELF
Rez acknowledged that it is possible for those who spread hate to create similar codes, but she said that the "policy decision makers" at major platforms are aware of the issue. She said that because of StopAntisemitism's social media reach, specifically among those 25 and under, the organization is often made aware of these terms early on and can alert the companies.
The StopAntisemitism founder said that AI-generated content will be at the center of the next battle, which has already started.
“We’re really worried about how AI is now helping to feed antisemitic content across the platforms. And there is very little, if any, oversight about it, and we’re hoping this ruling can somehow be navigated to help,” Rez said.
She pointed to AI-generated "rabbis," some with thousands of followers and at least one with more than 1 million. The accounts often push antisemitic narratives about Jews controlling financial systems and use Yiddish words in ways that Rez says indicate there is not a Jewish person behind the account.
HAWLEY LAUNCHES GOOGLE INVESTIGATION AFTER ‘SHOCKING’ CHILD TRAFFICKING TESTIMONY AT SENATE HEARING
One such account — now deleted — is that of Rabbi Goldman, who garnered 1.5 million followers on Instagram despite posting his first video in mid-February. The AI-generated rabbi’s content received several community notes alerting others that it was fake. He spoke about so-called secrets that all Jews allegedly know while using Yiddish words in ways that do not make sense.
“So out of, let’s say, 10 videos, two or three will be interesting. However, the following two or three, and what we’ve noticed is these higher viewed videos are quickly followed by the problematic ones, right? Because the more your videos are seen, the more the algorithm pushes the next video and the next post to your audience,” Rez said of the AI-generated Rabbi Goldman.
Despite those concerns about AI-generated content, Rez expressed optimism about the social media companies’ willingness to address hateful content following the verdict. She said that StopAntisemitism hopes that by the end of the year it will see social media companies taking proactive steps.
“Meta has to step up and do more… Their failure to warn, essentially, was failure to protect, people got hurt,” Rez said. “We really think this was the precedent for future mass litigation. So again, we hope that they take this as a warning signal, and we hope AI is in the center of this.”
Fox News Digital reached out to Google for comment.