A tablet screen displays a portrait of Jeffrey Epstein beside the U.S. Department of Justice website page titled Epstein Library, Feb. 11, 2026.
Veronique Tournier | AFP | Getty Images
A victim of notorious sex predator Jeffrey Epstein filed a class action lawsuit on behalf of herself and other survivors against the Trump administration and Google for allegedly wrongfully disclosing and publishing personal information about them.
The suit, filed on Thursday in the Northern District of California, where Google is headquartered, claims the U.S. Justice Department "outed" about 100 Epstein survivors in late 2025 and early 2026, and that even after the government acknowledged the mistake and withdrew the information, "online entities like Google continuously republish it, refusing victims' pleas to take it down."
With respect to Google, the suit says the company's core search engine and its artificial intelligence summary feature, called AI Mode, were responsible for publishing victims' personal information.
"Survivors now face renewed trauma," the suit says. "Strangers call them, email them, threaten their physical safety, and accuse them of conspiring with Epstein when they are, in reality, Epstein's victims."
The complaint was filed by an Epstein victim who used the pseudonym Jane Doe.
After months of pressure, the DOJ earlier this year released more than 3 million additional pages of documents related to Epstein, including images and videos. In August 2019, Epstein killed himself in a jail in New York City, weeks after being arrested on federal child sex trafficking charges.
In taking on Google, the plaintiffs are testing the limits of a major legal shield for internet companies and social media sites. Section 230 of the Communications Decency Act governs internet speech and has long allowed major U.S. platforms to avoid liability for content appearing on their websites and apps.
With the explosion of AI-generated content and new controversies over the publishing of non-consensual sexual images, including so-called deepfake porn, internet giants face a fresh challenge in defending their turf. Earlier this month, Google was sued in a wrongful death case by the father of a 36-year-old man, who alleged the company's Gemini chatbot convinced his son to attempt a "mass casualty attack" and eventually to commit suicide.
The lawsuit from Epstein survivors alleges Google "intentionally," through its design, fueled harassment by hosting information about the victims, and says its AI Mode feature "is not a neutral search index." The complaint comes after two jury verdicts this week, both against Meta and one also involving Google's YouTube, that concluded the online platforms are failing to adequately police their sites for content causing real-life harm.
New Mexico Attorney General Raúl Torrez, who spearheaded his state's case against Meta, told CNBC this week that "there's a distinct possibility that these cases motivate Congress to re-examine Section 230 and, if not eliminate it, dramatically revise it."
The latest suit claims Google's AI-generated content revealed personal information about the victims, saying Google's AI Mode responded to queries asking for such details.
The complaint alleges that the government has failed to force tech platforms to take down materials in the past, allowing for the exposure of victims' information.
"As a part of this response, generated repeatedly on multiple platforms and across various devices, Google's AI Mode included Plaintiff's full name, displayed her full email address, and generated a hypertext link allowing anyone to send direct email to Plaintiff with the click of a button," the suit says.
Representatives from Google and the Trump administration did not immediately respond to requests for comment.
— CNBC's Dan Mangan and Jonathan Vanian contributed to this report.