
A new investigation by Amnesty International and Northeastern University has revealed how TikTok’s algorithm can rapidly push French teenagers toward depressive and suicidal content, raising fresh concerns about youth protection and compliance with Europe’s stringent new digital regulations.
The study, titled “Dragged into the Rabbit Hole,” examined how quickly TikTok’s For You feed — the app’s central recommendation system — directs young users toward harmful material. Researchers created three fake profiles of 13-year-olds in France and monitored the content pushed to them as they passively scrolled.
Within just five minutes, all three profiles were exposed to videos labeled as “sad” or “depressive.” The accounts did not interact with the content by liking, sharing, or following. Simply watching the sad videos, researchers said, was enough to signal interest to the algorithm.
According to Piotr Sapiezynski, associate research scientist at Northeastern University, this passive behavior alone dramatically shaped what came next. “Just by the act of watching those videos and skipping others, they were implicitly signaling to TikTok that this is the kind of content they’re interested in,” he said.
Within 15 to 20 minutes, the accounts’ feeds contained significant amounts of mental-health-related content, and up to half of the recommended videos were categorized as depressive. Two accounts encountered suicide-related material within 45 minutes.
Amnesty International’s researcher on digital rights, Lisa Dittmer, described the findings as extremely worrying. “Within just three to four hours of engaging with TikTok’s For You feed, teenage test accounts were exposed to videos that romanticized suicide or showed young people expressing intentions to end their lives,” she said.
The report includes firsthand accounts from young people in France who say TikTok’s algorithm influenced their mental health and self-perception. One 18-year-old named Maëlle said that seeing videos about self-harm and suicide methods “influences and encourages you to harm yourself.”
The investigation comes as seven French families — including two who lost a child — are suing TikTok for allegedly failing to moderate dangerous content and exposing minors to life-threatening material. The families argue that the platform has not upheld its legal responsibilities under the European Union’s Digital Services Act (DSA), which mandates stronger protections for young users.
To determine whether the trend was systemic, Northeastern researchers expanded the experiment. Ph.D. student Levi Kaplan exported the original watch lists and created 10 automated French accounts that replicated the same viewing pattern. The results were similar: the automated accounts were also funneled toward depressive or suicidal content, though not always toward the same videos.
Sapiezynski and Kaplan stressed that the findings apply specifically to France and may not hold in countries where content moderation, particularly in English, is more robust. “It is entirely possible that if we try to redo this study in the U.S., we would see much less suicidal content,” Sapiezynski said, noting differences in staffing and moderation resources across languages.
TikTok firmly rejected the study’s conclusions, stating that the experiment did not reflect real user behavior. A company spokesperson said that the researchers themselves admitted that the vast majority — “95% of content shown to their pre-programmed bots” — was not related to self-harm, and highlighted the platform’s existing teen-safety features.
Amnesty International, however, argues that TikTok is failing to adequately identify and limit harmful content for minors. Amnesty France issued an extensive list of recommendations urging regulators and governments across the EU to enforce the DSA more aggressively.
“Binding and effective measures must be taken to force TikTok to finally make its application safe for young people in the European Union and around the world,” said Katia Roux, advocacy officer at Amnesty France.
As legal and regulatory scrutiny intensifies, the findings add pressure on TikTok to strengthen protection mechanisms for vulnerable users — particularly young people navigating mental-health struggles in an increasingly algorithm-driven digital world.