YouTube’s Gun Problem, Racial Bias in Meta Ads, and Sextortion Data
TTP's April 7th, 2024 Newsletter
YouTube Tightens Policies on Firearms
This week, without much fanfare, YouTube updated its firearm policies to age-restrict content featuring 3D-printed guns and automatic weapons. These changes follow research from TTP which revealed that YouTube was recommending real firearms videos to accounts registered as young boys, none of whom had expressed interest in this content. Instead, the accounts had been “trained” on playlists of clips from video games, watching them the way actual children would. The videos recommended by YouTube included weapons modification tutorials and dramatized reenactments of school shootings. When the test accounts clicked on these videos, YouTube began serving them more and more violent content, none of which was age restricted.
In response to TTP’s report, the organization Everytown for Gun Safety urged YouTube to age restrict firearms content and enforce its existing policies, which were being openly violated by channels using the platform to sell weapons or weapon accessories. Later, Manhattan District Attorney Alvin Bragg called on YouTube to stop recommending 3D-printed gun videos to children.
YouTube’s old firearms policies were primarily concerned with preventing gun sales and stopping the circulation of particularly harmful videos, such as tutorials for weapons manufacturing and the conversion of weapons to automatic fire. The new policies finally acknowledge that content featuring firearms can be inappropriate for children, and represent a shift in YouTube’s approach to weapons content. Age restrictions may ultimately reduce viewership, which reduces ad revenue for YouTube – a trade-off that many online platforms are unwilling to make without significant outside pressure.
New Research Suggests that Racial Biases Persist in Meta Ad Delivery
In 2019, Meta announced that it would no longer allow racial targeting in advertisements for housing, employment, or credit. Two years later, the company declared that those same restrictions would apply to all advertisements, with the goal of preventing discrimination and targeted harm to certain groups. Of course, Meta only changed its policies after a slew of lawsuits and bad press. Now, a growing body of evidence suggests that these changes were empty gestures, and that racial ad targeting is still possible with Meta’s data-gobbling ad tools. On Tuesday, The Intercept reported that researchers from Princeton and the University of Southern California had found evidence of “racial skew” in advertisements for for-profit colleges, meaning a higher percentage of black users were shown ads for low-quality, predatory institutions. That percentage climbed even higher when advertisements featured images of black people.
By testing advertisements for for-profit colleges, the researchers demonstrated one clear example of harm: for-profit schools have a track record of targeting low-income students with deceptive marketing materials, and are also attended by a higher percentage of black students. If Meta’s ad-delivery system is reflecting this educational inequality, it could also be performing de facto discrimination with advertisements for other goods or services.
Meta Hands Over Sextortion Data Months After Boy’s Death
When 16-year-old Murray Dowey took his own life in December of 2023, his parents had no idea that he had been targeted by a group of sextortion scammers. It took two weeks for Scottish police to unlock his phone and conclude that he had likely been blackmailed via social media, falling victim to an increasingly common scam that has led to dozens of youth suicides. In January, police asked Meta to provide data related to their criminal investigation. When the company didn’t respond, the US Department of Justice followed up with a court order in early May, to no avail. Three days ago, Murray’s mother Ros made a LinkedIn post urging Meta President of Global Affairs Nick Clegg to release the data. Finally, on June 6th, Meta began cooperating with law enforcement, a full six months after Murray’s death. Based on the timeline of the BBC’s reporting, it appears that Meta released the data only after it was asked to comment on the case.
A delay of six months means that Murray’s blackmailers may have found more victims. Since his death, Meta has announced that it will blur nude images sent to or received by minors – a measure that Meta whistleblower Arturo Bejar described as “woefully insufficient.” Instead of introducing easily bypassed safeguards, Meta could improve its cooperation with law enforcement or make it harder for strangers to find and contact children. The company could also crack down on the “sextortion guides” that users share on platforms like Facebook. Unfortunately, Meta’s current approach places the burden on children to protect themselves from professional scammers, who are thriving in the aftermath of trust and safety layoffs.
Watch Now: TTP Director Katie Paul Discusses Facebook Militias with Former FBI Official Frank Figliuzzi and Host Chip Franklin
[click here to watch]
What We’re Reading
Big Tech Launches Campaign to Defend AI Use
The Hidden Life of Google’s Secret Weapon
Family of teen victim applauds Ohio lawmakers’ effort to make sextortion a crime