Nonconsensual Deepfake Apps on X, Meta in Texas, and Explicit Ad Review
TTP's January 10th, 2025 Newsletter
Nonconsensual Sexual Deepfake App Verified by X
Last year, Bellingcat’s Kolina Koltai released a report on a loose network of websites being used to create and sell nonconsensual sexual deepfakes, including AI-generated child sexual abuse material (CSAM). One of the most popular websites, known as ClothOff, received over 9.4 million visitors in the final quarter of 2024. ClothOff has been linked to one of the first well-publicized cases of deepfake-enabled sexual abuse in an American school, as well as an incident in Spain. In both cases, the victims were all young girls.
While ClothOff appears to get most of its traffic through organic search, analysis from SimilarWeb estimates that over 235,000 users have reached the site through social media posts in the past three months. A portion of that traffic can be attributed to ClothOff’s X accounts, at least one of which benefits from a premium subscription. Another account, which isn’t verified, links to the same website and has over 230,000 followers. Promotional posts made by the unverified account have received hundreds of thousands of views but comparatively little engagement, which suggests that they may have been boosted as paid ads.
Interestingly, X is the only social media platform listed on ClothOff’s website. In a recent study, X failed to remove any of the sexual deepfakes that researchers reported as nonconsensual images, suggesting that the company is simply declining to enforce its own policies. The research was covered by Tech Policy Press and is available as a preprint paper here.
Note: ClothOff has migrated to a new website, so the URL connected to its X accounts does not match the one analyzed by SimilarWeb. This change is very recent, and was flagged by Koltai on BlueSky.
Policy Implications of Meta’s Trust and Safety Relocation to Texas
On Tuesday, Meta CEO Mark Zuckerberg announced that his company’s trust and safety and content moderation teams would be relocating to Texas, as part of a strategy to “build trust” and “do work in places where there is less concern about the bias of our teams.” The move is likely in its early stages, as Meta’s careers website still lists content moderation and platform integrity engineer positions based in California, New York, and Washington. An anonymous employee in the company’s Austin office also told The Austin American-Statesman that they hadn’t been given any information about the relocation.
Meta’s Texas-based trust and safety employees may enjoy fewer rights and lower pay than they would in other regions, if Zuckerberg follows through with his promise. Unlike California, Texas does not have state whistleblower protections for the private sector, meaning employees may be less willing to report violations of state and local law. Oxfam America’s “Best and Worst States to Work” index ranks Texas nearly dead last, citing lower pay, a lack of basic worker protections, and policies that limit the organizing power of unions. Texas has also passed legislation limiting the content moderation abilities of social media platforms, which was quickly blocked by a federal judge and later unanimously remanded to the 5th Circuit by the Supreme Court. Texas Attorney General Ken Paxton has opened a number of investigations into Meta, concerning everything from child safety to the misuse of biometric data. In this hostile environment, Meta’s willingness to expand Texas operations should concern its US-based content moderation employees and contractors, who have already been affected by layoffs.
Investigation Reveals Potentially Deliberate Enforcement Gaps in Meta’s Nudity and Sexual Content Policies
On Wednesday, a research firm called AI Forensics published a report on Meta’s double standards for sexual content in advertisements and organic posts, which appear to be screened using entirely different criteria. As 404 Media’s Alex Haney wrote, it isn’t exactly news that Meta approves explicit ads that violate its guidelines; TTP’s research has shown that the company also greenlights ads for firearms, drugs, and blatant scams. What is notable about the report is that researchers saved images of approved, explicit ads and attempted to upload them to Facebook and Instagram, where they were promptly removed. This outcome suggests that while Meta has the technical ability to detect and remove sexual content, it applies more permissive filtering to advertisements, even when they are seen by millions of users. AI Forensics describes this policy as a “systemic double standard,” and argues that the company may have misrepresented its advertising review processes to European regulators in risk assessments conducted under the Digital Services Act.
What We’re Reading
‘It’s Total Chaos Internally at Meta Right Now’: Employees Protest Zuckerberg’s Anti-LGBTQ Changes
UK confirms plans to criminalize the creation of sexually explicit deepfake content
Elon Musk uses L.A. wildfires to stoke conspiracy theories and outrage