This Week from Meta: Shadow Ads, Endless Apologies, and “New” Teen Protections
TTP's September 20, 2024 Newsletter
TTP Report Exposes Continued Political Advertising Failures at Meta
With U.S. elections just seven weeks away, Meta is still allowing users to buy and sell accounts approved to run political advertisements, according to a new report by TTP. Meta’s own community standards prohibit users from buying and selling platform assets, which include accounts. This policy is particularly important for political advertisements, which can be used to spread disinformation, hate speech, or calls to violence. Instead of enforcing these rules, though, Meta has turned a blind eye to large Facebook groups that are used to sell accounts, a problem that TTP has documented since November 2022.
In this latest report, researchers identified multiple Facebook users claiming to sell accounts that could run political ads in the United States. One public Facebook group, called “cloning Ids,” had over 21,000 members and was filled with posts seeking to sell Facebook accounts and stolen identity documents. TTP noticed similar account-selling activity during India’s election, which may have contributed to a flood of “shadow” advertisements that could not be traced back to a candidate or party. One study, published by the non-profit Ekō in collaboration with Indian civil society groups, estimated that over a fifth of political ads on Meta’s platforms had been placed by these shadow advertisers in the months leading up to the election.
Meta’s Nick Clegg Walks Back Content Moderation Commitments at Senate Hearing
Last month, Meta CEO Mark Zuckerberg sent a letter to House Judiciary Chair Jim Jordan, claiming that he regretted giving in to “government pressure” on content moderation decisions. Moving forward, Zuckerberg said his company was “ready to push back” on government interventions. Days later, he declared that he was “done apologizing,” and said that Meta had been unfairly blamed for wider social and political problems. Meta Vice President of Global Affairs Nick Clegg echoed that message in testimony before the Senate Intelligence Committee this week, telling Sen. Marco Rubio (R-FL) that Meta had “learned its lesson” on content moderation. Rubio proceeded to grill Clegg on Meta’s procedures for countering disinformation, and asked him to share a complete list of Meta’s fact-checkers, which Clegg agreed to do.
Ultimately, Meta’s “neutral” position is unlikely to appease anybody. Conservative legal scholars criticized Zuckerberg’s letter as “insignificant” because he did not outright accuse the Biden Administration of pressuring his company on speech decisions. Just this week, Republican lawmakers attacked Zuckerberg over the appointment of Texas billionaire John Arnold to Meta’s Board; according to Rep. Jim Banks (R-IN), Arnold is a “far-left radical” who “funds pro-censorship organizations.” In fact, the only mainstream social media figurehead to avoid Republican criticism is Elon Musk, whose abandonment of content moderation has proven financially disastrous for X.
Why Instagram’s “New” Teen Safeguards Don’t Mean Much
Under the looming threat of federal regulation, Meta chose this week to announce new safeguards for teen Instagram accounts, which include expanded parental controls and tighter default content restrictions. Meta Head of Safety Antigone Davis described the monitoring tools as a “game changer” for parents, claiming they will be “simpler” to use and were developed with input from families.
As TTP pointed out, though, many of these protections aren’t new at all. Meta has claimed to push teens towards private accounts since early 2021, and began making them private by default several months later. Still, enforcement was spotty, and Meta’s desktop app temporarily lacked the same protections because the company forgot to build them. Meta has also repeatedly announced updates to stop strangers from contacting children, though none of these changes appear to have prevented Instagram from becoming the number one platform for financial sextortion targeting teens. Worse, Meta’s Nick Clegg recently admitted that families “don’t use” the parental controls – a claim that has been backed up by anonymous sources within his company.
These painfully slow product changes reveal Meta’s reluctance to institute stronger protections for teens. We knew that, of course, because Meta’s own internal communications revealed that Zuckerberg personally rejected efforts to improve children’s safety. Clegg, apparently pleading with Zuckerberg, argued that Meta needed to invest in safety teams to support its “external narrative of well-being.” Unfortunately, Zuckerberg appears to have ignored those recommendations and allowed sweeping layoffs to trust and safety teams.
What We’re Reading
Three Mile Island nuclear plant to help power Microsoft's data-center needs
Social media and online video firms are conducting ‘vast surveillance’ on users, FTC finds