Persuasive AI Risks, Data Center Disconnections, and Australian Fact-Checking
TTP's March 21st, 2025 Newsletter
Google Struck Character.AI Deal Despite Research on Harms of “Persuasive” AI
Almost a year ago, a group of researchers with Google DeepMind published a paper examining the risks posed by “persuasive” AI, which they described as AI-generated communications capable of changing a user’s behavior, beliefs, or preferences. The paper went on to list mechanisms and design choices that can affect the persuasiveness of a model—a user is more likely to develop an emotional attachment to a chatbot that “remembers” them from one session to the next, for example. Chatbots can be further anthropomorphized by sharing information about “themselves,” using terms like “I” and “me,” or taking on the identity of a human being, such as a celebrity.
Over time, persuasive chatbots can personalize their outputs based on a user’s responses. Google’s researchers warned that AI models capable of recognizing psychological traits could use these mechanisms to manipulate users, pushing them to modify their behavior based on emotions like anxiety or fear.
This week, a Futurism article drew comparisons between Google’s research and lawsuits brought against Character.AI, a chatbot platform that Google “acquihired” four months after the DeepMind researchers published their paper. The platform allows users to communicate with chatbots, many of which are based on fictional characters that appeal to children.

Unfortunately, Character.AI failed to implement basic safeguards on its platform, prompting multiple families to sue the company (and Google) after its chatbots appeared to persuade their children to engage in self-harm or suicide. As Google’s own researchers noted, users can be more vulnerable to persuasion in adolescence and young adulthood, which makes Character.AI’s disregard for child safety particularly devastating. Now, a Google spokesperson has declined to tell Futurism whether the company’s leaders were aware of these findings when they struck their deal with Character.AI.
Texas Data Center Disconnections Responsible for Over Thirty “Near-Miss” Grid Incidents
On Wednesday, Reuters reported that data centers had triggered over 30 “near-miss” grid incidents in Texas by suddenly switching to on-site generators, a temporary measure taken to protect hardware from power fluctuations. When large customers like data centers drop off the grid at once, demand falls abruptly, leaving excess power on the system that can threaten its stability. The incidents were documented in disclosures from the Electric Reliability Council of Texas (ERCOT), which serves as the state’s grid operator. In a recent report, ERCOT warned that demand for energy in Texas could far outpace supply as more data centers for crypto mining and AI are brought online.
Yesterday, the Texas Senate passed a bill that would give ERCOT a “kill switch” for large load customers during emergencies, allowing it to prioritize the delivery of energy to homes. Data centers would also be required to share information about their backup generation capacity. Industry lobbyists have opposed the bill, arguing that the temporary shutdown of data centers could “threaten national security.”
Separate ERCOT initiatives have created massive handouts for data centers. In 2022, TTP published a report on a program that allowed large customers to buy energy at significant discounts and sell it back to the grid during periods of high demand. The deals allowed crypto mining companies to rake in tens of millions of dollars, while regular ratepayers faced steep bills and deadly outages.
Meta Continues to Work with Fact-Checkers in Australia
This week, Meta published a blog post announcing that it would use fact-checkers to restrict the circulation of false information related to Australia’s upcoming elections, just days after Meta Chief Global Affairs Officer Joel Kaplan denounced the practice as a “censorship tool” run by “so-called experts.” While Meta is not required to address misinformation under Australian law, the country has moved aggressively to regulate platforms in other areas, forcing them to compensate news publishers for their content and banning children under 16 from creating accounts. Meta has taken a very different approach to content moderation in the United States, where fact-checking will be replaced by a “Community Notes” system that doesn’t limit the reach of posts.
Meanwhile, the Computer & Communications Industry Association (CCIA) is urging the Trump Administration to attack “foreign unfair trading practices,” submitting a laundry list of complaints on behalf of its members. The targeted regulations include Australia’s news bargaining code, as well as European data privacy laws. Meta might comply with these laws in public, but its trade associations are clearly working behind the scenes to weaponize US trade dominance against countries that have taken a more proactive approach to online safety.
What We’re Reading
Community Notes Can’t Save Social Media From Itself