Twitter Under Elon Musk Leans on Automation to Moderate Content
Elon Musk’s Twitter is leaning more heavily on automation to moderate content, doing away with certain manual reviews and favoring restrictions on distribution rather than removing certain speech outright, its new head of trust and safety told Reuters.
Twitter is also aggressively restricting abuse-prone hashtags and search results in areas including child exploitation, regardless of the potential impact on benign uses of those terms, said Twitter’s Vice President of Trust and Safety Product, Ella Irwin.
“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” Irwin said Thursday, in the first interview a Twitter executive has given since Musk acquired the social media company in late October.
Her comments come as researchers report a surge in hate speech on the platform after Musk announced an amnesty for accounts suspended under the company’s previous leadership that had not broken the law or engaged in “egregious spam.”
The company has faced pointed questions about its ability and willingness to moderate harmful and illegal content since Musk laid off half of Twitter’s workforce and issued an ultimatum to work long hours that resulted in the loss of hundreds more employees.
And advertisers, Twitter’s main source of revenue, have fled the platform amid concerns about brand safety.
On Friday, Musk vowed “significant reinforcement of content moderation and protection of freedom of speech” in a meeting with French President Emmanuel Macron.
Irwin said Musk encouraged the team to worry less about how their actions would affect user growth or revenue, saying safety is the company’s top priority. “He emphasizes that every single day, multiple times a day,” she said.
The safety approach Irwin described at least in part reflects an acceleration of changes that had already been planned since last year around Twitter’s handling of hateful conduct and other policy violations, according to former employees familiar with that work.
One approach, captured in the industry mantra “freedom of speech, not freedom of reach,” entails leaving up certain tweets that violate company policies but barring them from appearing in places like the home timeline and search.
Twitter has long used such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before Musk’s acquisition. The approach allows for more freewheeling speech while cutting down on the potential harms associated with viral abusive content.
The number of hateful tweets on Twitter rose sharply in the week before Musk tweeted on Nov. 23 that impressions, or views, of hate speech were declining, according to the Center for Countering Digital Hate — one example of researchers pointing to the prevalence of such content even as Musk touted a reduction in its visibility.
Tweets containing anti-Black language that week were triple the number seen in the month before Musk took over, while tweets containing homophobic slurs were up 31%, the researchers said.
‘More risks, move fast’
Irwin, who joined the company in June and previously held safety roles at other companies including Amazon.com and Google, pushed back on suggestions that Twitter lacked the resources or willingness to protect the platform.
She said the layoffs did not significantly affect full-time employees or contractors working in what the company calls its “Health” divisions, including “critical areas” such as child safety and content moderation.
Two sources familiar with the cuts said more than 50 percent of the Health engineering team was laid off. Irwin did not immediately respond to a request for comment on the assertion, but previously denied that the Health team was severely affected by the layoffs.
She added that the number of people working on child safety has not changed since the acquisition, and that the team’s product manager is still in place. Irwin said Twitter has backfilled some positions vacated by departing employees, though she declined to provide specific figures on the extent of the turnover.
She said Musk is focused on using automation more, arguing that the company had previously erred on the side of time- and labor-intensive human reviews of harmful content.
“He’s encouraged the team to take more risks, move fast, get the platform safe,” she said.
On child safety, for example, Irwin said Twitter has shifted toward automatically removing tweets reported by trusted figures with a track record of accurately flagging harmful posts.
Carolina Christofoletti, a threat intelligence researcher at TRM Labs who specializes in child sexual abuse material, said she has noticed Twitter recently taking down some content as quickly as 30 seconds after she reports it, without acknowledging receipt of her report or confirming its decision.
In the Thursday interview, Irwin said Twitter has taken down about 44,000 accounts involved in child safety violations, in collaboration with cybersecurity group Ghost Data.
Twitter is also restricting hashtags and search results frequently associated with abuse, such as those used to seek out “teen” pornography. Past concerns about the impact of such restrictions on permitted uses of those terms are gone, she said.
The use of “trusted reporters” is “something we’ve discussed in the past at Twitter, but there was some hesitancy and frankly just some delay,” Irwin said.
“I think we now have the ability to actually move forward with things like that,” she said.
© Thomson Reuters 2022