Free speech, political leanings and Elon Musk. Twitter has been making much news since its new boss took over.
The latest? Musk endorsing ‘The Twitter Files’, an account by independent author and journalist Matt Taibbi, who published a series of tweets outlining the thinking behind the decision to censor the news concerning Hunter Biden’s laptop.
These records, which appear to be internal emails from Twitter staff members, explain why the technology giant decided to bury the Hunter Biden story in the final days of the 2020 presidential campaign.
1. Thread: THE TWITTER FILES — Matt Taibbi (@mtaibbi) December 2, 2022
The thread seemingly seeks to confirm the political bias Musk accused the platform of having before he took over. One of the main reasons he cited for acquiring Twitter was to further his goal of ‘free speech.’
But this particular endorsement comes amid another controversy – the antisemitic content posted by Ye (Kanye West). After Ye said he ‘liked Hitler’ on a show, he followed it up with objectionable tweets, and Twitter suspended his account.
The whole narrative reignites the ‘free speech’ debate. And while Musk is all for talking about it, it remains to be seen how the tech giant, now in control of Twitter, will ‘reform’ the site according to his apparent vision, especially when it comes to content moderation – the thing Musk is criticising through the Twitter Files.
First, an Explanation of the Twitter Files
A potentially explosive exclusive was published by The New York Post in October 2020, three weeks before the US presidential election. The story, titled “Biden’s Secret Emails”, reported that an executive from Ukraine had thanked Hunter Biden for the “opportunity to meet” the then vice president of the United States. The proprietor of a computer repair shop in Delaware claimed that a laptop abandoned at his shop had belonged to President Biden’s second son, Hunter, explained a report by Gizmodo.
The owner of the tabloid claimed that the contents of the laptop had been brought to him by the shop owner. According to the Post, emails and files discovered on the laptop revealed how Hunter had peddled influence with Ukrainian businessmen. The laptop also allegedly contained “a raunchy 12-minute video” showing Hunter smoking crack and having sex.
Following the publication of the article in the Post, Twitter instituted a policy that prevented users from tweeting a link to the article or sending it via direct message, citing it as “hacked material.” Additionally, the company suspended the Post’s account for several days, preventing it from making any further tweets during that time.
But why did this happen?
In an interview earlier this week, Yoel Roth, who had previously served as the head of trust and safety at Twitter, stated that the company was unable to verify the story, which suggested that he and others at the company did not trust the Post.
Now, the tweet thread, which Taibbi dubbed the “Twitter Files,” depicts executives from the company making a moderation call.
Screenshots and emails posted by Taibbi reveal what one executive referred to as a “whirlwind” of activity within Twitter’s policy and trust and safety departments as employees questioned an initial decision to block sharing of the story for violating Twitter’s policy on the distribution of hacked materials, said a report by Wired. (It is still unknown where the laptop came from, or whether all of the files on it legitimately belong to Hunter Biden.)
The screenshots showed a staff member issuing a warning that stated, “We’ll face hard questions on this if we don’t have some kind of solid reasoning.” A lawyer for the company gave his opinion that it was “reasonable for [Twitter] to assume” that the information obtained by the newspaper had been stolen, said the Wired report. Other screenshots showed Twitter executives receiving advice from a Democrat in Congress as well as lobbyists representing the technology industry.
Twitter changed its mind about the moderation decision two days later, and at the time, CEO Jack Dorsey called it the “wrong” decision.
But How Will Musk Moderate Content?
While the report has stirred up conversation, there are also questions about whether Musk can handle moderation amid pressures of various kinds.
Aarian Marshall, in a Wired report tackling that very question, remarks that Musk was recently faced with a tough moderation decision after Ye’s actions, and that the ‘moderation assignments will only get more complicated from here.’
“The longer he owns the site, the more likely he is to face a challenge with political entanglements. And research has suggested that hate speech has already become more visible on Musk-run Twitter,” the report comments.
But has it?
Another Wired report says that before and after Elon Musk purchased Twitter at the end of October, researchers from the Digital Planet group at Tufts University monitored the spread of hate speech on the social media platform.
In order to accomplish this, they made use of a data stream offered by the platform and referred to as the firehose. This stream is a feed that contains every public tweet, like, retweet, and reply that is shared on the platform. The same method was utilised by the group in previous studies, such as the one that investigated the toxic content that was posted on Twitter in the lead up to the midterm elections in the United States.
In order to investigate how Musk’s ownership of Twitter affected its culture, the researchers searched tweets published between March 1 and November 13 of this year for keywords that could point to anti-LGBTQ+, racist, or antisemitic intent. For each of the three categories, they then compiled the 20 matching tweets with the highest engagement (followers, likes, and retweets). After that, they went through the tweets in each category, analysed the language used, and attempted to determine the authors’ true intent.
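The filter-and-rank step the researchers describe can be sketched roughly as follows. This is a minimal illustration only: the keyword lists, data shapes, and the exact engagement metric used by the Digital Planet group are not public in this article, so everything below is invented for demonstration.

```python
# Rough sketch of the ranking step described above: filter tweets for
# category keywords, then keep the n highest-engagement matches per
# category for manual review. All keyword lists and data are made up.

SAMPLE_KEYWORDS = {
    "antisemitic": ["keyword_a"],
    "anti_lgbtq": ["keyword_b"],
    "racist": ["keyword_c"],
}

def top_candidates(tweets, keywords, n=20):
    """Return the n most-engaged tweets per category that contain a keyword.

    `tweets` is a list of dicts like:
    {"text": str, "likes": int, "retweets": int, "followers": int}
    """
    results = {}
    for category, words in keywords.items():
        matches = [
            t for t in tweets
            if any(w in t["text"].lower() for w in words)
        ]
        # Engagement here is a simple sum; the study's actual metric is unknown.
        matches.sort(
            key=lambda t: t["likes"] + t["retweets"] + t["followers"],
            reverse=True,
        )
        results[category] = matches[:n]
    return results
```

The crucial final step in the study – judging whether each matching tweet was actually hateful, a paraphrase of someone else’s statement, or a benign use of the keyword – was done by hand, not by code.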
Hate speech impressions (# of times tweet was viewed) continue to decline, despite significant user growth! @TwitterSafety will publish data weekly. Freedom of speech doesn’t mean freedom of reach. Negativity should & will get less reach than positivity. pic.twitter.com/36zl29rCSM
— Elon Musk (@elonmusk) December 2, 2022
In the months leading up to Musk’s takeover, the researchers identified just a single tweet across the three top-20 lists as actually hateful; in that instance, the tweet targeted Jewish people. The others were either paraphrasing hateful statements made by another person or using the relevant keywords in a context free of hateful connotations.
The same analysis found that in the weeks after Musk took over Twitter, hateful tweets became much more prominent among the most popular tweets with potentially toxic language. Seven of the top 20 posts in each of the anti-LGBTQ+ and antisemitic categories were now hateful. In the category of potentially racist language, one of the top 20 tweets was judged to be hate speech.
‘Freedom of Speech, Not Freedom of Reach’
Elon Musk’s Twitter is leaning heavily on automation to moderate content, doing away with certain manual reviews and favoring restrictions on distribution rather than removing certain speech outright, its new head of trust and safety told Reuters.
Twitter is also more aggressively restricting abuse-prone hashtags and search results in areas including child exploitation, regardless of potential impacts on “benign uses” of those terms, said Twitter Vice President of Trust and Safety Product Ella Irwin.
Her comments come as researchers are reporting a surge in hate speech on the social media service, after Musk announced an amnesty for accounts suspended under the company’s previous leadership that had not broken the law or engaged in “egregious spam.”
The company has faced pointed questions about its ability and willingness to moderate harmful and illegal content since Musk slashed half of Twitter’s staff and issued an ultimatum to work long hours that resulted in the loss of hundreds more employees. And advertisers, Twitter’s main revenue source, have fled the platform over concerns about brand safety.
The approach to safety Irwin described at least in part reflects an acceleration of changes that had already been planned since last year around Twitter’s handling of hateful conduct and other policy violations, former employees familiar with that work told Reuters.
One approach, captured in the industry mantra “freedom of speech, not freedom of reach,” entails leaving up certain tweets that violate the company’s policies but barring them from appearing in places like the home timeline and search.
Twitter has long deployed such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before the Musk acquisition. The approach allows for more freewheeling speech while cutting down on the potential harm associated with viral abusive content.
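The core idea of visibility filtering – a violating tweet stays up but is kept off amplification surfaces – can be illustrated with a toy rule. This is purely hypothetical: Twitter’s real ranking and enforcement systems are far more complex and are not described in this article.

```python
# Toy illustration of "freedom of speech, not freedom of reach":
# a policy-violating tweet remains readable on the author's profile,
# but is excluded from the home timeline and search. Entirely
# hypothetical; not Twitter's actual implementation.

from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    violates_policy: bool = False

def visible_surfaces(tweet: Tweet) -> set:
    """Return the surfaces where a tweet may appear under visibility filtering."""
    if tweet.violates_policy:
        # Left up, but barred from amplification surfaces.
        return {"profile"}
    return {"profile", "home_timeline", "search"}
```

The design trade-off this models is the one the article describes: speech is not removed outright, but its potential reach – and therefore the harm of viral abusive content – is reduced.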
Child Abuse Content Still ‘Easily Available’ on Twitter
Last week, influential podcaster Liz Wheeler, who describes herself as “unapologetically one of the conservative movement’s boldest voices,” took to Twitter to praise Musk for cleaning the site of “child pornography and child trafficking hashtags,” in response to his decision to remove hashtags referencing those terms.
(For a long time, advocates for children have argued that these terms indicate legitimacy and compliance on the part of the victim, and that the content in question ought to be referred to as “child sexual abuse material.”)
If Child Protection Is Musk’s ‘Top Priority,’ Why Is It So Easy To Find Users Selling Images Of Nude Minors On Twitter? https://t.co/UoCiYeqVQU pic.twitter.com/VIaOUvQ0o6 — Forbes (@Forbes) December 4, 2022
However, a report by Forbes said that content that violates the law can still be found on Twitter in a number of different languages. Hashtags and search terms that have traditionally been associated with child sexual abuse material (CSAM) continue to return a significant amount of content that is associated with it. And employees who have recently left Twitter, from the rank-and-file to the executive level, told Forbes that the problem will be much more difficult to combat now that Musk has eliminated the teams that were charged with policing it, along with their institutional knowledge.
“No one is left who is an expert on these issues and can do the work. . . . It’s a ghost town,” one team member told Forbes.
Industry professionals cited in the report say it ‘is a gross oversimplification to reduce Twitter’s CSAM problem to a handful of hashtags’. They argue that Musk is being praised for something he hasn’t even attempted to do.
According to Carolina Christofoletti, a CSAM threat analyst for TRM Labs and a researcher at the University of São Paulo in Brazil, “I don’t see any meaningful action taken by Musk so far.” She has warned that the problem “is much bigger than ‘easy to catch’ fishes” and “far bigger than a bunch of hashtags.” She noted that the removal of several problematic CSAM hashtags and other actions associated with them “were all things done under the previous leadership.”
“He has drained the child safety team on Twitter without any risk impact whatsoever,” she continued, pointing to a tweet by Musk in which he asked users, rather than his internal team, for suggestions on how to better tackle CSAM.
In a post that she made on LinkedIn, she stated, “Twitter is exactly how it was, there is nothing new under the sun here.” The CSAM networks are still operational, just like they were in the past, the report further stated.
The National Center on Sexual Exploitation has issued a warning in the meantime, stating that Musk’s rumoured plans to move forward with a paywalled video feature, similar to OnlyFans, would only make the already pressing concerns regarding the safety of children on the site even worse. “Musk cannot confront child sexual abuse material on Twitter while creating another means that would only fuel sexual abuse and exploitation on the platform,” Lina Nealon, director of corporate and strategic initiatives, said in a public statement on Tuesday.