Twitter Expands Use of Enforcement System to COVID-19 Falsehoods

Twitter Inc. is expanding the use of its strike system to include users who spread misleading information about Covid-19 and its vaccines, the social-media platform’s latest attempt to curb the spread of potentially harmful content.

Source: WSJ | Published on March 2, 2021


The new rules, disclosed Monday in a corporate blog post, are an update of plans the company rolled out last year to help tamp down coronavirus misinformation and amplify authoritative sources during the pandemic.

Twitter said it would begin applying labels to posts about vaccines that contain conspiracy theories or rhetoric not grounded in research or credible reporting. Similar to the company’s election-integrity policy update in January, Twitter said it would permanently suspend users who violate its Covid-19 misinformation policy five times.

The first strike results only in a warning; subsequent strikes lock accounts for either 12 hours or seven days. Users can appeal locks or suspensions, the company said.
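For readers who want the escalation spelled out, the sketch below is a minimal illustration of the strike logic as described in the article: a warning on the first strike, temporary locks on subsequent strikes, and permanent suspension at five. It is not Twitter's actual enforcement code, and the split between 12-hour and 7-day locks is an assumption for illustration only.

```python
def enforcement_action(strike_count: int) -> str:
    """Illustrative only: map a cumulative strike count to an enforcement action,
    following the escalation described in the article. The thresholds for the
    12-hour vs. 7-day locks are assumed; Twitter has not published its exact logic."""
    if strike_count <= 0:
        return "no action"
    if strike_count == 1:
        return "warning"                  # first strike: warning only
    if strike_count >= 5:
        return "permanent suspension"     # fifth strike: permanent suspension
    # Strikes 2-4: a temporary account lock (the split below is an assumed example).
    return "12-hour lock" if strike_count <= 3 else "7-day lock"


if __name__ == "__main__":
    for strikes in range(1, 6):
        print(strikes, "->", enforcement_action(strikes))
```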

The company said it has removed more than 8,400 tweets and challenged 11.5 million accounts globally since issuing the earlier guidelines about Covid-19 posts.

“Through the use of the strike system, we hope to educate people on why certain content breaks our rules so they have the opportunity to further consider their behavior and their impact on the public conversation,” the company said in the post.

The company has been targeting misleading information about the pandemic, including how the virus spreads in communities, false claims about preventive measures, and suggestions that vaccines are used to harm populations.

Social-media giants have wrestled with false and misleading information on their platforms about sensitive topics, such as voting, election integrity and the pandemic. They have faced scrutiny from government agencies, politicians and others over their moves to curb the spread of misinformation.

Facebook Inc. in December said it would start removing false claims and conspiracy theories about Covid-19 vaccines that have been debunked by public health experts.

Last month Facebook expanded the list of debunked claims about Covid-19 and vaccines that would be removed from the platform and said it had purged 12 million pieces of content from Facebook and Instagram under its virus misinformation policy.

Twitter Chief Executive Jack Dorsey last week said the company needs to build greater trust with its user base and that it plans to be more transparent around its content-moderation practices.

“We agree many people don’t trust us,” Mr. Dorsey, Twitter’s co-founder, said during a virtual meeting with analysts. “Never has this been more pronounced than the last few years.”

Twitter has faced controversies over some of its moderation practices. Last year, it put restrictions on the New York Post’s account and blocked users from sharing links to a pair of the newspaper’s stories about President Biden’s son, Hunter Biden. Company officials later pledged changes to how certain content rules would be enforced and to provide more context around such decisions.

The policy update disclosed Monday also shows that Twitter remains interested in using content labels as part of its moderation tool kit. The company has said that applying labels to posts can help reduce the spread of misinformation. Twitter made that determination after studying U.S. election-related tweets it had flagged as containing disputed or potentially misleading information over a roughly two-week period spanning October and November.