Saturday, August 19, 2017

Twitter Grapples With "Verified" White Supremacists As Other Tech Companies Crack Down On Hate Speech


The reaction from major tech companies to the deadly white supremacist rally in Charlottesville, Virginia, was swift.

Apple cut off white supremacists from Apple Pay. Google and GoDaddy booted a Nazi website. And Facebook, WordPress, OkCupid, and others moved to ban white supremacists or crack down on hate speech.

Meanwhile on Twitter, people who use their accounts to spread white supremacist messages haven't just been left alone; they're operating with coveted blue "verification" checkmarks, putting the social media giant in the increasingly difficult position of trying to defend the "all speech" tenet it was founded on against a user base that is demanding more accountability.

Protesters against racism march through Oakland on Aug. 12, 2017. (Noah Berger / AP)

While Twitter says the checkmarks are meant to confirm that users are who they say they are on the social network, many see them as symbols of legitimacy or an indication of a user's prominence.

In the wake of the Charlottesville attack, Twitter did suspend a few accounts, but observers are questioning the company's decision to stand by the "verified" checkmarks on accounts associated with white supremacism, some of which rally massive troll armies and distribute everything from racist Pepe the Frog memes to Nazi imagery.

Twitter / Via Twitter: @RichardBSpencer

Twitter insists that the blue checkmark isn't an endorsement of the content an account shares and doesn't constitute special or elevated status. Instead, verification is supposed to let "people know that an account of public interest is authentic," according to Twitter's official description.

"Typically this includes accounts maintained by users in music, acting, fashion, government, politics, religion, journalism, media, sports, business, and other key interest areas," the company adds.

Twitter's pages about hateful conduct and online abuse don't mention anything about verification. If someone does break Twitter's rules — such as harassing another user — they face penalties that include having their account suspended.

But there's one very high profile case that significantly muddies Twitter's explanation of verification as a simple tool to tell who's who: Milo Yiannopoulos, the right-wing provocateur who lost his verified status last year.

Citing a policy of not commenting on specific accounts, Twitter has refused to say what Yiannopoulos did to lose his verification (he was later permanently banned). Yiannopoulos has never said specifically why he believed his verification was removed, but one Twitter executive suggested it was over a tweet containing the phrase: “You deserve to be harassed.”

A person familiar with the situation who spoke on the condition of anonymity told BuzzFeed News the decision to take action against Yiannopoulos was hotly debated inside Twitter at the time. Some argued that suspension was the better way to handle Yiannopoulos.

"The ultimate decision was to do the verification, I think in part because, at the time, the policies, as written, made it quite difficult to suspend him," the person said. "Because it was sort of the case of, 'We don’t want him on the platform, but he knows the rules really well.'"

After losing his verification, Yiannopoulos told BuzzFeed News that Twitter was using "a tool for establishing the identity of prominent people as an ideological weapon."

"Obviously it also confers a sense of legitimacy," he said.

The person familiar with the situation agreed that after its introduction, verification became more than just a way to identify if a person is who they say they are.

"That badge became literally sort of a badge of honor," the person said. "People craved having the checkmark as a status symbol."

As a result, in the days following Charlottesville, numerous users took to the platform — with some tweeting directly at Twitter CEO Jack Dorsey — to ask why people associated with extremism still have that "badge of honor."

Some of the verified users spotlighted by observers used Twitter to promote the white supremacist rally in Charlottesville, and the social network remains their refuge as other major tech companies crack down on their presence.

Alt-right figures such as Tim Gionet, better known by his Twitter handle @bakedalaska, tweeted promotional material for the rally. Richard Spencer, who was once temporarily banned from Twitter, shared numerous images and messages glorifying the scene. And a woman who uses the pseudonym Ayla and the Twitter handle @apurposefulwife invited people to watch her speak at the rally.

Questions have been raised about a number of other verified accounts as well, some of which weren't directly involved in Charlottesville, but routinely share racist content.

At the same time, some of these users are facing crackdowns on other platforms. After the recent violence, PayPal and web hosting company Squarespace cut off Spencer's National Policy Institute, a white nationalist think tank.

The pushback has been so widespread that on Friday Ayla wrote a blog post criticizing tech companies for "deplatforming us," while Spencer wrote that "corporate America" was campaigning "to shut our web outlets down."

When they needed to get their message out about the crackdown, they turned to Twitter.

Yiannopoulos said the platform is "structured to give more features and visibility to verified users."

The person familiar with Twitter's policies said that in the past, verified accounts were prioritized by the social network's algorithms and would land higher in search results and "top tweets" sections. That prioritization actually came up during the conversation about what to do with Yiannopoulos, with some in the company arguing that "we should be under no obligation to promote someone that we feel is bad for the platform."

Twitter did not respond to questions about whether tweets from verified accounts still get priority.

John Wihbey, a media professor at Northeastern University who has studied Twitter, told BuzzFeed News that it is widely believed verification "generates an additional layer of trust."

Wihbey said the boost in perceived legitimacy may be waning today (Yiannopoulos also said verification isn't what it used to be), but the perception still lingers.

"The blue checkmark is an important status symbol and it’s also a signaling that you’ve gone through some kind of vetting process," he said.

In the wake of Charlottesville and President Trump's much-criticized remarks blaming "both sides" for the violence, many Twitter users are frustrated that the platform appears to be treating every side equally, a position at odds with other tech companies.

Twitter's approach to verification "isn't coherent in terms of the platform's overall approach to identity," said Nicco Mele, director of Harvard's Shorenstein Center on Media, Politics and Public Policy.

"They don’t want to be gatekeepers, and yet sometimes they are gatekeepers," he said. "Are they going to privilege truth? Or are they going to treat InfoWars the same way they’re going to treat the New York Times and BuzzFeed?"

The person familiar with Twitter's policies said that there are also differing opinions within the company, adding that "many view it as a good idea that got out of control and would be just as happy if verification didn’t exist."

Wihbey, who said he spoke with Dorsey in the social network's early days, added that Twitter simply may not have been prepared for its evolution into a home for extremists with massive followings.

"To be honest, this whole 'herds of white nationalists and Russian bots,' it was just not foreseeable when they were founding the company and setting the rules of the game," Wihbey said. "And once that ship is out to sea, it’s pretty hard to rebuild out in the middle of the ocean."

But while he favors allowing people — even those with "totally repugnant" views — to have Twitter accounts, Wihbey said he doesn't think it's necessary to "help them with additional designations of credibility."

"I think it's a problem for any hate group to be given extra designations, such as a verified account," he said. "I'm not sure they meet the 'public interest' standard as articulated by Twitter."

LINK: Here’s What Really Happened In Charlottesville

LINK: Apple Pay Is Cutting Off White Supremacists

LINK: Twitter's Favorite Excuse Is Failing The Public

from BuzzFeed - USNews http://ift.tt/2uS2qsX
