Why Twitter's new anti-harassment tools will fail
Source: Mike Elgan


Twitter is trying to curb harassment. Again.

Twitter VP of Engineering Ed Ho this week announced three changes that Twitter believes will end its reputation as a haven for trolls, haters, spammers, misogynists, racists and idiots.

Two years ago, Twitter's then-CEO Dick Costolo was quoted in a leaked memo as saying, "We suck at dealing with abuse and trolls on the platform and we've sucked at it for years."

Twitter's reputation may be hurting its bottom line. The company this week reported its slowest quarter of revenue growth since going public. Twitter consistently loses about $100 million per quarter. Advertising is weak. Revenue is down. User growth is flat.

Advertisers don't want to be associated with Twitter because it's a bad neighborhood. Harassment is a perennial problem on the network. Companies apparently don't want their brands sandwiched between nuggets of hate speech. (Twitter executives declined to be interviewed for this story.)

Sadly, Twitter's changes won't stop harassment. They'll make the problem worse. And they'll make the quality of everyday conversations on Twitter worse, too.

I'll tell you why. But first, let's look at Twitter's three new changes.

First, Twitter claims it's "taking steps to identify people who have been permanently suspended and stop them from creating new accounts." Twitter won't say what these methods are.

Second, Twitter intends to remove tweets by default from search results if they contain "potentially sensitive content" or if they were posted on accounts that have been blocked or muted. However, users have the option to disable this "smart search" and see unfiltered results.

Third, Twitter will automatically "collapse" or hide objectionable replies, which include those deemed "abusive" or "low quality." These will be viewable by clicking on a link that says "Show less relevant replies."

All this sounds pretty good, until you take a closer look.

Twitter is relying in part on software-controlled moderation. This approach is inferior to user moderation.

The most effective user moderation is the ability to delete the replies of other users.

If you post something on a social network, you should be able to delete any reply to that post from any user for any reason. You can delete replies on Facebook, Google+, YouTube, Instagram, Pinterest, Tumblr and every other social network I can think of.

Not Twitter. If someone replies to your tweet with something hateful, hurtful or threatening, you can't delete the reply.

Trolls love this about Twitter. This is why Twitter has a harassment problem.

User moderation works because people are good at knowing when replies are made in bad faith. Their ability to delete replies frustrates trolls, who learn that their efforts go nowhere. Sure, trolls can post hateful comments elsewhere. But without the power to disrupt conversations against the wishes of the people who started them, harassment is futile.
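To make that rule concrete, here is a minimal sketch in Python. The names and structure are invented for illustration; this is not any platform's actual API. It simply encodes the principle that the person who started a conversation can remove any reply to it.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Reply:
    author: str
    text: str

@dataclass
class Post:
    author: str
    text: str
    replies: List[Reply] = field(default_factory=list)

    def delete_reply(self, requester: str, reply: Reply) -> bool:
        # Only the post's author (or the reply's own author) may remove a
        # reply. This single rule is the "user moderation" the article says
        # Twitter lacks.
        if requester in (self.author, reply.author):
            self.replies.remove(reply)
            return True
        return False

post = Post("alice", "Mystery Pic: what is this?")
troll_reply = Reply("troll42", "hateful remark")
post.replies.append(troll_reply)

print(post.delete_reply("bystander", troll_reply))  # False: strangers can't moderate
print(post.delete_reply("alice", troll_reply))      # True: the post's author can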

Instead of giving users the power to delete replies, Twitter relies on two alternatives: a user reporting system and automated software-based moderation (with help from Twitter staff).

The trouble is that software can't identify bad replies as well as people can. I learned this firsthand on Google+.

Google may have the industry's best algorithms and artificial intelligence. But even Google's software fails at content moderation. Google has been algorithmically flagging "low quality" replies on Google+ for years. It hides flagged replies by default; users can reveal them with an obscure "See all comments" menu item that most users don't know about.

I would guess that around 10 percent of high quality replies are flagged as "low quality." And probably an equal number of "low quality" replies are judged as "high quality" and allowed to appear among the other replies. Filtering software just doesn't work that well.

For example, I recently posted on Google+ what I call a "Mystery Pic." I post a picture of some technology thing and invite followers to guess what's in the picture.

One person guessed: "A pod or a wheel from office chair" -- exactly the kind of reply I was looking for. Google's software flagged that comment as spam and buried it so nobody could see it.

Just below that, someone posted a hyphen and nothing else, an incredibly "low quality" reply. Google's software identified that hyphen as a high-quality reply and allowed it to remain.

Software simply isn't advanced enough to judge language.
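To see why, consider a deliberately naive sketch of this kind of filtering in Python. The keyword list and thresholds below are invented purely for illustration; they are not Google's or Twitter's actual rules. The point is that surface features are a poor proxy for meaning, so replies like the two above get misclassified.

# Hypothetical heuristics only; real systems are more sophisticated, but the
# failure mode is the same: surface features are a poor proxy for meaning.
SPAM_KEYWORDS = {"chair", "wheel", "office", "free", "deal", "buy"}

def looks_low_quality(reply: str) -> bool:
    words = [w.strip(".,!?") for w in reply.lower().split()]
    # Heuristic 1: product-like words get a reply flagged, which catches a
    # perfectly good guess about an office-chair wheel.
    if any(w in SPAM_KEYWORDS for w in words):
        return True
    # Heuristic 2: flag empty replies. A lone hyphen still counts as one
    # "word" after splitting, so a meaningless "-" sails through.
    return len(words) == 0

for reply in ["A pod or a wheel from office chair", "-"]:
    verdict = "buried" if looks_low_quality(reply) else "allowed"
    print(repr(reply), "->", verdict)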

Google's automated identification and hiding of "low quality" comments fail, and the result is that Google+ simply isn't as good as it could be. Bad comments appear. Good comments are buried. The same outcome is likely on Twitter.

The difference is that on Google+, I can make the effort to correct bad decisions by the software. On Twitter, I can't. When Twitter's systems fail to identify a bad reply, the bad reply will remain.

When Twitter's systems bury a good reply, I can see the good reply by clicking on a link. But it will remain buried for other users, thereby degrading the quality of conversation on Twitter.

Trolls will figure out how to game the system to avoid being buried.

Any kind of system designed to keep motivated people out -- whether it's anti-hacker, anti-spam or, in Twitter's case, anti-harasser -- is essentially an arms race. As Twitter tries to develop better anti-harassment systems, the harassers will learn how to get around them.

Compounding its unwillingness to allow industry-standard user moderation, Twitter also resists pseudonymity -- a model in which Twitter knows who you are but the public does not. Twitter users are fully anonymous. That means any so-called "user" might be a bot, a troll paid by the Russian government or one person with 100 accounts. There's usually no way to know.

(Researchers last month discovered a network of 350,000 fake user accounts controlled by automated bots that had existed undetected for years.)

The anonymity of Twitter users makes me doubt that the company can succeed in its attempt to "identify people who have been permanently suspended and stop them from creating new accounts." How can you "identify people" without, you know, identifying people? And how will Twitter avoid blocking new, legitimate accounts by accident?

It gets worse: One of Twitter's new anti-harassment systems itself can be used for harassment.

Twitter promises to remove from search results tweets posted by accounts that have been blocked or muted. This is the best thing that ever happened to haters.

Now abusers can simply block or mute their victim, either as an organized group or as a single person with dozens of accounts, and thereby remove the victim's tweets from search results. Victims may not even know they have been pushed out of search results.

Many victims of harassment will be automatically silenced in Twitter search the second this change goes into effect (it's being rolled out over the coming weeks).

For example, let's say a woman has been speaking out on Twitter about sexism in video games. And let's say she has attracted haters and has been arguing with them on Twitter for the past two years. No doubt she has been blocked by dozens or hundreds of Twitter users. Now, when the new rules go into effect, her past, present and future tweets will be removed from search results.
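Here is a minimal sketch of that failure mode, assuming, purely for illustration, that search demotes any author who has been blocked or muted by enough accounts. The threshold and data model are invented, not Twitter's actual implementation.

from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    blocked_or_muted_by: set = field(default_factory=set)

HIDE_THRESHOLD = 50  # assumed cutoff, chosen only for this example

def visible_in_search(author: Account) -> bool:
    # Once "enough" accounts have blocked or muted an author, her tweets
    # drop out of search results, regardless of whether those blocks came
    # from genuine complaints or from a coordinated harassment campaign.
    return len(author.blocked_or_muted_by) < HIDE_THRESHOLD

critic = Account("game_critic")
for i in range(60):  # one troll with 60 sock-puppet accounts
    critic.blocked_or_muted_by.add(f"sockpuppet_{i}")

print(visible_in_search(critic))  # False: the victim vanishes from search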

If the haters haven't silenced her yet, Twitter will finish the job.

Only three methods can effectively curb out-of-control harassment on social sites: user moderation of comments, user ranking of comments and the disallowance of anonymity.

Twitter rejects all three methods, and so chronically suffers from a reputation for harassment.

Imagine if Twitter accepted all three. It could be the greatest site on the Internet.

Instead, Twitter has now implemented software-controlled moderation and other systems that will reduce the quality of Twitter conversations and silence constructive users. These systems will be gamed by active trolls and will generally make Twitter a lower-quality social site.

Twitter is in trouble. The company isn't growing, profiting or succeeding.

Twitter should be the world's town square, a level playing field that unites everyone in conversation and sharing. Never before has there been a social site so relevant, and at the same time so unsuccessful.

Twitter is ruined by its reputation as a hotbed of harassment, trolling and vitriol. And it's so unnecessary.

Twitter's refusal to implement user moderation, which would solve the problem overnight at close to zero cost to the company, is business malpractice in the extreme and an abdication of the trust that millions of users have placed in the company.

Twitter wants to be great. But the company's leaders simply won't allow that to happen.

This story, "Why Twitter's new anti-harassment tools will fail" was originally published by Computerworld.


