
Twitter's hate speech rules are expanded

Image source, Getty Images
Image caption, Twitter has not disclosed which accounts face closure as a consequence of its new rules

Twitter has widened its definition of hateful and harmful behaviour on its platform, and says it will begin enforcing stricter rules against it.

Information contained in a person's profile, regardless of what they actually tweet, will now be considered.

Those who express an affiliation with groups that use or celebrate violence to achieve their aims will be permanently suspended, Twitter said.

Hateful imagery - such as the Nazi swastika - will now be hidden.

A "sensitive media" prompt will be shown to users before they can opt to view it.

But such content will no longer be allowed on a person's profile page, and users will be asked to remove it. Repeat violators will be banned.

The company said the move would "reduce the amount of abusive behaviour and hateful conduct" on the network. A spokeswoman confirmed profiles would be removed, but would not give an example of an account in violation of the rules.

"If an account’s profile information includes a violent threat or multiple slurs, epithets, racist or sexist tropes, incites fear, or reduces someone to less than human, it will be permanently suspended," she explained.

"We plan to develop internal tools to help us identify violating accounts to supplement user reports."

Twitter has promised a more robust system to appeal against decisions, but said that it was still in development.

Defining hate

The new rulings will also have an important exception.

"This policy does not apply to military or government entities and we will consider exceptions for groups that are currently engaging in (or have engaged in) peaceful resolution," the company said.

The changes had been made following consultations with Twitter's Trust and Safety Council, a group consisting of representatives from more than 40 organisations dealing with, among other things, anti-Semitism, homophobia, sexism and racism.

Twitter has defined hateful imagery as "logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin".

The announcement is Twitter's latest attempt, in a difficult year for the company, to clamp down on what many people consider its most pressing issue: disgusting behaviour from a significant number of users.

Image source, Getty Images
Image caption, President Trump caused controversy when he retweeted Britain First's deputy leader

The challenge for the company has been to grapple with offensive content while not being seen to censor legitimate political views.

This was recently brought sharply into view when US President Donald Trump retweeted three tweets by Jayda Fransen, deputy leader of far-right group Britain First. Ms Fransen was previously convicted of religiously aggravated harassment.

UK Prime Minister Theresa May's official spokesman said it was "wrong for the president to have done this".

A Twitter spokesperson would not say whether Britain First would fall foul of the stricter rules, adding that the company does not comment on individual groups or accounts.
