Facebook moves to curb spread of terror, hate speech
The new guidelines give more clarity on acceptable posts relating to violence, hate speech, nudity and other contentious topics.
The new document said Facebook will not allow a presence from groups advocating “terrorist activity, organized criminal activity or promoting hate.”
The move comes as Facebook and other social networks struggle to balance defining acceptable content with freedom of expression, and as these platforms are increasingly linked to radical extremism and violence.
Last month, French Interior Minister Bernard Cazeneuve urged online giants Apple, Facebook, Google and Twitter to discuss ways to thwart terrorists from using the platforms for recruitment and fundraising.
Videos of gruesome executions have been frequently distributed online by the Islamic State group as a propaganda tool.
The new guidelines say Facebook will take down “graphic images when they are shared for sadistic pleasure or to celebrate or glorify violence.”
On terrorist or criminal organizations, Facebook also said it would not tolerate “supporting or praising leaders of those same organizations, or condoning their violent activities.”
Facebook said meanwhile that nudity would be banned in many cases but allowed for images of breastfeeding, art or discussions of medical conditions.
“These standards are designed to create an environment where people feel motivated and empowered to treat each other with empathy and respect,” said a blog post from Facebook global policy chief Monika Bickert and deputy general counsel Chris Sonderby.
The new guidelines also say Facebook members should use their “authentic name,” a move that appears to address criticism from people who used stage or performance names instead of their legal name.
In October Facebook said it would ease its “real names” policy that prompted drag queen performers to quit the social network and sparked wider protests in the gay community and beyond.
The new Facebook guidelines clearly ban so-called “cyberbullying,” barring any content “that appears to purposefully target private individuals with the intention of degrading or shaming them.”
‘Risk of physical harm’
Facebook said it would remove content, disable accounts and work with law enforcement “when we believe that there is a genuine risk of physical harm or direct threats to public safety.”
But it also pointed out “that something that may be disagreeable or disturbing to you may not violate our community standards.”
“It’s a challenge to maintain one set of standards that meets the needs of a diverse global community,” the blog post said.
“This is particularly challenging for issues such as hate speech. Hate speech has always been banned on Facebook, and in our new community standards, we explain our efforts to keep our community free from this kind of abusive language.”
Facebook said earlier this year it was putting warnings on “graphic content,” which would also be banned for users under 18.
In 2013, Facebook reinstated a ban on a beheading video after its decision to lift the ban sparked outrage.
Twitter meanwhile has become the latest online platform to ban “revenge porn,” or the posting of sexually explicit images of a person without consent.
Twitter faced threats after blocking accounts linked to supporters of the Islamic State, but one study showed at least 46,000 Twitter accounts have been linked to the group.
Facebook at the same time released its report on government requests for user data in the second half of 2014, showing a modest uptick to 35,051 from 34,946 in the prior period.
“There was an increase in data requests from certain governments such as India, and a decline in requests from countries such as the United States and Germany,” the blog post said.
The amount of content restricted for violating local law increased by 11 percent, to 9,707 cases from 8,774.