Facebook announces steps to tackle terrorism

Facebook has announced that it will begin implementing measures to combat online terrorism and filter out extremist content on its platforms, following pressure from governments around the world to do more to protect online safety.

The social media giant has said it will use artificial intelligence to scour its platforms and spot text, images, and other media relating to terrorism or extremist messaging, as well as to identify fake accounts.
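Facebook has not published details of how its detection models work. Purely as an illustration of the general idea of automated flagging, the sketch below trains a tiny text classifier on invented example posts and labels (everything here is hypothetical, not Facebook's system) and uses it to decide whether a post should be routed for review.

```python
# Hypothetical illustration only: a tiny text classifier that flags posts
# for human review. The example posts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training examples: 1 = should be flagged, 0 = benign.
posts = [
    "join our fight and spread the attack plans",
    "support the violent cause and share this propaganda",
    "happy birthday to my best friend",
    "check out photos from our family holiday",
]
labels = [1, 1, 0, 0]

# Convert text to TF-IDF features and fit a simple classifier.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(posts)
classifier = LogisticRegression()
classifier.fit(features, labels)

def flag_for_review(post: str, threshold: float = 0.5) -> bool:
    """Return True if the post should be sent to a human reviewer."""
    score = classifier.predict_proba(vectorizer.transform([post]))[0][1]
    return score >= threshold

print(flag_for_review("share this propaganda and join the attack"))
```

In practice, a system at Facebook's scale would involve far larger models, image and video analysis, and signals beyond the text itself; the point of the sketch is only that the model's output is a flag for review, not a final judgement.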

The company has opened up about its efforts to improve its record on the issue, after criticism that it had previously done too little, saying it wants to be as transparent as possible.

“We want to be very open with our community about what we’re trying to do to make sure that Facebook is a really hostile environment for terror groups,” said Monika Bickert, the director of global policy management at Facebook.

“Our stance is simple: There’s no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny.”

Talk has been rife, especially in the UK and Europe, of legislation that would force social media networks to regulate their content and limit the ability of terrorist groups to spread their message. With these measures, Facebook has led the way in beating lawmakers to the punch.

The company has also said that human reviewers will work alongside the artificial intelligence, because any content the AI flags must be reviewed for context before it is declared a violation.
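Again, this is not Facebook's implementation. As a rough sketch of the human-in-the-loop step described above, flagged posts could sit in a review queue until a person records the final verdict, so an AI flag alone never counts as a violation; the FlaggedPost and ReviewQueue names below are hypothetical.

```python
# Minimal sketch, assuming a simple queue between the AI and human reviewers.
from dataclasses import dataclass
from collections import deque

@dataclass
class FlaggedPost:
    post_id: str
    text: str
    ai_score: float                    # confidence score from the automated model
    reviewer_verdict: str = "pending"  # set to "violation" or "no_violation" after review

class ReviewQueue:
    """Holds AI-flagged content until a human reviewer makes the final call."""

    def __init__(self) -> None:
        self._pending = deque()  # queue of FlaggedPost awaiting review

    def add(self, post: FlaggedPost) -> None:
        # The automated pipeline pushes red-flagged content here.
        self._pending.append(post)

    def review_next(self, is_violation: bool) -> FlaggedPost:
        # A human reviewer, with full context, records the final decision.
        post = self._pending.popleft()
        post.reviewer_verdict = "violation" if is_violation else "no_violation"
        return post

queue = ReviewQueue()
queue.add(FlaggedPost("post-123", "example flagged text", ai_score=0.91))
decision = queue.review_next(is_violation=False)
print(decision.post_id, decision.reviewer_verdict)
```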
