Facebook launches UK initiative to counter online extremism
By Guo Meiping

2017-06-24 21:44 GMT+8

After coming in for criticism following a spate of terrorist attacks in the last few months, tech and social media giants have been working on counter-terrorism measures.

Facebook launched a UK initiative on Friday to train and fund local organizations that fight extremist content online, as Internet companies attempt to suppress hate speech and violent material on their platforms.

According to the company, the initiative will train non-governmental organizations to help them monitor and respond to terrorism-related content, and create a dedicated support desk so they can communicate directly with Facebook.

Sheryl Sandberg, chief operating officer of Facebook. /VCG Photo

"The recent terror attacks in London and Manchester are absolutely heartbreaking," said Facebook's chief operating officer, Sheryl Sandberg. "There is no place for hate or violence on Facebook, we use technology like artificial intelligence to find and remove terrorist propaganda, and we have teams of counter-terrorism experts and reviewers around the world working to keep extremist content off our platform."

As Sandberg mentioned, Facebook recently introduced an AI program, which includes image matching and language understanding, working alongside its existing human reviewers to identify and remove extremist content more rapidly.
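Facebook has not published how its image matching works; as a rough illustration of the general technique, the sketch below compares an upload against a database of fingerprints of previously removed images using a simple average hash. The hash values and threshold are hypothetical placeholders, not Facebook's actual system.

from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint that survives rescaling."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        # Each pixel brighter than the average contributes a 1 bit.
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of known propaganda images.
KNOWN_HASHES = {0x8F3C0000FF001234}  # placeholder value

def matches_known_content(path: str, threshold: int = 5) -> bool:
    """Flag an image whose fingerprint is near any known fingerprint."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold for known in KNOWN_HASHES)

A near-duplicate of a banned image, even after resizing or recompression, lands within a few bits of the stored fingerprint and is flagged for review.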

Google: Four steps against extremist content

Google also announced its own measures against terrorism earlier this month. Kent Walker, general counsel at Google, said the company is "committed to being part of the solution" in tackling online extremist content. "There should be no place for terrorist content on our services," he added.


Walker outlined four new steps Google will take to combat extremist material on its YouTube video platform:

1. Investing more engineering resources into its artificial intelligence software, which can be trained to detect and remove terrorism-related content.

2. Expanding the number of independent experts in YouTube's Trusted Flagger program and providing additional grants; the program offers more robust reporting tools to people and organizations that are particularly effective at flagging content that harms the community.


3. Taking a tougher stance on videos that do not clearly violate policies. For example, videos that contain inflammatory religious or supremacist content will appear behind an interstitial warning and will not be monetized, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find.

4. Building on YouTube's Creators for Change program, which promotes YouTube voices against hate and radicalization. YouTube will also work with Jigsaw, the company behind "The Redirect Method", which uses ad targeting to steer potential ISIL recruits toward anti-terrorist videos, as sketched below.
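Jigsaw has not released its production code; a minimal sketch of the idea is that searches matching recruitment-related terms surface curated counter-narrative videos. The keyword list and playlist URL below are hypothetical placeholders, not Jigsaw's actual data.

from typing import Optional

# Hypothetical terms associated with recruitment-related search intent.
RECRUITMENT_KEYWORDS = {"join isil", "hijrah travel"}  # placeholder terms
# Hypothetical curated playlist of counter-narrative videos.
COUNTER_NARRATIVE_PLAYLIST = "https://youtube.com/playlist?list=EXAMPLE"

def redirect_target(query: str) -> Optional[str]:
    """Return a counter-narrative playlist URL if the query looks like
    recruitment-related intent, else None (show organic results)."""
    normalized = query.lower().strip()
    if any(keyword in normalized for keyword in RECRUITMENT_KEYWORDS):
        return COUNTER_NARRATIVE_PLAYLIST
    return None

The real system reportedly works through ad targeting rather than altering search results directly, but the matching logic follows the same pattern: detect at-risk intent, then surface counter-messaging.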

Twitter: More than 300,000 accounts suspended


Twitter has also been working on creating a cleaner environment for its users. The company said in its twice-yearly transparency report, released this March, that it had suspended nearly 377,000 accounts in the preceding six months.

These accounts were suspended because of terrorism-related content. It was the first time Twitter had included its efforts to fight violent extremism in its transparency reports.
