Facebook, Instagram ramp up removal of hate speech, bullying content


Facebook removed 26.9 million pieces of hate speech content in the fourth quarter of 2020 -- up from 22.1 million in Q3 -- and the company credits improvements in its automated systems for catching and purging such content.

The social network also blocked 6.4 million pieces of organised hate content in the October-December period from its main platform, up from 4 million in Q3.

Instagram also saw significant jumps in hate speech, bullying and self-harm removals, according to the company's 'Community Standards Enforcement Report'.

Facebook took down 6.3 million pieces of bullying and harassment content from its platform in the fourth quarter of 2020 -- up from 3.5 million in Q3.

"Last quarter, we shared the prevalence of hate speech on Facebook for the first time to show the percentage of times people see this type of content on our platform. This quarter, hate speech prevalence dropped, seven to eight views of hate speech for every 10,000 views of content," Guy Rosen, VP Integrity at Facebook, said in a blog post late on Thursday.

Facebook said that its proactive rate for bullying and harassment went from 26 per cent in Q3 to 49 per cent in Q4 on its main platform, and from 55 per cent to 80 per cent on Instagram.

"Improvements to our AI in areas where nuance and context are essential, such as hate speech or bullying and harassment, helped us better scale our efforts to keep people safe," Rosen informed.

The company also took action on 2.5 million pieces of suicide and self-injury content in Q4, up from 1.3 million in Q3 due to increased reviewer capacity.

On Instagram, it blocked 3.4 million pieces of suicide and self-injury content, up from 1.3 million in Q3.

"This year, we plan to share additional metrics on Instagram and add new policy categories on Facebook. We're also working to make our enforcement data easier for people to understand by making these reports more interactive," Facebook said.

Last year, Facebook committed to undertaking an independent, third-party audit of its content moderation systems to validate the numbers it publishes, a process that will begin this year.

The company has already made changes to News Feed in ways that reduce the amount of hate speech and violent content users see.

--IANS




