The US made the highest number of legal process requests at 47,958, followed by India with 24,944. Facebook produced data in response to 14,345 of India's legal process requests; across all requests made by India, it produced some data for 15,206 accounts.
"As always, we scrutinize every government request we receive to make sure it is legally valid, no matter which government makes the request. If a request appears deficient or overly broad, we push back, and will fight in court, if necessary. We do not provide governments with “back doors” to people’s information," added Sonderby in the post.
Facebook also released its latest Community Standards Enforcement Report and said it removed about 4.7 million pieces of content globally on the platform connected to organized hate, an increase of over 3 million pieces of content from the previous quarter.
The report provides metrics on how well Facebook and Instagram
enforced their policies from October 2019 through March 2020.
"We’ve spent the last few years building tools, teams and technologies to help protect elections from interference, prevent misinformation from spreading on our apps and keep people safe from harmful content," said Guy Rosen, VP of Integrity, in a post.
Facebook claimed that it is now able to proactively find almost 90 per cent of hate speech that is taken down from the platform, compared to 24 per cent in 2018. This was made possible because Facebook expanded its proactive detection technology to more languages.
The company also increased its proactive detection rate, which is the content it removes on its own before someone reports it, for organized hate, to 96.7 per cent in Q1 2020 from 89.6 per cent in Q4 2019.
On Instagram, the proactive detection rate for organized hate increased from 57.6 per cent to 68.9 per cent, and 175,000 pieces of content were removed in Q1 2020, up from 139,800 the previous quarter.
Sharing enforcement data for bullying on Instagram for the first time in this report, the Menlo Park-based firm said it took action on 1.5 million pieces of content in both Q4 2019 and Q1 2020.
"On Instagram, we made improvements to our text and image matching technology to help us find more suicide and self-injury content. As a result, we increased the amount of content we took action on by 40 per cent and increased our proactive detection rate by more than 12 points since the last report," said Rosen.
As part of this report, Facebook has added new data on hate speech, adult nudity and sexual activity, violent and graphic content, and bullying and harassment for Instagram, and organized hate on Facebook and Instagram.
The Community Standards report does not reflect the full impact of how Facebook tackled misinformation during the pandemic, because it includes data only through March 2020.
COVID-19-related actions:
- In April, Facebook applied warning labels to about 50 million pieces of content related to COVID-19 misinformation, based on around 7,500 articles by its independent fact-checking partners.
- 95 per cent of the time, when someone sees content with a warning label, they do not click through to view it.
- Since March 1, Facebook claims to have removed more than 2.5 million pieces of organic content for the sale of masks, hand sanitizers, surface-disinfecting wipes and COVID-19 test kits.
- For this, it relies on computer vision technology it had previously used to find and remove firearm and drug sales.