Facebook said Tuesday it took down 21 million "pieces of adult nudity and sexual activity" in the first quarter of 2018, and that 96 percent of that was discovered and flagged by the company's technology before it was reported.
Facebook also said it removed 583 million fake accounts in the same period, or the equivalent of 3 to 4 percent of its monthly users. The company credited better detection, even as it said computer programs have trouble understanding context and tone of language.
On Tuesday, Facebook said it took action on some 2.5 million hateful pieces of content in the first three months of 2018, up from 1.6 million in the last three months of 2017.
The new report was released in conjunction with Facebook's latest Transparency Report, which said that across the world government requests for account data increased by 4 percent in the second half of 2017 compared to the first half.
A spokeswoman later said that Facebook blocks "disturbing or sensitive content such as graphic violence" so that users under 18 cannot see it "regardless of whether it is removed from Facebook".
The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump's 2016 campaign to harvest personal information on as many as 87 million users.
For years, Facebook has relied on users to report offensive and threatening content.
In Facebook's first quarterly Community Standards Enforcement Report, the company said most of its moderation activity was waged against fake accounts and spam posts, with 837 million spam posts and 583 million fake accounts acted upon.
Explaining the figures, the company's vice president of product management Guy Rosen said: "It's important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what's important and what works."
The platform also revealed how much content its automated systems were picking up and how much was reported by users: "The rate at which we can do this is high for some violations, meaning we find and flag most content before users do."
Facebook took action on 1.9 million pieces of content for terrorist propaganda.
Nevertheless, the company took down nearly twice as much content in both categories during this year's first quarter as it did in the final quarter of 2017.
Now, however, artificial intelligence technology does much of that work.
But hate speech is a problem for Facebook today, as the company's struggle to stem the flow of fake news and content meant to encourage violence against Muslims in Myanmar has shown.
"We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 percent of which was identified by our technology before it was reported to Facebook". The company says it found and flagged nearly 100 percent of spam content in both Q1 and Q4.
"These kinds of metrics can help our teams understand what's actually happening to 2-plus billion people", he said. That doesn't include what Facebook says are "millions" of fake accounts that the company catches before they can finish registering. "While not always flawless, this combination helps us find and flag potentially violating content at scale before many people see or report it".