The world's largest social network published enforcement numbers for the first time on Wednesday, revealing millions of standards violations in the six months to March.
The social media network took action against 1.9 million posts containing Islamic State, al-Qaida and related terrorism propaganda before users reported them in the first quarter of this year, Facebook said in a report.
The company estimates that around 3 to 4 percent of the active Facebook accounts on the site during the first three months of 2018 were fake, Facebook vice president Guy Rosen said.
As for graphic violent content, Facebook said more than 3.4 million posts were either taken down or given warning labels, 86% of which were spotted by its detection tools.
On Tuesday, Facebook said it took action on some 2.5 million hateful pieces of content in the first three months of 2018, up from 1.6 million in the last three months of 2017.
Facebook said Tuesday it took down 21 million "pieces of adult nudity and sexual activity" in the first quarter of 2018, and that 96 percent of that was discovered and flagged by the company's technology before it was reported. Furthermore, 2.5 million pieces of hate speech were removed, although Rosen conceded that Facebook's technology still has work to do in this category, as only 38 percent was flagged automatically.
"We have lots of work still to do to prevent abuse", Facebook VP Guy Rosen wrote in a separate post.
The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump's 2016 campaign to harvest personal information on as many as 87 million users. "In other words, of every 10,000 content views, an estimate of 22 to 27 contained graphic violence", the report said. Facebook noted that while its artificial intelligence technology found and flagged many standards violations, more progress was needed.
Facebook pulled or slapped warnings on almost 30 million posts containing sexual or violent images, terrorist propaganda or hate speech during the first quarter.
"For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue".
Facebook's self-assessment showed its screening system is far better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda.
"Today's report gives you a detailed description of our internal processes and data methodology".
"In addition, in many areas - whether it's spam, porn or fake accounts - we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts".