Facebook disables 1.3 billion fake accounts

Facebook also reported that it took action on 2.5 million pieces of hate speech, 1.9 million pieces of terrorist propaganda, 3.4 million pieces of graphic violence and 21 million pieces of content featuring adult nudity and sexual activity. According to the report, the 3.4 million pieces of graphic violence that Facebook removed or placed behind a warning screen in the first quarter were almost triple the 1.2 million a quarter earlier.

While the removal of 583 million fake Facebook accounts in the first quarter is perhaps the biggest takeaway from the report, the company pointed out that its flagging and removal metrics had improved compared with previous quarters, thanks in part to better photo-detection technology that can match both old and newly posted content.

Facebook also released statistics that quantified how pervasive fake accounts have become on its influential service, despite a long-standing policy requiring people to set up accounts under their real-life identities.

Facebook released its Community Standards Enforcement Report to showcase what the company has done to protect its users. If Facebook tamps down on bad content, as some analysts predict it must, it is less likely to lose users and the advertising that accounts for 98% of its annual revenue. The company cautioned, however, that "this is very much a work in progress and we will likely change our methodology as we learn more about what's important, and what works".

Facebook says that during the period, between 0.22% and 0.27% of content views involved material that violated its standards around graphic violence. The report does not, however, cover how much inappropriate content Facebook missed.

The world's largest social network published enforcement numbers for the first time on Wednesday, revealing millions of standards violations in the six months to March.

Commons Culture Committee chairman Damian Collins said Facebook failed to provide "a sufficient level of detail and transparency" in its response following an appearance by chief technology officer Mike Schroepfer.

The report also revealed that 836 million instances of spam had action taken against them.

Only 38 percent of the hate speech Facebook acted on was flagged by automation, which fails to interpret nuances like counter-speech, self-referential comments or sarcasm.

"As Mark said at F8, we have a lot of work still to do to prevent abuse," the company said.

"AI still needs to get better before we can use it to effectively remove more linguistically nuanced issues like hate speech in different languages, but we're working on it", said Zuckerberg to CNet. It said the rise was due to improvements in detection.

Facebook has also come under fire for showing too much zeal on this front, for example by removing images of artwork that were permissible under its own rules.

The problem, as Facebook's VP of product management Guy Rosen wrote in the blog post announcing the report, is that AI systems are still years away from being effective enough to be relied upon to catch most bad content.
