Facebook removed 3.2 billion fake accounts between April and September, along with millions of posts depicting child abuse and suicide, according to its latest content moderation report.
That more than doubles the number of fake accounts taken down during the same period last year, when 1.55 billion accounts were removed, according to the report.
The world’s biggest social network disclosed for the first time how many posts it removed from its popular photo-sharing app Instagram, which disinformation researchers have identified as a growing area of concern for fake news.
Proactive detection of violating content was lower across all categories on Instagram than on Facebook’s flagship app, where the company initially implemented many of its detection tools, the company said in its fourth content moderation report.
For example, the company proactively detected content affiliated with terrorist organisations 98.5% of the time on Facebook and 92.2% of the time on Instagram.
It removed more than 11.6 million items depicting child nudity and sexual exploitation of children on Facebook and 754,000 items on Instagram during the third quarter.
Law enforcement officials are concerned that Facebook’s plans to provide greater privacy to users by encrypting its messaging services will hamper efforts to fight child abuse.
Last month, FBI Director Christopher Wray said the changes would make the platform a “dream come true for predators and child pornographers.”
Facebook added data on actions taken around content involving self-harm for the first time. It removed about 2.5 million posts in the third quarter that depicted or encouraged suicide or self-injury.
The company also removed about 4.4 million items involving drug sales during the quarter, it said in a blog post.