Meta, Inc. (the parent company of Facebook, Inc.) tracks pieces of content (posts, photos, videos, and comments) on Facebook, Instagram, and WhatsApp, and takes action against content that violates its guidelines as part of its effort to combat misinformation.
In February 2022, Meta removed 23.6 million pieces of content from its social media platforms in India, in compliance with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Of these, 21.2 million pieces of content were removed from Facebook and over 2.4 million from Instagram.
Facebook's Response to Harmful Content
Spam accounted for most of these takedowns: 15.4 million spam posts were removed from Facebook in India between February 1 and February 28, 2022. In addition, Facebook removed about 2.4 million pieces of violent and graphic content and 1.4 million pieces of content related to adult nudity and sexual activity.
The social media behemoth also disclosed its ‘Proactive Rate,’ a metric showing the percentage of all actioned content or accounts that Facebook and Instagram found and flagged before users reported them. During the 28 days of February 2022, 99.9% of all spam and violent and graphic content or accounts actioned on Facebook in India were found and flagged before Indian users reported them.
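Meta does not publish the implementation behind this metric, but as described it reduces to a simple ratio. A minimal sketch, using made-up figures purely for illustration (not Meta's raw data), might look like:

```python
def proactive_rate(flagged_proactively: int, total_actioned: int) -> float:
    """Percentage of actioned content that the platform itself
    found and flagged before any user reported it."""
    if total_actioned == 0:
        return 0.0
    return 100.0 * flagged_proactively / total_actioned

# Illustrative example: if 15,400,000 spam posts were actioned and
# 15,384,600 of them were flagged before any user report, the
# proactive rate works out to 99.9%.
rate = proactive_rate(15_384_600, 15_400_000)
print(f"{rate:.1f}%")
```

The key design point is that the denominator counts only content the platform *acted on*, so the metric measures how much enforcement was proactive rather than how much violating content existed overall.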
Between February 1 and February 28, Facebook and Instagram received 478 and 1,050 reports, respectively, through the Indian grievance mechanism, and responded to 100% of them. In 402 and 561 of those cases, respectively, Facebook and Instagram provided tools to help users resolve their issues. These include pre-established channels for reporting content for specific violations, self-remediation flows where users can download their data, and avenues for dealing with account-hacking issues, among other things.
The top complaint registered by Indian users was ‘account has been hacked,’ with 135 reports on Facebook and 502 on Instagram.
“The report describes our efforts to remove harmful content from Facebook and Instagram and demonstrates our continued commitment to making Facebook and Instagram safe and inclusive. We use a combination of Artificial Intelligence, reports from our community and review by our teams to identify and review content against our policies,” Meta said.
By comparison, in the previous month, January 2022, Meta took down over 14.8 million pieces of content, including 11.6 million from Facebook and over 3.2 million from Instagram.
Facebook and Instagram are not alone in stepping up efforts to combat fake news; WhatsApp, the largest messaging app, has also gradually increased its efforts to fight fake accounts on its platform.
In February 2022, WhatsApp banned 1,426,000 accounts in India to comply with the new IT Rules, 2021. The company received 335 grievance reports from across the country and responded to 21 of them.
Facebook's Constant Fight Against Fake News
With such a large number of people using social media platforms every month, the tech behemoths cannot afford to ignore new and emerging challenges such as the spread of fake news, abusive content, hate speech, harassment, threats of violence, and morphed images of women.
Under the new IT Rules, 2021, all digital and social media giants with more than 5 million users, such as Google, Facebook, WhatsApp, Telegram, Koo, ShareChat, and LinkedIn, must publish monthly compliance reports.