New Delhi: Facebook-owned WhatsApp on Thursday released its first monthly grievance redressal report as mandated under the new IT norms. In the report, the popular messenger app said it had banned 2 million accounts in India alone between May 15 and June 15 for attempting to send harmful or unwanted messages at scale.


More than 95 percent of these bans were due to the unauthorised use of automated or bulk messaging, the company said in a statement.




In the report, required under India's Intermediary Guidelines, 2021, WhatsApp stated that user reports received via its grievance channels are evaluated and responded to.


"Our top focus is preventing accounts from sending harmful or unwanted messages at scale. We maintain advanced capabilities to identify these accounts sending a high or abnormal rate of messages and banned two million accounts in India alone from May 15-June 15 attempting this kind of abuse," the company informed.


"Majority of users who reach out to us are either aiming to have their account restored following an action to ban them or reaching out for product or account support," it added.


WhatsApp said it received 345 reports in total, across categories such as ban appeal, account support, product support, safety issues, and others.


Against this, 63 accounts were "actioned" by WhatsApp during the May 15-June 15 period.


The popular messenger app revealed that the number of banned accounts has risen significantly since 2019 as its detection systems have grown more sophisticated, and "so we are catching more accounts even as we believe there are more attempts to send bulk or automated messages".


It added that the majority of these accounts are banned proactively, without relying on any user report; globally, about 8 million accounts are banned or disabled per month on average.


Besides the behavioural signals from accounts, WhatsApp relies on available "unencrypted information" including user reports, profile photos, group photos and descriptions as well as advanced AI tools and resources to identify and prevent abuse on its platform.

'Accounts Actioned' refers to reports on which the platform took remedial action. This can include both banning an account and restoring a previously banned account based on the complaint.

The IT rules, which came into effect on May 26, make it compulsory for significant digital platforms to publish compliance reports that include the number of specific communication links or parts of information they have proactively removed using automated tools.

Under these rules, social media companies have to take down flagged content within 36 hours, and remove content flagged for nudity or pornography within 24 hours.

The social media giants must also appoint three key personnel: a grievance officer, a chief compliance officer and a nodal officer. These officials must be residents of India.


Non-compliance with the IT rules results in platforms losing their intermediary status, which provides them immunity from liability over third-party data hosted on their platforms.


(With Agency Inputs)