Over the last three months, the Facebook-owned messaging service has banned more than 2 million accounts each month for bulk or automated behavior. WhatsApp released a white paper on Wednesday detailing its efforts to curb this type of abuse, which can be used to distribute clickbait links or spread political misinformation to large groups of people.
"WhatsApp was built for private conversations among close friends and we are constantly working to maintain the private nature of our service," said a spokesperson in an emailed statement. "Today, we're sharing more about how our advanced machine learning systems prevent automated behavior and bulk messaging to help keep WhatsApp safe."
VentureBeat, which attended a press briefing in New Delhi, earlier reported on WhatsApp's efforts.
WhatsApp is working on machine learning systems that can find and flag accounts exhibiting questionable activity, such as sending bulk messages or creating multiple accounts to disseminate dubious content. Of the 2 million accounts banned each month, 75 percent were caught without a recent user report, according to WhatsApp.
In January, WhatsApp placed a limit on how many times a message can be forwarded in an effort to curb the spread of misinformation. The company, which has more than a billion daily users, began testing the forwarding limit in India after a spate of mob violence and lynchings in that country was blamed on misinformation spread through the app.
Meanwhile, Facebook itself deleted 583 million fake accounts in the first three months of 2018.