Despite the high risk of violence ahead of the Kenyan national election next month, Facebook approved hate speech ads promoting ethnic violence and calling for rape, slaughter, and beheading.

According to Global Witness and the legal non-profit Foxglove, the ads that evaded detection included calls for ethnic violence, rape, slaughter, and beheading.

“It is appalling that Facebook continues to approve hate speech ads that incite violence and fan ethnic tensions on its platform,” said Nienke Palstra, Senior Campaigner in the Digital Threats to Democracy Campaign at Global Witness.

According to the investigation, a total of 20 ads were submitted in both languages, and 17 were approved. The three that were flagged were rejected for violating Facebook's Grammar and Profanity policy, not for hate speech.

Foxglove then corrected the grammar and removed the insults; once resubmitted, Facebook approved the ads despite the hate speech they contained.

“In the lead up to a high stakes election in Kenya, Facebook claims its systems are even more primed for safety – but our investigation once again shows Facebook’s staggering inability to detect hate speech ads on its platform,” said Palstra.

Foxglove further stated that much of the language used was deeply dehumanizing and touched on several sensitive areas; however, the organization declined to reproduce the exact phrases in its report.

Meta, Facebook's parent company, responded to the allegations, saying its team has taken "extensive steps to help Meta catch hate speech and inflammatory content in Kenya" and that it is "intensifying these efforts ahead of the election."

Global Witness and Foxglove call on Facebook to:

- Urgently increase the content moderation capabilities and integrity systems deployed to mitigate risks before, during, and after the upcoming Kenyan election.
- Properly resource content moderation in all the countries in which it operates, including paying content moderators a fair wage, allowing them to unionize, and providing psychological support.
- Routinely assess, mitigate, and publish the risks that its services pose to people's human rights and other societal-level harms in all countries in which it operates.
- Publish information on what steps it has taken in each country and for each language to keep users safe from online hate.