Uprooting hate speech: The challenging task of content moderation in Ethiopia

Image of a refugee using a mobile phone. From ODI/Gabriel Pecot (https://www.flickr.com/photos/overseas-development-institute/33059506181/)
Published: April 27, 2021

While many have lauded social media platforms like Facebook, Twitter, and YouTube as open spaces for self-expression, there is no doubt that this openness can also be problematic, particularly when it comes to hate speech and other dangerous content. Platforms are arguably failing in their obligation to adequately moderate the content published on their websites.

This is especially true in multi-ethnic, multi-lingual countries in the developing world like Ethiopia. Hate speech runs rampant on both social and traditional media there, where dangerous and divisive rhetoric often accompanies bursts of violent attacks and conflict. Social media usage in Ethiopia is growing and in dire need of proper regulation: there are approximately 21 million internet users in the country, and Facebook has over 11 million users, a number expected to triple over the next four years. Yet, to date, platforms have not invested enough time and resources to adequately address the challenge of moderating online hate speech.

The Content Moderation Tightrope Act

Social media platforms face two big challenges regarding content moderation. On the one hand, if they are overly aggressive in removing potentially harmful content, particularly in the absence of oversight and accountability mechanisms, they may end up “policing speech.”

On the other hand, if they are less aggressive and less sensitive, they run the risk of letting hateful content spread, potentially contributing to outbreaks of violence offline. Critics have noted several instances where platforms were inactive in moderating content that helped foment mass violence. For example, an independent UN fact-finding mission found that Facebook had not done enough to prevent the spread of hate speech targeting the Rohingya ethnic minority in Burma, leaving them vulnerable to attack by the public at a time when the military was perpetrating human rights violations against them. Now, there is growing concern that a similar pattern is unfolding in Ethiopia.

Platforms are largely left to self-regulate the content they host. They are not directly bound by international human rights law, though they are encouraged to consider the impacts of their actions on human rights. While some governments have recently introduced domestic laws to regulate content, these laws can be vague and difficult to implement in practice.   

For instance, the Ethiopian parliament adopted a proclamation in March 2020, requiring social media companies to remove hate speech and disinformation within 24 hours of receiving notification. The proclamation does not detail how this expedited removal process will be operationalized. 

Ethiopia also adopted its first-ever Mass Media Policy in August 2020, which lays out a set of recommended norms and standards aimed at addressing the country’s press freedom challenges. The policy emphasizes the need for a procedure to remove content that prevents the “affirmative use” of social media. It does not, however, prescribe specific content removal processes and does not define “affirmative use” of social media. Similarly, both the Mass Media Policy and the proclamation fail to specify what fines or sanctions can be imposed when platforms fail to moderate content.

The Imminent Danger that Online Hate Speech Presents in Ethiopia

One of the reasons hate speech is especially dangerous in Ethiopia is that society is deeply divided along ethno-nationalist lines. In 1995, the country’s Constitution put in place a form of ethnic federalism, which has made ethnicity a central organizing factor in politics and in the holding of political office.

Ethiopia is home to more than eighty ethnic communities. Major ethnic groups include the Oromo and the Amhara, who were at the heart of the protests that led to Ethiopia’s transition to democracy in 2018, and the Tigrayans, the politically dominant group before 2018. Prime Minister Abiy Ahmed is the country’s first Oromo leader, and while symbolically important, his rise to power has not eased ethnic divides.

Long-simmering ethnic tensions were brought to a boil in June 2020, with the murder of Hachalu Hundessa, a popular Oromo singer and activist. Ethiopia was rocked by an extraordinary wave of violence and ethnic-based attacks. The situation was exacerbated by online hate speech and calls to attack minorities living in the Oromia region. 

Even beyond the events that followed Hachalu’s death, hate speech has circulated on social media in Ethiopia, provoking discrimination and even violence offline. In some areas, the perpetrators of physical attacks against ethnic minorities also spread calls on social media for further violence, couched in derogatory ethnic terms. For instance, Neftegna, an ethnic slur that literally means musketeer in Amharic, was widely circulated on social media in Oromia to target the Amhara living there. Shortly after Hachalu’s murder, Jawar Mohammed, a senior leader of the Oromo Federalist Congress (OFC), took to Facebook to promote a dangerous ‘us’ versus ‘them’ narrative. By characterizing Hachalu’s murder as an attack on the Oromo people by an unspecified other, Jawar framed the incident as part of a long-running ethnic conflict, potentially opening the door to retaliatory violence.

“They did not just kill Hachalu Hundessa. They shot at the heart of the Oromo Nation, once again!! [. . .] You can kill us, all of us, you can never ever stop us!! NEVER!!” Jawar wrote in a Facebook post on June 30, 2020.

Regardless of the polarizing ‘us’ versus ‘them’ narrative that followed Hachalu’s murder, the role of hate speech and ethnic slurs in propagating violence cannot be ignored. As violent mobs chanted ‘Kill Neftegna’ or ‘Leave Neftegna’ in various towns in Oromia, the Ethiopian Human Rights Commission found that the spread of unregulated hate speech online effectively abetted crimes against humanity in Ethiopia. According to the digital rights organization Access Now, little was done at the time to remove this offensive content.

Despite the potential for violence, platforms have been unable to moderate hateful content quickly, leaving room for malicious actors to abuse social media. Minority Rights Group International has urged “social media platforms including Facebook and Twitter to be on the alert for hate speech against minorities in Ethiopia and quickly take down harmful content that encourages inter-ethnic hatred and violence.”

Platforms’ Insensitivity to Context

Without a strong understanding of local contexts and languages, effective content moderation is a difficult undertaking. This is especially true in a country like Ethiopia, with a complicated web of ethnic and religious fault lines running through its social and political spheres. The country’s more than 80 local languages, several of which are written in different scripts, further complicate the task of weeding out hate speech. Relying on literal translations without understanding the local context makes it difficult to identify when certain terms are deployed with hateful intent.

Demonstrating the importance of context-sensitive content moderation, the Facebook Oversight Board recently upheld Facebook’s decision to remove a demeaning slur that violated the platform’s Community Standard on Hate Speech. The term in question, “taziks,” literally translates from Russian as ‘wash bowl,’ but it can also be used as a play on a slur against Azerbaijanis. In explaining its decision, the Board said: “The context in which the term was used makes clear it was meant to dehumanize its target.” The ruling is an example of how important it is for social media companies to focus on the contextual meanings of terms circulating on their platforms.

A Deficit of Content Moderators and Trusted Flaggers  

Another challenge to effective content moderation in Ethiopia is the small number of moderators and trusted flaggers deployed there. Facebook opened its first content moderation center in Africa in 2019, promising to hire 100 people to cover all African markets. However, the company has not disclosed how many hires it has made to date, nor how many moderators focus on particular countries. While Facebook has hired additional moderators to focus on Ethiopia, local experts note that there are still not enough moderators with expertise in the country’s numerous local languages.

While some platforms rely heavily on algorithms for content moderation, those algorithms should be backed by human moderators who have a deep understanding of the cultural context, linguistic nuances, and social and political dynamics of Ethiopia. Indeed, in 2018, the then UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, emphasized that platforms’ leadership and policy teams should also diversify their staff so that local and subject-matter expertise can be applied to content issues.

Platforms have started some initiatives aimed at tackling viral hate speech and misinformation. For instance, Facebook’s trusted flagging partnership is a reporting system in which select individuals or civil society organizations flag illegal content directly to the platform for immediate moderation. Yet trusted flagging in Ethiopia is undertaken on a voluntary basis and is inconsistent in its moderation standards. Considering that Facebook’s Community Standards are not even available in Amharic, the working language of Ethiopia, it is clear that the company still has a long way to go to improve local content moderation efforts.

Conclusion 

Social media platforms shouldn’t be unruly behemoths. Content moderation is a subtle task that requires weighing a number of factors before a given post is removed. Moderating hate speech demands a serious understanding of local contexts, and the task must place human rights at the center while separating hate speech from protected speech.

Although platforms have made some limited efforts toward context-sensitive content moderation, they still face time and resource constraints, an overdependence on algorithms, and an insufficient understanding of semantic nuance. Platforms should pay particular attention to the importance of context in countries like Ethiopia that are marked by significant political and societal divisions and conflict.

To that end, there are a number of things both social media companies and local actors can do to improve content moderation in Ethiopia and more effectively tackle the issue of online hate speech:

  • Platforms should incorporate international human rights law norms into their community standards to fully ensure that content moderation practices will be guided by principles of legality, legitimacy, necessity, and proportionality.
  • Recognizing that Ethiopia is prone to violence supercharged by hate speech on social media, platforms should give earnest attention to the socio-political contexts in the country.
  • Civil society organizations operating in Ethiopia should work with platforms to prepare a timely, context-specific lexicon on hate speech to guide platforms in moderating illegal content.

All of this is to say that it is not too late: social media platforms can and should do better, working to implement context-sensitive content moderation processes in conflict-prone countries like Ethiopia and beyond.

 

Yohannes Eneyew Ayalew is a 2020/21 Open Internet for Democracy Leader. Yohannes is a PhD Candidate at the Faculty of Law, Monash University in Australia, and is working on a project that looks at broader debates on internet freedom in Africa. He tweets at @yeayalew.