The Evolving Scope of the Umati Project

By Nanjira Sambuli
iHub Research
Published 2nd June 2014

Set up in September 2012 ahead of Kenya’s 2013 general elections, the Umati project sought to identify the use and role of social media in propagating hate speech online, something that, at the time, no other monitoring group was looking into. Specifically, we set out to identify a subset of hate speech (a term that, as yet, has no universal definition) known as ‘dangerous speech’: speech with the potential to catalyze or inspire violence. A framework devised by Susan Benesch proved instrumental in identifying elements of online speech that could then be categorized by their level of ‘dangerousness.’
 

Lessons learned

Online hate speech, we contend, is a symptom of a much more complex issue, often rooted in offline socializations and perceptions that precede online interaction. Online conversations may therefore offer a window into the conversations and convictions people hold offline, and a way to better understand which recurring issues need to be addressed. In Kenya in particular, ethnicity has been the primary lens through which political, economic and social issues are viewed and reacted to, with religious affiliation increasingly becoming a new avenue through which hateful speech is disseminated.
Through Umati, we have observed emerging phenomena in how netizens deal with inflammatory speech online, pointing towards self-regulation of the online space. Counter-speech trends were noted at different times as events took place and elicited reactions, but such speech was not a primary focus of the Umati monitoring process at the time. We have since come to appreciate its importance and are now bringing it into focus as we widen the project’s scope to monitor how public conversations unfold online over time, and how some of them may move towards dangerous speech. This broader approach will help us better understand the self-regulation mechanisms employed by online communities.

Preliminary self-regulation mechanisms observed online include ridiculing a speaker or a narrative that attempts to inflame hatred, misinform or disinform; flooding online spaces with positive counter-messages that defuse tensions arising from hateful messages; and using humour and satire to ‘hijack’ inflammatory narratives. We have also realized that observations of dangerous speech online should be put into the context of other speech online, as such incidents rarely happen in isolation.

 

More on our findings from the first phase of the project can be found here.

 

Umati Phase 2: what’s happening and what to look out for

We have been able to widen the project’s scope in its second phase (July 2013 to date) thanks to greater use of Machine Learning and Natural Language Processing techniques. This largely entails an in-depth examination of the information architecture of each online source we monitor (social networks, websites, forums), to ensure we can retrieve data accordingly, devise processing filters, and apply the Umati methodology to categorize the types of speech found. Having built a database of over 7,000 dangerous speech incidents through human monitoring in the first phase of the project, we now have a corpus of text to work from, coupled with observations of the changing dynamics of conversations and language references (you can check out one example here).
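To make the automation step concrete, below is a minimal sketch of how a human-labeled corpus like ours could bootstrap an automated classifier. The categories, example texts, and model choice (a TF-IDF pipeline with logistic regression, using Python’s scikit-learn) are illustrative assumptions for this post, not the actual Umati pipeline.

# Minimal sketch: seeding an automated speech classifier from a
# human-coded corpus. Labels and examples are illustrative only,
# not Umati's actual categories or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (text, label) pairs from human monitoring.
train_texts = [
    "They should all be chased out of this town",
    "Let us meet for a peaceful rally tomorrow",
    "Those people are the reason for our problems",
    "Great turnout at the community clean-up today",
]
train_labels = ["dangerous", "neutral", "offensive", "neutral"]

# TF-IDF word and bigram features feed a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# New posts retrieved from monitored sources can then be pre-sorted.
print(model.predict(["We must drive them away before the election"]))

In practice, a model like this would only pre-sort incoming posts; final categorization under the Umati methodology would remain with human monitors, whose decisions can in turn grow the training corpus.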
 

We also held the first of what we hope will be a series of public forums to engage the online community on the Umati findings. This is in line with one of our overarching objectives: furthering civic education on hateful speech. Highlights and rhetoric from the Umati Forum are accessible here.
 

We will continue to share updates on this process, and welcome any questions, comments, reactions and suggestions.
 
