Race in the digital sphere
Eugenia Siapera, Elena Moreo and Paloma Viejo-Otero
The @Ireland Twitter account seems a great idea. Run by IrishCentral, the US-based Irish diasporic news outlet, it offers a different person each week the opportunity to tweet about life in Ireland, providing insights into a variety of viewpoints. However, when in August 2016 a black British woman, Michelle Marie, was chosen to tweet using the @Ireland handle, she was subjected to eight hours of abuse, mostly centring on the fact that she was black. In January 2018, two people were stabbed in Dundalk; as soon as the media began reporting that the perpetrator may have been from the Middle East, cries of terrorism, open borders, uncontrollable immigration and the like emerged online. In fact, the nationality of the accused, who is currently considered too ill to be tried in court, is still not known.
Through our work on HateTrack, a computational tool that harvests and classifies online posts according to their probability of containing racially toxic content, we came across many examples such as these. We encountered both the dehumanising racism that everyone is quick to condemn and the ‘milder’ varieties that appear to pose ‘reasonable’ questions, arguments and debates. The ubiquity of what we termed racially toxic contents was such that we found it difficult to make sense of it through ‘hate speech’ frameworks. We use the term ‘racially loaded toxic contents’ to refer to language and contents that entrench polarisation, reinforce stereotypes, spread myths and disinformation, justify the exclusion, stigmatisation and inferioritisation of particular groups, and reinforce exclusivist notions of national belonging and identity. We adopted this broader definition in order to move beyond discussions of illegal hate speech and to capture a wider range of ultimately pernicious discourses. On the one hand, we wanted to avoid getting caught up in debates over what is and what is not racist hate speech from a legal/normative point of view. On the other, we wanted to explore the broader dynamics of online discussions and how they affect, and are affected by, the accumulation and circulation of toxic racist messages. We use the notion of racial toxicity in the sense suggested by the political theorist Jeremy Waldron (2012): a slow-acting toxic agent, incrementally and effectively poisoning the fabric of public life and the civic spaces we inhabit.
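To give a concrete sense of how such a tool can operate, the sketch below shows the bare bones of a probabilistic text classifier of the kind HateTrack builds on. It is a minimal illustration, not HateTrack itself: the training examples, features and model choice are all stand-ins for the annotated corpus and pipeline a real project would use.

```python
# Minimal sketch of a toxicity classifier of the kind HateTrack implements.
# The labelled examples and model choices here are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice the training set would be thousands of posts annotated
# by researchers; these toy examples stand in for that corpus.
posts = [
    "they are flooding in and taking our houses",   # toxic
    "send them all back where they came from",      # toxic
    "the new community centre opens on Friday",     # non-toxic
    "great turnout at the match last night",        # non-toxic
]
labels = [1, 1, 0, 0]  # 1 = racially toxic, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Rather than a hard yes/no verdict, the classifier outputs a probability,
# which lets researchers rank and triage harvested posts for review.
new_post = ["our houses are going to them, not to us"]
print(model.predict_proba(new_post)[0][1])  # probability the post is toxic
```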
The sociologist Sanjay Sharma (2017) views the online distribution of digital racism as following a power law: when certain events happen, racist contents increase in volume. On this basis, he distinguishes between high magnitude events, such as terrorist attacks, which are linked to spectacular and extreme but relatively short-lived racist posts; mid-range events, such as a political speech on immigration, which trigger explicit racism and a relative increase in racist posts; and finally ambient or ‘long tail’ racist posts, which are triggered by smaller, less important or low magnitude events. We found this theorisation very useful in understanding our findings.
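Expressed computationally, this framing suggests that the volume of racist posts scales with event magnitude roughly as a power law, with the many low magnitude events jointly making up the ‘long tail’. The sketch below is purely illustrative: the scale and exponent are invented numbers, not estimates from Sharma’s paper or from our data.

```python
# Illustrative power-law relation between event magnitude and the volume
# of racist posts it triggers. Scale and exponent are invented values.
def expected_volume(magnitude: float, scale: float = 10.0, exponent: float = 2.0) -> float:
    """Few high-magnitude events produce the big spikes; the many
    low-magnitude events jointly form the ambient 'long tail'."""
    return scale * magnitude ** exponent

for label, m in [("low magnitude", 1.0), ("mid-range", 3.0), ("high magnitude", 10.0)]:
    print(f"{label}: expected volume {expected_volume(m):.0f}")
```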
Significant events, which in Ireland during our research included the recognition of Traveller ethnicity, the Dundalk stabbing referred to earlier, and the return of the Irish citizen Ibrahim Halawa from Egypt where he had been on trial, were linked to a significant rise in the volume of racially toxic contents. Mid-range events, for example any news about refugees in Europe, the #WeAreIrish campaign or anything on Direct Provision, were linked to increased levels of racially toxic contents. But we also found toxic content apropos of everything and nothing in particular: the Facebook page of a bridal shop, for example, referred to a person of Traveller ethnicity only to be deluged by racially toxic posts. On the Facebook pages of news outlets, almost anything attracts racially toxic posts: any article on housing, any article written by or referring to non-white Irish people, any article on Middle Eastern politics, on Trump, on taxi drivers. This is the ambient racism that Sharma refers to. Taken together, these findings show that the whole of the internet is contaminated.
But then it occurred to us that the question is really not how the internet can be so racist, but rather how we can expect the internet to be a non-racist haven when race and racial thinking are so intricately bound up with the way societies are constructed. In part, this expectation is linked to ideas of progress, especially technological progress, in which technology will liberate us from prejudices such as racism. This is the ideology that Silicon Valley is certainly invested in, what Barbrook and Cameron refer to as the Californian Ideology: a fusion of counter-cultural individualism and neoliberal technological utopianism. Because new technologies rely on the binary code of 0s and 1s, it is posited, and because we are all individuals, or ‘users’, old-fashioned prejudices will disappear. Current social and political structures that are inefficient and bureaucratic, and that confine and restrict individuals, will be replaced by flexible arrangements that allow people’s full potential to emerge. Reality proved to be very different.
Rather than relying only on neutral binary code, scripts often mobilise racist code. In 2013, Gizmodo published an investigation of GitHub, the code-hosting platform, reporting that Java, HTML and Python scripts deposited there by programmers regularly include racial slurs, alongside misogynistic and homophobic terms. In 2016, an article in ProPublica showed that risk assessments undertaken by algorithms systematically discriminate against minorities, for example denying them bail or contributing to harsher sentences. When Microsoft tried to use Twitter crowds to train its chatbot Tay, the bot quickly learned to use racist and misogynistic language. In 2017, a viral video showed an automatic soap dispenser that worked only on white skin. Algorithms can only be trained on existing data, which are derived from societies that are already racially structured, and hence they reproduce and reinforce current racial logics.
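The last point is easy to demonstrate in a few lines: a model fitted to historically biased decisions will reproduce those decisions for new cases. The sketch below is a deliberately simplified illustration in the spirit of the ProPublica findings; the data, features and setting are entirely invented.

```python
# Toy demonstration that a model trained on biased historical decisions
# reproduces the bias. All numbers are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Features: [prior_offences, group], where group encodes membership of a
# racialised minority (1) or the majority (0). In this fictional history,
# minority defendants were denied bail at lower offence counts; the label
# records that biased human decision.
X = [[0, 0], [1, 0], [2, 0], [3, 0],
     [0, 1], [1, 1], [2, 1], [3, 1]]
y = [0, 0, 0, 1,    # majority: denied bail only at 3 prior offences
     0, 1, 1, 1]    # minority: denied bail from 1 prior offence upwards

model = LogisticRegression().fit(X, y)

# Two defendants with identical records but different group membership:
print(model.predict_proba([[1, 0]])[0][1])  # majority defendant: lower risk score
print(model.predict_proba([[1, 1]])[0][1])  # minority defendant: higher risk score
```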
The liberal ideology behind big tech corporations is, in fact, a racial logic. The Community Standards of Facebook, i.e. the rules by which users must abide, define “hate speech as a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity and serious disease or disability.” But when the materials used by Facebook to train content moderators were leaked to the New York Times, it transpired that if a protected category such as race is paired with a non-protected category such as social class, the former loses its protected status. In practice, this means that content such as ‘white men are assholes’ gets removed, while content such as ‘poor black people should sit in the back of the bus’ remains. As Paloma Viejo-Otero has argued, this understanding of racial equivalence betrays an ahistorical position that does not acknowledge the inequalities between races; it reduces everyone to individuals and holds that there is no difference between those who historically have been the oppressors and those who have been oppressed.
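The leaked pairing rule can be captured in a few lines of code. The sketch below is our reconstruction of that logic for illustration only; the function name and data structures are ours, not Facebook’s.

```python
# Sketch of the pairing rule reported in the leaked moderator training
# materials: an attack keeps protected status only if every category it
# pairs together is protected. Names and structure are our illustration.
PROTECTED = {
    "race", "ethnicity", "national origin", "religious affiliation",
    "sexual orientation", "caste", "sex", "gender", "gender identity",
    "serious disease or disability",
}

def attack_is_removed(targets: set) -> bool:
    """Mixing in a non-protected category (e.g. social class) strips
    protection from the whole attack, so the content stays up."""
    return targets <= PROTECTED  # every paired category must be protected

# 'White men are assholes' pairs race with sex: both protected, so removed.
print(attack_is_removed({"race", "sex"}))           # True  -> content removed
# 'Poor black people ...' pairs race with social class: protection lost.
print(attack_is_removed({"race", "social class"}))  # False -> content remains
```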
There is another liberal concern that feeds directly into this racial logic: the issue of free speech and freedom of expression, which for Facebook and other social media corporations acquires an economic dimension. These platforms rely on user contents, which they monetise by selling data about the users who produce them. Any decrease in the contents allowed to circulate may therefore have an impact on the platforms’ revenues. Executives must carefully weigh decisions on what is and is not allowed, based on calculations that include their brand image and customer satisfaction, their own liberal ideology, and the regulatory framework surrounding their operations. In terms of digital racism, this means that platforms determine for themselves which kinds of contents they allow and which they remove, but they are more invested in regulating or governing the contents that users produce and publish than in eradicating racism. It is only when certain types of content interfere with their smooth operations that they develop relevant policies.
To illustrate all this, consider the following: Facebook developed a new kind of category, the quasi-protected category, which includes immigrants, refugees and transgender people. This category does not afford the full range of protections given to the protected categories, as Facebook considers that debate should be allowed for the sake of freedom of speech. So users can say, for example, ‘build a fence to keep migrants out’ and ‘migrants are filthy’. Facebook further allows Holocaust denial in certain countries, for example the US, but IP-blocks it in Germany, where it is illegal. In another recent leak of the training materials for content moderators, it transpired that in the wake of the Charlottesville protests in August 2017, Facebook decided to regulate organised hate groups; however, while it bans white supremacists and praise for them, it is happy to allow white nationalists and white separatists. The upshot is that those banned adapt and reinvent themselves in order to comply and remain visible in the digital sphere. For their part, social media platforms are not concerned with race but with keeping up a steady flow of contents in ways that compromise neither their brand identity nor their business model. In this way, Facebook’s regulation of the flow of hate is shaped by its liberal ideology, its concern to keep contents circulating, and the regulatory context in the different regions where it operates.
Moreover, in order for hateful contents on social media platforms such as Facebook and Twitter to be removed, someone has to report them. Yet what we found in our research is that very few people actually do so. Some feel it is futile given the amount of hate that circulates; some prioritise a kind of absolute freedom of speech, under which everyone can say whatever they want; yet others believe that it is not their place to report contents, that someone else should do it. What is implied is that this ‘someone else’ is the person targeted by the hate. Indeed, in our research, the people most likely to report racially toxic contents were those directly affected by them. This, in turn, means that the circulation of racially toxic contents effectively becomes one more problem for racialised people to deal with. In this manner, social media become yet another sphere for the regulation of racialised people, of what they say and what is said about them, and the burden of doing something about it falls on their shoulders.
This is the key finding that has emerged from our work on digital racism: rather than levelling the field or enabling the full participation of racialised people, social media platforms subject them to more controls while regulating the flows of hate, dividing hate into categories of acceptable and unacceptable. What is more, platforms ask racialised people to work for free in reporting such contents, without giving them any input into what will or will not be removed. We take all this to have important implications for thinking about the role of the state and state racism in determining race in the current historical conjuncture. If our findings are correct, then in the shift to neoliberal techno-capitalism the state loses its status as the main focus and source of racialisation. Social media corporations and their liberal ideology emerge as powerful new loci for the re-creation of racialised subjects as isolated individuals, who are not only targeted by toxic contents but are also tasked with trying to clear them up. On the internet, the old adage goes, no one knows you are a dog, meaning that anyone can participate on an equal footing; but what we found is that, far from providing a space for everyone to be themselves, the digital sphere is already and inescapably racially constituted and has become another domain in which racialised people are controlled, regulated and governed.
Eugenia Siapera is Professor and Head of the School of Information and Communication Studies at UCD. Her research interests are in the areas of digital media and politics, technocapitalism and inequalities, and digital racism and misogyny.
Elena Moreo is an independent researcher in the areas of race, ethnicity, migration, asylum, and digital racism. She has worked on a number of research projects in Trinity College Dublin and Dublin City University.
Paloma Viejo-Otero is a researcher in racism and racist hate speech in online environments at the School of Communications at DCU.
References
Sharma, S. (2017), “Theorizing Online Racism: the stream, affect and power laws,” paper presented at the AoIR conference, Tartu, Estonia, 19–21 October.
Waldron, J. (2012), The Harm in Hate Speech, Harvard University Press.