THE BACKGROUND OF THE TRUSTED FLAGGER PROGRAMME
THE EUROPEAN UNION LAW AND HATE SPEECH ONLINE
- The Framework Decision on Combating Racism and Xenophobia by means of criminal law requires the European Union Member States to criminalise the public incitement to violence or hatred directed against a group of persons or a member of such a group based on certain characteristics, including race, colour, religion, descent or national or ethnic origin. This is the legal basis for defining illegal online content.
- The Code of Conduct on countering illegal hate speech online: since May 2016, Facebook, Twitter, YouTube and Microsoft, followed by Google+ and Instagram in January 2018 and TikTok in 2021, have committed to combating the spread of illegal hate speech online in Europe through a voluntary Code of Conduct.
- The companies have committed to reviewing the majority of removal requests in less than 24 hours and to removing the content where necessary. Assessing whether content could be illegal hate speech includes taking into account criteria such as the purpose and context of the expression.
- The IT companies have to act in line with national laws, in particular those transposing the Framework Decision on combating racism and xenophobia and the 2000 e-Commerce Directive. When they receive a valid alert about content allegedly containing illegal hate speech, the IT companies have to assess it, not only against their own rules and community guidelines but, where necessary, against applicable national law (including law implementing EU law), in full compliance with the principle of freedom of expression.
- The Code of Conduct is based on cooperation involving the European Commission, IT platforms, civil society organisations and national authorities who meet regularly under the umbrella of the High Level Group on Combating Racism, Xenophobia and Other Forms of Intolerance.
- In order to ensure the effective reporting of the progress in the implementation of the Code to the High Level Group, the European Commission established a new sub-group dedicated to countering hate speech online. The sub-group brings together the relevant actors from the private sector as well as from international, European, national and local level who have a direct role in combating and preventing hate speech online.
- More on the European Commission work on Combating illegal hate speech online can be found on their dedicated webpage.
RECENT ONLINE HATE SPEECH MONITORING RESULTS
In 2022, INAR participated in the 7th evaluation of the Code of Conduct on countering illegal hate speech online, along with other trusted flaggers in Europe. The results were released by the European Commission on 24 November:
- The results showed a decrease in the companies’ notice-and-action performance: the share of notifications reviewed by the companies within 24 hours dropped compared with the previous two monitoring exercises, from 90.4% in 2020 to 81% in 2021 and 64.4% in 2022. TikTok was the only company to improve its time of assessment.
- The removal rate, at 63.6%, is also considerably lower than at its peak in 2020 (71%). Only YouTube performed better on this parameter than in the previous two years.
- Companies’ frequency and quality of feedback to users improved.
TRUSTED FLAGGER PROGRAMME
INAR as a Trusted Flagger
- “Trusted Flagger” refers to an authorised reporter status granted by the IT companies to certain expert civil society organisations, which allows them to report illegal content through a dedicated reporting system or channel not available to ordinary users.
- INAR is the only Irish representative and Trusted Flagger on the sub-group, which is made up of 33 civil society organisations and 2 national authorities from 27 States (all except Luxembourg).
- We have the Trusted Flagger status with Facebook, Twitter, YouTube, Instagram, TikTok and Google+.
- The Commission evaluates the IT companies’ implementation of the above-mentioned Code of Conduct through regular monitoring exercises, with the help of expert civil society organisations holding Trusted Flagger status, including INAR.
Reporting serious racist content on social media via Trusted Flagger channels
INAR’s Trusted Flagger status can be utilised by its Members and the general public to help address serious cases of online hate speech.
- If you have reported a serious or illegal case of online hate speech directly to the social media platform, but it has not been removed, you can escalate the report through INAR.
- To do so, send us a direct link to the content in question, i.e. the post or a comment under that post. You can get the link by clicking on the date/time of the post or comment and then copying the URL. Send it to us together with a screenshot of that content via Facebook/Twitter message, or by emailing it to firstname.lastname@example.org with HATE SPEECH REPORT in the subject line. Include information on why you think this content should be removed. Please note that without the link we will not be able to process the complaint.
- If we believe the reported content meets the criteria for serious hate speech (see the guidelines below) and breaches the guidelines of the social media platform in question, we will report the content directly to that platform through specific channels available only to Trusted Flaggers.
SERIOUS HATE SPEECH
THE TYPOLOGY OF ILLEGAL CONTENT
The European Commission provides the following guidelines to identify content inciting hatred or violence on discriminatory grounds.
- Content using degrading, defamatory words/expressions to name certain social groups or individuals belonging, or perceived to belong, to such groups
- Content diffusing degrading, defamatory pictures/images relating to such groups or individuals
- Content expressing insulting, overgeneralising statements about such groups or individuals
- Content condoning/glorifying, denying or grossly trivialising historical events relevant to certain groups
- Content calling for limiting the rights of, or otherwise discriminating against, such groups or individuals
- Content calling for violent acts against such groups or individuals
- Content calling for the murder/annihilation of such groups or individuals
IREPORT.IE RACIST INCIDENT REPORTING SYSTEM
The iReport.ie Racist Incident Reporting System, INAR’s flagship project, was launched in July 2013. It allows people, communities and organisations in Ireland to confidentially report racism, including online hate speech.
- A national, confidential and user-friendly way to report racism from any online device.
- Enables people who experience or witness racism and/or those supporting them to do something about it and break the silence.
- Used for monitoring racism in Ireland.
- Provides evidence and data on racism in Ireland.
- Counters an increase in racism and a hardening of racist attitudes.
- Helps focus the discussion on finding solutions to racism.