ISD reports into online terrorism, violence and hate

Published: 19 September 2023
Last updated: 19 September 2023

The Online Safety Bill, when it passes into law, will seek to clamp down on illegal content and activity on regulated services, in particular priority offences related to terrorism, hate crime, harassment, and incitement to and threats of violence. These are acute harms that pose a severe risk to UK online users, including children. The Bill will also create duties for regulated services to protect children from harmful content, and duties specific to the largest services1 to empower users to opt out of encountering harmful content, including incitement to hatred against, and hateful abuse of, people with particular protected characteristics.2 The harms researched within the scope of these reports cut across all three sets of duties.

We commissioned two reports from the Institute of Strategic Dialogue (ISD) to build our understanding of user experiences of online services as we prepare to take on new duties under the Bill, with a particular focus on online terrorism, incitements to violence and hate:

  1. Tangled Web - The interconnected online landscape of hate speech, extremism, terrorism and harmful conspiracy movements in the UK.
  2. Hate of the Nation - A landscape mapping of observable, plausibly hateful speech in the UK.

The services we will regulate are likely to be global in nature, but for these reports we were interested in whether online content could be collected and analysed from the perspective of UK users. We also wanted the research to be conducted on a cross-platform basis, because we know that UK users’ online experiences are rarely limited to one service, and people involved in spreading online terrorist, violent and hate content usually do not limit themselves to one service either. Many factors can influence which services might be used to spread such content, including their trust and safety systems, user base and functionalities.

We are appreciative of ISD taking on these ambitious projects given the number of services and harms areas involved. The research has shown there are challenges to objective and comprehensive study in this area, and the limitations of this type of analysis need to be borne in mind. In particular, while the use of automated classifiers to identify hate speech is increasingly widespread, these studies show that such tools remain prone to significant error rates, because of the importance of context and intent in determining whether a particular post is truly hateful.

The research projects were methodologically ambitious - attempting to map the landscape of online hate and the actors posting terrorist and other types of harmful content across multiple services, including the collection and analysis of multiple harms using multiple machine learning models and classifiers. Beyond each service being unique in its functionalities and design, the most challenging aspect of the research was the varying level of data access available through the publicly available APIs (Application Programming Interfaces) of the in-scope services. For some services, ISD was able to collect large volumes of data; for others, only very small samples, with more limited information available about each post.

The Online Safety Bill, once enacted, will be one of the first regulatory frameworks in the world to give an independent regulator formal information gathering powers relating to any user-to-user and search services available in the UK. These powers will allow Ofcom to make mandatory requests for data and information from regulated services, and will be essential in establishing accurate, detailed and comparable metrics to assess the effectiveness of their trust and safety measures.

The purpose of this research has been to better understand the risk to UK users of encountering online terrorist, incitement to violence and hate content. The research has provided insights into how such content moves across a range of services and how the risk to users presents on each.

As a means to help services minimise the sharing of such content, our Codes of Practice will recommend measures aimed at mitigating illegal content. Once our Codes are finalised, services will also be required to conduct their own risk assessments, and to mitigate identified risks including terrorist use of their services, in order to drive improved standards and systems, and deliver a safer life online for UK users.

1 Duties applicable only to Category 1 services.

2  These are race, religion, sex, sexual orientation, disability and gender reassignment.
