Content moderation in user-to-user online services

14 September 2023

Concerns about harms related to social media and other online services that host user-generated content (user-to-user services) have become a focus of public debate in recent years, both in the UK and globally. Many of these concerns fall under Ofcom’s remit through our media literacy duties and powers under the video-sharing platform regime, or are likely to fall within our duties under the Online Safety Bill.

An important part of service providers’ efforts to limit harm to their users is content moderation – that is, activities aimed at removing, or reducing the visibility of, potentially harmful content. Content moderation is central to current public debate about online regulation, partly because of its implications for users’ ability to express themselves freely online. It will also be relevant to Ofcom’s future work on online services’ safety systems and processes. However, information in the public domain on content moderation is relatively scattered and incomplete, and it can be difficult for non-experts to form a holistic view of common practices and challenges.

To help develop our understanding of content moderation, over the last two years we worked with six service providers of different sizes and types, including Facebook (Meta), YouTube (Google), Reddit and Bumble, focusing particularly on how providers can identify, tackle and track harm. Service providers engaged on a strictly voluntary basis, and we are grateful for the time and effort they devoted to this work. This report summarises our findings from these engagements, as well as our own reflections on some of the decisions and trade-offs that services may face when designing and implementing their content moderation systems and processes.

Full report

Content moderation in user-to-user online services - an overview of processes and challenges (PDF, 521.4 KB)