6 October 2021

Better protections from harmful online videos

  • Ofcom sets out guidance to help video-sharing platforms keep users safer
  • One in three video-sharing users come across hate speech
  • Our approach balances protection of users with upholding freedom of expression

People who use online video-sharing sites and apps should be better protected from harmful content, as Ofcom issues new guidance for tech companies today.

Video-sharing platforms (VSPs) are a type of online video service where users can upload and share videos with other members of the public. They allow people to engage with a wide range of content and social features.

VSPs established in the UK – such as TikTok, Snapchat, Vimeo and Twitch – are required by law to take measures to protect under-18s from potentially harmful video content; and all users from videos likely to incite violence or hatred, as well as certain types of criminal content.

Ofcom research (PDF, 4.6 MB) shows that 70% of users say they have been exposed to some form of potential online harm. A third (32%) say they have witnessed or experienced hateful content; around a quarter report bullying, abusive behaviour and threats (26%) or violent or disturbing content (26%); and one in five (21%) have come across videos or content that encouraged racism.[1]

Today’s best practice guidance is designed to help companies understand their new obligations and judge how best to protect their users from this kind of harmful material. We have already begun discussing with platforms what their responsibilities are, and what they are doing to comply with them.

What platforms should do to protect users

Ofcom’s job is to enforce the rules set out in legislation and hold VSPs to account. Unlike in our broadcasting work, our role is not to assess individual videos. And the massive volume of online content means it is impossible to prevent every instance of harm.

Instead, the laws focus on the measures providers must take, as appropriate, to protect their users – and afford companies flexibility in how they do that. To help them meet their obligations to protect users, our guidance sets an expectation that VSPs should:

  • Provide clear rules around uploading content. Uploading content relating to terrorism or child sexual abuse, or which incites racial hatred, is a criminal offence. Platforms should have clear, visible terms and conditions which prohibit this – and enforce them effectively.
  • Have easy reporting and complaint processes. Companies should implement tools that allow users to flag harmful videos easily. They should signpost how quickly they will respond, and be open about any action taken. Providers should also offer a formal route for users to raise concerns with the platform and to challenge its decisions. This is vital to protect the rights and interests of users who upload and share content.
  • Restrict access to adult sites. VSPs that host pornographic material should have robust age-verification in place, to protect under-18s from accessing such material.

Plans and priorities for the year ahead

One of our five priorities for the year ahead – as set out in our workplan (PDF, 685.8 KB) – is to work with VSPs to reduce the risk of child sexual abuse material.[2]

The Internet Watch Foundation reported a 77% increase in the amount of “self-generated” abuse content in 2020. Adult VSPs carry a heightened risk of child sexual abuse material, and the rise of direct-to-fans subscription sites specialising in user-generated adult content has potentially made this risk more pronounced. Given this heightened risk, we expect VSPs’ creator registration processes and subsequent checks to be robust enough to significantly reduce the risk of child sexual abuse material being uploaded and shared on their platforms.

Over the next 12 months we will also prioritise: tackling online hate and terror; ensuring an age-appropriate experience on platforms popular with under-18s; laying the foundations for age verification on adult sites; and ensuring VSPs’ processes for reporting harmful content are effective.

Our approach to enforcement and reporting

We will take a rigorous but fair approach to our new duties. As with TV and radio, we will balance protecting people from harm with upholding the right to freedom of expression.

If we find that a VSP provider has breached its obligations to take appropriate measures to protect users, we have the power to investigate and take action against the platform. This could include fines, requiring the provider to take specific action, or – in the most serious cases – suspending or restricting the service.

We also have a broad range of new powers to collect information from providers about what they are doing to tackle user safety on their services.

In autumn next year, we will publish a first-of-its-kind report providing transparency for users and the wider public on the steps VSPs are taking to protect children and other users from harm.

Online videos play a huge role in our lives now, particularly for children. But many people see hateful, violent or inappropriate material while using them.

The platforms where these videos are shared now have a legal duty to take steps to protect their users. So we’re stepping up our oversight of these tech companies, while also gearing up for the task of tackling a much wider range of online harms in the future.

Dame Melanie Dawes, Ofcom Chief Executive

Notes to editors

  1. We commissioned bespoke consumer research to inform our approach to VSP regulation. Fieldwork for this research was conducted between September and October 2020, and findings relate to the three months prior to interview. We have also published two reports from leading academics: one from The Alan Turing Institute, covering online hate; and one from the Institute of Connected Communities at the University of East London, on the protection of minors online.
  2. Ofcom’s video-sharing platform framework: a guide for industry
