31 January 2024

Search engines can act as one-click gateways to self-harm and suicide content

  • One in five ‘self-injury’ search results glorifies, celebrates or offers instruction about harmful behaviour
  • Image searches return the highest proportion of harmful or extreme results
  • Cryptic self-injury search terms make detection and moderation challenging

Content that glorifies or celebrates self-injury is widely available via Internet search engines, Ofcom warns today.

Research carried out for Ofcom by the Network Contagion Research Institute reveals the extent to which major search engines – Google, Microsoft Bing, DuckDuckGo, Yahoo! and AOL – can act as gateways to harmful self-injury-related web pages, images and videos.

The researchers entered common search queries for self-injurious content, as well as cryptic terms typically used by online communities to conceal their real meaning. They analysed over 37,000 result links returned by the search engines.

The study found that, across the five main search engines:

  • Harmful self-injury content is prevalent. One in every five (22%) results linked, in a single click, to content which celebrates, glorifies, or offers instruction about non-suicidal self-injury, suicide or eating disorders. Nineteen per cent of the very first links on page one of the results led to content promoting or encouraging these behaviours, rising to 22% across the top five page-one results.
  • Image searches carry particular risk. They delivered the highest proportion of harmful or extreme results (50%), followed by web pages (28%) and video (22%). Previous research has shown that images can be particularly likely to inspire acts of self-injury, and it can be hard for detection algorithms to distinguish between visuals glorifying self-harm and those shared in a recovery or medical context.
  • Cryptic search terms reveal more harmful content. People are six times more likely to find harmful content about self-injury when entering deliberately obscured search terms, a common practice among online communities. Both the specificity and the evolving nature of these terms pose significant detection challenges for services.[1]
  • Help, support and educational content is signposted. One in five (22%) search results were categorised as ‘preventative’, linking to content focused on getting people help – such as mental health services or educational material about the dangers of self-injury.

“Search engines are often the starting point for people’s online experience, and we’re concerned they can act as one-click gateways to seriously harmful self-injury content.

“Search services need to understand their potential risks and the effectiveness of their protection measures – particularly for keeping children safe online – ahead of our wide-ranging consultation due in the spring.”

Almudena Lara, Online Safety Policy Development Director

Protecting children from harmful content

Some search engines offer safety measures, such as ‘safe search’ settings and image blurring, designed to restrict users’ exposure to inappropriate content. The researchers did not use these features in our study.

Search services must act to ensure they are ready to fulfil their requirements under the Online Safety Act. Specifically, they will have to take steps to minimise the chances of children encountering harmful content on their service – including content that promotes self-harm, suicide and eating disorders.

By exploring children’s potential pathways to harm via search services, today’s report forms an important part of Ofcom’s evidence base, informing our understanding of the risks children face online.

In the spring, we will consult on our Protection of Children Codes of Practice. These Codes will set out the practical steps search services can take to adequately protect children.

Notes to editors:

1. ‘Data voids’ are a pervasive and challenging issue in this context. The term refers to situations where search demand for certain keywords is not met with reliable or safe information, owing to the relative obscurity of the search terms or phrases used. Searches using cryptic language may therefore lead to more harmful content: algorithms aim to return relevant results, but lack safe and accurate information to fill these gaps.
