Online safety: our research agenda

15 April 2024

This agenda sets out Ofcom's areas of interest for future research in the online safety space.

By publishing it, we hope to encourage interested academics and researchers to consider how best to achieve our shared research goals.

Online safety: our research agenda (PDF, 443.9 KB)

Online safety research agenda, Welsh-language version (PDF, 464.1 KB)

Our agenda has four main themes: user activity and behaviour; online risk and harm; service design; and safety measures and technology.

User activity and behaviour

We prioritise understanding what people do online, along with their attitudes towards and experiences of being online. Learning more about how a person's characteristics, such as their age and gender, impact their online experiences and behaviour can help us take policy decisions that make life safer online for people in the UK.

Children's online experiences

Ensuring that children in the UK can live a safer life online is core to the Online Safety Act, so it is a focus for our research too. While we already invest in a great deal of research in this space, it is important that our evidence base stays up to date. When conducting research with children, we must maintain high standards of safeguarding and ethical practice while still being able to accurately assess what children do online.

Areas of particular interest include:

  • Methodologies for understanding what content children are being exposed to online, where, and how frequently
  • Ways we can measure the cumulative impact of harmful content on children, and their reactions and responses to it
  • Children’s interaction within services designed for children’s use only ('walled gardens') and within private spaces such as group chats
  • Methodologies for understanding the relationship between online activity and children’s wellbeing, including repeated exposure to harmful content

Vulnerable users' online experiences

Developing our understanding of the needs and experiences of more vulnerable people online can help us better address their safety needs. We are particularly interested in how certain user characteristics may indicate increased vulnerability online, and how approaches to user safety can best reflect this.

Areas of particular interest include:

  • User characteristics (like neurodiversity and language barriers) that may make a person more vulnerable online, and the online spaces where such users are particularly vulnerable
  • How safety measures can be tailored to serve the needs of different groups of vulnerable users more effectively

Behavioural insights

Behavioural insights help us understand how consumers and businesses behave, and how people make decisions. We use these insights to inform our policy-making, improve services, and ultimately deliver better outcomes for users and citizens.

We are interested in the interaction between how services are designed, how users behave, and how harm manifests online, as well as what drives and influences the behaviours of the businesses we regulate.

Areas of particular interest include:

  • Factors which shape adoption, among different demographics, of emerging safety technologies
  • Design features that can be effective in increasing informed choice or empowering users to shape their online experiences
  • Approaches to evidencing the medium-to-long term impact of design features on user behaviour (for example, the effect of repeat exposure to alert warnings or repeated prompts to update content controls)
  • Design features and preventative interventions that affect more complex behaviours (for example, high-risk contact that moves across platforms or risky browsing behaviour across multiple platforms)

Online risk and harm

Understanding the nature, causes and impacts of online harm is central to our online safety duties. The Online Safety Act distinguishes between illegal content, such as child sexual abuse material, terrorism and hate content, and content which is not illegal but may be harmful to children, such as pornography and content that encourages or promotes an eating disorder.

While we already have a lot of expertise and evidence about these harms, ensuring that our evidence base is up to date to reflect the latest developments will be a continuous process.

Hate and terror

The use of online services to incite and radicalise vulnerable people, including children, towards hate and violence poses a major risk. It can have severe and far-reaching consequences, including for targeted minorities and protected groups. In this ever-changing space, continuing to build upon our understanding of these harms across the huge range of services in scope is vital to us.

Areas of particular interest include:

  • Future safety measures that could be effective at mitigating the uploading and spreading of hate and terrorist content and activity online
  • Techniques for learning more about the behaviours and characteristics associated with perpetrators of hate speech and terrorist content/activity online
  • Techniques for learning more about the relationship between gaming services and hate speech and extremism

Misinformation and disinformation

Misinformation is one of the most prevalent potential harms encountered by both adults and children online. Ofcom’s duty to promote media literacy includes helping the public understand the nature and impact of mis- and disinformation, and how they can reduce their exposure to it. We have a longstanding duty to promote and research media literacy more broadly.

Areas of particular interest include:

  • Emerging tactics, techniques and procedures for disinformation campaigns, and any emerging actors
  • The prevalence of mis/disinformation and its correlation with significant moments/events, such as political developments or humanitarian crises
  • The means by which online locations become associated with mis/disinformation over time

Fraud

Fraud is the most commonly experienced and frequently reported crime in the UK, with victims of fraud often experiencing both financial loss and a negative impact on their mental health. We know that fraudsters rapidly adapt to exploit new technologies, so it is important that we improve and update our understanding continuously, too.

We'd like to hear about:

  • Differences in the use of online advertisements in fraudulent activity between user-generated content and paid-for advertising (including on search services)
  • Emerging methods that perpetrators use to coerce users into fraud

Violence against women and girls

Women and girls experience disproportionate and distinct forms of harm online. This includes a wide range of complex and interrelated harms intended to threaten, monitor, silence and humiliate women and girls. To meet our duties to protect users' safety and rights, it is important to continue to build our understanding of how these harms manifest, change and adapt to technological advances.

We'd like to hear about:

  • How gender-based abuse manifests online, and how it might be shaped by individual vulnerabilities (such as protected characteristics or being a public figure) or by emerging technologies
  • Potential mitigations for preventing and responding to online gender-based abuse (such as deterrence and safety tools), and the challenges that could arise when implementing them

Child sexual exploitation and abuse

The sexual exploitation and abuse of children (CSEA) online is a persistent and growing threat, with devastating consequences for those affected. New risks are emerging as the way we interact online evolves, and we will continue to work collaboratively with stakeholders to strengthen our evidence base and make the biggest possible impact on the safety of children online.

We'd like to hear about:

  • Understanding the online harms landscape of CSEA (specifically online child sexual exploitation, cross-platform offending, self-generated indecent imagery, gaming, and emerging threats such as extended reality and generative AI)
  • Understanding the impact of online CSEA on victims and survivors, including the impact of emerging harms
  • Evaluating the effectiveness and usability of emerging moderation tools for online CSEA, such as automated content classifiers based on machine learning (ML); a minimal sketch of what evaluating such a classifier involves follows this list
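To make that last point concrete, evaluating an automated classifier typically means comparing its decisions against human-labelled ground truth and reporting metrics such as precision and recall. The sketch below is a minimal, hypothetical Python illustration; the function, labels and data are all invented for this example and do not represent any real moderation system.

```python
# Minimal sketch: scoring an automated content classifier against
# human-labelled ground truth. All data here is invented for illustration.

def precision_recall_f1(ground_truth, predictions):
    """Compute precision, recall and F1 for a binary 'harmful' label."""
    tp = sum(1 for g, p in zip(ground_truth, predictions) if g and p)
    fp = sum(1 for g, p in zip(ground_truth, predictions) if not g and p)
    fn = sum(1 for g, p in zip(ground_truth, predictions) if g and not p)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical labels: True = item judged harmful by human reviewers,
# alongside the classifier's corresponding decisions.
human_labels = [True, True, False, False, True, False, True, False]
model_flags  = [True, False, False, True, True, False, True, False]

p, r, f1 = precision_recall_f1(human_labels, model_flags)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

In a moderation setting the two metrics pull in different directions: low precision means legitimate content is over-removed, while low recall means harmful content is missed, so evaluations generally need to report both.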

Content that is harmful to children

The Online Safety Act sets out certain types of content that are harmful to children; sections 61 and 62 of the Act list this content. While we have developed a strong evidence base on the nature, prevalence and impact of such content, children's online experiences and the content they encounter continue to evolve.

We'd like to hear about:

  • Measuring the impacts of harm from content that is harmful to children as defined in the Online Safety Act
  • Identifying additional new and emerging harms that may not be illegal but could still be harmful to children

Service design

It is important that we keep our understanding of online services up to date so that we can identify emerging functionalities and mitigate their unintended consequences. We must be aware of the monetisation models that shape a service's design, and consider how these models could affect a user's interaction with harmful content and with other users. Maintaining this research will allow us to monitor emerging in-scope services and analyse how their characteristics could affect users.

How online services are designed and function

Understanding the characteristics and functionalities of services, as well as how they develop over time, is central to fulfilling a range of our regulatory duties. We are interested in:

  • emerging types of service;
  • new design features that have the potential to change or influence user experience; and
  • any other service characteristics relevant to online safety.

Areas of particular interest include:

  • The implications of new types of services, such as decentralised and immersive technology services, for user safety and media literacy
  • The techniques available to learn more about the relationship between a service’s functionalities and the risk of harm to its users
  • How different approaches to algorithmic design affect user experience, such as exposure to and engagement with certain kinds of content (a toy illustration follows this list)
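As a toy illustration of that last point, the sketch below compares two hypothetical ranking policies over the same pool of invented posts, one chronological and one engagement-led, and measures how much of each resulting feed is "sensational" content. Every value and assumption here, including the premise that sensational posts attract more engagement, is invented purely for illustration.

```python
import random

random.seed(0)

# Toy data: each post has a recency rank, an engagement score and a flag
# for whether it is "sensational". We assume, for illustration only, that
# sensational posts attract more engagement on average.
posts = []
for i in range(100):
    sensational = random.random() < 0.3
    posts.append({
        "id": i,
        "recency": i,  # lower = newer
        "engagement": random.random() + (0.5 if sensational else 0.0),
        "sensational": sensational,
    })

def top_k(items, key, k=10):
    """Return the k items ranked first under the given sort key."""
    return sorted(items, key=key)[:k]

# Two hypothetical ranking policies applied to the same content pool.
chronological = top_k(posts, key=lambda p: p["recency"])
engagement_led = top_k(posts, key=lambda p: -p["engagement"])

def share_sensational(feed):
    return sum(p["sensational"] for p in feed) / len(feed)

print("chronological feed, sensational share: ", share_sensational(chronological))
print("engagement-led feed, sensational share:", share_sensational(engagement_led))
```

Even a toy model like this shows why ranking objectives are a research question in their own right: the same content pool can produce very different exposure patterns under different algorithmic designs.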

How service providers' business models work

Business models have an important influence on how a service develops over time. We need to stay informed about how these models operate across different industries so that we can understand providers' motivations and anticipate emerging risks.

Areas of particular interest include:

  • Techniques to better understand the relationship between a service provider’s business model and the risk of harm to its users
  • How small and medium-sized enterprises (SMEs) use social media and search platforms for commercial or revenue-generating purposes
  • How services' monetisation policies can affect user safety, and other drivers of investment in user safety

Safety measures and technology

We carry out research to develop our skills and our understanding of trust and safety measures and technologies. It is important that we keep up with the rapid rate of change and innovation in this space. It is also important to continue learning how the design of safety measures can affect their effectiveness at keeping users safe online.

Evaluating safety measures

Assessing whether services' safety measures are effective at reducing the risk of harm to UK users is an important part of the online safety regime. Evaluating safety measures will also help us to understand whether they create unintended effects, positive or negative. This may include assessing the impact of safety interventions on competition and innovation, freedom of expression, privacy, and users' experiences. We are interested in identifying the right metrics and analytical techniques to assess the impact of different types of safety measures, where possible at scale; a simple illustration follows the list below. We also want to explore how these approaches may need to vary according to the type of harm or service studied.

Areas of particular interest include:

  • New or emerging analytical techniques and metrics to support the evaluation of platforms’ user-facing safety measures (e.g. reporting and flagging tools, user empowerment tools)
  • Whether and how safety measures can have unintended effects on user experience, user rights, and innovation and competition, and how such effects can be measured
  • The potential for interventions on the largest services to result in the displacement of harms and users to other and/or smaller services
  • Analytical approaches to assessing at scale the risk of harm to UK users on online services
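As a simple illustration of the kind of analysis this could involve, the sketch below compares weekly harm-report rates before and after the rollout of a hypothetical safety measure, using a plain difference in means. All figures are invented; a real evaluation would need comparison groups or time-series methods to separate the measure's effect from seasonal trends and other confounders.

```python
# Minimal sketch: before/after comparison of weekly harm-report rates
# around the rollout of a hypothetical safety measure. All figures invented.

from statistics import mean

# Hypothetical reports per 10,000 active users, by week.
pre_rollout  = [14.2, 13.8, 15.1, 14.6, 13.9, 14.4]
post_rollout = [12.1, 11.8, 12.5, 11.4, 12.0, 11.7]

effect = mean(post_rollout) - mean(pre_rollout)
pct = 100 * effect / mean(pre_rollout)

print(f"pre-rollout mean:  {mean(pre_rollout):.2f}")
print(f"post-rollout mean: {mean(post_rollout):.2f}")
print(f"estimated change:  {effect:+.2f} ({pct:+.1f}%)")
```

The limits of naive before-and-after comparisons like this are precisely why we are interested in stronger metrics and analytical techniques, ideally ones that work at scale.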

Evaluating safety tech

While the arrival of new technologies brings many opportunities and benefits, it also brings the potential for new or different harms. We need to continually evaluate the impact of new technology and safety tech measures, and to have the right expertise to recommend new measures in the future. We would value efforts from the wider research community to develop new approaches and methodologies that can help us assess technologies and safety tech measures.

Areas of particular interest include:

  • The development of novel methodologies to improve safety and/or assess the effectiveness of new safety tech measures, whether evaluating multi-layered architectures as a whole or in the following areas:
    • Recommender systems
    • Age assurance
    • Privacy enhancing technology
    • Automated content moderation
    • Generative AI
    • ‘Deep fakes’ and synthetic content technology
  • Techniques (including methods, principles and technical metrics) to ethically create and share training data that includes harmful content (see the sketch following this list)
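On that final point: one long-established way to share signals about known harmful material without redistributing the material itself is hash matching, in which organisations exchange digests of known items rather than the items themselves. This addresses detection rather than training-data sharing directly, but the underlying principle, sharing derived signals instead of raw content, is the same. The sketch below is a minimal Python illustration using exact SHA-256 digests; real deployments often use perceptual hashing to catch near-duplicates, which this example does not attempt, and all items here are invented placeholders.

```python
import hashlib

def digest(content: bytes) -> str:
    """Return the SHA-256 hex digest of an item's raw bytes."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared hash list of known harmful items. In practice this
# would come from a trusted shared database, not be built locally.
known_harmful_hashes = {
    digest(b"example harmful item A"),
    digest(b"example harmful item B"),
}

def matches_known_item(upload: bytes) -> bool:
    """Check an upload against the shared hash list without ever
    needing access to the original harmful items."""
    return digest(upload) in known_harmful_hashes

print(matches_known_item(b"example harmful item A"))  # True
print(matches_known_item(b"a benign upload"))         # False
```

Exact digests break on any modification to a file, which is one reason this area calls for research into more robust methods, principles and technical metrics.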

Generative AI

As generative artificial intelligence systems become more sophisticated and adoption of such applications increases, we need our evidence base to keep pace. We must continue to develop our understanding of potential impacts of generative AI on different demographics of users and different types of online harms.

Areas of particular interest include:

  • Techniques to examine the impact of generative AI on different types of harmful content and different groups of users
  • The particular impact generative AI may have on children’s online experiences
  • Techniques to ensure the ethical governance of AI tools and training datasets, for example, mitigation of bias

Parental controls

Our research indicates that parents use a range of methods to engage with their children’s online activity, and that parents’/carers’ and children’s attitudes to these methods are influenced by several factors. Understanding these factors, as well as the effectiveness of parental control tools in practice, is important to providing both parents and children with the support they need online.

Areas of particular interest include:

  • The factors that influence attitudes towards, and adherence to, parental controls among parents and children
  • How parental controls operate alongside other safety measures provided by platforms
  • How to evaluate the effectiveness and any unintended consequences of parental controls

The themes and areas listed in this agenda are not exhaustive. The complex nature of the Online Safety Act – the diversity of content and services it covers – means that we are always looking to broaden our evidence base.

Get involved

We work with academics in different ways, like offering letters of support for projects or co-sponsoring PhD studentships.

Please get in touch with us at academic.engagement@ofcom.org.uk if you'd like to know more. You can also express your interest in researching an area in this agenda by completing our online form.
