Ofcom has set out how we are supporting the safe innovation and use of artificial intelligence across the sectors we regulate, and how we are using AI to streamline the way we work.
Smarter communications
The industries we regulate have technology and innovation at their heart. As technologies evolve, new opportunities emerge that have the potential to drive better outcomes for consumers and businesses. For example:
- Online platforms use automated content moderation to identify harmful content at scale and with greater speed, helping improve safety for their users.
- Broadcasters use AI to generate real-time captions, translate content into multiple languages, and provide automated dubbing and audio descriptions.
- Telecoms companies use AI to help keep their networks secure, and in future they may also use it to enhance network management.
- Spectrum allocation could be optimised to reduce network congestion and improve efficiency, delivering a better service for consumers.
- Postal companies could further optimise delivery routes, saving money, cutting carbon emissions, and improving reliability and quality of service for consumers (a simple illustration of route optimisation follows this list).
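To make the route-optimisation idea concrete, here is a minimal sketch of a classic nearest-neighbour heuristic. The depot, stops and coordinates are invented for illustration; real delivery planners use far more sophisticated solvers and live operational data, and nothing here reflects any particular operator's system.

```python
# A minimal route-optimisation sketch: the nearest-neighbour heuristic.
# All coordinates are hypothetical; this is illustrative only.
import math

def nearest_neighbour_route(depot, stops):
    """Order stops by repeatedly visiting the closest unvisited one."""
    route, remaining, current = [depot], list(stops), depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # return to the depot at the end of the round
    return route

def route_length(route):
    """Total Euclidean length of an ordered route."""
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

depot = (0.0, 0.0)
stops = [(2.0, 3.0), (5.0, 1.0), (1.0, 7.0), (6.0, 6.0)]  # invented stops
route = nearest_neighbour_route(depot, stops)
print(f"route: {route}")
print(f"length: {route_length(route):.2f}")
```

Greedy nearest-neighbour is not optimal, but it captures the core idea: even a simple ordering heuristic can cut total distance, and hence fuel, cost and emissions, relative to an unplanned route.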
In general, our regulation is technology-neutral, which means regulated companies are essentially free to deploy AI as they see fit, without needing our permission. This helps enable faster innovation and growth.
That said, while AI affords new opportunities and benefits for businesses and consumers, it is important for Ofcom to stay ahead of any associated risks and take action to mitigate them.
Supporting innovation
Encouraging and promoting economic growth is built into Ofcom’s duties, and we are working on a range of initiatives that support AI innovation to help achieve this. These include:
- Creating safe spaces to experiment with technology. Together with Digital Catapult, Ofcom runs SONIC Labs, which provides an interoperable (‘Open RAN’) test bed for mobile network equipment vendors to explore the use of AI in mobile networks.
- Providing large data sets to help train and develop AI models. Our data can be used to train AI models and improve their outputs. For example, our unique, large data sets on how spectrum is used in the UK have enabled academia and industry to develop state-of-the-art AI models for spectrum use cases.
- Collaborating with other regulators to promote regulatory alignment. For example, we work with the CMA, ICO and FCA through the Digital Regulation Cooperation Forum to understand new AI applications such as agentic AI.
Mitigating risks
While both industry and consumers benefit from AI deployment, the risks that AI creates or exacerbates fall primarily on consumers.
These risks can cause serious harm to individuals, especially online. For example, two in five UK internet users aged 16+ say they have seen a deepfake, and among those, one in seven say they have seen a sexual deepfake.
Of those who say they have seen a sexual deepfake, 15% say it was of someone they know, 6% say it depicted themselves, and 17% thought it depicted someone under the age of 18.
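Putting the two survey figures together gives a rough sense of scale. Assuming the ‘one in seven’ is measured among those who reported seeing any deepfake, as the wording suggests, the implied share of all UK internet users aged 16+ who say they have seen a sexual deepfake is, as a back-of-envelope combination rather than an Ofcom-published figure:

```latex
% Back-of-envelope combination of the two survey figures above
% (not an Ofcom-published statistic).
\[
  \underbrace{\frac{2}{5}}_{\text{saw any deepfake}}
  \times
  \underbrace{\frac{1}{7}}_{\text{of those, a sexual one}}
  = \frac{2}{35} \approx 5.7\%
\]
```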
To tackle deepfakes and a range of other serious online harms, we are implementing, and starting to enforce, the UK’s Online Safety Act. Our ‘safety by design’ rules mean platforms should take down illegal content created by AI and assess the risks of any changes they make to their services. These rules will help create a safer life online for all UK users, especially children, while ensuring that tech firms keep the flexibility and freedom to innovate.
How Ofcom is using AI
We are harnessing AI to reduce the burden on the organisations and individuals we regulate or engage with. We have more than 100 technology experts, including around 60 AI specialists, in our data and technology teams, many with direct experience of developing AI tools.
We are carrying out more than a dozen trials of AI in our own work, aimed at increasing our productivity, improving our processes and generating efficiencies. These include using everyday third-party GenAI applications as well as creating GenAI-based applications in-house. For example:
- Streamlining the translation of broadcast content in response to complaints, using an AI translator in conjunction with our broadcast recording service. This has allowed us to redeploy resources to other priorities and has reduced translation costs.
- Developing a customised text-summarisation tool that analyses large volumes of consultation responses, helping us find patterns and themes more quickly and efficiently (a minimal sketch of this kind of theme-finding follows this list).
- Using AI to improve spectrum planning, which has huge potential to increase the amount of data that can be transmitted over a given bandwidth, especially in built-up areas using high frequencies (see the background note after this list).
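As an illustration of the kind of theme-finding described above, the sketch below clusters a handful of invented consultation responses using off-the-shelf TF-IDF vectorisation and k-means from scikit-learn. This is a generic approach, not Ofcom’s actual tool, and the sample responses are hypothetical.

```python
# A minimal theme-finding sketch over consultation responses using
# TF-IDF + k-means (scikit-learn). Generic approach, invented data;
# not Ofcom's actual summarisation tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "Stronger age checks are needed to protect children online.",
    "Age verification should be mandatory for adult content.",
    "Rural broadband coverage remains poor and needs investment.",
    "Please prioritise fibre rollout in rural areas.",
]

# Turn free-text responses into TF-IDF vectors, then cluster them.
vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(responses)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Report the top terms characterising each theme cluster.
terms = vectoriser.get_feature_names_out()
for i, centre in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[j] for j in centre.argsort()[::-1][:3]]
    print(f"theme {i}: {', '.join(top_terms)}")
```

At scale, the same idea, vectorise then group, lets analysts triage thousands of responses before reading the most representative ones in full.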
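For background on the bandwidth point above: the theoretical ceiling on the data rate achievable over a given bandwidth is set by the Shannon–Hartley theorem, so smarter spectrum planning works by improving the usable bandwidth and signal-to-noise ratio within that limit. This is standard textbook context rather than anything specific to Ofcom’s trials:

```latex
% Shannon–Hartley channel capacity: C is the maximum reliable data rate
% (bit/s), B the channel bandwidth (Hz), S/N the signal-to-noise ratio.
\[
  C = B \log_2\!\left(1 + \frac{S}{N}\right)
\]
```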
Over the next year, we plan to accelerate the use of AI across our policy areas as appropriate, adopting a safety-first approach. In practice, this means continuing to trial AI tools and only rolling them out across the organisation once we are confident they are safe and secure.