8 June 2023

What generative AI means for the communications sector

Generative AI – which includes tools like ChatGPT and Midjourney – has gone from being a relatively unknown technology to a topic that dominates daily headlines across the globe. Benedict Dellot, Anna-Sophie Harling and Jessica Rose Smith from Ofcom’s Technology Policy and Online Safety Policy Development teams discuss how Ofcom is responding to these developments.

To get a sense of just how quickly the generative AI world is moving, we need only look at the number of new models released every week, or the amount of money flowing into AI startups in recent months. The best-known of these, ChatGPT, amassed over 100 million users within two months of its release. Analysts reported that it was the fastest-growing consumer internet app, comparing it with TikTok, which took nine months to reach 100 million users, and Instagram, which took over two years.

So, what does this mean for the future of the communications sector?

Transforming the communications sector

Whether you believe that generative AI has the potential to change the world for good, or that it poses more risks than benefits, most experts agree it is likely to have a significant impact on the future of our economy and society as a whole.

This is certainly true for the communications industries. From telecoms security to broadcast content, and from online safety to spectrum management, generative AI promises to disrupt traditional service delivery, business models and consumer behaviour.

Many of these changes could be beneficial. Generative AI models can be used in the production of TV content, enhancing producers’ ability to create compelling visual effects. Likewise, in the field of online safety, researchers are examining how generative AI could be used to create new datasets – also known as synthetic training data – to improve the accuracy of safety technologies. In telecoms security, generative AI can flag potentially malicious activity by identifying anomalies on a network, thereby helping to protect data and online assets.
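The anomaly-detection idea above can be sketched in a few lines of code. This is purely illustrative, not a description of any Ofcom or industry system: it uses a classical detector (scikit-learn’s Isolation Forest) rather than a generative model, and the traffic features and figures are invented for the example.

```python
# Illustrative only: learn what 'normal' traffic looks like, then flag deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-connection features: packets/sec, mean packet size (bytes),
# and the number of distinct destination ports contacted.
normal_traffic = np.column_stack([
    rng.normal(50, 10, 1000),
    rng.normal(800, 120, 1000),
    rng.poisson(3, 1000),
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of tiny packets fanned out across many ports: scan-like behaviour.
suspicious = np.array([[900.0, 60.0, 250.0]])
print(detector.predict(suspicious))  # -1 flags an anomaly; 1 means normal
```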

"From telecoms security to broadcast content, generative AI promises to disrupt traditional service delivery, business models and consumer behaviour"

However, the use of generative AI also poses risks. Voice clones created by generative AI tools could be used to scam people over the phone by impersonating loved ones. Fraudsters could also use generative AI models to create more convincing phishing content. And generative AI could pose risks to users of online services, for instance by making it easier to access instructions for self-harm or advice on smuggling illegal substances.

Generative AI models could also be used to create ‘fake’ news and media, which can spread quickly online, creating challenges for broadcast journalists seeking to authenticate footage from online sources. A related concern is that these tools might inadvertently produce inaccurate news content, or serve up content that is biased towards one or other political persuasion – which could undermine efforts to create a pluralistic news ecosystem online.

How Ofcom is working on generative AI

Teams across Ofcom are closely monitoring the development of generative AI. Our technical, research and policy teams are undertaking research to better understand the novel opportunities and risks surrounding the development and use of generative AI models across the communications sectors that Ofcom regulates, and the steps that developers and other industry players are taking to mitigate those risks.

We are exploring how Ofcom can maximise the benefits of this technology for the communications industries, making sure consumers and organisations across our sectors can benefit from its transformative potential, while also being protected from any harms it poses.

What we're doing

  • Working with companies that are developing and integrating generative AI tools which might fall within the scope of the Online Safety Bill, to understand how they are proactively assessing the safety risks of their products and implementing effective mitigations to protect users from potential harms.
  • Monitoring the impact of new technologies – including generative AI and augmented and virtual reality – on people’s media literacy.
  • Publishing information for our regulated sectors on what generative AI might mean for them and their responsibilities to their customers and users. This includes advice to UK broadcasters in a recent bulletin, explaining how the use of synthetic media is subject to the Broadcasting Code.
  • Reviewing the evidence surrounding detection techniques that could be used to distinguish between real and AI-generated images and video. We are also exploring the role that transparency can play in indicating whether content was produced by a human or a generative AI model – the Content Authenticity Initiative standard is a good example (see the sketch after this list).
  • Participating in international think-tank discussions on AI regulation, and in a multilateral expert group helping to shape emerging best practice for the ethical use of AI in journalism. We are also working with fellow broadcasting regulators on how to incorporate information about AI into media literacy policy.
  • Continuing to build our understanding of generative AI – including organising a generative AI ‘tech week’ that brought together external speakers to discuss developments in the technology as well as measures for mitigating its risks.
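To make the transparency point in the fourth bullet above more concrete, the sketch below shows the basic provenance idea behind standards such as the Content Authenticity Initiative’s: a signed ‘manifest’ records how a piece of content was made and is bound to the content’s bytes, so any edit breaks the seal. Real implementations embed certificate-based signatures in the asset itself; the HMAC key, field names and functions here are hypothetical simplifications.

```python
# Toy provenance check: bind a signed manifest to an asset's bytes.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-publisher-key"  # stands in for a real private key

def make_manifest(asset: bytes, generator: str) -> dict:
    """Describe how the asset was made, then sign the description."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "generator": generator,  # e.g. "camera" or "generative-ai-model"
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(asset: bytes, manifest: dict) -> bool:
    """True only if the manifest is untampered and matches the asset."""
    claims = dict(manifest)
    signature = claims.pop("signature")
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claims["asset_sha256"] == hashlib.sha256(asset).hexdigest())

image = b"...image bytes..."
manifest = make_manifest(image, "generative-ai-model")
print(verify(image, manifest))            # True: provenance intact
print(verify(image + b"edit", manifest))  # False: content was altered
```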

We are also aligning our efforts with our digital regulator partners through the Digital Regulation Cooperation Forum. As part of this, we will be hosting a number of discussions, both internal and external, over the coming months to share our respective research on generative AI and identify opportunities for further collaboration.

Our next steps

We are pleased to see many stakeholders across our sectors undertaking work to realise the benefits of generative AI while minimising the potential risks.

When companies and service providers integrate generative AI models into their products and services, we expect them to consider the risks and potential harms that might arise, and to think about what systems and processes they could deploy to mitigate those risks. Transparency – about how these tools work, how they are used and integrated into services, and what steps have been taken to build in protections from harm – is likely to be critical to building confidence that risks can be minimised while allowing users to enjoy the benefits generative AI can provide.

Ofcom welcomes continued engagement from those developing generative AI models as well as those who are incorporating generative AI into their services and products as we consider these issues.
