Use of AI in online content moderation
In recent years a wide-ranging, global debate has emerged around the risks faced by internet users, with a specific focus on protecting users from harmful content. A key element of this debate has centred on the role and capabilities of automated approaches, driven by Artificial Intelligence and Machine Learning techniques, in enhancing the effectiveness of online content moderation and offering users greater protection from potentially harmful material. These approaches may have implications for people’s future use of, and attitudes towards, online communications services. They may also apply more broadly: to developing new techniques for moderating and cataloguing content in the broadcast and audiovisual media industries, and to back-office support functions in the telecoms, media and postal sectors.
Ofcom has commissioned Cambridge Consultants to produce this report as a contribution to the evidence base on people’s use of, and attitudes towards, online services, and to inform the wider debate on the risks faced by internet users.