
Helping to tackle fraud under the new online safety regime

Published: 12 February 2024
Last updated: 12 February 2024

Fraud is a real source of consumer harm, and criminals often use online services to target their victims.

For International Fraud Awareness Week, policy managers Kate Engles and Hannah Green from Ofcom’s Online Fraud team highlight the new responsibilities that online firms will have to tackle and deter fraud on their services, under Ofcom’s remit as the regulator for online safety.

Fraud is the most commonly experienced and frequently reported crime in the UK and accounts for 40% of all reported crime incidents in England and Wales. Looking specifically at the online space, according to Action Fraud 83% of all fraud reported nationally over the last year was cyber-enabled, with a significant proportion of reports referencing social media and encrypted messaging platforms.

Our own research found that around nine in ten online adults in the UK have come across content they suspected to be a scam or fraud, with over a quarter losing money as a result. But it’s not just about financial loss; more than a third said that the experience had a negative impact on their mental health.

Fraudsters rapidly adapt, exploiting new technologies and the fact that we increasingly live our lives online. Over the past decade, the growing use of social media, online messaging and search services has fuelled the rise of online tactics such as investment scams and romance fraud. Generative AI and ‘deep fakes’ also present new tools for criminals to trick online users into losing money or giving away their personal details.

New online safety rules will make it harder for fraudsters to operate online

There’s no simple solution to tackling a complex problem like fraud, but the new online safety rules will be an important part of making it harder for fraudsters to operate online. Online services will now be required to assess the risk of their users being harmed by illegal content on their platforms. Their new duties include looking at the risks of fraud and financial services offences being carried out by criminals, taking appropriate steps to protect their users, and removing illegal content when they identify it or are told about it.

Last week, we published a consultation that sets out how we assess the online risks of fraud and other illegal harms, how companies should measure and reduce those risks, and how we’ll take action against those who fall short. We also published a set of draft codes and guidance that specify what online services can do in relation to search results and content that is generated or uploaded by users to comply with the new rules. In these documents we have proposed some initial targeted steps to combat fraud.

Among the recommended measures for large services at medium or high risk are:

  • Automatic keyword search: Services would use keyword detection to identify content containing terms strongly associated with the sale of stolen personal and financial information, disrupting the criminals who trade in such data.
  • Streamlined expert reporting: A dedicated reporting channel for fraud will disrupt scammers by allowing expert bodies with specific experience in investigating fraud to report content directly to online services, as well as to law enforcement, government departments and regulators like the Financial Conduct Authority.
  • Verified accounts: Clear internal policies for notable user verification and paid-for user verification schemes, and improved public transparency for users about what verified status means in practice. This will help to disrupt fraudulent activity involving impersonating celebrities, companies and government bodies.

These proposals are part of a wider collective effort to combat fraud

In February 2022, we explained our role in tackling scam calls and texts. With the passage of the Online Safety Act, we have new responsibilities to help make online services safer for all users, including overseeing how these services fulfil their duties about tackling fraud.

But it’s not just about user-generated content. The Online Safety Act imposes additional duties on some larger “categorised” services to tackle paid-for fraudulent advertising, which we will address in “phase three” of our work. Early next year we plan to publish a call for evidence to support our development of the code of practice which will underpin these duties.

We are not the only body with a key role in tackling fraud. Earlier this year the Home Office published a strategy to reduce fraud. It is essential that we work closely with other public bodies that have statutory responsibilities to investigate and prevent fraud, alongside industry and consumer bodies. Our targeted duties and powers relating to online services and telecoms providers will not capture every scenario where scams surface. But our new online duties will enable us, for the first time, to hold online services to account for the risks posed by online scammers exploiting their platforms.

Share your views on our proposals

We’re consulting with experts, industry and the public on the approach we plan to take. Parliament will then review our industry codes of practice next year, before they come into force.

If you have views about how the Online Safety Act should be implemented, including the proposals relating to fraud, please read and respond to our consultation on protecting people from illegal harms online. The deadline for responses is 5pm on Friday 23 February 2024.
