The Online Safety Act 2023: a primer

A long time in the making, the Online Safety Act finally received Royal Assent on 26 October 2023. According to the accompanying Government press release, the Act “places legal responsibility on tech companies to prevent and rapidly remove illegal content” and aims “to stop children seeing material that is harmful to them”. So what are the main implications of the new legislation, and how will it affect online businesses?

The key provisions of the Act

There are three overarching provisions of the Act, which seek to tackle:

  • Certain illegal content – terrorism offences (Schedule 5), child sexual exploitation and abuse offences (Schedule 6), and priority offences (Schedule 7) which include: assisting suicide, threats to kill, human trafficking and sexual exploitation, supply of drugs or firearms and weapons, assisting illegal immigration, harassment and stalking, blackmail involving sexual imagery, fraud and financial crime.
  • Content that is lawful but nevertheless “harmful to children” – eg pornography and the promotion of suicide, self-harm or eating disorders.
  • Fraudulent advertising – for the largest service providers.

Social media platforms and search engines – and other online service providers which host user generated content on their platforms – will be required to take steps to ensure they abide by the Act. Alongside providing robust reporting mechanisms to help users flag illegal content, one of the notable new obligations will be the implementation of age verification systems to minimise the risk of children accessing adult material and other harmful content.

Failure to comply with the Act can result in fines of up to £18 million or 10% of global annual revenue (whichever is higher) and even custodial sentences for senior executives.

The Online Safety Act also introduces several new criminal offences (Part 10):

  • Sending false information with the intention of causing “non-trivial psychological or physical harm to a likely audience”;
  • Sending messages which threaten death, rape and other serious sexual assault, or financial loss;
  • Sending or showing flashing images (literal flashing images which can trigger epileptic seizures, as opposed to “flashing” of genitalia);
  • Sending unwanted images of genitalia (aka “flashing”) – even if they are computer generated, ie “deepfakes”;
  • Encouraging or assisting serious self-harm; and
  • Sharing or threatening to share intimate photographs or videos without consent (eg blackmail).

Who is covered by the Act?

The creation of new criminal offences aside, the Act is primarily targeted at online service providers – referred to as “user-to-user services” – which allow user generated content to be shared on their platforms. Social media companies most clearly fall within the ambit of the Act, provided they are open to internet users in the UK, and search engines are also included.

However, although the main in-scope services will be the Silicon Valley behemoths, small-scale online companies and even individuals who operate personal websites are potentially covered by the Act if:

  • users are able to generate and/or share content with other users (eg online forums); or
  • pornographic content is being hosted.

The largest online providers and those hosting pornographic content will have the most onerous responsibilities under the Act.

But there are certain important exclusions (Schedule 1, Part 1) – notably comments sections. Since a great deal of “toxic” online content comprises below-the-line discussions, this is arguably a rather curious omission. Other exclusions relate to journalistic content on news sites.

Who will enforce the Act?

Ofcom has been granted regulatory powers to ensure compliance with the provisions of the Online Safety Act. However, it will not be responsible for determining how to deal with individual content issues; instead it is tasked with overseeing the compliance of regulated services.

It will be able to investigate potential breaches of the Act, with powers to obtain any relevant information, including by requiring an interview with a relevant person at the regulated service provider. Additionally, it has powers of entry and inspection of business premises, and it can compel a business to obtain an independent report.

There will be scope to appeal Ofcom decisions under the Act to the Upper Tribunal, and determinations could also be challenged by way of judicial review (Part 8).

An administrative fee payable to Ofcom (Section 84) may be imposed on the largest (by global revenue) regulated services.

Next steps

The majority of the provisions come into force within two months of the Act receiving Royal Assent – so by 26 December 2023. However, Ofcom has started a series of four consultations – the first of which closes on 23 February 2024 – which will help to establish a final set of regulations and guidance over the course of around 18 months (by approximately May 2025).

Although finalised best practice guidance may take several months to emerge, online companies can start preparing by taking the following steps:

  • Carry out a risk assessment to determine if they fall within scope of the Act.
  • Provide robust reporting mechanisms for all users.
  • Take particular care to moderate content if their services are targeted towards children.
  • Implement age verification processes if their content is targeted at adults.

One highly contentious aspect of the Act relates to the scanning of encrypted content, referred to in the text of the Act as the use of “accredited technology” (Section 121) – which several technology companies, such as email provider Proton, have pointed out does not currently exist. How Ofcom will tackle this part of the Act remains to be seen – but the Government itself has conceded that the clause cannot be implemented unless it is “technically feasible”.

Will litigation prove more effective?

Whilst we wait to see the effects of the Online Safety Act, across the pond hundreds of families and school districts are engaged in a class action lawsuit against four of the biggest tech companies (Meta, TikTok, Google and Snapchat), claiming that their platforms are harmful by design. Similar US litigation almost resulted in a landmark case being heard on grounds of product liability – which would potentially have dented the armour of Section 230 (which allows social media companies to evade responsibility for user generated content) – before a settlement was reached and the offending website was shut down.

It will be interesting to see if litigation proves more effective than regulation in terms of taming big tech and tackling the negative aspects of social media.

Further reading

Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.
