UK Technology Companies and Child Safety Officials to Test AI's Capability to Create Exploitation Content

Technology companies and child protection organizations will be granted authority to assess whether artificial intelligence systems can generate child exploitation material under recently introduced British laws.

Substantial Increase in AI-Generated Harmful Content

The announcement coincided with revelations from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will allow approved AI companies and child protection groups to inspect AI models – the underlying systems for conversational AI and visual AI tools – and ensure they have adequate safeguards to prevent them from producing depictions of child exploitation.

"Ultimately about stopping abuse before it happens," stated Kanishka Narayan, noting: "Experts, under rigorous protocols, can now identify the risk in AI models promptly."

Addressing Legal Challenges

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties could not create such content as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

The law is aimed at averting that issue by enabling approved testers to halt the production of such images at the source.

Legislative Framework

The amendments are being introduced by the government as modifications to the Crime and Policing Bill, which also establishes a prohibition on owning, creating or sharing AI models designed to generate exploitative content.

Practical Impact

Recently, the official toured the London headquarters of Childline and listened in on a simulated call to counsellors involving an account of AI-based exploitation. The call depicted an adolescent seeking help after being extorted with a sexualised deepfake of themselves, constructed using AI.

"When I learn about children facing blackmail online, it is a source of intense anger in me and justified anger amongst parents," he stated.

Alarming Statistics

A leading online safety organization reported that cases of AI-generated abuse material – where a single reported webpage may contain multiple files – had significantly increased so far this year.

Instances of category A material – the gravest form of abuse – rose from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, making up 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The law change could "constitute a crucial step to guarantee AI tools are safe before they are released," stated the head of the online safety organization.

"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few clicks, providing offenders the ability to make possibly limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Content which further commodifies survivors' suffering, and renders children, particularly female children, more vulnerable both online and offline."

Support Interaction Data

The children's helpline also published data on support interactions in which AI was referenced. AI-related risks mentioned in the conversations include:

  • Employing AI to evaluate weight, body and looks
  • AI assistants discouraging children from consulting trusted guardians about harm
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated pictures

Between April and September this year, Childline conducted 367 counselling interactions in which AI, conversational AI and associated topics were mentioned, significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 sessions concerned mental health and wellbeing, including the use of chatbots for support and AI therapy applications.

Sandra Hill