Tech firms and child safety organizations will be granted permission to evaluate whether AI tools can generate child abuse material under recently introduced UK laws.
The announcement came as a safety watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, the authorities will allow approved AI developers and child protection organizations to examine AI systems – the underlying technology for chatbots and image generators – and verify they have adequate protective measures to prevent them from producing depictions of child sexual abuse.
The move is "fundamentally about stopping exploitation before it occurs," declared Kanishka Narayan, who added: "Experts, under rigorous protocols, can now detect the danger in AI models early."
The changes have been introduced because producing and possessing CSAM is illegal, meaning that AI developers and other parties cannot generate such images even as part of a testing process. Previously, officials had to wait until AI-generated CSAM appeared online before acting on it.
This legislation aims to avert that problem by enabling experts to stop the production of those images at their origin.
The government is introducing the amendments as revisions to criminal justice legislation, which also establishes a prohibition on possessing, creating or sharing AI models developed to generate exploitative content.
Recently, the minister toured the London base of a children's helpline and listened to a mock-up call to advisers involving a report of AI-based abuse. The interaction depicted a teenager seeking help after being extorted with an explicit deepfake of themselves, constructed using AI.
"When I learn about children experiencing blackmail online, it is a source of intense frustration for me and of justified anger amongst parents," he stated.
A leading internet monitoring organization stated that cases of AI-generated exploitation content – such as webpages that may contain numerous images – had significantly increased so far this year.
Cases of the most severe material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
The law change could "constitute a vital step to guarantee AI products are secure before they are released," commented the head of the online safety foundation.
"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few clicks, giving offenders the ability to make possibly endless amounts of advanced, lifelike exploitative content," she continued. "Material which additionally commodifies survivors' suffering, and renders children, especially girls, more vulnerable both online and offline."
The children's helpline also published details of counselling sessions in which AI was mentioned. Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated terms came up, significantly more than in the same period last year. Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and of AI therapy apps.