Technology companies and child safety organizations will receive permission to assess whether AI systems can generate child exploitation material under new UK laws.
The announcement coincided with findings from a safety watchdog showing that cases of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Under the changes, the authorities will allow approved AI developers and child safety organizations to inspect AI systems – the foundational technology for chatbots and image generators – and verify they have adequate safeguards to stop them from producing depictions of child sexual abuse.
"Fundamentally about preventing abuse before it happens," stated Kanishka Narayan, adding: "Experts, under strict conditions, can now detect the risk in AI systems early."
The changes were introduced because it is illegal to create and possess CSAM, meaning AI developers and others cannot generate such images even as part of a testing process. As a result, authorities have had to wait until AI-generated CSAM appeared online before they could act on it.
The legislation aims to prevent that problem by helping to stop the creation of such material at its source.
The amendments are being introduced to the crime and policing bill, which also establishes a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Recently, the official visited the London headquarters of a children's helpline and listened to a simulated call to counsellors involving an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about children facing extortion online, it is a source of extreme anger in me and justified anger amongst families," he stated.
A leading online safety organization said that instances of AI-generated abuse material – each of which can be a webpage containing multiple images – had risen sharply so far this year.
Instances of the most severe category of material increased from 2,621 images or videos to 3,086.
The law change could "represent a vital step to guarantee AI tools are secure before they are released," commented the head of the internet monitoring foundation.
"AI tools have made it so survivors can be victimised all over again with just a simple actions, giving criminals the ability to create potentially limitless amounts of advanced, lifelike child sexual abuse material," she added. "Material which additionally commodifies survivors' suffering, and makes children, particularly female children, more vulnerable on and off line."
The children's helpline also published details of counselling sessions in which AI was mentioned, including the AI-related risks raised by children.
Between April and September this year, the helpline conducted 367 support sessions in which AI, chatbots and related topics were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and of AI therapy apps.