British Technology Companies and Child Protection Agencies to Test AI's Capability to Create Exploitation Images

Tech firms and child protection organizations will receive authority to evaluate whether AI tools can generate child abuse images under recently introduced UK laws.

Significant Rise in AI-Generated Illegal Material

The announcement coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the authorities will allow designated AI developers and child safety organizations to inspect AI models – the underlying systems behind chatbots and visual AI tools – and ensure they have sufficient safeguards to stop them from creating images of child sexual abuse.

"This is fundamentally about preventing abuse before it occurs," stated the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now identify the danger in AI systems promptly."

Tackling Legal Challenges

The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot generate such content as part of a testing process. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it. The legislation aims to avert that problem by helping to stop the creation of such images at source.

Legal Structure

The amendments are being introduced by the government as modifications to criminal justice legislation, which is also implementing a ban on possessing, creating or sharing AI systems designed to create child sexual abuse material.

Real-World Impact

Recently, the official visited the London base of Childline and listened to a simulated call to advisors featuring a report of AI-based abuse. The call portrayed an adolescent seeking help after facing extortion using an explicit AI-generated image of themselves.

"When I hear about children experiencing extortion online, it is a source of extreme anger for me and of rightful concern amongst parents," he stated.

Concerning Statistics

A leading internet monitoring foundation reported that cases of AI-generated exploitation content – such as online pages that may contain multiple files – had significantly increased so far this year. Instances of the most severe material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.

- Female children were overwhelmingly targeted, making up 94% of illegal AI images in 2025
- Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "represent a vital step to guarantee AI products are secure before they are launched," commented the chief executive of the online safety foundation.

"Artificial intelligence systems have made it possible for survivors to be targeted repeatedly with just a few simple actions, giving offenders the ability to create potentially limitless quantities of sophisticated, lifelike exploitative content," she added.

"Material which further commodifies victims' trauma, and makes young people, especially female children, less safe both online and offline."

Counseling Session Information

Childline also published details of support interactions where AI has been mentioned.
AI-related harms discussed in the sessions include:

- Using AI to evaluate body size and appearance
- Chatbots discouraging children from consulting safe guardians about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-faked images

Between April and September this year, the helpline conducted 367 counselling interactions in which AI, conversational AI and related topics were discussed, significantly more than in the same period last year. Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapeutic applications.