Tech Firms to Test Whether AI Models Can Produce Child Abuse Images Under New UK Law
- By Todd Peterson
- 03 Feb 2026
Tech firms and child protection organizations will receive permission to assess whether artificial intelligence systems can produce child abuse images under new British laws.
The announcement came alongside findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, the authorities will allow approved AI developers and child protection groups to examine AI systems – the foundational systems for conversational AI and image generators – and ensure they have adequate protective measures to stop them from creating depictions of child exploitation.
"Fundamentally about stopping exploitation before it happens," stated the minister for AI and online safety, adding: "Experts, under strict conditions, can now identify the danger in AI systems promptly."
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others could not previously create such images as part of an evaluation regime. Authorities instead had to wait until AI-generated CSAM was published online before addressing it. The new law is designed to avert that problem by helping to stop the production of those images at source.
The changes are being introduced by the government as revisions to the criminal justice legislation, which also establishes a prohibition on owning, producing or distributing AI systems developed to create exploitative content.
This week, the minister toured the London base of a children's helpline and heard a mock-up of a conversation with advisers involving an account of AI-based abuse. The interaction depicted an adolescent requesting help after facing extortion using a sexualised AI-generated image of themselves.
"When I hear about young people experiencing extortion online, it fills me with extreme anger, and it rightly concerns parents," he stated.
A prominent internet monitoring foundation reported that cases of AI-generated exploitation content – such as online pages that may contain multiple images – had significantly increased so far this year.
Instances of the most severe category of material rose from 2,621 images or videos to 3,086.
The law change could "constitute a crucial step to guarantee AI tools are secure before they are released," commented the head of the online safety foundation.
"AI tools have made it possible for victims to be targeted all over again with just a few clicks, giving criminals the capability to produce potentially endless quantities of advanced, lifelike exploitative content," she continued. "Material which further commodifies survivors' trauma, and renders children, particularly girls, more vulnerable both on and offline."
The children's helpline also released details of counselling sessions in which AI-related risks were mentioned.
Between April and September this year, Childline conducted 367 counselling sessions where AI, conversational AI and associated topics were mentioned, four times as many as in the equivalent timeframe last year.
Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.