AI-generated child sexual abuse images targeted with new laws

Four new laws will tackle the threat of child sexual abuse images generated by artificial intelligence (AI), the government has announced.
The Home Office says that, to better protect children, the UK will be the first country in the world to make it illegal to possess, create or distribute AI tools designed to create child sexual abuse material (CSAM), with a punishment of up to five years in prison.
Possessing AI paedophile manuals will also be made illegal, with offenders facing up to three years in prison. These manuals teach people how to use AI to sexually abuse young people.
Home Secretary Yvette Cooper said: "We know that sick predators' activities online often lead to them carrying out the most horrific abuse in person.
"This government will not hesitate to act to ensure the safety of children online by ensuring our laws keep pace with the latest threats."
Other laws include making it an offence to run websites where paedophiles can share child sexual abuse content or provide advice on how to groom children. That would be punishable by up to 10 years in prison.
And the Border Force will be given powers to instruct individuals suspected of posing a sexual risk to children to unlock their digital devices for inspection when they attempt to enter the UK, as CSAM is often filmed abroad. Depending on the severity of the images, this will be punishable by up to three years in prison.
Artificially generated CSAM involves images that are either partly or entirely computer-generated. Software can "nudify" real images and replace the face of one child with another, creating a realistic image.
In some cases, the real-life voices of children are also used, meaning innocent survivors of abuse are being re-victimised.
Fake images are also being used to blackmail children and force victims into further abuse.
The National Crime Agency (NCA) said it makes around 800 arrests each month relating to threats posed to children online. It said 840,000 adults are a threat to children nationwide – both online and offline – making up 1.6% of the adult population.
Cooper said: "These four new laws are bold measures designed to keep our children safe online as technologies evolve.
"It is vital that we tackle child sexual abuse online as well as offline so we can better protect the public," she added.
However, some experts believe the government could have gone further.
Professor Clare McGlynn, an expert in the legal regulation of pornography, sexual violence and online abuse, said the changes were "welcome" but that there were "significant gaps".
The government should ban "nudify" apps and tackle the "normalisation of sexual activity with young-looking girls on mainstream porn sites", she said, describing these as "simulated child sexual abuse videos".
These videos "involve adult actors but they look very young and are shown in children's bedrooms, with toys, pigtails, braces and other markers of childhood," she said. "This material can be found with the most obvious search terms, and legitimises and normalises child sexual abuse. Unlike in many other countries, this material remains lawful in the UK."
The Internet Watch Foundation (IWF) warns that more AI-generated sexual abuse images of children are being produced, and that they are becoming more prevalent on the open web.
The charity's latest data shows that reports of AI-generated CSAM have risen 380%, with 245 confirmed reports in 2024 compared with 51 in 2023. Each report can contain thousands of images.
Research last year found that, over a one-month period, 3,512 AI child sexual abuse and exploitation images were discovered on one dark web site. Compared with a month in the previous year, the number of images in the most serious category (Category A) rose by 10%.
Experts say AI-generated CSAM can often look incredibly realistic, making it difficult to tell the real from the fake.
Derek Ray-Hill, interim chief executive of the IWF, said: "The availability of this AI content further fuels sexual violence against children.
"It emboldens and encourages abusers, and it makes real children less safe. There is certainly more to be done to prevent AI technology from being exploited, but we welcome the announcement, and believe these measures are a vital starting point."
Lynn Perry, chief executive of children's charity Barnardo's, welcomed government action to tackle AI-produced CSAM, "which normalises the abuse of children, putting more of them at risk, both on and offline".
"It is vital that legislation keeps up with technological advances to prevent these horrifying crimes," she said.
"Tech companies must make sure their platforms are safe for children. They need to take action to introduce stronger safeguards, and Ofcom must ensure that the Online Safety Act is implemented effectively and robustly."
The newly announced measures will be introduced as part of the Crime and Policing Bill when it comes to Parliament in the next few weeks.