Call for ban on AI apps creating naked images of children

TL;DR

  • The Children’s Commissioner for England, Dame Rachel de Souza, urges the government to ban "nudification" apps that generate sexual deepfake images of children.
  • Children express fear of becoming targets for these technologies, leading some to avoid sharing images online.
  • Although creating or sharing child sexual abuse material is illegal, the technology that enables these images remains legal.
  • New regulations are necessary to protect children and hold developers accountable.

Introduction

In a significant development for child protection and digital safety, the Children’s Commissioner for England has called for an outright ban on artificial intelligence (AI) applications that generate sexually explicit images of children. The plea comes amid growing concern over the psychological harm such technologies cause to children, particularly young girls, who fear being victimized by sexually explicit deepfakes. The commissioner emphasized that the government must act decisively to stop these apps from proliferating and causing further harm.

Alarming Prevalence of Nudification Apps

Dame Rachel de Souza's recent report highlights the alarming rise of "nudification" apps, which use AI to manipulate photos and create realistic images of individuals in compromising situations. Despite existing laws that prohibit the creation or distribution of child sexual abuse material (CSAM), the tools that enable such misuse have remained unchecked and legal.

According to the report, many children said they were frightened by these technologies, worried that "a stranger, a classmate, or even a friend" could use such applications to manipulate images of them. This anxiety has led many girls to stop posting images on social media altogether, apprehensive about the potential for misuse.

Disproportionate Impact on Girls

The report asserts that nudification tools disproportionately target girls and young women. Research indicates that 99% of sexually explicit deepfakes circulating online feature female subjects, perpetuating a culture of misogyny. Such widespread access to these AI capabilities raises severe ethical and emotional questions regarding consent and child safety.

In focus groups conducted by the Commissioner, children voiced distress over the accessibility of these apps through mainstream platforms and app stores. One 16-year-old participant stated, “Even before any controversy came out, I could already tell what it was going to be used for, and it was not going to be good things.”

Government Response and Recommendations

While the government's existing Online Safety Act addresses some forms of online abuse, it does not adequately counter the threats posed by nudification technologies. In response to the Commissioner’s report, a government spokesperson acknowledged the illegality of CSAM and recognized the need for additional legislative measures.

To enhance protection for children, Dame Rachel de Souza has made several recommendations:

  • Impose an immediate ban on bespoke nudification apps.
  • Establish legal responsibilities for GenAI developers to foresee and mitigate child-safety risks in their products.
  • Implement effective systems to remove sexually explicit deepfake images from the internet.
  • Recognize deepfake sexual abuse as a form of violence against women and girls in both law and policy.

Conclusion

As AI technology evolves, so too do the risks it poses to vulnerable populations, especially children. The call to ban nudification apps represents a critical step towards safeguarding children in the digital realm. With growing awareness and pressure from advocacy groups and experts, significant legislative reforms may soon emerge to protect young people from the consequences of unregulated technology.

The urgency of the situation compels action not only from government bodies but also from technologists and developers who must ensure their innovations prioritize safety over convenience.


Metadata

Keywords: AI apps, nudification, children's safety, deepfake technology, Children’s Commissioner, UK legislation, online safety.

News Editor, 28 April 2025