I have time to read more in the summer. This week’s AAIE (Association for the Advancement of International Education) newsletter highlights the issue of Artificial Intelligence (AI) being used to generate CSAM (Child Sexual Abuse Material). As an international school leader and the Strategic Head of our school’s Child Safeguarding Team, I need to keep abreast of online threats to children. The Washington Post reported a record number of CSAM reports to the National Center for Missing and Exploited Children: the Center was alerted to 88 million files in 2022.
The problem has taken on a new layer with AI generating inappropriate images of minors. Governments are fighting it through legislation and hearings, while tech companies are using AI to detect these images and videos and scrub them from the internet. The Time article “As Tech CEOs Are Grilled Over Child Safety Online, AI Is Complicating the Issue” gives a good overview of the problem.
The United Nations Interregional Crime and Justice Research Institute (UNICRI), through its Centre for AI and Robotics, launched the AI for Safer Children initiative, described in the video below.
This NBC News article reports on how AI companies and the non-profit Thorn are combating AI-generated CSAM. Stanford researchers found CSAM in the datasets that some AI models were trained on. Thorn is helping companies develop safety principles, such as screening datasets before feeding them into their AI models. Visit the Thorn website for more information.