- Twitter has successfully tested an AI-powered solution to combat child sexual abuse material (CSAM) on its platform, developed by non-profit group Thorn.
- The technology is designed to proactively detect, delete, and report text-based material containing child sexual exploitation.
- Thorn’s VP of data science, Rebecca Portnoff, said the Safer AI model can have a real-life impact at scale.
Twitter has announced the successful beta testing of Thorn’s AI-powered Safer solution, a tool designed to combat child sexual exploitation online. The technology proactively detects, deletes, and reports text-based child sexual exploitation material.
Twitter Partners with Thorn to Enhance Platform Safety
Twitter, in its ongoing partnership with Thorn, has been testing the non-profit group’s Safer solution during its beta phase. The social media giant aims to create a safer platform by expanding its capabilities against high-harm content where a child is at imminent risk. The solution was integrated into Twitter’s existing detection mechanisms, allowing the platform to focus its efforts on high-risk accounts.
Thorn’s Safer AI Model: A Potential Game-Changer
Thorn’s Safer AI model pairs a language model trained on child safety-related texts with a classification model that generates multi-label predictions for text sequences. Each prediction is a score from 0 to 1, indicating the model’s confidence that the text is relevant to a given child safety category. According to Rebecca Portnoff, Thorn’s VP of data science, the model has shown significant potential for identifying harmful child sexual abuse activity, prioritizing reported messages, and supporting investigations of known bad actors.
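To make the multi-label output concrete, here is a minimal, purely illustrative sketch of what such a classifier’s interface could look like. Thorn has not published Safer’s architecture, category taxonomy, or features, so every name, category, and weight below is a hypothetical stand-in; the point is only that each category gets its own independent confidence score between 0 and 1, so a single message can score high in several categories at once.

```python
import math

# Hypothetical category labels -- Thorn has not published Safer's actual taxonomy.
# Toy keyword weights stand in for a trained language model's learned features.
WEIGHTS = {
    "sextortion": {"pay": 1.5, "photos": 1.2, "threat": 2.0},
    "grooming": {"secret": 1.4, "meet": 1.1, "alone": 1.3},
    "benign": {"weather": 1.0, "game": 1.0},
}
BIAS = -1.0  # pushes scores toward 0 when no keywords match


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def classify(text: str) -> dict:
    """Return one independent confidence score in [0, 1] per category.

    "Multi-label" means each category is scored separately (one sigmoid
    each), rather than the categories competing in a single softmax.
    """
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    return {
        cat: sigmoid(BIAS + sum(w for kw, w in kws.items() if kw in tokens))
        for cat, kws in WEIGHTS.items()
    }


scores = classify("meet me alone, it is our secret")
high_risk = {cat for cat, s in scores.items() if s >= 0.5}
```

In a real deployment the keyword weights would be replaced by a trained language model, and the per-category scores would feed downstream triage, e.g. ranking reported messages for human review.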
AI Tools and the Fight against Child Sexual Exploitation
With the proliferation of generative AI tools, internet watchdog groups have raised concerns about the potential for AI-generated child sexual abuse material. The Safer AI model’s successful beta testing on Twitter could represent a significant step forward in combating this issue. However, the fight against child sexual exploitation online is far from over, with ongoing challenges such as content moderation and the protection of fundamental rights.
Conclusion
The successful beta testing of Thorn’s Safer AI model on Twitter represents a promising development in the fight against child sexual exploitation online. As AI technology continues to advance, it is hoped that these tools can be effectively harnessed to create safer online environments. However, the challenges of content moderation and the protection of fundamental rights remain, requiring ongoing vigilance and innovation.