Welcome to Psychiatryai.com: Latest Evidence - RAISR4D

DDML: Multi-Student Knowledge Distillation for Hate Speech

Entropy (Basel). 2025 Apr 11;27(4):417. doi: 10.3390/e27040417.

ABSTRACT

Recent studies have shown that hate speech on social media negatively impacts users’ mental health and is a contributing factor to suicide attempts. On a broader scale, online hate speech can undermine social stability. With the continuous growth of the internet, the prevalence of online hate speech is rising, making its detection an urgent issue. Recent advances in natural language processing, particularly with transformer-based models, have shown significant promise in hate speech detection. However, these models come with a large number of parameters, leading to high computational requirements and making them difficult to deploy on personal computers. To address these challenges, knowledge distillation offers a solution by training smaller student networks using larger teacher networks. Recognizing that learning also occurs through peer interactions, we propose a knowledge distillation method called Deep Distill-Mutual Learning (DDML). DDML employs one teacher network and two or more student networks. While the student networks benefit from the teacher’s knowledge, they also engage in mutual learning with each other. We trained numerous deep neural networks for hate speech detection based on DDML and demonstrated that these networks perform well across various datasets. We tested our method across ten languages and nine datasets. The results demonstrate that DDML enhances the performance of deep neural networks, achieving an average F1 score increase of 4.87% over the baseline.

PMID:40282652 | DOI:10.3390/e27040417
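The abstract describes DDML as one teacher network plus two or more student networks, where each student is trained both on the teacher's soft predictions and on the predictions of its peers. The exact loss formulation is not given in the abstract, so the sketch below is an illustrative assumption: each student minimizes a cross-entropy term on the labels, a KL-divergence distillation term against the teacher, and an averaged mutual-KL term against the other students, with hypothetical weights `alpha`, `beta` and temperature `T`.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two probability vectors."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def ddml_loss(teacher_logits, student_logits_list, label,
              T=2.0, alpha=0.5, beta=0.5):
    """Illustrative per-student DDML loss for one example (assumed form):
    cross-entropy on the label + alpha * distillation from the teacher
    + beta * average mutual KL against the other students."""
    p_teacher = softmax(teacher_logits, T)
    soft_probs = [softmax(s, T) for s in student_logits_list]
    losses = []
    for i, s_logits in enumerate(student_logits_list):
        p_i = softmax(s_logits)  # temperature 1 for the supervised term
        ce = -float(np.log(np.clip(p_i[label], 1e-12, 1.0)))
        distill = kl(p_teacher, soft_probs[i])
        peers = [kl(soft_probs[j], soft_probs[i])
                 for j in range(len(student_logits_list)) if j != i]
        mutual = sum(peers) / max(len(peers), 1)
        losses.append(ce + alpha * distill + beta * mutual)
    return losses
```

In this sketch the teacher is frozen and only guides the students, while the mutual term lets each student also match its peers, which is the peer-learning idea the abstract highlights; the weighting scheme and temperature are placeholders, not values from the paper.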
