New Machine Learning Model Detects Harmful Social Media Comments

A new machine-learning model developed by a team of researchers from Australia and Bangladesh is making waves in the fight against toxic social media content. The model identifies harmful comments with 87.6% accuracy, offering a promising tool to combat cyberbullying, hate speech, and other forms of online abuse.

The team, from East West University in Bangladesh and the University of South Australia, presented their findings at the 2024 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies. Their work introduces a significant improvement over existing automated detection systems, many of which struggle with false positives and fail to efficiently differentiate between harmful and non-harmful content, according to TechXplore.

As online hate speech continues to rise, toxic comments are having destructive effects in the real world. According to lead author Ms. Afia Ahsan, manually identifying harmful content has become impractical given the sheer volume of online interactions. With over 5.5 billion internet users globally, filtering out toxic content without automated systems is simply not feasible.

The team’s solution uses an optimized machine learning algorithm tested on a diverse dataset of English and Bangla comments collected from popular social media platforms including Facebook, YouTube, and Instagram. Of the models they evaluated, a Support Vector Machine (SVM) proved the most reliable, achieving 87.6% accuracy and significantly outperforming the alternatives.
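The researchers have not released their code, but the general shape of such a system is well established: convert each comment into numeric features, then train an SVM to separate toxic from non-toxic examples. The snippet below is a minimal sketch of that kind of pipeline, assuming TF-IDF character n-gram features and scikit-learn's LinearSVC; these choices, along with the toy data, are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch of a toxic-comment classifier in the spirit of the
# paper's approach: TF-IDF text features feeding a linear SVM.
# Feature choices, hyperparameters, and data are illustrative
# assumptions, not the researchers' actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labeled examples: 1 = toxic, 0 = non-toxic.
comments = [
    "Great video, thanks for sharing!",
    "Nobody wants you here, just leave",
    "You are a wonderful person",
    "I hope something terrible happens to you",
]
labels = [0, 1, 0, 1]

# Character n-grams are robust to misspellings and code-mixed text
# (e.g. Romanized Bangla), common evasion tactics in toxic comments.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(C=1.0),
)
model.fit(comments, labels)

# Expected output: [1] (toxic), though results on toy data can vary.
print(model.predict(["you people are disgusting, get out"]))
```

In practice such a model would be trained on many thousands of labeled English and Bangla comments, with figures like the reported 87.6% accuracy measured on a held-out test set.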

According to TechXplore, Dr. Abdullahi Chowdhury, an IT and AI researcher at UniSA, explained that the optimized SVM model has proven to be the most effective for real-world applications, where accurate detection of toxic comments is essential. This advancement could be a game-changer for social media platforms looking to mitigate the impact of harmful content.

Looking forward, the researchers plan to refine the model further by incorporating deep learning techniques and expanding the dataset to include additional languages and regional dialects. They are also in talks with social media platforms to integrate the technology, ensuring a safer and more respectful online experience for users worldwide.