Monitoring Online Terror with AI


Social media platforms are constantly flooded with enormous volumes of content, and this vast ocean of material must be continuously monitored for harmful or illegal content, such as the promotion of terrorism and violence. The sheer volume makes manual inspection impossible, which is why automated tools like AI are essential. However, these tools have their limitations.

Recent efforts to develop tools for identifying and removing online terrorist content have been partially fueled by new laws and regulations, including the EU's Terrorist Content Online Regulation, which requires hosting service providers to remove terrorist content from their platforms within one hour of receiving a removal order from a competent national authority.

According to Techxplore, there are two types of tools for eliminating terrorist content. The first examines account and message behavior, including how old the account is, the use of trending or unrelated hashtags, and abnormal posting volume. This approach is similar to spam detection, since it does not inspect the content itself, and it is valuable for catching the rapid, often bot-driven dissemination of large volumes of material.
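
To make the idea concrete, here is a minimal, hypothetical sketch of this kind of behavior-based screening. The `AccountActivity` structure, feature names, and thresholds are illustrative assumptions for this post, not any platform's actual rules.

```python
# Hypothetical sketch of behavior-based screening: feature names and
# thresholds are illustrative assumptions, not a platform's real rules.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AccountActivity:
    created_at: datetime         # when the account was registered
    posts_last_hour: int         # recent posting volume
    hashtags_used: set[str]      # hashtags attached to recent posts
    trending_hashtags: set[str]  # hashtags currently trending on the platform


def behavior_score(activity: AccountActivity) -> float:
    """Return a 0-1 suspicion score from account behavior alone (no content)."""
    score = 0.0
    age_days = (datetime.now(timezone.utc) - activity.created_at).days
    if age_days < 7:                   # very new account
        score += 0.4
    if activity.posts_last_hour > 30:  # abnormal posting volume
        score += 0.4
    # Hijacking trending hashtags to push unrelated material is a common tactic.
    if len(activity.hashtags_used & activity.trending_hashtags) >= 3:
        score += 0.2
    return min(score, 1.0)


if __name__ == "__main__":
    account = AccountActivity(
        created_at=datetime(2024, 1, 1, tzinfo=timezone.utc),
        posts_last_hour=55,
        hashtags_used={"#news", "#worldcup", "#breaking"},
        trending_hashtags={"#worldcup", "#breaking", "#election"},
    )
    print(f"suspicion score: {behavior_score(account):.2f}")
```

Because such a score never looks at what the post says, it can run cheaply at scale, but it can only flag suspicious accounts for closer inspection rather than identify terrorist material itself.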

The second type of tool is content-based, focusing on linguistic characteristics, word use, images and web addresses. Automated content-based tools take one of two approaches:

  • Matching: comparing new images or videos against an existing database of images and videos previously identified as terrorist in nature. The issue is that terror groups often try to evade such methods by producing subtle variants of the same piece of content.
    To deal with this, matching-based tools generally use perceptual hashing, which focuses on similarity: it overlooks minor changes (like pixel color adjustments) while still identifying images with the same core content. A minimal hashing sketch appears after this list.
  • Classification: uses machine learning and other forms of AI to classify content. For this, the AI needs many examples labeled as terrorist content; by analyzing these examples, it learns which features distinguish different types of content, allowing it to categorize new content on its own.
    Once trained, the algorithms can predict whether a new item of content belongs to one of the specified categories, and matching items are removed or flagged for human review. A toy classifier sketch also appears after this list.
    This approach is problematic because collecting and preparing a large dataset to train the algorithms is time-consuming and resource-intensive. Furthermore, the training data can quickly become dated as terrorists adopt new terms and discuss new world events and current affairs.
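
The matching sketch below uses a simple 64-bit average hash compared by Hamming distance, assuming the Pillow library is available; this is one common form of perceptual hashing, not a specific vendor's algorithm, and the file paths and threshold are placeholders.

```python
# Sketch of matching via perceptual (average) hashing with Pillow.
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink and grayscale the image, then set one bit per above-average pixel."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits


def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Count differing bits; a small distance means perceptually similar images."""
    return bin(hash_a ^ hash_b).count("1")


def matches_known_content(candidate: int, known_hashes: list[int], threshold: int = 5) -> bool:
    """Flag the candidate if it is close to any hash in the reference database."""
    return any(hamming_distance(candidate, known) <= threshold for known in known_hashes)


# Hypothetical usage with placeholder file paths:
# known = [average_hash("known_terrorist_image.jpg")]
# print(matches_known_content(average_hash("new_upload.jpg"), known))
```

Because the hash reflects coarse brightness structure rather than exact pixel values, small edits such as recoloring or recompression usually change only a few bits, while unrelated images differ in many, which is what lets the database catch subtle variants.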
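
The classification sketch below uses scikit-learn's TF-IDF features and logistic regression, assuming a labeled training set exists; the tiny inline examples and the "flag"/"ok" labels are placeholders standing in for a large analyst-labeled dataset, not real data or a production model.

```python
# Toy sketch of content classification: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training examples standing in for a large labeled dataset.
texts = [
    "join our cause and attack the unbelievers",    # would be labeled by analysts
    "support the fighters, spread the message",
    "great recipe for homemade bread",
    "match highlights from last night's game",
]
labels = ["flag", "flag", "ok", "ok"]

# TF-IDF turns each text into word-frequency features; logistic regression
# learns which features distinguish the two classes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# New content gets a predicted label and a probability that could serve as a
# triage signal for human review before any removal decision.
new_post = ["spread the message and join the fighters"]
print(model.predict(new_post), model.predict_proba(new_post))
```

The staleness problem described above shows up directly here: as vocabulary and events change, the learned word features drift out of date, so the labeled dataset and the model need ongoing, human-driven refreshing.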

The conclusion is that human input remains essential despite AI advances: it is needed for maintaining databases and datasets, assessing content flagged for review, and operating appeals processes when decisions are challenged.

Experts recommend collaborative initiatives between governments and the private sector, arguing that international organizations, governments, and tech platforms must prioritize the development of such shared resources, because without them it will be impossible to effectively address online terrorist content.