
Facebook plans to use artificial intelligence (AI) to identify posts that might promote or glorify terrorism, a move that follows growing concern about terrorists’ efforts to recruit on social networks. Currently, Facebook largely relies on users to flag questionable content.

Facebook CEO Mark Zuckerberg wrote in a message that the technology “will take many years to fully develop” because it requires software sophisticated enough to distinguish between a news story about a terrorist attack and efforts to recruit on behalf of a terrorist organization.

In the letter, Zuckerberg also expounded on the company’s efforts to build a global community through Facebook, writing that its success will depend on “whether we’re building a community that helps keep us safe — that prevents harm, helps during crises, and rebuilds afterwards.”

The letter states: “There are billions of posts, comments and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us… Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community.”

Critics have taken aim at Facebook, along with other social networks, for what they see as insufficient efforts to police the content transmitted across their platforms, such as propaganda shared by suspected terrorists or suicides streamed live. Terrorism has been a particularly sensitive topic.

“Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community,” Zuckerberg wrote.

Facebook has attempted to tamp down potential terrorist propaganda for more than a year. A Wall Street Journal report from February 2016 said government pressure prompted the company to remove profiles of those suspected of supporting terrorism and to scrutinize their friends’ posts more carefully.

It was recently reported that Facebook, Twitter, Google and Microsoft would create a shared database to track and delete “violent terrorist imagery or terrorist recruitment videos.”

Zuckerberg also acknowledged the fake news problem in his letter, though he did not say whether artificial intelligence might help solve that challenge as well.

According to Facebook’s research website, the company’s Facebook AI Research (FAIR) group seeks to understand and develop systems with human-level intelligence by advancing the longer-term academic problems surrounding AI. The company actively engages with the research community through publications, open-source software, and other initiatives.