Using AI to screen live video of terrorism is ‘very far from being solved,’ says Facebook AI chief
Live-streamed attacks like the Christchurch shooting require human moderation

When faced with hard questions about how Facebook will remove terrorist content from its platforms, CEO Mark Zuckerberg offers a simple answer: artificial intelligence will do it. But according to Facebook’s chief AI scientist, Yann LeCun, AI is years away from being able to fully shoulder the burden of moderation, particularly when it comes to screening live video.


Speaking at an event at Facebook’s AI Research Lab in Paris last week, LeCun said Facebook was years away from using AI to moderate live video at scale, reports Bloomberg News.
“This problem is very far from being solved,” said LeCun, who was recently awarded the Turing Award, often called the Nobel Prize of computing, along with other AI luminaries.
Screening live video is a particularly pressing issue at a time when terrorists commit atrocities with the aim of going viral. Facebook’s inability to meet this challenge became distressingly clear in the aftermath of the Christchurch shooting in New Zealand this year. The attack was streamed live on Facebook, and although the company claims fewer than 200 people saw it during the broadcast, it was this stream that was then downloaded and shared across the rest of the internet.
AI can remove unwanted content, but only after a human has tagged it
The inability of automated systems to understand and block content like this isn’t news for AI experts like LeCun. They’ve long warned that machine learning just isn’t able to understand the variety and nuances of these videos. Automated systems are very good at removing content that has already been identified by humans as unwanted (Facebook says it automatically blocks 99 percent of terrorist content from al-Qaeda, for example), but spotting previously unseen examples is a much harder task.
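The asymmetry described above can be sketched in a few lines. The example below uses an exact cryptographic hash purely for illustration; production systems use perceptual hashes that survive re-encoding and cropping, and the blocklist contents here are invented. The point is structural: content a human has already tagged reduces to a lookup, while a never-before-seen video matches nothing.

```python
import hashlib

# Illustrative blocklist of fingerprints for content humans have already
# flagged. Real moderation pipelines use perceptual hashes shared across
# industry databases, not exact SHA-256 digests.
BLOCKLIST = {
    hashlib.sha256(b"previously flagged video bytes").hexdigest(),
}

def is_known_unwanted(media_bytes: bytes) -> bool:
    """Exact-match lookup: trivial once a human has tagged the content."""
    return hashlib.sha256(media_bytes).hexdigest() in BLOCKLIST

print(is_known_unwanted(b"previously flagged video bytes"))  # True: seen before
print(is_known_unwanted(b"a brand-new live stream"))         # False: novel content slips through
```

This is why the 99 percent figure for al-Qaeda material is compatible with failing on Christchurch: the former is re-uploads of known content, the latter was new.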
One problem LeCun noted in Paris is the lack of training data. “Thankfully, we don’t have a lot of examples of real people shooting other people,” said the scientist. It’s possible to train systems to recognize violence using footage from movies, he added, but then content containing simulated violence would be inadvertently removed along with the real thing.
Instead, companies like Facebook are focusing on using automated systems as assistants to human moderators. The AI flags troubling content, and humans manually vet it. Of course, the system of human moderation also has its own problems.
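The assistant model described above amounts to a triage rule: auto-act only on high-confidence matches, route everything merely suspicious to a person. A minimal sketch, with thresholds and names that are entirely invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewQueue:
    """Videos awaiting a human moderator's final call."""
    pending: List[str] = field(default_factory=list)

def triage(video_id: str, model_score: float, queue: ReviewQueue) -> str:
    # Hypothetical thresholds: near-certain matches to known content are
    # blocked automatically; ambiguous cases go to a human; the rest pass.
    if model_score > 0.99:
        return "auto-blocked"
    if model_score > 0.5:
        queue.pending.append(video_id)
        return "sent to human review"
    return "allowed"

queue = ReviewQueue()
print(triage("live-123", 0.7, queue))  # sent to human review
```

The design trade-off is visible in the thresholds: lowering the review cutoff catches more borderline content but swells the human queue, which is exactly where the human-moderation problems the article mentions come in.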
So the next time someone presents AI as a silver bullet for online moderation, remember: the people actually building these systems know it’s a lot harder than that.
