According to sources, YouTube and Facebook, among other sites, are now using automated systems to block or take down extremist material. The technology is reportedly based on the same processes developed for identifying and removing copyright-protected content.
The system apparently recognises undesirable videos by comparing new content with videos that have already been identified as unacceptable. However, the details of the process are not public, and the companies have not discussed the method or even confirmed they are using it.
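Since neither company has described its approach, any concrete detail is speculation. A plausible reading is that it resembles the hash-matching long used against copyright-protected content and child-abuse imagery: each new upload is fingerprinted and compared against fingerprints of videos already reviewed and removed. The sketch below illustrates that general idea only; the names are hypothetical, and a plain SHA-256 digest stands in for the robust perceptual fingerprints a real system would need to survive re-encoding, cropping, and re-uploads.

```python
import hashlib
from pathlib import Path

# Fingerprints of videos that human reviewers have already judged unacceptable.
# Real deployments are believed to use perceptual fingerprints; an exact
# SHA-256 digest is used here only to keep the sketch self-contained.
KNOWN_BAD_FINGERPRINTS: set[str] = set()


def fingerprint(path: Path) -> str:
    """Return a SHA-256 digest of the file's bytes (stand-in for a perceptual hash)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def register_removed_video(path: Path) -> None:
    """Add an already-removed video's fingerprint to the blocklist."""
    KNOWN_BAD_FINGERPRINTS.add(fingerprint(path))


def should_block(upload: Path) -> bool:
    """Flag a new upload if it matches a previously removed video."""
    return fingerprint(upload) in KNOWN_BAD_FINGERPRINTS
```

Note that a scheme like this can only catch re-uploads of material already flagged; it does not by itself decide what counts as extremist in the first place, which is exactly where the questions below arise.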
Automated filtering or blocking nevertheless raises questions. Although the approach has been used successfully against copyright-protected content and child pornography, there is no clear, widely accepted definition of illegal extremist content. How will the tech companies define "illegality"? Will the blocking policies be consistent across platforms?
Moreover, any move towards developing and deploying systems capable of automatic censorship should be made with careful consideration, as their use could plausibly expand from videos deemed illegal to blocking other forms of content.
If automated systems become widely used, calls for transparency about the methodology, for instance in the form of a "First Amendment for social platforms", are likely to intensify.
Tech companies do seem to have stepped up their resolve to address hate speech: earlier this month, several major platforms committed to an EU code of conduct on fighting hate speech online.