Software Developed At Dartmouth Looks To Stop The Spread Of Extremist Videos

Jul 5, 2016

A computer scientist at Dartmouth College has developed new software aimed at quickly identifying and stopping the spread of extremist videos online that are used to incite violent attacks. 

Dr. Hany Farid developed the software in partnership with the Counter Extremism Project.

The system was designed to work much like one used to prevent the dissemination of child pornography online. That earlier software, developed with Microsoft, allows social media companies to find, remove and report instances of child pornography.

"The way that technology works is that every image, every video, every audio recording has a distinct signature, very much like human DNA. There's a distinct signature that we are able to extract from the underlying medium. Even as that medium undergoes changes as it makes its way through the internet, we can identify it," Farid explained. "Working in partnership with the National Center for Missing and Exploited Children, we extracted digital signatures from known child pornography content that we know continually, year after year, gets distributed. We then compare the signatures to things being uploaded to Facebook, to Twitter, to Instagram, etc., and we simply ask, is this known bad content? And if it is, we will remove and report it."
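The actual algorithms behind PhotoDNA and the CEP system are not public, so the following is only an illustrative sketch of the idea Farid describes: reduce a piece of media to a compact signature, then match uploads against known-bad signatures by near-identity rather than exact bytes, so that re-encoding or small edits still match. Here an "image" is just an 8x8 grid of grayscale values, and the signature is a simple 64-bit average hash.

```python
# Illustrative sketch only: the real signature algorithms are not public.
# An image is modeled as an 8x8 grid of grayscale values (0-255).

def average_hash(pixels):
    """Return a 64-bit signature: each bit records whether a pixel is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

def is_known_bad(signature, blocklist, threshold=6):
    """Match if an upload's signature is within a few bits of any
    known-bad signature, so minor re-encoding changes still match."""
    return any(hamming(signature, bad) <= threshold for bad in blocklist)

# Example: a known-bad image, and a slightly altered re-upload of it.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
reupload = [row[:] for row in original]
reupload[0][0] += 3  # tiny pixel change, e.g. from re-compression

blocklist = {average_hash(original)}
print(is_known_bad(average_hash(reupload), blocklist))  # True
```

The threshold-based comparison is what makes the signature robust: the re-upload's bytes differ from the original's, but its signature lands close enough to be flagged.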

Right now, most sites remove content through a manual process: they wait for a report of a violation or offensive content, then review it and take it down by hand.

"Once it's removed with a manual intervention and with the understanding that it violates terms of service, we can now remove it forever from a site." -Dr. Hany Farid, Counter Extremism Project

"They suffer from a whack-a-mole problem. It comes up, it stays up for a little while, it comes down, and then the poster posts it again and they have to wait a few weeks," Farid said. "So what we are saying is that if you have agreed that the content violates your terms of service and it has to come down, we will extract a signature from that image, audio or video recording and then any time the content that matches that signature comes up, we will not allow it up. Therefore once it's removed with a manual intervention and with the understanding that it violates terms of service, we can now remove it forever from a site."
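The workflow Farid describes could be sketched as follows. The class and method names are hypothetical, and real matching would use the robust signature comparison above rather than exact equality; the point is the division of labor: one manual terms-of-service decision, then automatic blocking of every later upload of the same content.

```python
# Hypothetical sketch of the takedown flow described in the article:
# a human review confirms a violation once, the signature is registered,
# and matching uploads are rejected automatically from then on.

class UploadFilter:
    def __init__(self):
        self.banned_signatures = set()

    def manual_takedown(self, signature):
        # Human moderators decided this content violates the terms of
        # service; remember its signature permanently.
        self.banned_signatures.add(signature)

    def allow_upload(self, signature):
        # Automatic check on every new upload: known-bad content
        # never goes live again (no whack-a-mole).
        return signature not in self.banned_signatures

f = UploadFilter()
f.manual_takedown(0xDEADBEEF)      # one manual decision...
print(f.allow_upload(0xDEADBEEF))  # False: blocked automatically
print(f.allow_upload(0x12345678))  # True: unknown content passes
```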

When the child pornography software was developed, video was not yet ubiquitous, so it mostly tracked photographs. Now video and audio dominate, and the new software extracts digital signatures from both. The CEP is careful about how much technical information it releases, but the system does analyze the actual contents of a file, extracting distinct features that remain stable over the lifetime of the video.

"One of the very nice things about these signatures is that if I hand you a signature, which is just a bunch of numbers, you can't reconstruct the content that it comes from. That means we can safely ship those signatures around without worrying about the content leaking," Farid said.

Social media companies such as Facebook and Twitter have yet to commit to using the software; they were similarly hesitant when the child pornography software was first released.

"This is not a First Amendment issue," Farid said. "Facebook, Twitter, YouTube, Google, they get to decide what goes on their network or not. What makes them nervous is the precedent this sets. That if we start taking down child pornography and now we start taking down extremist content, what's next down the line? It's a legitimate concern to have. We feel like this is a narrowly-defined area. We are saying, you are already agreeing to take down certain content. We are trying to make that fully automatic, extremely reliable and efficient."

The technology is undergoing final engineering before deployment, and should be ready in the next month or two.