To monitor misinformation and violent images, social networks put content moderation rules in place a decade ago. Now some of those safeguards are unraveling.

Transcript

MICHEL MARTIN, HOST:

Graphic videos and images of the Israel-Hamas war are flooding social media. Years ago, the companies set guardrails to deal with violent content. NPR's tech correspondent Dara Kerr looks at why some of those limits haven't kept up.

DARA KERR, BYLINE: About 10 years ago, a series of horrific videos went viral online. They showed Islamic State militants beheading their captives.

(SOUNDBITE OF MONTAGE)

UNIDENTIFIED NEWS ANCHOR #1: The terrorist group ISIS today released...

UNIDENTIFIED NEWS ANCHOR #2: Another gruesome video...

UNIDENTIFIED NEWS ANCHOR #3: Showing the horrific killing and its gruesome aftermath.

KERR: The videos stirred intense debate over how social media companies should handle such graphic content. Many people said keeping them online gave ISIS publicity.

BRIAN FISHMAN: Companies were very slow to recognize that this was a real problem for them.

KERR: Brian Fishman is a terrorism expert who's written extensively about ISIS. Facebook brought him on as a policy director after the ISIS videos went viral. He set about weighing when these types of videos should be allowed. He says social media companies need to think about respecting the victims and mitigating future violence. They also need to ensure atrocities aren't erased.

FISHMAN: Those various imperatives do not always align with each other very well.

KERR: Facebook ended up mostly blocking content from ISIS. Other platforms like Twitter and YouTube did the same. In 2017, those companies, along with Microsoft, joined together to create the Global Internet Forum to Counter Terrorism. The group, now an independent nonprofit, uses technology that essentially acts like a digital fingerprint to identify terrorist and violent extremist content so that content doesn't spread. Since then, there have been fewer instances of graphic videos going viral, until now. Naureen Chowdhury Fink is the group's executive director.

NAUREEN CHOWDHURY FINK: Part of the challenge is always that you prepare for a case and then the next one is going to be different.

KERR: And right now, during the Israel-Hamas war, there's an overwhelming amount of gruesome and violent video and propaganda across social media. On top of this, fake videos and footage not related to this war are rife.

DINA SADEK: The graphic footage that is being shared, some of them are true, and some of them might not be true. And as a result of that confusion, there's a lot of hate that's being fueled at the moment that would just incite further violence.

KERR: Dina Sadek is a Middle East research fellow at the Digital Forensic Research Lab. She's been tracking the spike of this type of content across the social media landscape.

SADEK: The two platforms that we have seen more content were X and Telegram.

KERR: Telegram is a messaging platform based in Dubai that has little to no content moderation. It's what Hamas mostly uses to circulate its graphic videos. Since Elon Musk bought X, formerly known as Twitter, he's fired most of its safety and content moderation teams and made it harder to verify where information is coming from.

SADEK: We're seeing a lot of content traveling to X. There's a trend that we're seeing footage that gets posted in Telegram and makes its way to X.

KERR: Telegram and X didn't respond to requests for comment. In a public post, X said it's coordinating with the anti-terrorism forum and removing Hamas-affiliated accounts. Brian Fishman, the terrorism expert, says, however, that without robust content moderation teams, violent videos can still slip through.

FISHMAN: There's no such thing as a perfect final resolution to this kind of problem on these platforms. It's a constant struggle.

KERR: A struggle that's nearly impossible to overcome when the internet connects everything.

Dara Kerr, NPR News.

(SOUNDBITE OF THRUPENCE'S "FOLDS")

Transcript provided by NPR, Copyright NPR.