
After Buffalo Shooting Video Spreads, Social Platforms Face Questions

In March 2019, before a gunman murdered 51 people at two mosques in Christchurch, New Zealand, he went live on Facebook to broadcast his attack. In October of that year, a man in Germany broadcast his own mass shooting live on Twitch, the Amazon-owned livestreaming site popular with gamers.

On Saturday, a gunman in Buffalo, N.Y., mounted a camera to his helmet and livestreamed on Twitch as he killed 10 people and injured three more at a grocery store in what the authorities said was a racist attack. In a manifesto posted online, Payton S. Gendron, the 18-year-old whom the authorities identified as the shooter, wrote that he had been inspired by the Christchurch gunman and others.

Twitch said it reacted swiftly to take down the video of the Buffalo shooting, removing the stream within two minutes of the start of the violence. But two minutes was enough time for the video to be shared elsewhere.

By Sunday, recordings of the video had circulated widely on other social platforms, including Facebook and Twitter. An excerpt from the original video on a site called Streamable was viewed more than three million times before it was removed.

Mass shootings — and live broadcasts — raise questions about the role and responsibility of social media sites in allowing violent and hateful content to proliferate. Many of the gunmen in such shootings have written that they developed their racist and antisemitic beliefs by trawling online forums like Reddit and 4chan, and that they were spurred on by watching other shooters stream their attacks live.

“It’s a sad fact of the world that these kind of attacks are going to keep on happening, and the way that it works now is there’s a social media aspect as well,” said Evelyn Douek, a senior research fellow at Columbia University’s Knight First Amendment Institute who studies content moderation. “It’s totally inevitable and foreseeable these days. It’s just a matter of when.”

Questions about the responsibilities of social media sites are part of a broader debate over how aggressively platforms should moderate their content. That debate has intensified since Elon Musk, the chief executive of Tesla, agreed to buy Twitter and said he wanted to make unfettered speech on the site a primary objective.

Social media and content moderation experts said Twitch’s quick response was the best that could reasonably be expected. But the fact that the response did not prevent the video of the attack from spreading widely on other sites also raises the question of whether livestreaming should be so easily accessible.

“I’m impressed that they got it down in two minutes,” said Micah Schaffer, a consultant who has led trust and safety decisions at Snapchat and YouTube. “But if the feeling is that even that’s too much, then you really are at an impasse: Is it worth having this?”

In a statement, Angela Hession, Twitch’s vice president of trust and safety, said that the site’s rapid action was a “very strong response time considering the challenges of live content moderation, and shows good progress.” Ms. Hession said the site was working with the Global Internet Forum to Counter Terrorism, a nonprofit coalition of social media sites, as well as other social platforms to prevent the spread of the video.

“In the end, we are all part of one internet, and we know by now that that content or behavior rarely — if ever — will stay contained on one platform,” she said.

In a document that appeared to be posted to the forum 4chan and the messaging platform Discord before the attack, Mr. Gendron explained why he had chosen to stream on Twitch, writing that “it was compatible with livestreaming for free and all people with the internet could watch and record.” (Discord said it was working with law enforcement to investigate.)

Twitch also allows anyone with an account to go live, unlike platforms such as YouTube, which requires users to verify their accounts before streaming and to have at least 50 subscribers to stream from a mobile device.

“I think that livestreaming this attack gives me some motivation in the way that I know that some people will be cheering for me,” Mr. Gendron wrote.

He also said that he had been inspired by Reddit, far-right sites like The Daily Stormer and the writings of Brenton Tarrant, the Christchurch shooter.

In remarks on Saturday, Gov. Kathy Hochul of New York criticized social media platforms for their role in influencing Mr. Gendron’s racist beliefs and allowing video of his attack to circulate.

“This spreads like a virus,” Ms. Hochul said, demanding that social media executives evaluate their policies to ensure that “everything is being done that they can to make sure that this information is not spread.”

There may be no easy answers. Platforms like Facebook, Twitch and Twitter have made strides in recent years, the experts said, in removing violent content and videos faster. In the wake of the shooting in New Zealand, social platforms and countries around the world joined an initiative called the Christchurch Call to Action and agreed to work closely to combat terrorist and violent extremist content online. One tool that social sites have used is a shared database of hashes, or digital fingerprints of images and videos, that can flag known violent content so it is taken down quickly.
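To give a rough sense of how such hash matching works, here is a minimal sketch in Python. It is not how any particular platform or the shared industry database is implemented; the hash value, file name and function names are hypothetical, and production systems generally rely on perceptual hashes that tolerate re-encoding and cropping rather than exact cryptographic digests like the one used below.

    import hashlib
    from pathlib import Path

    # Hypothetical set of hashes shared among platforms. In practice the
    # entries would be perceptual hashes contributed to an industry database,
    # not exact SHA-256 digests.
    KNOWN_VIOLENT_CONTENT_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def sha256_of_file(path: Path) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def should_block(upload_path: Path) -> bool:
        """Flag an upload if its hash matches a known-bad entry."""
        return sha256_of_file(upload_path) in KNOWN_VIOLENT_CONTENT_HASHES

    # Example usage with a hypothetical file name:
    # if should_block(Path("incoming_upload.mp4")):
    #     print("Upload matches shared hash database; removing and reporting.")

The limitation the experts describe follows from this design: a match is only possible once a copy of the video has already been identified, hashed and shared, so edited or re-encoded copies can slip through until their fingerprints are added.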

But in this case, Ms. Douek said, Facebook seemed to have fallen short despite the hash system. Facebook posts that linked to the video posted on Streamable generated more than 43,000 interactions, according to CrowdTangle, a web analytics tool, and some posts were up for more than nine hours.

When users tried to flag the content as violating Facebook’s rules, which do not permit content that “glorifies violence,” they were told in some cases that the links did not run afoul of Facebook’s policies, according to screenshots viewed by The New York Times.

Facebook has since started to remove posts with links to the video, and a Facebook spokesman said the posts do violate the platform’s rules. Asked why some users were notified that posts with links to the video did not violate its standards, the spokesman did not have an answer.

Twitter had not removed many posts with links to the shooting video, and in several cases, the video had been uploaded directly to the platform. A company spokeswoman initially said the site might remove some instances of the video or add a sensitive content warning, then later said Twitter would remove all videos related to the attack after The Times asked for clarification.

A spokeswoman at Hopin, the video conferencing service that owns Streamable, said the platform was working to remove the video and delete the accounts of people who had uploaded it.

Removing violent content is “like trying to plug your fingers into leaks in a dam,” said Ms. Douek, the researcher. “It’s going to be fundamentally really difficult to find stuff, especially at the speed that this stuff spreads now.”
