Social media platforms have scrambled to remove uploaded and re-uploaded footage of the Christchurch attack, drawing renewed attention to their role in distributing extremist material.
At least 50 people died in the attack and another 50 were injured.
The gunman livestreamed the massacre on Facebook. Hours after the shooting, the platform, along with Twitter and YouTube, was still removing copies of the footage.
In a statement, Facebook said it had deleted some 1.5 million copies of the video in the first 24 hours after the attack. It also removed the suspect’s personal Facebook and Instagram accounts.
Footage of the attack was shared widely online
Authorities say footage of the attack documented the 28-year-old gunman driving to the Al Noor Mosque in Christchurch, then entering the building and opening fire on worshippers.
The social media platforms were widely criticised for not reacting more quickly to take down the video stream and prevent its spread across the internet.
“Shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube. As with any major tragedy, we will work cooperatively with the authorities,” a YouTube spokeswoman told ABC News in a statement.
Facebook New Zealand spokesperson Mia Garlick also said in a statement that the site was deleting praise for the crime and for the shooter. “Our hearts go out to the victims, their families and the community affected by the horrendous shootings in New Zealand,” the statement said.
“Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video…We will continue working directly with New Zealand Police as their response and investigation continues.”
Facebook removed 1.5m videos of New Zealand terror attack in first 24 hours https://t.co/HJOBuJPZgM
— The Guardian (@guardian) March 17, 2019
YouTube has a mechanism to automatically delete re-uploads of videos it has banned (for example, on copyright infringement grounds) based on matching metadata and visual cues. It uses human moderators to review videos that contain sections of banned footage. One reason for this is to ensure that news videos legitimately using footage aren’t automatically removed.
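To give a sense of how this kind of visual matching can work in principle, the sketch below implements a simple perceptual "difference hash" over frames, a technique commonly used for near-duplicate detection. It is an illustrative toy, not YouTube's actual system (which is proprietary); the frame representation, function names, and threshold are all assumptions.

```python
# Illustrative sketch only: a perceptual-hash style check for re-uploaded
# frames. Frames are modelled as 8x8 grids of grayscale values; a
# "dHash"-style bit signature is compared by Hamming distance, so a
# slightly altered copy still matches while unrelated footage does not.

def dhash_bits(frame):
    """Difference hash: one bit per horizontal neighbour comparison."""
    bits = []
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return sum(x != y for x, y in zip(a, b))

def is_reupload(candidate, banned_frames, threshold=5):
    """Flag a frame whose signature is near-identical to a banned one."""
    sig = dhash_bits(candidate)
    return any(hamming(sig, dhash_bits(b)) <= threshold
               for b in banned_frames)

# Example: a banned frame, a lightly perturbed copy, and unrelated footage.
frame = [[r * 8 + c for c in range(8)] for r in range(8)]
near_copy = [row[:] for row in frame]
near_copy[0][0] += 2          # small pixel-level change, e.g. re-encoding
unrelated = [[r * 8 + (7 - c) for c in range(8)] for r in range(8)]
```

Because the hash captures relative brightness patterns rather than exact bytes, trivial edits such as re-encoding or minor cropping tend not to defeat it, which is why borderline matches are escalated to human reviewers rather than deleted outright.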
The spread of footage from the mosque attack has underlined concerns about the ability of user-generated content platforms such as YouTube, Facebook, Twitter and Reddit to effectively self-police and remove offensive content. The shooter is widely understood to have been further radicalised online by extremist and alt-right content.
Facebook, YouTube and Twitter are struggling to halt the spread of horrific footage that appears to show the New Zealand shooting as it was taking place https://t.co/ElYXZiVAd7 pic.twitter.com/OA8ZrqFPGp
— CNN Business (@CNNBusiness) March 15, 2019
Jacinda Ardern concerned about the attack footage being broadcast
New Zealand’s Prime Minister Jacinda Ardern said that Facebook Chief Operating Officer Sheryl Sandberg had reached out to her on the issue but the pair had not yet spoken.
“We did as much as we could to remove or seek to have removed some of the footage that was being circulated in the aftermath of this terrorist attack. Ultimately, though, it has been up to those platforms to facilitate their removal and support their removal,” Ardern said.
“I do think there are further questions to be answered. Obviously these social media platforms have wide reach.
“This is a problem that goes well beyond New Zealand, it has played out in other parts of the world. So whilst we might have seen action taken here, that hasn’t prevented (the footage) being circulated beyond New Zealand shores.”
Header image credit: Štefan Štefančík