Facebook has been under pressure to change its policies since a gunman livestreamed the killing of 50 people at two mosques in Christchurch, New Zealand, on March 15.
Critics say Facebook did not remove the video quickly enough, allowing it to spread across the internet and to be uploaded to other online services such as Google's YouTube.
Facebook removed the video 12 minutes after the livestream ended, but users grabbed the footage and reposted clips from it, making it challenging for Facebook to block all the footage.
Facebook Chief Operating Officer Sheryl Sandberg says Facebook identified more than 900 different videos showing portions of the attacks and deployed artificial intelligence tools to identify and remove hate groups in Australia and New Zealand, including the Lads Society, the United Patriots Front, the Antipodean Resistance and National Front New Zealand.
For years now, Facebook has been buffeted by sharp criticism for rolling out the live-streaming product without an adequate plan to prevent acts of violence from being shown to its more than 2 billion users. Videos that glorify violence violate Facebook's rules, but Facebook mostly relies on users to alert the company to videos that should be taken down.
The company has hired thousands more moderators, in part to spot violence in live videos, and is training its artificial intelligence to detect it as well.