News

Social media platforms struggle as violent videos go viral

Photo: Shutterstock, photo editor: Adelina Mamedova

After two recent tragedies, the fatal stabbing of Iryna Zarutska on a North Carolina train and the shooting of Charlie Kirk in Utah, social media companies are struggling to stop the spread of graphic footage. Even after the platforms remove the videos, copies keep resurfacing, underscoring their ongoing difficulty controlling what gets shared, CNN reported.

Users of Instagram and TikTok have reported seeing the disturbing videos days after the incidents. At one point, TikTok’s algorithm even suggested search terms like “raw video footage” before those suggestions were taken down. TikTok says it is removing close-up shots, but company spokesperson Jamie Favazza acknowledged that some wider-angle videos might still be available.

Big tech companies are taking different approaches to the problem. Meta applies warning labels, blocks posts that show or praise the violence, and restricts some content to adult users. YouTube removes the most graphic videos and directs viewers to news coverage instead, saying this helps keep people informed.

But these safety measures do not work the same way on every platform. Katie Paul of the Tech Transparency Project created a test account posing as a teenager on Instagram and quickly found uncensored shooting videos that played automatically. Her experience raises questions about how well Instagram protects young users compared with other sites.

Though Meta acknowledged it was slow to screen some edited versions of the videos, it pushed back against criticism of its teen safety measures.

The situation points to a bigger problem. While traditional media follow clear rules for handling violent content, social media platforms lack comparable standards. Without them, users are more likely to encounter graphic material, and mental health experts warn that exposure can cause real harm, including vicarious trauma.