Videos of Charlie Kirk’s Murder Are Still on Social Media. That’s No Accident.
Last updated: November 17, 2025 7:34 pm
Charlie Kirk hands out hats before he was shot and killed during an event at Utah Valley University in Orem, Utah, on Sept. 10, 2025.  Photo: Tess Crowley/The Deseret News via AP


After Charlie Kirk was murdered at Utah Valley University, graphic videos of the right-wing provocateur’s assassination went viral on every major social media platform. It’s not surprising that such violent footage quickly spread — especially around a killing as high-profile as Kirk’s. What’s unusual, however, is how long those videos have been allowed to stay up.

Search Kirk’s name on Instagram right now, and for every three videos of him “owning” a college student in a debate, there’s at least one of him bleeding out. Search “Charlie Kirk shooting,” and your feed will be inundated with videos of the incident. This was not always the case. After a gunman livestreamed his attack at a mosque in Christchurch, New Zealand in 2019, Meta said it took down 1.2 million versions of the video before users could upload them to the platform. The Southern Poverty Law Center also tracked uploads of videos after mass shootings in Christchurch; Halle, Germany; and Buffalo, New York, and found a dramatic decrease after the seventh day of each of those shootings. 

Social media companies like Facebook, Instagram, X, and YouTube have traditionally responded much faster to the proliferation of such graphic violence on their platforms, at least in the West. (Internet users in places where these platforms dedicate fewer resources to moderation, like Gaza or Tigray, are all too familiar with the kind of deluge of gore American users have been subjected to these past few weeks.)

Lawmakers including Rep. Lauren Boebert, R-Colo., and Rep. Anna Paulina Luna, R-Fla., have called on the platforms to delete the videos of Kirk’s gruesome assassination.

“He has a family, young children, and no one should be forced to relive this tragedy online. These are not the only graphic videos of horrifying murders circulating — at some point, social media begins to desensitize humanity. We must still value life,” Luna wrote on her X account. “Please take them down.”

But for several years, Republican legislators, in the name of free speech, have pushed tech companies to gut the very systems they now expect to protect them. It was part of a pressure campaign intended to force social media companies to fire moderators, abandon fact-checking, and weaken their hate speech policies. As Luna and Boebert now demand the removal of videos of Kirk’s gruesome assassination, they’re experiencing the predictable consequences of the information ecosystem their party created — and are horrified that the chaos has turned inward.

In 2023, after Rep. Jim Jordan, R-Ohio, succeeded Jerry Nadler, D-N.Y., as chair of the House Judiciary Committee, he immediately used his platform to start subpoenaing Big Tech companies and research organizations that study online hate speech and misinformation, like the Stanford Internet Observatory. Jordan accused them of a “marriage of big government, big tech [and] big academia” that attacked “American citizens’ First Amendment liberties.” Notably, last year, congressional Republicans accused the FBI and tech platforms of collaborating to defeat Donald Trump in the 2020 election by suppressing posts related to Hunter Biden’s laptop.

Meanwhile, conservative activists sued the Biden administration, complaining that it pressured social media companies to censor conservative views on Covid-19 vaccines and election fraud. Though they lost the suit, Republicans have long held that platforms overly censor their posts. Studies also show that Republicans are far more likely to spread misinformation. During the 2016 election, for example, 80 percent of the disinformation on Facebook came from Republican-leaning posts. Another 2023 study found that conservatives were eight times more likely to spread misleading content than those who lean liberal. In other words, Republicans were more likely to be censored by social media because their posts were more likely to violate platform policies.

Of course, a lot has changed since then, and tech companies have gone much further in appeasing conservatives. Perhaps the biggest coup for conservatives in the battle against “liberal tech” was Elon Musk’s purchase and subsequent rebranding of Twitter. To appease Republican activists, Musk — who recently advocated for the imprisonment of those who belittle the death of Kirk — promised to turn Twitter into a “free speech” platform. His first move was laying off a majority of the company’s staff involved in devising and implementing its content moderation policies. One former Twitter staffer who worked in this division estimated that almost 90 percent of the company’s content moderation staff was laid off. Twitter, now X, also said it would rely on its Community Notes feature and AI to moderate content.

Musk’s changes were not only in staffing, but also in how strongly the company enforces its policies. While Twitter’s hate speech policies still exist on paper, the platform has chosen not to enforce them, and has instead verified hundreds of accounts belonging to white supremacists, reinstated the accounts of notorious promoters of anti-trans content, and, of course, brought back Trump, who was banned from the platform for his role in inciting the January 6 riots. Musk also joined Republicans’ attack on researchers who monitor disinformation by suing the Center for Countering Digital Hate in 2023 — though that lawsuit was later dismissed.

The inflection point for this yearslong campaign by conservative activists was Meta’s capitulation to their demands shortly after Trump’s election win. In January, CEO Mark Zuckerberg, dressed in a loose black T-shirt and a gold chain, told Facebook and Instagram users the company would drastically scale back its third-party fact-checking operation. He said the company would also ease enforcement of its hate speech rules, especially around immigration and gender. “It’s time to get back to our roots around free expression on Facebook and Instagram,” Zuckerberg said.

Meta, YouTube, and others have said their content policies would apply to the Kirk assassination videos. But in capitulating to Republican demands, they have not only weakened how strongly they review content but also gotten rid of much of the staff that does that work.


Like Twitter, Meta has since quietly laid off many of the people who work on its trust and safety teams while also announcing it would double down on AI-based moderation. Not even a month after Meta announced its content policy changes, users reported seeing more graphic content on the platform.

“Underinvesting in platform safety has serious consequences,” says Martha Dark, the co-executive director of Foxglove Legal, a tech accountability nonprofit that advocates for content moderators. “It’s striking that after years of demanding platforms ease up on enforcement, some politicians are now outraged at the very consequences of that pressure. You can’t have it both ways: Weakening moderation inevitably means violent and graphic content is left up for longer and spreads more quickly,” Dark adds.


As for the tech companies’ claims that AI can carry the burden of their content moderation load: Olivia Conti, a former Twitter product manager who focused on abuse detection algorithms, told me that these algorithms may as well be “pizza detectors” because they “flag anything with predominantly red tones.” Even the hashing technology that tech platforms have traditionally used to identify these videos can easily be evaded through small edits.
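The weakness of exact-match hashing is easy to see. A fingerprint database only catches byte-for-byte copies: change even one byte of a re-upload and the digest no longer matches. The sketch below is a toy illustration of that failure mode, not a description of any platform’s actual system, which typically uses perceptual hashes that tolerate some edits but can still be defeated by cropping, re-encoding, or overlays:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Exact-match fingerprint, like those kept in takedown hash databases."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for a flagged video's raw bytes.
original = b"\x00\x01\x02" * 1000
flagged_hashes = {content_hash(original)}

# A trivial edit -- here, appending a single byte -- flips the entire
# digest, so the re-uploaded copy sails past the database check.
edited = original + b"\x00"

print(content_hash(original) in flagged_hashes)  # True: exact copy caught
print(content_hash(edited) in flagged_hashes)    # False: tiny edit evades it
```

This is why detection of edited re-uploads still depends on fuzzier perceptual matching and, ultimately, on human reviewers.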

Ellery Biddle, the director of impact at Meedan, a technology nonprofit that studies harmful speech and gender-based violence online, says that while some content moderation can be assisted by AI, “you still need teams of smart people to tell the AI what to do.”

Republicans intended to take aim at the teams that moderate hate speech and harassment. But those same people are also responsible for monitoring and removing gruesome videos, like those of Kirk’s death.
