Should social media companies have the right to censor content on their platforms?

  • Writer: lockekeypublicatio

Sarah Yu


As a teenager living in 21st-century America, I have witnessed firsthand how social media platforms like Instagram, Twitter, and Reddit shape public discourse on art, media, science, and politics. In an era where influential figures such as Elon Musk dominate conversations about freedom of expression, the question of whether social media companies should have the right to censor content has become increasingly significant. While social media companies possess the legal authority to moderate their platforms, their moral right to do so hinges on whether such moderation protects the collective good without suppressing individual expression.


The philosophical debate surrounding freedom of expression inevitably returns to the First Amendment to the U.S. Constitution: "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances" (U.S. Constitution, amend. I). This amendment establishes the framework that defines our fundamental rights and responsibilities regarding expression and belief. By enabling citizens to examine all perspectives and form independent judgments, the First Amendment protects society from government suppression of ideas and information.


However, the First Amendment's protections apply specifically to government censorship, not to restrictions imposed by private entities. Platforms like Twitter, Facebook, and other social media companies can regulate speech precisely because they are private businesses, not public institutions. This distinction raises a significant question: what obligations do private platforms bear?


British philosopher John Stuart Mill's harm principle offers valuable guidance. Mill argues that individual liberty should remain unrestricted except when it causes harm to others. In his 1859 work On Liberty, he writes that "the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others" (Mill 59). Though Mill focused on governmental overreach, his principle applies to corporations controlling the digital lives of millions worldwide. Social media platforms bear the responsibility to restrict speech that causes direct, measurable harm while preserving individual expression.


Determining what constitutes harm, however, is complicated. Social media platforms can amplify disruptive ideologies and messages that, as researchers argue in Platforms, Protests, and the Challenge of Networked Democracy, hijack democracy by manipulating public opinion. In our increasingly polarized climate, maintaining social media as a safe space for diverse thought has become more challenging.


Yet social media also serves as a vital tool for political engagement, particularly among individuals who feel marginalized by traditional democratic processes. The complexity deepens when we consider how traditional media has adapted, increasingly producing polarizing content to retain audiences, a strategy that inadvertently strengthens extremist groups and ideologies.

The proliferation of misinformation represents another serious form of harm. During the 2016 U.S. presidential election, BuzzFeed discovered that false election stories from hoax sites and hyper-partisan blogs generated more engagement than legitimate news content in the final three months of the campaign. This pattern demonstrates how people actively seek information that confirms their existing beliefs, regardless of its veracity. Additionally, individuals exploit social media's attention economy by deliberately spreading fake news for profit. The consequences can be severe: misinformation about COVID-19 vaccines, for instance, has contributed to preventable deaths.


Elon Musk's acquisition of X (formerly Twitter) illustrates these tensions. Describing himself as a "free speech absolutist," Musk eliminated policies addressing misinformation during crises, armed conflicts, public health emergencies, and natural disasters. The platform removed protections against false allegations of war crimes, misleading battlefield reports, COVID-19 misinformation, and election disinformation, while maintaining policies against child exploitation and violent threats.


Mill's framework helps us navigate this selective approach to moderation. When platforms allow misinformation to spread unchecked, do they protect individual expression or undermine access to reliable information? For teenagers like me, who are among the heaviest social media users, these questions carry particular weight, as we are especially vulnerable to misinformation.


The question of who determines collective interest remains contentious. Social media companies like Facebook, Instagram, Twitter, and Reddit wield enormous influence over public discourse, and with that influence comes a responsibility to understand their users' collective interests. As teenagers and digital natives, we must stay informed about platform policies and cultivate critical thinking skills.


Perhaps the question we should ask is not whether social media companies have the right to censor content, but rather how they can make policy decisions that serve society's collective interests while balancing individual expression with necessary moderation. As both users and citizens, we have a stake in shaping this conversation, and the future of digital discourse depends on our willingness to engage thoughtfully with these complex questions.









Works Cited

Brink, David. “Mill's Moral and Political Philosophy.” Stanford Encyclopedia of Philosophy, 9 October 2007, https://plato.stanford.edu/entries/mill-moral-political/. Accessed 28 November 2025.

“First Amendment and Censorship.” American Library Association, https://www.ala.org/advocacy/intfreedom/censorship. Accessed 8 November 2025.

Kopps, Adrian. “Two Years after the Takeover: Four Key Policy Changes of X under Musk.” Alexander von Humboldt Institute for Internet and Society, 2024, https://www.hiig.de/en/policy-changes-of-x-under-musk/.

Olaniran, Bolanle, and Indi Williams. “Social Media Effects: Hijacking Democracy and Civility in Civic Engagement.” Platforms, Protests, and the Challenge of Networked Democracy, 27 February 2020, pp. 77–94, doi:10.1007/978-3-030-36525-7_5.

Mill, John Stuart. On Liberty. 1859. Econlib, https://www.econlib.org/library/Mill/mlLbty.html.
