Free to Tweet?

Jason Thacker May/June 2022

Content Moderation, Religious Freedom, and the Digital Public Square

“If religious freedom is advocated only for pragmatic reasons, it can and will be sacrificed to expediency.”1 Those words, spoken in 1983 by the late evangelical theologian and ethicist Carl F. H. Henry, were prophetic. They foreshadowed much of what was to come after his death in 2003 as a new digital epoch unfolded, marked by the meteoric rise of social media as one of society’s main communication conduits. And while traditional threats to religious freedom and free expression are still unnervingly prevalent throughout much of the world today, this social media revolution raises a new and pressing question: How can we preserve freedoms of religion and speech in an increasingly digital society, where governments have taken a back seat to transnational technology companies that now wield an outsized influence over the digital public square?

What began as a way simply to connect with others and share innocuous information has become one of the most important means by which we communicate with one another and seek to promote the common good. At the same time, our public square has grown increasingly disconnected from a transcendent moral framework, allying itself instead with the rampant moral autonomy of our secular age. In this new environment, people of faith must take a hard look at how moral convictions are to be expressed in the public square of today, and work to preserve a robust foundation of religious freedom for future generations.

Big Tech and the New Public Square

Concern about the outsized influence of technology companies on our public discourse is one of the rare points of bipartisan agreement in U.S. society today. But there is little agreement on the particulars. Progressives typically argue for more content moderation, especially given the growing influence of misinformation, fake news, and hate speech online. They argue that more must be done to curb these dangers, and that if Big Tech won’t step up to the challenge, the government must step in. Conservatives, by contrast, have recently argued for less moderation, contending that conservative speech and values have been unfairly taken down or suppressed. They cite instances of users being banned, or even of entire social media platforms being shut down, simply because of the prevailing ideological agenda of Silicon Valley.2 They likewise argue that if Big Tech doesn’t rise to the occasion, the government will need to step in to regulate this influential industry.

These debates are often grouped under the moniker “Big Tech,” a term meant to signify the disproportionate influence and ubiquity of these media platforms in the public square. The term fails, however, to account for some of the largest “big tech” companies in the United States, including Microsoft, Disney, Comcast, and Verizon. It is also narrowly focused on American companies, excluding tech and media giants such as China’s Tencent and Alibaba, which have deeply concerning records on free speech and religious expression because of the rule of the Chinese Communist Party.3 “Big Tech” is instead intended to denote companies such as Meta (Facebook), Alphabet (Google/YouTube), and Amazon, as well as companies with much smaller user bases but enormous influence in the digital public square, such as Twitter. One of the main ways these companies exert such influence over public conversation and opinion today is through how they choose to moderate user content on their platforms.

The Purpose of Content Moderation

Content moderation is difficult work for any social media company. Every day, millions of posts and messages are shared on these platforms. Most are benign in nature, but sometimes abusive, hateful, or violent content is shared or promoted by individuals and organizations. Most social media companies expect their users to engage on these platforms within a certain set of rules or community standards. These content policies are usually developed with care and reflect the gravity of the task of providing a safe and appropriate place for users. It is an ethically thorny exercise, not only because social media plays such a massive role in our diverse society, but also because of the hyperpoliticization of many of the issues surrounding online speech. 

During the past few years, content moderation practices have come under intense scrutiny because of the breadth of the policies themselves as well as the misapplication—or, more precisely, the inconsistent application—of these rules for online conduct. Whether it’s preventing terrorist groups such as ISIS or authoritarian regimes such as the Chinese Communist Party from using these platforms for mass propaganda, or even banning a former president of the United States, the moderation decisions made by these companies shape the public conversation in countless ways for both good and ill.4

One of the most common questions I hear from those concerned about content moderation is whether technology companies should be moderating content at all. Some argue that moderation is inherently anti–free expression because we have the right to express ourselves in any way we see fit. While freedom of expression is central to our democratic experiment in America, the issue is more complex than it may first appear. It is important to recognize the difference between censoring speech that is merely disagreeable and limiting speech that encourages or glorifies physical violence, that is illicit, sensual, or exploitative in nature, that promotes crime, or that is inauthentic.

Content moderation practices are actually encouraged by Section 230 of the Communications Decency Act of 1996, bipartisan legislation designed to promote the growth of the fledgling Internet. Section 230 gives Internet companies a liability shield for user-generated online content—meaning that users, not the platforms themselves, are responsible for what they post. The hope was that these companies would enact “good faith” measures to remove objectionable content in order to make the Internet a safer place for our society.5

These “good faith” measures are designed to create safer online environments for all users. The debate over content moderation usually centers on exactly what these measures should entail, not on whether they should exist at all. An Internet or social media platform without any moderation or rules would quickly devolve into a dangerous environment filled with misinformation and endless unfiltered or illegal content. And even with content moderation rules in place, it is undeniable that social media has been used in ways that lead to real-world harm.

Yet, although content moderation is key to maintaining a safe and healthy digital public square, it is also increasingly being used to suppress certain viewpoints deemed unworthy by the court of public opinion—viewpoints that frequently touch on matters central to the Christian moral tradition, such as human sexuality and religious freedom. And these content moderation decisions often rest on an ill-defined understanding of what constitutes hate speech.

Hate Speech Online

Another question I’m often asked is how, exactly, content moderation policies limit religious expression. This is a question that is often framed in light of a rampant moral autonomy that champions free expression only for what is popular or seen as righteous in the secular sense. Ideas that run contrary to this secular orthodoxy are often deemed hateful and backward, not fit to be expressed in the public square. Hate speech has thus become a hugely consequential area of public debate, with religious expression increasingly seen as inherently hateful and deleterious to civil discourse. 

While many technology companies refer to international norms when dealing with controversial topics—including the nature of human rights—it should be noted that hate speech is often left legally undefined because of the deep tension between curbing hateful speech and protecting free expression. The United Nations’ own plan of action on hate speech from May 2019 makes this clear: “There is no international legal definition of hate speech, and the characterization of what is ‘hateful’ is controversial and disputed.” While the UN leaves hate speech undefined, it clearly desires robust protections against it, calling hate speech “a menace to democratic values, social stability and peace” that “must [be confronted] . . . at every turn.”6

Similarly, in the United States, there is no legal definition of hate speech, and the U.S. Supreme Court has routinely affirmed that hate speech is protected by the First Amendment. According to the American Library Association, “Under current First Amendment jurisprudence, hate speech can only be criminalized when it directly incites imminent criminal activity or consists of specific threats of violence targeted against a person or group.”7

Defining hate speech is a perennially difficult task, made even more complex with the rise of online speech through social media platforms.8 There are constant debates in society and the academy over what actually constitutes hate speech and whether the label should be limited to speech that incites physical violence or harm. Many companies such as Meta and Twitter have defined hate speech broadly, an approach that necessarily infringes on free expression and religious freedom concerning some of the most contentious issues of our day—namely, human sexuality and marriage. 

Most people would agree that many of the proscribed categories laid out by Twitter—including threats of physical violence, “wishing, hoping or calling for serious harm on a person or group of people,” and “references to mass murder, violent events, or specific means of violence where protected groups have been the primary targets or victims”—fall clearly under good faith content moderation.9 Christians, in particular, can affirm these guidelines because of their belief in the innate value and dignity of all people as created in God’s image and the freedom of conscience that flows from their understanding of the imago Dei (Genesis 1:26–28). But when hate speech is broadened to include speech that makes one feel uncomfortable or that one simply does not like, we have set a dangerous precedent for public discourse and for the future of religious speech online.

Religious Freedom and Digital Technology

In October 2020 the Oversight Board created by Meta (Facebook) began operation. The board was designed both as a review mechanism through which users can appeal content moderation decisions and as a venue to which Meta can refer some of its most difficult content decisions. Its stated purpose is to help Meta “answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up, and why” through independent, expert judgment.10 This board—which can have up to 40 members from around the world, including four cochairs—has in recent years taken up a number of cases concerning freedom of expression and religious freedom abroad. The outsized number of cases involving religious freedom speaks not only to the gravity of moderation decision-making today but also to the complexity of navigating religious speech online in a transnational environment. Some companies have a better—but not perfect—track record than others in protecting religious expression; others, such as Twitter, have routinely suppressed people of faith who seek to express their beliefs freely, especially on issues of human sexuality and gender.11

Most technology companies’ content moderation policies do not explicitly mention religious freedom as a goal of free expression, even as they proclaim a robust commitment to diversity and inclusion. Religion is often mentioned only in terms of hateful speech based on someone’s religious affiliation; religious expression itself is not treated as a core value. But true inclusion and diversity must be for all people, not just those who hold popular or widely accepted views on some of the most consequential issues of the day.

Religious freedom isn’t just the freedom to believe or to worship, but the freedom to live in accordance with those beliefs in all parts of life. Limiting religious expression is increasingly being normalized in our secular society. People of faith must be aware of the shifting dialogue around religious speech, especially as it pertains to our digital environment, where so much communication and conversation takes place on privately held social platforms. 

Faith, by its nature, is public, and as the late Richard John Neuhaus wrote, there is no such thing as a truly “naked public square,” one devoid of quasi-religious undertones and ideological commitments, even among its most secular members. Everyone brings their beliefs into the public square, and suppressing religious expression is at odds with the commitments to free expression and human dignity that our nation—and many other nations around the world—uphold as cherished ideals. No matter what some will claim, people of faith simply cannot check their beliefs at the door or act as if their faith has no bearing on their public life.

Some people of faith have been rightly criticized for taking an exclusive approach to religious freedom, one that says “religious liberty for me but not for thee.” A similar criticism can be leveled at those who prize free expression for socially acceptable ideas but not for those deemed unworthy of expression, such as traditional Christian sexual ethics.

Carl F. H. Henry once said, “It is not the role of government to judge between rival systems of metaphysics and to legislate one over others; rather its role is to protect and preserve a free course for its constitutional guarantees.”12 While there are complexities in applying this principle to a digital public square governed by privately held companies, free expression and religious freedom remain vital to a robust and healthy public square. With Big Tech’s outsized influence over public discourse, especially in a time of such polarization and division, we need a truly inclusive approach, an approach that doesn’t suppress religious viewpoints in an ironic call for toleration, diversity, and inclusion.

1 Carl F. H. Henry, The Christian Mindset in a Secular Society: Promoting Evangelical Renewal and National Righteousness (Portland, Oreg.: Multnomah Press, 1984), p. 65.

2 For more about various state-level bills and proposals, see Jason Thacker, “Should the Government Regulate Social Media?” Ethics & Religious Liberty Commission (ERLC), June 30, 2021, https://bit.ly/3plx9bM.

3 Jason Thacker, “Wired for Tyranny?” Liberty, September/October 2021, p. 4.

4 Jason Thacker and Joshua B. Wester, “Understanding Twitter Suspensions and the Need for Consistent Policies,” ERLC, February 17, 2021, https://bit.ly/3sqDkNT.

5 For more on Section 230 and its history, see Jeff Kosseff, The Twenty-Six Words That Created the Internet (Ithaca, N.Y.: Cornell University Press, 2019).

6 “UN Strategy and Plan of Action on Hate Speech” (United Nations, May 2019), https://bit.ly/3hpjYT8.

7 “Hate Speech and Hate Crime,” American Library Association, Advocacy, Legislation & Issues, December 12, 2017, https://bit.ly/36Bc86u.

8 For more on hate speech and free expression, see Jason Thacker, “Where Do We Draw the Line on Hate Speech?” ERLC, August 9, 2021, https://bit.ly/3M78TEg, accessed September 30, 2021. 

9 “Twitter’s Policy on Hateful Conduct,” Twitter Help, https://bit.ly/3ItQkIe, accessed February 16, 2022.

10 Learn more about the Oversight Board at https://oversightboard.com/.

11 Jason Thacker, “Is Content Moderation Stifling Public Discourse?” ERLC, May 5, 2021, https://bit.ly/3BUNnxX, accessed September 29, 2021.

12 Henry, p. 80.

Illustration by Tim O’Brien


Article Author: Jason Thacker

Jason Thacker serves as assistant professor of philosophy and ethics at Boyce College and a senior fellow in Christian ethics at The Ethics and Religious Liberty Commission. He is the author of several books, including Following Jesus in a Digital Age and The Digital Public Square: Christian Ethics in a Technological Society.