Introduction
Is Artificial Intelligence (AI) a civilization killer? How should Christians think about AI? It seems clear that AI—and the notion of progress that often undergirds technological advancement—brings both upsides and downsides. Still, we should be cautious about sounding apocalyptic alarms. Rather than viewing AI as an alien threat, we should see it as the latest mirror reflecting humanity’s existing inclinations—toward both creation and corruption. That is not to suggest that AI is a neutral tool incapable of “acting back” on humanity (see Why Things Bite Back). I suspect that the move toward AI will have consequences—both positive and negative. The point, rather, is that the underlying dynamics of reality don’t change. Unredeemed humanity tends to exercise its capacities apart from the guidance of God’s word and without the restraint of unqualified allegiance to Christ—regardless of the technologies available at the time. What we are examining, then, is how a given technology allows humanity to recognize truth, goodness, and beauty while simultaneously encouraging dehumanization by promoting the illusion of independence from the Triune God.
To evaluate AI—and technology more broadly—from a theological standpoint, we need frameworks that move beyond surface-level concerns about convenience or risk. One such framework comes from Marshall McLuhan and his “four laws of media,” which may be applied to emerging technologies like AI. McLuhan proposed that every technology:
1. Enhances or extends a human capacity
2. Obsolesces something else
3. Retrieves something from the past
4. Reverses into its opposite when pushed to its extreme
These four laws offer not just a media theory but a diagnostic tool for discerning how technology interacts with the human condition—and for Christians, with sin, sanctification, and our dependence on God.
In what follows, we will consider (1) how AI amplifies certain human capabilities, (2) what AI makes obsolete, (3) what AI retrieves from the past, and (4) how AI may end up reversing its original character—or whether AI is itself a reversal of some earlier technology. Each of these areas would require a book-length treatment. For now, we will focus on selected aspects of AI, leaving others for a future exploration. Before turning to these questions, however, it seems appropriate to offer a brief treatment of technology more generally.
What Is Technology?
Our everyday understanding of technology tends to involve mechanical, electric, and digital devices (e.g., engines, smartphones, and software programs). However, other commonplace items—like forks, roads, or socks—are also technologies. Transhumanist thinker Nick Bostrom argues, “in its broadest sense, technology includes not only machines and physical devices but also ‘instrumentally efficacious templates and procedures’—including scientific ideas, institutional designs, organizational techniques, concepts, and memes” (see “The Vulnerable World Hypothesis”).
This broader understanding of technology isn’t unique to transhumanists. Derek Schuurman, a Christian professor of computer science and author, writes, “The term technology encompasses a broad range of objects, including ones that are not often associated with the word. Indeed, clothes and utensils are types of technology even if they are not commonly recognized as such” (see Shaping a Digital World).
In Marshall McLuhan’s thought, technologies were “extensions of man”—that is, tools that amplify human faculties such as vision, hearing, memory, or locomotion. They extend our physical bodies and mental/nervous systems into our environments in a variety of ways. Chopsticks, for instance, are an extension of the fingers: a technology of grasping designed to allow humans to move and handle food without touching it with their fingers. Technologies in this sense have always been with us. They involve not only physical tools but also what cognitive psychologist and philosopher John Vervaeke calls “psychotechnology”—ways in which we process information so as to improve our cognition both individually and collectively (e.g., the development of writing, software, and applications). AI may be classified as a psychotechnology designed as an extension and amplification of information ingestion, curation, and synthesis.
Construed in this manner, we begin to see that we are surrounded by technologies that are fitted to us and to our environment. These technologies—ways of extending ourselves—have been available since Adam and Eve used fig leaves for coverings. We might say that in a fallen world technology functions in a protective fashion. At the same time, technologies of various sorts act back on us, shaping the way we understand reality. As such, while AI presents new challenges because it is a new tool, uniquely fitted to us, that provides capabilities—for good and ill—we wouldn’t have without it, the challenges are not wholly new. Christians are, in a very real sense, facing the same challenge we always have from technologies of various sorts: the challenge of unconstrained and unguided human capacity leading us to assume we have no need of God or, at the very least, that God may be pushed to the margins of our lives.
AI through McLuhan’s Four Laws
McLuhan’s laws help us understand AI as a technology whose strengths reflect the best of humanity—and whose dangers mirror our fallenness and frailty. Though I’ve outlined McLuhan’s four laws above, they must be resituated within biblical and theological constraints—boundaries that shape how we relate to technology in the first place. As Alicia Juarrero suggests, “Constraints not only reduce alternatives—they also create alternatives. In other words, constraints generate new possibilities. They shape the properties a component exhibits because it is embedded in a system—properties it wouldn’t otherwise have.” So it is with the constraints of God’s reality: they do not merely limit; they also reveal, generate, and refine. These properties that emerge within constraints—the constraints of reality, particularly the reality of God—may be good and bad at the same time. McLuhan is neither fully utopian nor dystopian. Yet without theological grounding, his categories risk importing secular assumptions about progress, neutrality, and human agency.
Embedding McLuhan’s laws into a biblical and theological framework yields something like the following:
- Enhancement and extension: Technology amplifies human capacities—both those that reflect truth, goodness, and beauty, and those that distort and destroy (e.g., the creation of nuclear weapons). The amplification of human capacities can, and often does, contribute to human flourishing; however, it can also amplify capacities that diminish human flourishing. Advances in medical imaging, for example, improve on herbal remedies and traditional healing practices, contributing to human flourishing; by contrast, automatic weapons and algorithmically curated propaganda reveal how technology can magnify human destructiveness, whether by distributing ideologically damaging ideas or by enabling more efficient ways of killing others. In theological perspective, we must recognize the bane and blessing of human capacity exercised apart from a deep, uncompromising loyalty to God.
- Obsolescence: Declaring a technology “obsolete” is not a neutral act. It requires a set of normative criteria grounded in a deep understanding of human purpose and the structure of reality. When we label something “obsolete,” we are making a judgment based on an implicit set of values abstracted from contextual features. For instance, is a bicycle made obsolete by a car? It depends. If we value relatively fast travel over a long distance with multiple people, the car certainly has an advantage over a bicycle. If, however, we are looking to improve our cardiovascular system, a car is far less useful—it doesn’t make the bicycle obsolete in that instance. McLuhan’s idea of obsolescence may be best understood as perceived obsolescence, an abstraction detached from the full context of being human. “Obsolete,” to put it differently, is a judgment that occurs through the process of relevance realization—the discernment of what matters in a given moment (for more on relevance realization and perspectival knowing, see “Discernment and Discipleship: Four Ways of Knowing”). In theological terms, this discernment is shaped for Christians by obedience to God’s word, which no technology can make irrelevant, however quaint it may appear.
- Retrieval: Retrieval refers to a technology’s ability to resurrect a medium previously displaced or diminished by another. For example, audiobooks have reintroduced orality—once largely overshadowed by print—as a primary mode of communication. Similarly, dictation tools (e.g., talk-to-text) echo earlier practices, such as executives dictating memos to secretaries. Theologically, retrieval highlights the continuity and orderliness of God’s world. When newer technologies resemble older ones, we are reminded that our fundamental needs and vulnerabilities remain more constant than we often assume.
- Reversal: This characteristic refers to the inability of a technology, pushed to its limits, to continue progressing; instead, it flips into its opposite. Consider the internet: it amplifies human connection by overcoming barriers of time and space. Yet, as social media has shown, those connections often degrade into hostility. “Trolls,” internet pornography, vulnerabilities related to privacy and information security, and a variety of other problems reflect the internet’s “reversal” on itself. The internet allows not only for our mutual edification but also for trivialization, covetousness, gossip, slander, and uncharitable critique. As an open, democratized platform, the internet also tends toward “reversal” as the amount of available information increases. There is, in other words, more information than anyone could grasp, even with the help of AI. As a technology intended to enhance human communication, the internet has, in many ways, diminished human communication because it can become difficult to find just what you are looking for. On some level, then, we may think of “reversal” as similar to the negative amplification of human capacities. Theologically, it points to human finitude and frailty. Because humans are finite and frail, our technologies cannot help but take on the limitations of our finitude.
So, how might AI be construed through these four laws? While a full treatment would go beyond the scope of this piece, we can offer some brief comments regarding AI in light of McLuhan’s four laws of media. Note the following:
- Enhancement and extension: AI amplifies humanity’s ability to ingest, curate, and synthesize information into intelligible text-based summaries, graphics, and other creative works at speeds that no individual human or collection of humans could match. AI also amplifies humanity’s capacity to promote falsehood through a combination of ideological capture and selection biases that, at times, render its abbreviations so simple that they reach “the point of falsification” (see Huxley, Brave New World Revisited).
- Obsolescence: AI gives the impression that previous modes of research (e.g., reading full books or journal articles), internet searches, and visits to individual web pages are obsolete. I use the phrase “gives the impression” because, like “progress,” obsolescence too often assumes an ideal. In this case, we might say that the ideal is efficiency. Suppose we have a goal of investigating the meaning of the book of Revelation. We can accomplish that goal in a number of ways. We could sit down and read a physical commentary on Revelation, study the Greek text, or converse with experts on the book. AI, however, accomplishes the goal more efficiently. To the extent that we allow efficiency to become a constraint, using AI for research will come to seem like something that should be done. It takes on a “normative” tone rather than remaining one option among others. Non-AI forms of research such as reading books and articles, visiting libraries, and creating one’s own summaries of the literature require formative effort; they shape our character in ways that AI can’t. When we think about what AI makes obsolete, then, we are making a value judgment that is largely based on efficiency. Consider, for instance, whether an email really makes a letter obsolete. It doesn’t. There is an intimacy to a letter that an email can’t match. There is something about using pen and paper that is haptically formative in a way that typing or dictating an email isn’t. The utopian vision might see AI eliminating the need for human research, but that is a pragmatic vision rooted in efficiency—if an AI can do it faster, why have a human do it at all? The answer is that human effort has effects that AI can’t and will never be able to produce: human effort shapes character. The dystopian vision, for its part, denies the potential for AI to amplify the truth, goodness, and beauty humans are capable of exhibiting.
- Retrieval: Almost ironically, AI retrieves a centralization of information similar to that of pre-digital publishing. Prior to the internet, an editor (or team of editors) would select authors and books to publish based on various factors and then distribute the books they had chosen. AI does something similar, though on a different scale. In doing so, it “retrieves” a more controlled approach to information distribution that was, to a large degree, lost with the democratization of the internet.
- Reversal: The sheer amount of information available on the internet means that developing common knowledge becomes extremely difficult. There is always some piece of information that doesn’t reach the masses. By curating, synthesizing, and summarizing this information, AI helps us take advantage of the “collective intelligence” available on the internet. At the same time, because AI has to select from all the information available to it, it must utilize criteria for determining which information is relevant and which is irrelevant. These criteria may end up making AI “lean” toward a particular ideological perspective. An AI, for instance, could be more “liberal” or more “conservative” depending on its programming or on who is using it. As such, AIs have the potential not only to promote ideologies contrary to Christianity but also to scratch the itching ears of users by learning what those users really want to hear. It seems to me that market pressures will demand either that AIs tailor the information they select to the individual user (something that could devolve into a sort of digital-age “yellow journalism” or a re-envisioned version of groupthink) or that “verticals” will develop, with different AIs being “branded” to attract different groups. Of course, AI could “reverse” in other ways. The technology created to serve humanity could also act back on humanity so that its intelligence enhances mankind’s delusions or captures humanity through the control of information. If the servant were to become the master, this would certainly be a reversal. Though science fiction versions of this problem make for interesting literature and movies, this latter reversal need not be total to be problematic. It could happen in little ways that eat away at our humanness so that we don’t notice what we are giving up.
This analysis is partial. What I hope it illustrates is the possibility, if not the necessity, of calm, thoughtful, nuanced evaluations of AI and other technologies. My hunch is that AI is the culmination of a pattern of individual and collective decision-making that has been less concerned with the way humans relate to God, themselves, others, and the world than with overcoming human biological limitations through technology in the name of efficiency and ambiguous notions of progress (see “How Should Christians Approach Progress in Technology?”).
How Might We Think about AI?
AI will bring trade-offs. It is certainly appropriate to speculate about how lopsided those trade-offs may be. To do so, however, we need a vision of humanity deeply rooted in God’s word and a deep understanding of reality’s “underlying dynamics.” As I write in “God, Wisdom, and What Matters Most,” “In Deuteronomy 30, the Israelites are given the choice between life and death. God calls them to love him by obeying his commands. Turning away from God will bring death. Remaining loyal to God will bring life. In verse 20, Moses reminds the Israelites that the Lord ‘is your life and length of days.’ The passage points to the governing dynamics—the deep structure—of reality. Unqualified loyalty to God is intrinsic to reality and, as such, is the only way we can truly see through illusion.” Without this understanding of reality’s “governing dynamics,” our judgment will lean toward what we often refer to as pragmatic concerns rather than toward asking how we might make decisions so that God alone gets the glory.
In thinking about AI, then, we need to be careful to avoid both utopian and dystopian stories. AI may pose certain threats. We may want to preserve certain aspects of our humanity rather than surrendering them to AI, but to suggest that AI will lead to the demise of humankind is difficult to justify biblically. The world’s current state isn’t due to technology but to the human desire to be independent of God. Is AI an expression of that desire? To some degree, yes. However, not everything that humans create is evil. As Christians, we need to recognize that no increase in human capacity will be the impetus for the new heavens and the new earth (i.e., utopia)—Christ’s victory has done that for us. We participate in that victory by giving Christ our allegiance. Any enslavement that might come about because of AI (i.e., the dystopian impetus) isn’t the direct result of technological innovation but of sin. As such, we need to think deeply about AI and other technologies, but we must do so within a Christian framework that neither denies the potential downsides of AI nor inflates its upsides.