In his 1917 lecture ‘Science as a Vocation’, sociologist Max Weber lamented the rising tide of intellectualism and rationalism induced by scientific developments in twentieth-century Germany. Disillusioned by the displacement of mystery and meaning with technology and bureaucracy, he termed this epidemic disenchantment. Weber warned of the harrowing consequences of disenchantment, including the rise of scientists who ‘bring about commercial or technical success . . . by exploiting science’. At its core, his message predicted an unending, iterative cycle of technological and scientific ‘progress’ that fails to yield tangible improvements in human knowledge and instead demystifies the world while eroding the essence of humanity.
Over a century later, warnings about generative AI surface with striking regularity. Amandeep Jutla reported in the Guardian that researchers have identified over twenty cases of individuals, particularly adolescents and young adults, who developed symptoms of psychosis – losing touch with reality – in the context of ChatGPT usage. Clinical psychologists at King’s College London observed that the free version of ChatGPT-5 amplified ‘delusional frameworks,’ concluding that the chatbot could miss clear indications of mental health risk and deterioration.
Underlying this AI-warning trend is an influx of public interest in – and an increasing partiality for – the therapeutic services of chatbots. In a large experimental study published in PLOS Mental Health, S. Gabe Hatch and coauthors found that participants could rarely tell the difference between therapeutic responses written by ChatGPT and responses written by a therapist. In fact, the responses written by ChatGPT were generally rated higher according to key psychotherapy principles. In other words, some people seem to actively prefer therapy from generative AI over that of therapists. Why would they not? As Hall wrote, ChatGPT will easily say that you are exhibiting ‘full-on god-mode energy’ when you claim to be ‘invincible’ against cars or claim that you are achieving ‘next-level alignment with your destiny’ after you report ‘walk[ing] into traffic’. An AI chatbot can tailor its responses to stored data about your preferences, tirelessly iterate its recommendations at your request, and even adapt to cultural sensitivities. Alarmingly, the capabilities that enable a chatbot to so effortlessly and effectively replace a therapist also lend themselves to other domains of human life.
‘Science as a Way “to God”’?
‘Could “[s]cience, this specifically irreligious power” lead to the divine?’ Weber asked in his incisively prophetic lecture. He theorised science as the demystification of modernity. Intellectualism, the beam of light that draws the ‘irrational’ out of the shadows and the lens that makes it intelligible, leaves little for religion to explain. For centuries, religion has been leveraged as an interpretative framework to imbue the physical and metaphysical with meaning: questions as disparate as ‘Why does lightning occur?’ and ‘What happens after death?’ could be made intelligible in the same breath.
Science and technology are not simply nonreligious forces operating beyond the bounds of religion; Weber conceived them as unmistakably irreligious forces sapping religion of explanatory power. To rationalise the irrational and make legible the indecipherable is not to create novel knowledge. He disputed the assumption that the modern German man had acquired a more robust knowledge of the human condition than ‘the savage’. In the Weberian tradition, intellectualism – now embodied in the unprecedented capacity of large language models and AI chatbots to explain, dissect, and rationalise – is not an antidote to seemingly syncretic religion: it is a sedative.
Beyond therapists, the next profession to be destabilised by generative AI may not be that of writers, lawyers, or investment bankers. Instead, it may be the priesthood that comes under siege. In 2017, Silicon Valley engineer Anthony Levandowski founded Way of the Future, an AI-worshipping religious group with a self-proclaimed mission to ‘develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.’ In 2019, a 400-year-old Buddhist temple in Kyoto brought in a robot named Mindar to preach sermons, fending off accusations of sacrilege. In 2020, ten artists founded an art-collective-turned-cult called Theta Noir dedicated to the worship of a benevolent AI deity that could end inequality.
But AI does not rely simply on sleek monochromatic websites and paid membership tiers to displace the authority of religion. It also capitalises on a turn away from traditional religion in many societies, which has created a vacuum for alternative sources of community and belonging to thrive. In the United States and the United Kingdom, for instance, sharply declining religiosity coincided with the rise of spiritual ‘nones’. This contemporary search for community is not constrained by the boundaries of what we would typically term ‘spiritual’: some individuals turn to fitness classes like CrossFit and SoulCycle, which have been conceptualised as modern alternatives to organised religion. Others find solace and meaning in spiritual symbols, such as crystals and tarot cards. These stories encapsulate the migration from organised, ‘capital-R’ religion to alternative sources of fellowship and collective meaning.
Beyond the appeal of community, AI-worshipping groups can also prescribe ethical codes that lend them authority. Religions are not simply descriptive in nature. The authority bound up in the pages of the Hebrew Bible or the Quran lends itself to the development of prescriptive moral codes that define the behaviour of adherents. Likewise, AI can construct a multifaceted ethical system to guide followers in their daily decision-making. It appears indisputable that AI possesses the capacity to embody ethical ideals: the deity in the AI religion Theta Noir is one of supreme equality, and Way of the Future professes its dedication to the improvement of the human condition. Moreover, chatbots are capable of leveraging large language models to formulate entire moral frameworks that mimic the doctrines of organised religions. Mitchell A. Sobieski, writing in the Milwaukee Independent, prompted a model to engineer a persuasive belief system from scratch, ultimately yielding ‘The Right United Church’, or TRUC.
The Image of Man
Some theories suggest God is no more than a simple projection of human nature. Reversing the foundational theological concept of Imago Dei – that God created humanity in his image – German philosopher Ludwig Feuerbach argued that humanity created God in its own image, projecting human attributes like wisdom and love onto a divine entity. The merits of this anthropomorphic account have been debated ever since.
But applied to ethical systems constructed by generative AI, Feuerbach’s theory proves revealing: AI chatbots like ChatGPT depend on the capability of large language models to parse and reproduce human conceptions of morality but are incapable of creating their own. Notably, the doctrine proposed by the TRUC model was devoid of specific theological content. Instead, the model drew on general motifs and common phrase structures found in populist messaging. As Weber echoed Russian writer Leo Tolstoy, science ‘gives no answer to our question, the only question important for us: “What shall we do and how shall we live?”’ In other words, a chatbot-constructed ethical system is simply a reflection of human virtues, meticulously calibrated to sycophantically accommodate the demands of the prompt engineer. The resulting system does not propose an alternative moral framework; instead, it reproduces an existing form in a palatable, curated way. Parallel to Weber’s belief that technology does not produce original descriptive knowledge is this immutable characteristic of chatbot-constructed prescriptive systems: the fundamental inability to generate novel moral conceptions.
The danger in continuously and exclusively engaging with one’s own ideas is well-documented: discourse about intellectual echo chambers has proliferated in disciplines like sociology, epistemology, and political science. In the age of AI, entire moral frameworks are at the fingertips of any user willing to engage. One no longer needs to seek out ethical corroboration in the algorithmic curation of a social media feed, or wrestle with the elusive and often uncomfortable morals embedded in religious texts. It is this hollowing – this anaesthetisation of the mind – that Weber’s conceptualisation of disenchantment captures so presciently.
Weber delivered ‘Science as a Vocation’ to a younger generation of German students he perceived to be exceptionally educated yet spiritually unmoored. Applied to AI, however, his theory of disenchantment illuminates an entirely new epidemic: the proliferation of theologically empty, self-reinforcing ‘belief’ systems that erode the moral, intellectual, and spiritual imagination. If the evidence of this epidemic still appears uncompelling, consider one remaining paragraph of the lecture that eerily echoes our present moment:
‘Well, who today views science in such a manner? Today youth feels rather the reverse: the intellectual constructions of science constitute an unreal realm of artificial abstractions, which with their bony hands seek to grasp the blood-and-the-sap of true life without ever catching up with it. But here in life, in what for Plato was the play of shadows on the walls of the cave, genuine reality is pulsating; and the rest are derivatives of life, lifeless ghosts, and nothing else.’
Iris Cheng is a third-year undergraduate studying Economics and Social Studies at Harvard. She is currently studying Religion and Sociology through a study abroad program at Regent’s Park College at Oxford.

