Challenges and Promises of Artificial Intelligence in Religion

Stanislav Panin holds a PhD in Philosophy from Moscow State University and is a PhD candidate in the Department of Religion at Rice University. The post is a part of the Sacralization of AI series.

Nowadays, applications based on artificial intelligence (AI) influence almost all areas of life, and religion is no exception. After all, one can train an artificial neural network on any set of texts, including religious texts, which makes it possible to create applications explicitly designed to engage in conversations concerning spiritual matters. In fact, AI enthusiasts have already made their first attempts. While these developments might seem overwhelming, they are not entirely new. For centuries, similar themes have captured the religious imagination.
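
To make the technical premise concrete, here is a minimal sketch, assuming the Hugging Face transformers and datasets libraries, of how a small causal language model could be fine-tuned on a plain-text corpus of religious writings. The corpus file sutras.txt and the choice of GPT-2 as a base model are illustrative assumptions, not a description of any existing application.

```python
# A minimal sketch: fine-tuning a small causal language model on a plain-text
# corpus of religious writings. "sutras.txt" and "gpt2" are illustrative choices.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no padding token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load a plain-text corpus (one passage per line) and tokenize it.
corpus = load_dataset("text", data_files={"train": "sutras.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="spiritual-chatbot",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```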

Putting Things in Historical Perspective

In the thirteenth century, Catholic philosopher Ramón Llull developed a philosophical system that he called the Art. Llull’s aspiration was to encompass all knowledge available at the time, but he also had explicitly religious goals—to facilitate theological debates and enable missionaries to demonstrate the verity of Christian doctrine to non-Christians. Based on the principles of the Art, Llull envisioned the so-called “thinking machine”—a set of disks that, through rotation, produced combinations of words, yielding statements concerning a given subject. Due to Llull’s interest in the use of mechanical devices to facilitate logical analysis and his role in developing a gamut of concepts important for computer science, some scholars consider Llull one of the first computer scientists and a forefather of AI research.
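
As a playful illustration rather than a historical reconstruction, the sketch below treats each of Llull's disks as a list of terms and enumerates their combinations, each of which can be read as a candidate statement; the particular terms and relations are invented for the example.

```python
# A toy illustration of the combinatorial idea behind Llull's rotating disks:
# each "wheel" holds a set of terms, and rotating the wheels against one another
# amounts to enumerating every combination of terms.
from itertools import product

wheels = [
    ["goodness", "greatness", "eternity"],  # illustrative divine attributes
    ["agrees with", "differs from"],        # illustrative relations
    ["wisdom", "power", "truth"],
]

for combination in product(*wheels):
    print(" ".join(combination))  # e.g. "goodness agrees with wisdom"
```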

Another unexpected precursor of AI in the history of religions was the nineteenth-century Spiritualist movement. Spiritualists aspired to answer religious questions scientifically by gathering empirical data through séances during which participants presumably communicated with the dead through mediums. People attended séances with different motivations. Some wanted to talk to their deceased loved ones, others hoped to learn what happens to us after death, and yet others sought to acquire posthumous writings of famous authors, such as Charles Dickens.

Spiritualism fascinated several renowned scientists, such as biologist Alfred Russel Wallace, chemist William Crookes, and psychologist William James, and attracted quite a few scientifically minded people of the time. Given that many of them had a background in the sciences, it comes as no surprise that adherents of Spiritualism sought to organize séances in a way that would resemble a laboratory. As part of these efforts, they invented devices for communication with spirits inspired by the latest technological advances, such as typewriting and telegraphy.

Contemporary conversations about AI, too, often touch on the topic of communication with the dead. For example, a developer of Replika, a popular AI-driven chatbot, explained that she created it after her best friend’s death in an accident, in an attempt to partly recreate his mind—a practice that mirrors Spiritualist communication with the dead. What is new is the way in which this communication happens.

These new ways of communication create new challenges. In the case of Replika, privacy has been a major concern: an application intentionally designed to encourage deeply personal conversations can collect enormous amounts of user data, including information that users provide through these conversations, which led the Mozilla Foundation to call it “the worst app [they]’ve ever reviewed” in terms of privacy. Problems like unrestricted data collection can affect many areas of life, including the right to freedom of religion or belief.

New Challenges of Modern Technologies

The challenges of contemporary use of AI in the domain of religion are primarily related not to the use of technology for religious purposes per se (again, this is not inherently new) but to the specific context in which it is used today. This context is the world of surveillance capitalism, a term popularized in Shoshana Zuboff’s book The Age of Surveillance Capitalism, in which she outlines how big tech companies radically changed their business models during the 2000s in order to commercialize de facto unrestricted and unregulated access to user data.

Simply put, surveillance capitalism is a business model in which IT companies only pretend to produce and sell useful products, while the bulk of their effort goes into surreptitious surveillance of their users, which generates most of their income. Exploitation of personal data has proven lucrative for IT companies, earning them profits that surpass most other industries by a large margin.

This situation creates new challenges for the right to privacy, a right that is essential for individual autonomy and the exercise of other human rights, including freedom of religion and belief. As early as 1989, David H. Flaherty summarized this by arguing that

[t]he storage of personal data can be used to limit opportunity and to encourage conformity, especially when associated with a process of social control through surveillance. The existence of dossiers containing personal information collected over a long period of time can have a limiting effect on behavior. … Data collection on individuals also leads to an increase in the power of governments and other large institutions. [1]

All of this is true with regard to religion. In a world where surveillance is normalized, AI-enabled technologies such as facial recognition inevitably affect religion along with other spheres of life. After all, some companies already market facial recognition to religious congregations as a way to track their members. Even if such companies do not intend to sell this data to advertisers, data breaches happen regularly in the industry, including cases in which hackers gained access to security cameras in supposedly well-secured spaces such as prisons and hospitals.

Surveillance of religious activities for the purpose of profiling, both inside and outside places of worship, is especially dangerous for members of minority groups, who can be specifically targeted by governmental and non-governmental actors. The same is true for AI-enabled profiling based on the tracking of online activities, which can expose private information about religious affiliation even if a user never intended to make it public—for instance, when this information is revealed through search queries or location history.

These concerns are not merely speculative. For example, in Xinjiang, Chinese authorities use high-tech surveillance to single out Muslims “donating to mosques or preaching the Quran without authorization.” While this problem is much more acute in authoritarian societies, human rights organizations have also warned about the negative impact that profiling based on religion and ethnicity might have in the United States, particularly on certain communities of color.

Even when malevolence is not involved, profiling based on religious views might have a negative impact on religious tolerance as AI, by its very nature, has “potential to replicate, reinforce or amplify harmful biases.” In other cases, information about religious affiliation can be used intentionally to incite tensions for political purposes, such as when Russia used advertising targeting specific religious groups in the United States.

New Avenues for Religious Expression

It is not all doom and gloom, of course. As much as certain uses of artificial intelligence can threaten freedom of religion, others can open new avenues for religious expression and encourage exploration and creativity.

As early as 2017, some Buddhist temples in Japan experimented with religious ceremonies performed by humanoid robots programmed to recite Buddhist sutras. Interestingly, as with AI-facilitated communication with the dead, debates around this topic have emphasized that the practice is not entirely new. Technologies such as prayer wheels—sometimes powered by waterwheels and other devices designed to automate rotation—have been part of Buddhist religious practice for centuries. And if rotating a wheel counts as a recitation of a mantra, why would we not accept mantras and sutras recited by a robot?

However, Japanese intellectuals went further in their debates about robots and Buddhism. Some fifty years ago, the topic opened a new direction in conversations concerning the Buddha-nature—a concept espoused by several schools of Mahayana Buddhism that, roughly speaking, refers to the ability inherent in all sentient beings to transcend illusions and achieve Buddhahood. Do robots also have Buddha-nature? In the mid-1970s, Japanese roboticist Masahiro Mori wrote,

It may surprise some of you when I say that I first began to acquire a knowledge of Buddhism through a study of robots, in which I am still engaged today. And it may surprise you even more when I add that I believe robots have the buddha-nature within them – that is, the potential for attaining buddhahood. What connection, you may want to ask, can there possibly be between Buddhism and robots? How can a mechanical device partake of the buddha-nature? The questions are understandable, but I can only reply that anyone who doubts the relationship fails to comprehend either Buddhism or robots or both. [2]

Mori’s book is just one example of how new technologies can encourage a deeper exploration and elaboration of pivotal religious concepts. Like other social phenomena, religion has evolved throughout history, and contemporary technologies will, as technologies always have in the past, lead to the emergence of new forms of spirituality and the transformation of old ones. Whether the impact of these technologies is beneficial or harmful largely depends on the ways in which we use them and on our ability to introduce meaningful regulations to protect fundamental human rights.

[1] David H. Flaherty, Protecting Privacy in Surveillance Societies 9 (1989).

[2] Masahiro Mori, The Buddha in the Robot: A Robot Engineer’s Thoughts on Science and Religion 13 (1st English ed. 1981).