Sacralization of AI

Yulia Razmetaeva is a visiting researcher at Uppsala University (Sweden) and Head of the Center for Law, Ethics and Digital Technologies at Yaroslav Mudryi National Law University (Ukraine). This post is part of the Sacralization of AI series.

We created AI. Now, AI is re-creating us. Smart algorithms have already changed much in our societies and lives. Artificial intelligence has changed our daily habits, decision-making processes, work environments, transportation management, natural disaster response and recovery, peacetime elections, and the defense of nations in wartime. Even more will change as we hand over still more human effort to machines. There is no way we can do without algorithms: they are indispensable, and we now rely on them at every step we take. AI seems to have penetrated every layer of our lives; it has become part of the atmosphere, along with oxygen, and is essentially perceived as a deity or holy spirit that is omnipresent, omniscient, and omnipotent.

The Sacralization Process

There’s a clear tendency in contemporary societies to perceive AI as something that knows it all and knows better—so much so, it is almost divine. Sacralization of AI, meaning its association “with divinity, transcendence, or the unknowable,” seems to be a predictable and natural result of it being so all-embracing. As Vanina Papalini writes, “Sacred objects are inviolable and cannot be profaned, that is, they cannot be mixed with the mundane or placed in doubt.” We often see such attitudes reflected in people’s reactions to AI.

AI is unknowable. Not only does an ordinary user not know how the backend code of their favorite app works—they do not want to know it. The better the site’s usability, the more smoothly it works, the less technical knowledge is required, until all a user has to do is press the buttons and . . . something magical happens. Our detachment from understanding AI is artificially increased by “oracles”—technologically advanced people, developers of intelligent machines.

AI is inviolable. Not only is the sacred shrine of backend code hidden from the eyes of ordinary people; a user is also not to touch it, lest everything go wrong. Hacking the code is a crime.

AI is not to be doubted. After all, the ordinary user is never knowledgeable enough to question the code and the underlying mechanisms. The complexity of cutting-edge AI technologies prevents anyone not involved in their development from even understanding them, not to mention doubting them.

AI has even begun to be perceived as “transcendent”—that is, close to crossing boundaries of normal, physical human experience—as magic on the front end has no explanation and “digital stuff” is intangible. While not independent of the material universe and its physical laws, AI is beginning to be perceived as such by people who use it directly or indirectly. Recall how people tend to describe it when they run into problems: “My computer deleted my document somehow,” “Something happened,” “It just popped up on my screen,” “I was shown such and such a site.”

Our language has long reflected our attitude to digital tech as an animate and intelligent being capable of its own decisions, often evil in nature or, on the contrary, divine. Endowing inanimate objects with the qualities of living and even intelligent beings is common to humans: “My hoover went crazy and sucked in my socks.” That said, due to its incomprehensible complexity, AI is perceived as much more than simply intelligent. At the same time, the understanding that AI is not one powerful “AI in a vacuum” but a huge spectrum of various technologies disappears from the horizon of perception. Superhuman efficiency, incomprehensible complexity, and intangibility are the three factors that shape this perception and its blind spots.

Human Imperfection

AI outperforms human beings in many ways. It is far ahead of us in computational power, in the speed with which it processes information and pieces together bits of data, in instant analytics, and in the awesome amounts of memory at its disposal. Its development appears to be proceeding by leaps and bounds, despite bursts of calls for caution or for “a pause.”

The results of the McKinsey Global Survey published in December 2022 show that investment in artificial intelligence keeps growing, scaling and accelerating the development of ever more advanced algorithms. We are probably not far from truly intelligent, conscious algorithms, but even more importantly, we already think of them as justly superior to us.

Thus, humans have moved far beyond understanding intelligent algorithms as merely very advanced tools. We already turn to AI with questions, problems, and hopes. We rely on AI in many ways, perceive it as better than humans at reasoning and problem-solving, rather than merely at computational tasks, and do not question its power over us. We entrust our lives to AI.

But why are humans so willing to believe in AI’s supernatural powers? Perhaps our desire to reject imperfection and our tendency to view the mechanically flawless as perfect are rooted in the industrial age. The focus on efficiency characteristic of the industrial revolution may have played a cruel joke on us: we came to believe that this is what we should strive for and admire. “Bigger, better, faster” has become the primary measure of success and a principal virtue.

The striving for perfection has resulted in the creation of AI as a means to make up for human imperfection. A (vicious?) cycle has formed: while there’s a limit to mechanical perfection stemming from the materials machines are made from, there’s apparently no limit to digital perfection. Few, if any, material constraints seem to exist, so “bigger, better, faster” has given rise to outstanding technologies that definitely go too far. AI is now believed to not only outperform but also outwit the human being.

Expecting Miracles

For many, artificial intelligence is something beyond human comprehension. It’s one of those “magical inventions” that inspire fear, awe, wonder, and fascination. We may go as far as perceiving digital tech as independent of the laws of physics, a result of its intangibility. What cannot be touched, smelled, or tasted becomes endowed with a certain magical flair and inscrutability. In addition, we often fail to understand how it works and why. As Stephen Marche puts it, “Technology is moving into realms that were considered, for millennia, divine mysteries.”

Magical, incomprehensible, and intangible, AI becomes an indefinite phenomenon. We think of AI as such, unaware of the vast variety of its types, uses, and applications. Like a deity, it is now expected to work wonders, feeding millions with one loaf of bread and making everyone happy. It is expected, for example, to help create a world free of hunger, develop new drugs, give people a happy home, and help humans stop the ageing process. AI even promises an afterlife to those whose consciousness is saved in digital form after their death. God is being born out of the Machine.

Brave New World

Worshiping tech, we run the risk of losing a crucial part of what makes us human. Most experts, optimistic or not, are concerned about the long-term impact of AI on the basic elements of being human. People are increasingly willing to remake themselves to replicate AI with its precise algorithms and clear-cut protocols. Unpredictability, divergence from the norm, and irrationality, all important aspects of our human nature, are now vulnerable. Purely rational, algorithmic thinking is now preferred over emotional and intuitive thinking. Mental arithmetic classes are enjoying a rise in popularity as parents strive to turn their kids into human calculators, even though the children probably lack a deep understanding of what the math is all about. We’re living in a world of checklists, mind charts, project stages, and prioritization. An efficient day is filled with dozens of tasks, no matter what. Inefficiency is a sin.

Some smart technologies have already contributed to making people dependent. Certain technologies are built on a mechanism that psychologists call “intermittent reinforcement”: this is how likes on social networks work, as users keep checking their feeds to see how many people have reacted to their posts and quickly become dependent on those reactions.
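The “intermittent reinforcement” mechanism can be made concrete with a small, purely illustrative sketch that is not drawn from the post or from any particular platform: a variable-ratio reward schedule in which a check of the feed only occasionally yields new likes. The function names and the 30 percent reward probability below are assumptions made for the sake of the example.

```python
import random

def check_feed(reward_probability: float = 0.3) -> int:
    """Simulate one feed check: only occasionally do new likes appear."""
    if random.random() < reward_probability:
        return random.randint(1, 5)   # an unpredictable payoff
    return 0                          # most checks yield nothing

def simulate_session(checks: int = 20, reward_probability: float = 0.3) -> None:
    """Print a short session of feed checks under a variable-ratio schedule."""
    total_likes = 0
    for i in range(1, checks + 1):
        likes = check_feed(reward_probability)
        total_likes += likes
        status = f"+{likes} likes" if likes else "nothing new"
        print(f"check {i:2d}: {status} (total: {total_likes})")

if __name__ == "__main__":
    random.seed(42)  # reproducible toy run
    simulate_session()
```

Because the payoff is unpredictable rather than absent, each empty check still leaves open the possibility that the next one will be rewarded, which is precisely the pattern that keeps users checking.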

Sacrifice, too, is part of sacralization. We are ready to sacrifice our privacy, giving away even sensitive personal data, for the sake of being admitted to the shining world of digital tech. While remaining concerned about digital surveillance, we debate its nuances rather than question surveillance as such. This was well demonstrated by the European Court of Human Rights’ judgment in the Big Brother Watch and Others v. UK case.

Is AI about to become the new deity? Is the slow but sure advancement of AI’s sacralization something to fear? Is it starting to deprive us of free will? Are we ceasing to apply critical thinking principles when it comes to AI? Are we beginning to see human beings as secondary and inferior to AI? These are the questions to focus on if we want the new world to remain more welcoming for people than machines.

Return to the Series introduction