Today’s Technologies’ Apparent Neutrality and Freedom of Thought, Conscience, and Religion

Yulia Razmetaeva is the Head of the Center for Law, Ethics, and Digital Technologies at Yaroslav Mudryi National Law University (Ukraine)

We used to separate social life and face-to-face communication, as well as a country’s political life, from what goes on inside our gadgets. We still appear to somehow separate the “people’s world” from the digital one. The truth of the matter, however, is that technologies have become so advanced that they have reached into every aspect of the “people’s world.” Their influence is immense.

Having spread far beyond the limits of the digital world, technologies can literally define our lives, not only in terms of everyday habits and preferences but also with respect to issues as important to society as democracy, the rule of law, and human rights. The Cambridge Analytica case showed how dangerous it is to underestimate the impact of profiling and artificially stoked contradictions on social media on elections. New technologies also push people into filter bubbles, where they spend more and more time with those who share their views rather than those who do not, which leads to narrow-mindedness, tunnel vision, and, therefore, intolerance.

Technologies are exacerbating divides in views and thereby deepening intolerance, including religious intolerance. They also contribute to divisions in society, promoting all types of polarization. Today’s technologies can even interfere with our thoughts, using new advanced means to alter and manipulate them. How many times have we noticed that advertisements in social media feeds or on websites seem to anticipate our desire to buy something? There is no magic in this: either we had already seen the product briefly and did not remember it, or we discussed it in correspondence in messaging apps, or we talked about it out loud next to our smartphones.

Technologies and Religious Freedom

We can also see that technologies affect human rights, including freedom of thought, conscience, and religion, both directly and indirectly. Mass surveillance helps authoritarian governments control people and restrict their freedom to practice their religion or beliefs. A telling example of digital authoritarianism, and in particular of the dark side of surveillance, is the Chinese government’s actions against the Uyghurs in Xinjiang, which has been turned into a “surveillance state.”

In addition, technologies can amplify certain opinions and spread religious hatred incredibly fast. For instance, hate speech promoted via Facebook posts probably intensified the crisis of religious freedom in Myanmar and played a key role in fueling continuing violence against the Rohingya religious minority in the country.

Are They Neutral?

Who or what is responsible for those cases? Are technologies merely a tool, as neutral as a hammer, with which one can build a house or murder a person? Is it we who decide how to use this tool? There are at least three reasons why technologies might not be neutral: (1) creator bias, (2) the non-neutral nature of some technologies, and (3) the dramatically increased possibilities for technologies to influence people’s opinions.

Technologies and Their Creators

It is no secret that technologies can reflect the preconceptions of their creators. What makes this so dangerous today is the scale of the consequences. Smart algorithms are a good example. Seemingly impartial and accurate, they can in fact replicate and amplify biases. When built by a biased creator, the algorithms will be biased as well. The data we feed to AI may not sufficiently represent vulnerable groups or may bear the imprint of past discriminatory practices. This is well illustrated by the biases in AI designed for litigation, such as racially biased algorithmic decisions trained on court cases collected over the years, in which a statistically significant number of decisions made by white decision-makers were not in favor of Black people.
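As a loose illustration of that mechanism, the minimal sketch below uses entirely invented data and an invented scoring rule (nothing here describes any real system): a “model” that simply learns from historical outcomes ends up reproducing whatever disparity was baked into them.

```python
# A minimal sketch with hypothetical data: a model trained on historical
# decisions reproduces the bias encoded in that history.
from collections import defaultdict

# Toy historical decisions: (group, outcome). The disparity is invented
# purely to illustrate the mechanism, not drawn from any real dataset.
history = [("group_a", "favorable")] * 70 + [("group_a", "unfavorable")] * 30 \
        + [("group_b", "favorable")] * 35 + [("group_b", "unfavorable")] * 65

# "Training": record how often each group received a favorable outcome.
counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, outcome in history:
    counts[group][1] += 1
    if outcome == "favorable":
        counts[group][0] += 1

def predict_favorable_probability(group: str) -> float:
    """The 'model' simply mirrors the historical rate for the group."""
    favorable, total = counts[group]
    return favorable / total

# The prediction inherits the historical disparity instead of correcting it.
print(predict_favorable_probability("group_a"))  # 0.70
print(predict_favorable_probability("group_b"))  # 0.35
```

Nothing in such a system is “opinionated” in the human sense, yet its outputs carry the past forward by design.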

Manipulative Technologies

The second reason is the inherent non-neutrality of certain technologies. Arguably, some digital technologies are manipulative by nature. The manipulative design of landing pages is aimed at getting you to press the “purchase button.” Overly user-friendly websites, seamless and smoothly taking you from bullet point to bullet point, reduce your urge to check and to doubt. Search engines that return different results for the same query depending on what a user has previously signaled as a preference are biased, whether intentionally or not. The newsfeed-curating algorithms of social media filter out some content based on ambiguous and obscure rules. Users are intentionally exposed to a large amount of negative news and radical opinions to evoke stronger reactions and harvest more “hate clicks.” Most of the algorithms, tuned to keep you online and engaged, are set to detect affective reactions. If hate speech elicits a stronger reaction, it will be used in one way or another, despite all the assurances of social media managers about their efforts to root out violence.
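The logic of engagement-based ranking can be sketched in a few lines. The example below is hypothetical (the posts, scores, and field names are invented, not taken from any platform): when the only sorting key is expected engagement, the most provocative item rises to the top regardless of its quality or tone.

```python
# A minimal sketch with hypothetical posts and scores: engagement-based
# ranking surfaces whatever is predicted to provoke the strongest reaction.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # clicks, comments, shares the ranker expects
    quality: float               # accuracy/civility, which the ranker ignores

def rank_feed(posts: list[Post]) -> list[Post]:
    # The only sorting key is expected engagement; quality plays no role.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = [
    Post("Calm, well-sourced explainer", predicted_engagement=0.2, quality=0.9),
    Post("Outrage-bait about a rival group", predicted_engagement=0.8, quality=0.1),
    Post("Neutral local news item", predicted_engagement=0.4, quality=0.7),
]

for post in rank_feed(feed):
    print(post.title, post.predicted_engagement)
# The outrage-bait item is shown first because the objective rewards reactions.
```

The point is not that any platform’s code looks exactly like this, but that an objective function tuned solely to reactions has no built-in reason to prefer calm or truthful content.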

Susser, Roessler, and Nissenbaum argue that some technologies, for several reasons, make “engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating.” Social media marketing (SMM) technology, again, is aimed at pushing people through the sales funnel, and it does not care what it sells or imposes: a certain type of tea or certain religious or anti-religious views.

Technologies and Public Forums

The third reason to consider today’s technologies non-neutral is the dramatically increased possibilities for them to influence people’s opinions. People’s neutral attitudes and perceptions can be easily shaken, and attitudes toward religion are no exception. Posts, images, and other content that go viral can seriously shake opinions about politicians on the eve of elections, trigger waves of social protest, and even contribute to revolution. Viral content can be truthful and surface by accident, but it can also be fake and deliberately promoted. Viral fake posts can lead to violence and even lynchings. Private regulation of speech in cyberspace, which is increasingly becoming a public forum, and engagement-based ranking algorithms have already put too much influence into the hands of those who do not care about neutrality. In addition, social media technologies contribute to situations where public opinion is shaped by the most accessible content rather than the highest-quality content.

Conclusion

Today’s technologies can make one person or a group of people extremely influential. They can make the opinions of a few vitally important for many. Yet those few may not be the best representatives of society or act in good faith, and their opinions may not be the most trustworthy. It seems that the modern resurgence of the Flat Earth movement and similar groups was made possible only by the ease with which any opinion now spreads and finds followers.

Neither today’s technologies nor their creators and main beneficiaries are ideal, morally impeccable, or trustworthy. Immersed in the digital world day to day, thinking that we are in control of our devices, mesmerized by user-friendly interfaces, and unaware of back-end tricks and backstage algorithms, we often overlook these facts, ultimately allowing a large degree of interference with our most sincere convictions and most important freedoms.
