What Is and What Should Never Be: Corporate and Digital Specters that Threaten Fundamental Freedoms

Neville Rochow QC is an Australian Barrister, Associate Professor (Adjunct) at the University of Adelaide Law School, and a member of Anthony Mason Chambers in Adelaide.

Corporations are notorious for their bad behavior in the pursuit of profits [1] and for the consequent need for laws to regulate them [2]. In relation to religious and other freedoms, where corporations have any influence upon their exercise, laws and regulatory regimes could work to enhance the enjoyment of rights and freedoms. But there are legal and regulatory measures that simply should not be undertaken, since they diminish that enjoyment. The distinctions between what can be done and what should not be done, what is and what should never be, have become all the more important as our lives are increasingly ruled by corporate powers and now their digital servants.

As to potential impact, consider what happens to religious freedom when an algorithm in the emergency room computer decides whether or not to administer a blood transfusion. If nothing in the program asks whether the patient is a faithful Jehovah’s Witness, the machine will decide the question without any consultation on religious faith. Religious freedom is rendered irrelevant.
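A minimal sketch in Python may make that structural point concrete. The names, fields, and clinical threshold below are entirely hypothetical and drawn from no actual hospital system; the point is only that a decision rule written without any consent or faith input leaves religious freedom with no pathway through which to operate.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Patient:
    haemoglobin_g_dl: float
    consents_to_transfusion: Optional[bool] = None  # None means the question was never asked


def transfusion_decision_v1(patient: Patient) -> bool:
    """Hypothetical rule deciding on a clinical threshold alone; faith is invisible to it."""
    return patient.haemoglobin_g_dl < 7.0


def transfusion_decision_v2(patient: Patient) -> bool:
    """The same rule with an explicit consultation step before the clinical test."""
    if patient.consents_to_transfusion is False:  # e.g. a faithful Jehovah's Witness who has refused
        return False
    return patient.haemoglobin_g_dl < 7.0


if __name__ == "__main__":
    witness = Patient(haemoglobin_g_dl=6.2, consents_to_transfusion=False)
    print(transfusion_decision_v1(witness))  # True: the religious objection is never consulted
    print(transfusion_decision_v2(witness))  # False: the objection is given effect
```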

To take another example, what if a gynecologist’s computer is programmed to book an immediate termination when a pregnant mother is diagnosed with the rubella virus? If the program takes no account of Catholic or other religious beliefs on abortion, termination may result without any consultation regarding the patient’s faith [3]. And as more jurisdictions legalize assisted dying, yet another set of questions will arise with increasing frequency: what if the patient’s pain levels, inoperability, age, health, and socio-economic status combine in the program to favor active termination of life? If social biases in favor of euthanasia have been programmed into the machine without any algorithmic religious amelioration, then strongly held religious or ethical positions on the part of the patient or their family may be overridden.

And, lastly in the medico-ethical field, what if the corporations that design the software for medical practice refuse to re-design it to open up algorithmic pathways for religious options because doing so would be inefficient and unprofitable? Or what if they refuse because there is no call for it from their consumers?

These are but samples of the religio-ethical questions that multiply continuously. And the medico-ethical field is only one among many: questions as to how to protect religious freedom arise across platforms and ethical fields: social media; telecommunications; robotics; military weaponry; and so on, virtually ad infinitum.

It should be obvious that the corporation [4] and the algorithm [5] have infinite potential to threaten human rights generally, and religious freedom specifically [6]. Corporations have “legal personality”: they own property, sell goods and services, sue and are sued, make profits and losses, and permeate commercial life. And they are immortal. Most importantly for current purposes, they control digital markets and outcomes. Notwithstanding their market and social power, they are mere specters, legal fictions. Their existence derives from the legal imaginary. They neither think nor feel. Except as required by law, in their pursuit of profit, they do not “care” about fundamental human rights [7].

Simply put, an algorithm is merely a set of rules defining a sequence of operations, or, if one must, a recipe. The type of algorithm we have been considering is a special set of mathematical sequences that produces various levels of “artificial” intelligence and decision-making power in triage. Like other computing algorithms, it is a product of the mathematical imaginary, comprised of ciphers and digits. Though ephemeral, such algorithms have lasting impact, and this one poses a potential threat to freedom of religion. Like the corporation, the algorithm and the “intelligence” it produces do not “think” about human rights and freedoms. Neither do they feel. Unless instructed by the programmer, or required by law, such an algorithm will never “care” [8].
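To make that definition concrete, the fragment below is nothing more than a short, fixed sequence of operations. It is a hypothetical illustration, not any real triage system; its factors echo the euthanasia example above and its weights are invented. The point is that whatever its author omits (a religious or ethical preference, for instance) simply cannot influence the result.

```python
# A hypothetical triage "recipe": a fixed sequence of operations applied to inputs.
# The weights are invented for illustration only. Nothing the programmer leaves out
# of the recipe can affect the outcome.

def triage_score(pain_level: int, operable: bool, age: int) -> float:
    score = pain_level * 1.5           # step 1: weight the reported pain
    score += 0 if operable else 10     # step 2: penalize inoperability
    score += age * 0.1                 # step 3: weight age
    return score                       # step 4: the "decision" is just this number


print(triage_score(pain_level=8, operable=False, age=72))  # 29.2
```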

Regarding the corporation, Justice Louis Brandeis, in his Honor’s dissenting opinion in Louis K. Liggett Co. v. Lee, famously drew the comparison between the corporation and Mary Shelley’s [9] horrific brainchild, Frankenstein’s monster [10]. From the East India and Hudson’s Bay Companies to Standard Oil, Royal Dutch Shell, and British Petroleum, to Amazon, Facebook, Google, and Microsoft, history is littered with examples of corporations behaving monstrously in the pursuit of their prime motive of profit [11]. Since the earliest of times in corporate history, they have run roughshod over human rights [12] and have had scant regard for state borders or national sovereignty. At their disposal, they now have the algorithm to exploit in the pursuit of profit. Thus, regulation is foremost among the actions we can take.

Ensuring corporations and algorithms do “care” about human rights has proven an elusive task. The work of coordinated international regulation is ongoing. But despite initiatives by the United Nations [13] and other international bodies [14], no universal legal or ethical regime has yet emerged to govern the ethical conduct of corporations or intelligent machines.

So, while progress on comprehensive regimes to protect human rights has proven slow, one asks what, in the meantime, one ought not do. At the top of any list of what not to do, and most obvious given the potential harm to human rights and freedoms, is according human rights to non-humans. Yet that is precisely what the Supreme Court of the United States has done and continues to do.

Adam Winkler points out that the error of extending full rights-bearing personhood to corporations began one hundred and thirty years ago, when the Supreme Court was misled by counsel Roscoe Conkling, making submissions on behalf of the robber baron Leland Stanford’s corporation, Southern Pacific Railroad. Conkling falsely claimed to know that the legislative committee that drafted the Fourteenth Amendment had intended the word “person” to include corporations [15]. The error resulting from that lie was perpetuated in subsequent cases, including the recent decisions in Citizens United v. Federal Election Commission and Burwell v. Hobby Lobby. In Citizens United, the Court held that corporations were entitled to protection of freedom of speech under the First Amendment. In Hobby Lobby, the Supreme Court extended religious freedom to a corporation under the federal Religious Freedom Restoration Act (RFRA). Apart from the fact that corporations have no faith or religious beliefs, do not attend worship services, and receive no sacraments, the decision marks a trend away from greater corporate accountability and towards providing corporations with more bases upon which they can avoid obedience to the law on the pretext that they are entitled to human rights.

The future of fundamental freedoms now involves facing not only the specter of corporate interference with rights and freedoms, but corporations that, armed with digital weapons, wield enhanced power to act to the detriment of those rights and freedoms. That is, of course, unless positive action is taken by regulators and the courts that enforce regulatory schemes.

One would have thought that humans would have learnt a thing or two about corporations and the analogy they provide for other non-humans being accorded human rights. But it would appear that they have not. Or, at least, the decisions of the Supreme Court of the United States mentioned here would suggest that there are still lessons to be learned.

[1] Joel Bakan, The Corporation – The Pathological Pursuit of Profit and Power, (2005, Constable).

[2] Ibid. See also, for example, in relation to climate change, Paul T. Babie, ‘Climate Change: An Existential Threat to Corporations’ (2019) 41(4) The Bulletin (LSB(SA)), ISSN 1038-6777, U. of Adelaide Law Research Paper No. 2019-46, available at SSRN: https://ssrn.com/abstract=3394688

[3] The opposite outcome might be expected if the programmer were Catholic and instilled a belief system into the triage program making abortion only available in circumstances that conformed with Catholic teachings.

[4] Joel Bakan, The Corporation – The Pathological Pursuit of Profit and Power, above n 1, 149; Timothy Wu, The Curse of Bigness: Antitrust in the New Gilded Age, (Columbia Global Reports, 2018); John Gerard Ruggie, Just Business: Multinational Corporations and Human Rights, (2013, Norton & Company). See also the principles developed by Ruggie and adopted by the United Nations Human Rights Council on 6 June 2011. See also Neville Rochow QC, ‘Immortal Beings without Soul or Conscience: Toward a Corporate and an AI Ethic’, chapter in Paul Babie and Rick Sarre (eds), Religion Matters (Springer Nature Singapore, 2020).

[5] James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, (2013, Thomas Dunne Books/St Martin’s Press). See also Neville Rochow QC, ‘Somnambulating Towards AI Dystopia? The Future of Rights and Freedoms’, in AI and IA: Utopia or Extinction?, edited by Ted Peters, special issue of Agathon: A Journal of Ethics and Value in the Modern World, Volume 5, (2018), 43 – 67.

[6] Ana Beduschi, ‘Technology dominates our lives – that’s why we should teach human rights law to software engineers’, 26 September 2018, The Conversation. Siva Vaidhyanathan, Antisocial Media – How Facebook Disconnects Us and Undermines Democracy, (2018, Oxford University Press); Max Tegmark, Life 3.0 – Being Human in the Age of Artificial Intelligence, (2017, Penguin Books); Franklin Foer, World Without Mind – The Existential Threat of Big Tech, (2017, Penguin Press); David J Gunkel, The Machine Question – Critical Perspectives on AI, Robots, and Ethics, (2017, MIT Press); Jesse Russell and Ronald Cohn, Ethics of Artificial Intelligence, (2012, Lennex Corp.). See also Yuval Noah Harari, Homo Deus – A Brief History of Tomorrow, (2014, Harvill Secker), 353 – 359; Scott Galloway, The Four – The Hidden DNA of Amazon, Apple, Google and Facebook, (2017, Transworld Publishers), chapter 1 and 196 – 200; ‘Technology and Human Rights: Artificial Intelligence’, Business and Human Rights Centre.

[7] Neville Rochow QC, ‘Somnambulating Towards AI Dystopia? The Future of Rights and Freedoms’, above n 5.

[8] Ana Beduschi, ‘Technology dominates our lives – that’s why we should teach human rights law to software engineers’, above n 6.

[9] Frankenstein; or The Modern Prometheus.

[10] Louis K. Liggett Co. v. Lee 288 U.S. 517 (1933), at 567. See also Louis D. Brandeis, ‘The curse of bigness’ in Miscellaneous Papers of Louis D Brandeis, (Osmond K. Fraenkel and Clarence M. Lewis, eds, Kennikat Press, 1965, original c1934); Louis D Brandeis, Other People’s Money and How Bankers Use It, (McClure Publications, 1914), cited in Timothy Wu, The Curse of Bigness: Antitrust in the New Gilded Age, (Columbia Global Reports, 2018) at 15.

[11] See nn. 1 and 4 above.

[12] See nn. 1 and 4 above.

[13] Neville Rochow QC, ‘Somnambulating Towards AI Dystopia? The Future of Rights and Freedoms’, above n 5.

[14] Tim Dutton, ‘An Overview of National AI Strategies’, 28 June 2018, Medium; Secretary-General’s High-level Panel on Digital Cooperation.

[15] Adam Winkler, ‘Corporate Personhood and the Rights of Corporate Speech’, 30 Seattle U. L. Rev. 863 (2007); Adam Winkler, We the Corporations: How American Businesses Won Their Civil Rights, (Liveright, 2018), xiii – xv and chapter 2; and Adam Winkler, “‘Corporations Are People’ Is Built on an Incredible 19th-Century Lie”, 6 March 2018, The Atlantic.
