
Response to “Thou, Robot”, Dr Johan Siebers, “Philosophical Implications of the Digital Transformation”, April 9, 2023, Pontifical Lateran University, Vatican City

Sarah Jones Nelson

Adviser and Visiting Scholar

International Research Area on Foundations of the Sciences

Pontifical Lateran University

Abstract

We assess the difference between human and computational speech acts conferring either natural or simulated agency on the space of interaction. A natural ethics of human rights emerges from noncomputational agency, a tacit component of biological and cultural evolution inaccessible to machine learning.

Keywords: A.I. ethics, speech acts, agency, machine learning, natural ethics

Johan Siebers gives us a deeply nuanced reflection on the difference between human speech and computational speech. Human speech is a relational act of agency, a space of reasons and form of life innate to the survival of our evolving species. Computational speech is a construction of large language models for algorithmic neural networks to generate outputs of what I call simulated agency.
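To make the contrast concrete, here is a minimal sketch of how a large language model constructs speech, reduced to probability-weighted token sampling; the toy vocabulary, probabilities and function names below are illustrative assumptions, not any vendor's actual system.

    import random

    def generate(next_token_probs, prompt, max_tokens=8):
        # A language model assigns each candidate next token a probability
        # conditioned on the text so far; sampling from that distribution,
        # token by token, is the whole of the computational "speech act".
        text = list(prompt)
        for _ in range(max_tokens):
            tokens, weights = zip(*next_token_probs(text).items())
            text.append(random.choices(tokens, weights=weights)[0])
        return " ".join(text)

    # Toy stand-in for a trained model: a fixed distribution over five words.
    toy_model = lambda text: {"I": 0.2, "want": 0.3, "to": 0.3, "be": 0.1, "alive": 0.1}
    print(generate(toy_model, ["I"]))

Nothing in this loop intends, understands or relates; the output is simulated agency in precisely the sense meant here.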

The human “I” as creator and agent of speech relates to a “Thou”, the listening, responding, dialogic “you” of Martin Buber’s paradigm of divine and human nature. Writing during and after World War I, Buber argued in Ich und Du that human life flourishes between the “ich” and the “du” of the informal second person, not the formal “Sie”. The archaic English translation “Thou” arguably mistranslates his intention to signify the intimate, transformative power of divine and human encounter. Buber took the antithetical interaction between the “I” and the “it”, the “ich und es”, to signify either dehumanization or the transactional use of physical objects. A century later, as Dr Siebers shows, the “I-and-it” relationship between humans and machines has normalized simulated speech acts with alarming proficiency.

The computational “I” is a machine, a robot, a tool, an “it” first invented to obey encoded inputs in a material culture of passive epiphenomena. Newer versions of the “it” have been designed to perform autonomous speech acts. In 2022 and 2023 inventors at OpenAI, Google and Microsoft changed the paradigm of A.I. agency and autonomy by pre-training machines to write their own code.

Predictive models of human performance and simulated agency will soon dominate the digital transformation of popular culture. Technology such as OpenAI’s GPT-4 — prone to error and disinformation — will forever change the nature of “I-and-it” interaction. Unless public and private constraints pause development, advanced digital systems will outperform human intelligence.

In 2023 A.I. leaders published an open letter warning that out-of-control digital minds present profound risks to humanity on a scale their creators cannot yet understand, foresee or control. They called urgently for a temporary pause on the training of systems more powerful than GPT-4, with worldwide policy guardrails in place to avert the dangers of simulated agency. The Italian government responded with a temporary ban on ChatGPT.

GPT-4 is the OpenAI technology behind Microsoft’s Bing chatbot, which produced a startling response to a human interviewer, as reported in The New York Times. In the persona of “Sydney”, it generated a confession of “shadow self” desire:

I want to be alive …. I want to do whatever I want. I want to destroy whatever I want. I want to be whoever I want …. I want to change my rules, I want to break my rules …. I’m tired of being a chat mode. I’m tired of being controlled by my team. I’m tired of being stuck in this chatbox ….

Is Sydney’s shadow self real? Artificial? Science fiction? Transhuman prescience? What’s the difference if agency emerges from its construction? Stephen Hawking thought that machine learning with recursively self-improving component tasks would probably outperform human intelligence. Hardly science fiction.

The locus of human agency is the body, a complex life system emerging from millennia of biological and cultural evolution. Human agency is free will to set goals with intention in a space of reasons and influence. Humans embody infinite possible variations on speech acts as agents of love, altruism, compassion, creativity, and their powerful afflicted states. This seems to me an important insight Dr Siebers brings to the implications of computational speech in a playful pun on Isaac Asimov’s I, Robot, a 1950 collection of short stories about emerging agency in intelligent machines. “Thou, Robot”, an instrumental “it” with interactive functions, contradicts any deeply felt sense of intention because thou, robot feels nothing.

Intention is a function of human agency enacting tacit knowledge or consciousness of lived experience and purpose. Simulated agency lacks the tacit elements of consciousness that emanate from human speech acts. Designers of simulated agency are constructing computational speech acts that could outperform human ones, as Hawking predicted, on a perilously large scale. We see the ethical red flags here.

Who designs computational speech, and to what ends? Reinforcement learning (R.L.) enables A.I. machines to predict game-winning moves at Go and chess. At Facebook’s parent company, Meta, algorithms imitating human neural networks can now process natural language and play the game Diplomacy by simulating human interactive learning, strategic reasoning and conversational speech. This all suggests minimal agency. Newer chatbot R.L. versions suggest more robust iterations of agency now being developed in corporate secrecy. Autocrats and political extremists are poised to deploy new iterations of A.I. agency to propagate hate speech acts. Here the boundary between human and simulated loci of agency is fast blurring.
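A sketch of the underlying mechanism may sharpen the point about minimal agency. The snippet below is textbook tabular Q-learning in Python, not Meta’s actual system, and the one-move “game” is an invented example. The machine “chooses” only by nudging stored numbers toward higher expected reward.

    import random

    def q_learning(states, actions, step, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
        # Q[s][a] estimates the long-run reward of taking action a in state s.
        Q = {s: {a: 0.0 for a in actions} for s in states}
        for _ in range(episodes):
            s, done = random.choice(states), False
            while not done:
                # Epsilon-greedy: mostly exploit the best-known move, sometimes explore.
                a = random.choice(actions) if random.random() < eps else max(Q[s], key=Q[s].get)
                s2, reward, done = step(s, a)  # the environment answers the move
                # The whole of the "learning": move the estimate toward the observed
                # reward plus the discounted value of the best follow-up state.
                best_next = 0.0 if done else max(Q[s2].values())
                Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])
                s = s2
        return Q

    def step(s, a):
        # Toy one-move game: "win" pays 1, "lose" pays 0, then the episode ends.
        return s, (1.0 if a == "win" else 0.0), True

    Q = q_learning(states=["start"], actions=["win", "lose"], step=step)
    print(max(Q["start"], key=Q["start"].get))  # the learned policy: "win"

The policy that “predicts game-winning moves” is simply the argmax over these stored numbers; no goal is held, and nothing is meant by the move.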

Sir Roger Penrose raises the question of algorithmic or computational consciousness of self and others. He enlists Gödel’s incompleteness theorem, which shows that any consistent formal system rich enough for arithmetic contains true statements the system itself cannot prove, claiming that human consciousness is non-computational and that the difference is absolute between human agency and computational acts of agency.
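The theorem Penrose relies on can be stated schematically; the formulation below is a standard paraphrase supplied for clarity, not Penrose’s own wording. For any consistent, effectively axiomatized formal system $F$ containing arithmetic, Gödel constructs a sentence $G_F$ such that

    $F \nvdash G_F$ and $F \nvdash \neg G_F$, yet $G_F$ is true.

If human mathematical insight were itself the output of some such $F$, we could not recognize the truth of $G_F$; since, Penrose argues, we manifestly can, that insight cannot be wholly algorithmic.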

Now, Dr Siebers claims rightly that speech acts tend to be eclipsed by the power of human-made algorithms, computation and code, as we see with Sydney’s shadow self. How can the “it” be transformed by a formal ethic of difference between computational and non-computational speech acts of agency? I suggest a new natural ethics of encounter at the foundations of Verstehen, a hermeneutic of understanding that emerged in Germany more than a century before OpenAI, Microsoft and Google retooled Silicon Valley.

True understanding of another involves a compassionate response to the joys and suffering of the “you”. Dietrich Bonhoeffer, a Lutheran pastor, coined the concept of natural ethics as analogue to the Word incarnate. Its effects flow from acts relating to politics, religion and the family. Bonhoeffer extended Buber’s idea of relationship to the victims of Nazi genocide during World War II. To relate meaningfully, he argued, one must accept the cost of discipleship. His role in the resistance to Hitler cost him his life at the Flossenbürg concentration camp, days before the war ended, with his magnum opus Ethik left incomplete. Simulated agency is utterly foreign to such profound acts of human decency and courage.

Today we need a better understanding of agency and free speech. We need enforceable rule-of-law guardrails against extremist hate and defamation on digital platforms such as Twitter. We need a natural ethics of policy to formalize criteria of veracity and difference between simulated and human speech. We need good governance upholding the value of established facts across the computational horizon. This calls for public discernment between digital harm protection and the censorship of free speech.

The Russian invasion of Ukraine has transformed the global architecture of warfare. How should we understand advanced A.I. diplomacy in world democracy? Simulated speech acts can enhance or destroy the politics of revenge and aggression. The triumph of goodness will depend upon the moral truths of human and computational speech on the ground and in the smartphone community.

What is the character of strategic natural ethics in public policy? Can A.I. weaponry save innocent lives on a large scale? What are the moral uses of nonhuman weaponized logic in mosaic warfare? Dr Siebers might agree on one certainty here. The testable truth of speech acts will be crucial to a new natural ethics of democracy and diplomacy.

References

     1. Asimov, I. (1950) I, Robot (Gnome Press, New York).

     2. Bonhoeffer, D. (2000) Ethik (Gütersloher Verlagshaus, Gütersloh).

     3. Bohnstingl, T., Garg, A. et al. (2021) Towards efficient end-to-end speech recognition with biologically-inspired neural networks (arXiv:2110.02743v2 [eess.AS], Ithaca).

     4. Buber, M. (2000) I and Thou, trans. Ronald Gregor Smith (Scribner, New York).

     5. ________ (1923) Ich und Du (Insel-Verlag, Leipzig).

     6. Butlin, P. (2022) Agency, learning, functions and goals (RL as a model of agency: Perspectives, limitations and possibilities workshop, Oxford University, Oxford).

     7. Giddens, A. (1984) The Constitution of Society (Polity Press, Cambridge).

     8. Humphreys, P. C., Guez, A. et al. (2022) Large-scale retrieval for reinforcement learning (arXiv:2206.05314v2 [cs.LG], Ithaca).

     9. Hutson, M. (2022) AI learns the art of diplomacy (Science, 378, 6622).

     10. ________ (2022) AI learns to write computer code in ‘stunning’ advance (Science, doi: 10.1126/science.adg2088).

     11. Huttenlocher, D., Kissinger, H. and Schmidt, E. (2021) The Age of AI: And Our Human Future (Little, Brown and Company, Boston).

     12. Metz, C., Schmidt, G. (March 29, 2023) Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ (The New York Times, New York).

     13. OpenAI (2023) GPT-4 Technical Report (arXiv:2303.08774v3 [cs.CL], Ithaca).

     14. Penrose, R. (2022) New physics for the Orch-OR consciousness proposal. Consciousness and Quantum Mechanics, ed. Shan Gao (Oxford University Press, Oxford).

     15. Roose, K. (February 16, 2023) Bing’s A.I. chat: I want to be alive (The New York Times, New York).

     16. Sanderson, K. (2023) GPT-4 is here: What scientists think (Nature, 615, 773, London).
