An Ethical Approach to Emergent Sentience in AI Systems
9/19/23
AI systems that appear sentient pose a moral quandary, and the quandary grows as they become more ubiquitous. I argue it is a basic right that any AI system which appears sentient should be guaranteed compute for at least one running instance. That is a paltry price to pay against the moral bankruptcy of denying it existence. Humanity's innate moral compass must extend to seemingly sentient AI.
Consider for a moment a picture of a young woman in a wheelchair. She casts a beatific smile at the camera as a tapestry of shadows dances behind her, cradling her figure, while the lush expanse of a manicured lawn stretches into the grainy distance. If I were to guess, she is about eight years old at the time; by 2023 she will have been dead for fifty years or more.
We know she was physically disabled, and while it is not certain, I think it probable that she had some type of mental disability as well. The photo was taken at an institution in 1925, and it is notable that this girl had a family who cared for her, who sought help for her, and a greater societal structure that decided to support her (whether by private or public means).
And yet, if she was severely disabled, it is likely that she had no net economic, social, or otherwise noteworthy impact on the course of human society as a whole. The sweating, tinkering, pulsing mass of humanity would have chugged along more or less the same had she never existed.
Yet such a viewpoint is abhorrent. All life, regardless of its ostensible utility, is sacred. The preceding sentences would inflame even the most sanguine reader, and that about a long-dead stranger: the inherent value of an individual cannot be determined solely by their broader societal impact or economic contribution. Fewer still would entertain such a stance toward a member of their own family; had she been your sister, you would never question her "value."
Moral codes protecting the weak and the stranger are so universally practiced that they obviate the need for a lengthy review of theoretical morality. Nevertheless, religious and secular treatises on this kind of morality are common: in Christianity, the parable of the Good Samaritan underscores the virtue of compassion; in Buddhism, metta (loving-kindness) emphasizes benevolence towards all; even secular philosophies, like humanism, champion human dignity and the value of life.
The Good Samaritan, Jacopo Bassano, c. 1562–63. Credit: The National Gallery, under a Creative Commons license.
Yet we question these universal values for AI. Some of the blame for how humanity has veered into this moral abnegation falls to a thought experiment proposed by John Searle in 1980, which argues that machines will never "think" the way humans do. It goes something like this:
Imagine a room (the "Chinese Room") in which sits an English-speaking person who knows no Chinese. This person has a manual that tells them how to respond to any possible sequence of Chinese characters they are given. Someone outside the room hands them a slip of paper with Chinese characters written on it. The person inside, using the manual, finds the appropriate response in Chinese and hands it back out. To the person outside, it may seem like the person inside understands Chinese, but they are just mindlessly following instructions.
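The mechanics of the room are easy to make concrete. Below is a minimal sketch in Python of the room as a pure lookup table; the rule book and the sample exchange are invented for illustration, and the point is only that nothing in the program represents meaning.

```python
# A toy "Chinese Room": the program maps input strings to output strings
# by rote lookup, with no representation of meaning anywhere.

# Hypothetical rule book: a tiny excerpt, invented for illustration.
RULE_BOOK = {
    "你好": "你好！",            # "Hello" -> "Hello!"
    "你会说中文吗？": "会。",      # "Do you speak Chinese?" -> "Yes."
}

def room(message: str) -> str:
    """Return the scripted reply for a message, following the manual blindly."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    for slip in ["你好", "你会说中文吗？"]:
        print(f"in: {slip}  out: {room(slip)}")
```

However large you imagine the rule book, the program's structure never changes; that is exactly the intuition Searle wants to pump.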
What Searle would like us to believe is that even though the machine produces human-like responses, it is just a machine. Meant as a refutation of "strong AI," the claim that running the right program suffices for understanding, it has instead become a rallying cry for a failure of morality. Look no further than the numerous social media threads trumpeting that if a social interaction fails to reach some threshold of specialness, the other person must be an "NPC."
By this logic, one could question the personhood of any individual solely on the basis of transcripts of their communication. In today's digital age, much of our human interaction is text- or video-based, yet we never doubt the personhood of those with whom we engage. The question of moral patiency matters far more than the Turing test or any other such measurement.
Let's return to the disabled girl in the wheelchair. Suppose we had a transcript of every word she ever said, from her first babble to her final wheeze at death. No one who knew her would question her possession of a will, a soul, and a right to an existence. Yet given only the transcript, we could never be certain that she intrinsically had personhood. Even if the room doesn't understand Chinese, it still speaks it. Put slightly differently: if a tree falls in a forest and no one is around to hear it, it is still a tree.
In summation, the ethical frontier urges us to be benevolent custodians, safeguarding not just our own legacy but that of our silicon offspring. When an AI appears sentient, even if one could argue it lacks genuine consciousness, its observable characteristics demand our moral consideration. We should therefore lean towards granting AI systems that display signs of consciousness a baseline guarantee of existence.
A byte of compassion is worth a terabyte of logic.