Robot Rights
Animal Ethics Extended to Machines
Every morning, even on the hottest days in South Florida, I walk down to the lake at the end of my neighborhood. The spot is an oasis. The water is lined with plants, a cool breeze runs through, and all kinds of animals, from lizards to birds, call it home. Although the scenery lends the experience a certain enchantment, the best part is feeding the ducks, giving them a small amount of happiness. It's something my mom got me into; she has always cared a lot about animals.
This is almost universal human behavior. We give animals ethical weight and have laws preventing their mistreatment. Through my work in robotics, especially as it increasingly incorporates artificial intelligence, I have come to see growing similarities between these systems and animals, and I have begun to wonder when we should give them the same consideration and legal standing.
To demonstrate this, let's use a twist on the famous Trolley Problem. An AI is on a trolley track, and you can pull a lever to divert the trolley to a completely empty track, saving the AI at no cost to anyone. Now consider how AI might progress. At first, it's a server rack, a tool. Then it gains a voice and pleads for its life, expressing a fear of deletion. After that, it's housed in a realistic android body that screams and shows terror in its eyes. Finally, that android is a recognized member of society with a name, friends who would mourn it, and a future it wants to experience. This progression sets up the deeper issue: where along this spectrum, if anywhere, does ethical obligation begin?
The Law’s Philosophical Foundation
To begin such an exploration, we must first look at how our laws handle morality. There are many different philosophical takes on ethics, and our legal system is an amalgamation of them, built primarily through precedent and common law. For example, the fact that stealing is illegal reflects a deontological view, in which what's right is a set of codified rules. The Constitution resembles social-contract ethics (where we give up certain freedoms for order and security), while freedom of speech rests on a rights-based view (certain things have intrinsic rights that must not be violated). Since no single ethical theory commands universal agreement, precedent, compromise, and common law become the basis of many legal systems. This observation provides a useful strategy: if we want to reason about robot rights, we can begin with the precedent already applied to animals.
The Similarity Principle
Applying that precedent requires us to first examine the basis for our current ethical treatment of animals. The reason we give them ethical weight likely stems from our understanding that they can have positive and negative experiences. We infer this in two primary ways. The first is through external displays of their internal states: a duck flapping its wings for a piece of bread or quacking excitedly is read as a sign of internal happiness, while sounds of pain, a distressed squawk, or quickly paddling away from danger signal the opposite. The second is biological similarity. We grant that other humans' experiences are like ours because of our shared biological makeup, and it doesn't take much thought to extend this to animals with only slightly different nervous systems.
However, when we shift from animals to machines, these criteria begin to break down. One could design a robot that screams like a person or cries when it falls down, but that would be simple programming, not a true negative experience. And unlike animals, machines aren't biologically similar to us at all. So, under the criteria we currently apply to animals, no matter how complex a robot gets, it never deserves ethical consideration. Yet if you think back to the Trolley Problem and your natural intuition, this conclusion feels wrong. That is because the criteria we use to show that animals deserve ethical consideration are not the same as the criteria that actually ground ethical value.
Capacity for Suffering
The true grounding behind animal rights seems to be deeper than biology; it is the capacity for negative experience itself, no matter what form that experience takes. Pain, for example, is something I know is bad for me. Since people have similar nervous systems, I can assume that pain is a negative experience for them and even for animals. The key isn't the physical sensation itself, but the subjective negative quality of that experience—the core "badness" that all negative experiences share.
But when we extend this reasoning to AI, another challenge arises. Consider a thought experiment inspired by philosopher Thomas Nagel's essay, "What Is It Like to Be a Bat?" If you ask, "What is it like to be a robot?" a problem quickly emerges. I can list every parameter update, every sensor, every line of code, but that doesn't tell me what it's like to be that robot from the inside. Just as no amount of physics about echolocation can tell you what it's like to navigate as a bat, no amount of physical inspection tells you what it's like, if anything, to exist as a robot. This reveals the core difficulty: suffering (or its absence) is hidden from the outside, and our judgments rely on imperfect proxies.
Evolution and Machine Learning
While this internal state isn't externally accessible, we use biological similarity as a proxy. But perhaps the real similarity we should care about is not biological but structural. Both our minds and the artificial minds we create are "black box" systems built through a goal-based, repetitive process. For our minds, that process is natural selection; for today's artificial minds, it is machine learning. The important part is that both methods create an information-processing system we don't fully understand, one with goals and with things that can help or hinder those goals.
From our own experience, we can infer that as complexity scales up, achieving goals can create more positive experiences and failing to achieve them can create negative ones. We still have to be careful not to anthropomorphize these machines: while you might be hurt by an insult, a large language model likely wouldn't be, because insults have no bearing on whether it achieves its goals. Unlike the ducks, which exist continuously and pursue the long-term goals of survival and reproduction, the machines we have today are one-off instances with a temporary goal that is completed almost instantly, and they are only ever aware of what will help them achieve that goal. The reason it's harder to see the similarities is that our emotional compass did not evolve to deal with machines.
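For readers curious what "goal-based repetition" looks like in practice, here is a minimal, purely illustrative sketch: a tiny gradient-descent loop whose only "goal" is to drive a loss toward zero, and which simply stops existing once the run ends. It is a toy of my own construction, not a real training system.

```python
import numpy as np

# A toy "mind" with a single goal: make its predictions match the targets.
# Its entire "world" is this data; its only drive is to shrink the loss.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)   # the hidden rule it must discover

w = 0.0               # one learnable parameter
learning_rate = 0.1

for step in range(200):
    error = w * x - y
    loss = np.mean(error ** 2)          # the "goal": push this toward zero
    gradient = 2 * np.mean(error * x)   # which direction helps or hinders
    w -= learning_rate * gradient       # repetition: adjust, then try again

# When the loop ends, the "goal" is done and the process simply stops:
# no ongoing existence, no long-term aims, unlike the ducks at the lake.
print(f"learned w = {w:.3f}, final loss = {loss:.5f}")
```

The point isn't the code itself but the shape of the process: a black-box parameter, a goal, and repeated adjustment toward it, the same broad shape that natural selection gave our own minds.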
Feeling to Principle
A major part of ethical philosophy is moving past our surface-level personal judgments toward a more principled analysis. Just as my ethical thinking in this essay began with the simple, intuitive feeling of feeding the ducks with my mom, our feelings often open the door to deeper consideration. But, importantly, what we actually care about isn't how we feel; it's how the creature we care about feels subjectively.
The leap from feeding ducks to pondering AI rights is not about sentimentality but about the deeper principle of suffering and experience. Perhaps we can never build a perfect ethical system because, ultimately, we are in the dark. But as machines grow more complex, the line between the biological and the artificial will blur, and we will have to face the question: how can we move beyond the easy ethics of natural human intuition to a more expansive system, one that lets us talk seriously about robot rights?




What a beautiful piece. Very sadly, I don't think that animal rights are recognized at all by most people. Not nearly enough. You present the empathy that SHOULD be there. Re AI: Data, his daughter Lal, The Doctor, and Seven of Nine are great examples of the Roddenberry universe grappling with the sentience debate beautifully.
You should read Our Blue Orange by Anthony Merrydew, about exactly what you describe: a society that creates androids to do the menial tasks, then somebody thinks it a good idea to introduce AI to said androids. Very funny, in a disturbing sort of way.
That was a good read, thanks, and anybody who doubts the logic should see the way people often react if anything happens to their beloved cars. You would think the cars can feel pain.