Notes from the Wired

Philosophical Ramblings #07: Pain, Consciousness and the Value of Animals

July 8, 2025

In light of recent articles — The Eiffel Tower is NOT in Paris! and Philosophical Ramblings #06: Heidegger, Beliefs and Choosing One’s Values — I want to add an addendum to What is a person? Or when is a person? and Killing Dream People!.

In the former article, I proposed a definition of personhood (where "personhood" or "moral agent" is shorthand for "something deserving moral consideration") as the capability to deploy human consciousness. In the latter article, I critiqued alternative definitions of personhood and of what it means to deserve moral consideration.

However, I now have two issues with my earlier understanding. The first is that I did not adequately engage with a rival theory proposed in All Animals Are Equal, which argues that something deserves moral consideration if it has the capacity for suffering. The second is my use of the qualifier human in the definition of consciousness, which I did not justify sufficiently.

Let’s start with the first point. I don’t believe that the mere capacity for suffering can serve as a solid criterion for personhood. This is because the things involved, pain and suffering, are themselves morally neutral. What do I mean by this?

A knife isn’t inherently morally bad. Its “badness” depends on how it’s used, for example to harm someone or to slice tomatoes. Similarly, it would be strange to say that sleep is intrinsically good or bad. Sleep itself doesn’t contain moral qualities. Of course, I can talk about “bad sleep”: maybe I didn’t sleep well, slept too long and woke up groggy, or kept waking up. But this is not because sleep itself is bad. It’s because of how I, as a subject, experienced that sleep. I judge the experience based on how it feels to me.

The same goes for involuntary actions like sneezing or reflexes. If I sneeze and get into a car accident because of it, I might call that sneeze “bad”, but what I actually mean is that the consequences of that specific sneeze were bad for me. The act of sneezing, in general, isn’t bad or good.

Pain works the same way. When I go to the dentist and they drill into my tooth, I feel pain, but the act of drilling isn’t bad, nor is the pain itself inherently bad. Pain is just a physiological response. What matters is how I experience that pain, how it arises in my subjectivity. That’s what makes it bad.

So, I don’t think the capacity for suffering is a good definition. But if we modify it slightly to the capacity for experiencing suffering, it becomes much better. This aligns with the famous philosophical paper What Is It Like to Be a Bat?, which emphasizes the difference between a process happening and it being subjectively felt. The former is objective; the latter is phenomenal.

This brings us to the question of how this fits with my original definition of personhood as based on consciousness. For that, we need a working definition of consciousness. One I like is: Consciousness is having a subjective experience of being or consciousness is what it is like to be. Again, this is in line with the bat article.

When we compare this definition of consciousness with the modified definition of moral consideration as the capacity for experiencing suffering, we find they converge. Both hinge on subjectivity and phenomenal experience: there must be a “someone” to whom things appear. If you can experience suffering, then there is a “you” who experiences it, a subject. And if there is subjectivity, there is consciousness. So I would now argue that these two definitions are essentially the same.

Now to the second issue, my use of the qualifier human in “human consciousness.” The key question is: why human consciousness and not other forms? This is a fair criticism.

My reasoning was that I arrived at the conclusion, personhood as the capacity to deploy human consciousness, by analyzing examples that only included humans. So it seemed natural to restrict the definition to humans. But upon reflection, I see that this is flawed. There’s no solid justification for limiting it to human consciousness. Other conscious beings (e.g., animals) should be included as well. So the “human” qualifier should be removed.

What does this mean practically? What implications does this have for how I treat animals?

It depends on whether animals are conscious or not, and this is a difficult question. The crux is that consciousness is entirely internal. I know I am conscious, but I cannot know whether someone else is conscious in the same way, because to know what it is like to be them, I’d have to be them. This problem applies to animals too. We don’t know whether there is something it is like to be a bat, i.e. whether bats have inner subjectivity.

(Programmers might think of this like trying to access a private attribute in another class: you simply can’t do it.)
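The analogy can be made concrete with a minimal Python sketch (the class and attribute names are illustrative, not from any real library). Python’s name mangling for double-underscore attributes stands in for the privacy of inner experience: behavior is observable from the outside, while the “inner state” is not reachable by its plain name.

```python
class Mind:
    def __init__(self):
        # The inner experience is "private": double-underscore
        # name mangling hides it from outside access by this name.
        self.__experience = "what it is like to be me"

    def behaves_consciously(self) -> bool:
        # From the outside, all we can observe is behavior.
        return True


other = Mind()
print(other.behaves_consciously())  # behavior is observable: True

try:
    # Direct access to the inner state fails; the attribute is
    # stored under a mangled name, not as "__experience".
    print(other.__experience)
except AttributeError:
    print("no access to another's subjectivity")
```

Of course, the analogy is imperfect: in Python the mangled name can still be reached with effort, whereas the other-minds problem admits no such workaround.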

So, how should we decide whether to treat a given being with moral consideration? The best we can do, in the absence of certainty, is to make an educated guess.

What would such an educated guess look like? It depends on our model of how consciousness arises.

If we believe consciousness emerges from a certain level of neural complexity, i.e. enough brain cells organized in a particular way, then we might conclude that insects are not conscious and thus don’t deserve moral consideration, whereas animals with more complex brains (like monkeys or dolphins) might qualify.

On the other hand, if we believe that consciousness can’t just emerge from complex matter (because how could something entirely subjective arise from something entirely objective?), then we might lean toward a theory like Panpsychism. Panpsychism argues that consciousness (or subjectivity) is a fundamental part of reality, like particles or fields in physics. Conscious humans are then just concentrated expressions of this underlying “subjectivity field.” This view also resonates with thinkers like Schopenhauer.

If you’re more aligned with the first position (consciousness as emergent from complexity), it might be safe to say insects aren’t conscious and thus don’t deserve moral consideration, whereas smarter animals pose a moral gray area. If you lean toward panpsychism, however, then you might believe that all animals are, to some extent, conscious and therefore all deserve some moral consideration.

Personally, I haven’t yet read enough about panpsychism to form a firm conclusion, but from what I’ve seen, I tend to lean in that direction as a model for how subjectivity arises.