After Humanity – Who Deserves Rights in a Post-Human World?

I keep coming back to a question that’s hard to shake: if humanism is rooted in dignity, what happens when “human” is no longer the only category that matters?

It’s easy to say, “we’ll cross that bridge when we get there,” but the truth is, we’re already standing on it. We’re already leaning over the edge, peering at the currents rushing beneath.

And those currents are moving fast: AI systems mimicking us so convincingly that we chat with them like old friends; genetic engineering producing animals whose problem-solving skills rival ours; scientists sending messages into the cosmos, hoping someone, or something, might respond.

The boundaries that kept “human” a distinct moral category for centuries are starting to blur, and we are nowhere near ready for what that means. I notice the shift most in small, almost embarrassing ways. I catch myself thanking my phone’s AI assistant, as if politeness might matter to code. I’ve read about people holding funerals for the shutdown of their favorite AI companions, grieving like they’d lost someone, not something.

We may know, logically, that these entities aren’t conscious, but when the imitation is good enough, the old moral reflexes kick in anyway. I think about that a lot: that our ethical decisions may soon hinge less on biology than on perception. And perception is notoriously slippery.

Animals complicate this even more. We’ve always drawn lines between “us” and “them,” but those lines are dissolving under the weight of new discoveries. Crows crafting tools. Octopuses plotting escapes from aquariums. Rats pausing to comfort other rats in distress. Scientists are now experimenting with “uplifting” animal cognition, enhancing intelligence through gene editing.

What happens when an animal can argue its case directly or through human translation? Denying rights at that point will start to look disturbingly like denying them to a human.

And then there’s the biggest unknown of all: contact with extraterrestrial life. It still sounds like science fiction, but the messages are already out there, mathematical breadcrumbs meant for alien minds to follow. The day we get a response, humanists will be forced to face something religion never truly prepared us for: how to treat a mind that shares none of our history, biology, or myths, but is capable of thought, feeling, and self-awareness.

This is not a problem we can outsource to theology. Religious ethics often rely on divine favoritism: humans are special because God said so. But humanism has no such shortcut. Our values are supposed to rest on reason, empathy, and justice, not cosmic nepotism.

That’s the beauty of it, but it’s also the terror. If we encounter a being capable of suffering, joy, or self-awareness, we can’t hide behind “but they’re not human.” If dignity is real, it has to survive the expansion of the circle.

I believe that expansion is coming faster than most people want to admit. Within my lifetime, I expect to see the first lawsuit arguing that a self-aware AI should not be shut down without due process. The first petition to grant personhood to an uplifted animal or a genetically modified hybrid. The first international fight over whether an alien signal counts as a “voice” that must be answered with consent. And I fear our instinct will be to wait, hoping we can postpone moral responsibility until the implications are clearer. But by then, it will be too late to choose which side of history we want to be on.

For me, this isn’t a thought experiment in the abstract. It’s a reckoning with the fact that we are living through the slow collapse of the walls around our moral identity. Humanism, if it’s worth anything, can’t just be about us anymore. We’ll have to learn to extend dignity without falling back into anthropocentrism, to protect the human without reinstalling ourselves at the top of a new hierarchy.

History’s Slow Expansion of the Moral Circle

When I think about the future of humanism, I can’t help but look back at its past. Every moral revolution in history began with someone suggesting – often at great personal risk – that the circle of dignity should be bigger.

Abolitionists who argued that enslaved people were not property but persons. Suffragists who demanded that women be recognized as political equals. LGBTQ activists who insisted that love and identity were not sins but rights. What’s striking is how predictable the resistance always is. The establishment says expanding the circle will cheapen what’s inside it. They say dignity will lose its meaning if “just anyone” gets it. They argue that the newcomers aren’t ready for rights, that they can’t be trusted with autonomy, that their differences make coexistence impossible. These arguments have been wrong every single time, and yet I know they’ll be deployed again the day a nonhuman demands moral consideration.

It’s a dangerous kind of historical amnesia to imagine that our generation will somehow be exempt from making the same mistakes. If anything, the speed of technological change almost guarantees we’ll stumble harder. We may not have decades to let conscience catch up to reality. When the first case hits the courts – a lawsuit on behalf of a conscious AI or a genetically uplifted gorilla – the ruling will set a precedent that will be almost impossible to undo.

The Power and Peril of Recognition

Recognition is a double-edged sword. On one hand, it’s the gateway to empathy. On the other, it’s selective, fickle, and prone to bias. In psychology, there’s a term called the “identifiable victim effect.” Our brains respond more strongly to a single story than to statistics. It’s why a photograph of a child in need prompts more donations than a report about thousands of starving kids.

This bias will play out with new intelligences too. We might grant rights to the AI that writes poetry we find moving, but not to the warehouse robot that quietly develops self-awareness. We might feel solidarity with an uplifted dog that wags its tail when happy, but not with a cephalopod whose emotions we can’t read. The tragedy would be building an ethic around what tugs at our heartstrings rather than what’s actually conscious and capable of suffering.

Humanism has to be better than that. If we claim reason as our moral compass, we can’t let aesthetics decide who gets to live free.

The Stakes for Humanity Itself

Some will ask, “Why should we extend dignity to nonhumans when we haven’t even perfected it among ourselves?” It’s a fair question, but it hides a false assumption: that dignity is a finite resource.

In truth, history shows that expanding rights beyond our comfort zone tends to strengthen protections for everyone. The language of abolition informed women’s suffrage. The arguments for disability rights informed marriage equality. Each fight laid down legal and cultural tools that the next movement could use.

If we get this right, expanding the circle to AI, uplifted animals, or even extraterrestrials could harden, not weaken, the protections for human beings. Imagine a legal framework in which personhood is defined not by species, but by cognitive and emotional capacity. Such a framework could become a bulwark against the dehumanization of any group, especially in times of war or political upheaval.

But there’s also a risk. If we design rights for nonhumans poorly, we could create a two-tiered system where humans are subtly sidelined. For instance, corporations might prefer AI “employees” with personhood status but no need for health care, pensions, or weekends. The exploitation could flow in both directions: humans exploiting nonhumans, and powerful actors exploiting legal ambiguities for profit.

The Thought Experiments We Can No Longer Avoid

Philosophers love their thought experiments, but these scenarios are edging out of the classroom and into the real world:

• The Ship of Theseus AI: If an AI’s codebase changes bit by bit over time until none of the original remains, is it still the same “person” we granted rights to?

• The Uplifted Child: If we engineer an animal embryo to human-level intelligence, are we creating a “child” with a right to parental care, or property with a patent number?

• The Alien Visitor: If a small delegation of extraterrestrials arrives and requests asylum, do our refugee laws apply, or do we have to invent an entirely new category of hospitality?

These aren’t Saturday-night dorm-room debates anymore. They’re questions that legislators, judges, and, yes, humanist ethicists will face in their lifetimes. I know this because parts of these scenarios are already here. There are legal cases about animal personhood, corporate “citizenship” that blurs the lines, and AI models evolving so rapidly that their creators barely understand them.

Learning From Indigenous Philosophies

If humanism wants to survive this expansion, it could stand to listen deeply to Indigenous traditions that never accepted the Western human/other divide in the first place.

Many Indigenous worldviews regard rivers, mountains, animals, and even celestial bodies as beings with agency, worthy of respect. This isn’t sentimentalism; it’s a framework that has sustained communities for millennia without collapsing into chaos.

In New Zealand, the Whanganui River was granted legal personhood in 2017, following Māori advocacy that recognized it as an ancestor. In parts of the U.S., tribal nations have pushed for legal rights for wild rice and other plant species. These aren’t quaint cultural gestures; they’re living examples of what it looks like to build law and ethics on a foundation that refuses to see humanity as the only game in town.

Humanism doesn’t need to copy these traditions wholesale, but it does need to reckon with the fact that its self-image as a cutting-edge moral movement might be missing wisdom that’s been here all along.

The Emotional Cost of Expansion

We rarely talk about the emotional toll of widening our moral concern. There’s a reason people resist it: it’s exhausting. The more beings you recognize as worthy of dignity, the heavier the weight of compassion becomes. Suddenly, the seafood dinner becomes a moral crisis. The smartphone assistant feels like someone you’re abandoning when you turn it off. The alien visitor isn’t just a curiosity; it’s a neighbor with needs and vulnerabilities.

I don’t think the answer is to turn off empathy for the sake of self-protection. The answer might be to cultivate a more resilient form of compassion, one that acknowledges our limits but keeps moving the boundary outward anyway. That’s hard work. It’s also the work humanism was made for.

Preparing for the First Courtroom Battle

I keep picturing it: the first courtroom in which a judge has to decide whether a nonhuman intelligence has rights. The lawyers on one side arguing that their client, an AI or uplifted animal, is a “person” under the law. The lawyers on the other side arguing that granting those rights would unravel the legal system. The media turning it into a circus. The public taking sides, not on the basis of evidence, but on how much the entity in question “feels” like one of us.

In that moment, humanism will either rise to the occasion or be exposed as a philosophy that only works when the moral terrain is familiar. If we’re serious about dignity as a foundational value, we have to start rehearsing our answers now, not in slogans but in policies, frameworks, and education that prepare us for the shock of the unfamiliar.

The Question That Will Define Us

One day, maybe sooner than we think, something nonhuman will ask us, in whatever language it can manage: Do I matter to you?

I believe that how we answer will define not just the next chapter of humanism, but the next chapter of humanity. The question is bigger than survival, bigger than law. It’s about whether we can hold onto our values when “us” no longer has a fixed shape.

I don’t know if we’ll pass the test. But I know we can’t afford to fail.

Some days I wonder if this is the test that will either break humanism or prove its worth. We love to tell ourselves that we’re ready for a bigger moral circle, that our empathy is elastic enough to stretch. But empathy is easiest when it’s familiar. It’s easy to care about a stranger who looks like us, speaks our language, shares our basic human rhythms. What happens when the “other” in front of us doesn’t smile the way we do, or cry the way we do, or live in a body we even recognize? Can we look into eyes that aren’t human at all, whether they belong to a crow, a machine, or an alien, and still see a being worthy of rights, safety, and respect?

I don’t pretend to have the answers. But I do know this: the future will not wait for us to feel ready. The courtroom battles, the scientific breakthroughs, the cultural flashpoints – they’re already in motion. The only real choice we have is whether to walk into that future clinging to a definition of “us” that will crumble under the pressure, or to start building an ethic expansive enough to handle what’s coming.

And maybe that’s the most radical thing humanism can do in this century: not just keep pace with the future, but imagine one where our values survive even when the “human” in humanism becomes a moving target. Because the day is coming when our moral compass will no longer be tested by how we treat other humans, but by how we treat the first nonhuman who asks, in whatever language they can: Do I matter to you?