Discussing Human-Centric and Sentience-First Ethics

This blog post comes about from a discussion on the morality of eating meat. The no-meat camp uses a sentience-first moral justification for rejecting meat-eating. My position is different. Please be aware that I am not a professional philosopher and have no formal training in the topic. Those new to ethics should also be aware that disagreeing on moral justification doesn't necessarily make our daily behaviour different. For example, I would generally agree that we eat too much meat, though I don't go so far as to say "Eat zero meat" is the only morally justified position.

My definition of human-centric morality presupposes that humans are sentient. The declaration of sentience as the core of morality is, however, an arbitrary claim.

I think my position comes from moral skepticism (anti-realist? nominalist?). That is, I think that any moral system includes some axiomatic declarations: at its base we have to declare "X is good" and derive from there. Whether there's a declared ontic good, some deontological axiom, or something else, I don't believe that these moral claims exist as anything other than emergent abstract objects. In other words, our morality does not exist without us.

A contributor raised the point that moral skepticism can derive consistent systems, but that these are not morals per se (rough paraphrase). I'd counter that all moral systems make axiomatic claims, but not all axioms are created equal: we can measure them by making some epistemological assumptions.

Mine is roughly:
Survival of my species is good (axiom).
How do I know this: morals evolved as decision-making short-cuts that guide behaviour towards survival. If our morals didn't broadly fulfil this goal then we wouldn't be here to have morals. (There's the self-reference.)

What about sentience-first? No strawman intended.
Sentience is good (axiom).
How do I know this: I think, therefore I am. If I weren't, that wouldn't be good. (There's the self-reference.) It's no great stretch to assume that other sentiences exist, and if I want my sentience respected then I should also respect the sentience of others.

I'd agree with this so far. But sentience-first embeds two further assumptions that I don't think are justified:

  1. Sentience is binary; you either have it or you don't.
    I don't think we have a scientific basis for drawing a hard line between what is and is not a conscious/sentient being. It might be easy when we talk about humans, mammals, reptiles, insects ... but it gets harder in the relevantly-edible edge cases: Are colony organisms conscious? What if my computer becomes conscious?*
  2. All sentiences are worthy of equal "don't eat me" consideration. Why, especially if (1) is unclear?
*FWIW, I think of sentience/consciousness as nominal objects: it's useful to talk about them, but they don't actually exist except as emergent phenomena. To think otherwise might open the door to mind/body dualism.

A common rebuttal to meat-eaters is the claim that it is not necessary for humans to eat meat. I asked whether we should then attempt to convert the other omnivores to vegetarianism. The most logically consistent response I got was "Yes, we should, but to do so would lead to BadStuff". That begs the question: we probably agree that such a conversion would cause BadStuff, but what is it about the BadStuff that makes it bad? How does that badness link back to sentience-first? I'm interested in thoughts on the matter.