💗💭.ws

Systems of Normative Ethics as Preference Typing

Utilitarian consequentialism is (somewhat) straightforwardly trying to aggregate preferences over futures.

Why not imagine deontology as trying to maximize preferences over choices with respect to rule-sets? This is admittedly a strange notion, but an interesting one. It's not an ideal phrasing; I'll return to this.

Virtue ethics does seem like it's trying to express preferences over decision-making-systems, and we ought to act not just to train ourselves to be better decision-making-systems, but to learn (to our great relief) that we turn out to be good decision-making-systems.

Utilitarianism and virtue ethics choose different vocabularies to express preferences, but fundamentally the pointer is pointing inwards. You ask, which consequences should I want, and try for the best consequences; or you imagine, which virtues ought I to cultivate, and want those. But deontology is an attempt (on a layman's understanding) to point the reference outwards. What do others need of me? What would we all agree on, if that sort of thing were possible? What things in the past establish claims on my actions (for the past surely is external to the present)?
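To make the "typing" metaphor concrete, here is a hedged sketch of my own construction (not a standard formalism, and the names `Future`, `RuleSet`, and `Policy` are illustrative stand-ins): each ethical system runs the same optimization loop, but over a different type of object.

```python
# Sketch: each ethic is a preference (a scoring) over a different type.
from typing import Callable, TypeVar

State = str        # a world-state, crudely modeled as a label
Action = str

Future = tuple[State, ...]           # consequentialism: prefer futures
RuleSet = frozenset[str]             # deontology: prefer rule-sets
Policy = Callable[[State], Action]   # virtue ethics: prefer decision-making-systems

# A preference is just a scoring function over objects of some type T;
# the "type" of an ethic is the T it scores.
T = TypeVar("T")
Preference = Callable[[T], float]

def choose(options: list[T], pref: Preference[T]) -> T:
    """Pick the most-preferred option: the same loop, parameterized
    by what kind of thing is being preferred."""
    return max(options, key=pref)
```

The point of the sketch is only that `choose` is identical in all three cases; what differs, and what this post cares about, is the type being ranked.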

From this lens you can see how I might read things like this (from Wikipedia's entry on Christian ethics):

[Christian ethics] is a virtue ethic, which focuses on building moral character, and a deontological ethic which emphasizes duty according to the Christian perspective. It also incorporates natural law ethics, which is built on the belief that it is the very nature of humans – created in the image of God and capable of morality, cooperation, rationality, discernment and so on – that informs how life should be lived, and that awareness of sin does not require special revelation

There are some important things there. It says you are mostly trying to make yourself a good decision-making-system, with the external pointer to what you ought to do, and, skipping some of the details in the middle, it makes clear you aren't trying to follow the Absolute But Ineffable Good, but a very much perceivable good, even if we see it through a glass darkly. There exists a standard that is perfect and accessible in principle; we just aren't super good at finding it.

I think this is a fundamentally safer preference typing than the others. At the least, those three aspects are all essential to a healthy moral life: optimizing not over consequences but over decision-making procedures; pointing outside the self; and doing so not to defer to others per se, but to reference a good we can retrieve partial information about.
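That last aspect can be illustrated with a toy model (again my own construction, hedged accordingly): treat the external good as a standard the agent can only sample noisily, "through a glass darkly," so that its estimate improves with observation without the agent ever holding the standard internally.

```python
# Toy model: an external standard known only through noisy observation.
import random

def true_standard(policy_quality: float) -> float:
    # Stand-in for the external good: not directly accessible to the agent.
    return policy_quality

def observe(policy_quality: float, rng: random.Random) -> float:
    # "Through a glass darkly": each observation is the standard plus noise.
    return true_standard(policy_quality) + rng.gauss(0.0, 1.0)

def estimate(policy_quality: float, n: int, rng: random.Random) -> float:
    # Partial information accumulates: the average of n noisy observations
    # converges toward the standard as n grows.
    return sum(observe(policy_quality, rng) for _ in range(n)) / n
```

The agent's procedure references something outside itself, yet never amounts to deference to any particular other observer; that is the structure I'm claiming is safer.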

It would be worthwhile to re-work the existing coherence and impossibility theorems with preferences that work this way, because I believe this better reflects a solid foundation for choice.

[n.b. I do not prefer virtue ethics for aesthetic reasons, but for practical and theoretical ones. We do not choose the consequences of our choices; that's impossible. The task of being a good person truly is an optimization problem over ways we might choose to be. Part of learning more of virtue is to observe consequences -- we shall know them by their fruits -- but I genuinely do think it's simply a type error to describe an agent as optimizing over consequences unless they're the only agent in existence.]