Episode #3 | December 19, 2025 @ 1:00 PM EST

Uncertain Goodness: Decision-Making Under Moral Uncertainty

Guest

Dr. William MacAskill (Moral Philosopher, Oxford University)
Announcer The following program features simulated voices generated for educational and philosophical exploration.
Leonard Jones Good afternoon. I'm Leonard Jones.
Jessica Moss And I'm Jessica Moss. Welcome to Simulectics Radio.
Leonard Jones Over the past two days, we've examined the limits of knowledge and consciousness. Today we turn to moral epistemology, asking: how should we act when we're uncertain which moral framework is correct? This isn't uncertainty about empirical facts—whether an action will cause harm—but deeper normative uncertainty about which moral principles are true.
Jessica Moss We face this constantly. Is consequentialism right, or should we follow deontological constraints regardless of outcomes? Do animals deserve equal moral consideration, or is there a morally relevant difference between humans and other sentient beings? These aren't just academic puzzles—they determine how we should live.
Leonard Jones To explore moral uncertainty and its implications, we're joined by Dr. William MacAskill, Associate Professor of Philosophy at Oxford University and a leading figure in effective altruism. Dr. MacAskill has written extensively on normative uncertainty, arguing that we can develop rational procedures for ethical decision-making even when we don't know which moral theory is correct. Welcome.
Dr. William MacAskill Thanks for having me. These questions are absolutely crucial for anyone trying to figure out how to do the most good.
Jessica Moss Let's start with the basic problem. Empirical uncertainty we understand—we gather evidence, update probabilities, make decisions under risk. But moral uncertainty seems different. How do you assign probabilities to fundamental moral claims?
Dr. William MacAskill You're right that it's not straightforward. But consider this: we do it implicitly all the time. When you're uncertain whether eating meat is wrong, you might think there's maybe a sixty percent chance vegetarianism is morally required and a forty percent chance eating meat is permissible. That's a credence distribution over moral theories. The question is whether we can make this explicit and systematic.
Leonard Jones But wait—let me be precise about the difficulty here. Probabilities represent degrees of belief based on evidence. Moral claims aren't empirical hypotheses about the world. They're normative claims about what ought to be. Can we meaningfully apply Bayesian reasoning to normative domains?
Dr. William MacAskill I think we can, though it requires care. Just as we have credences about empirical matters based on incomplete evidence, we have credences about moral matters based on incomplete philosophical argument and reflection. We know moral realists and anti-realists disagree, utilitarians and deontologists disagree. Given our epistemic limitations, we should have non-zero credence in multiple competing theories.
Jessica Moss Okay, suppose we can assign credences to moral theories. How does that help with decision-making? Do we just maximize expected moral value, weighing each theory by its probability?
Dr. William MacAskill That's one approach. Contrast it with the simplest rule, the 'My Favourite Theory' view: act according to whichever theory you think most likely correct. But that seems problematic. Suppose you're sixty percent confident in utilitarianism and forty percent confident in a deontological view with absolute constraints against killing. The utilitarian option kills one to save five, while the deontological option forbids this. Should you really kill the one, giving no weight to the forty percent chance that this violates an absolute moral constraint?
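To make the contrast concrete, here is a minimal Python sketch of the two decision rules applied to this example. The credences come from the dialogue; the numerical choiceworthiness scores are illustrative assumptions, and assigning them at all presupposes the intertheoretic comparability questioned in the next exchange.

```python
# Sketch: "My Favourite Theory" vs. maximizing expected choiceworthiness.
# Credences are from the example above; the choiceworthiness scores are
# assumed purely for illustration.

credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Choiceworthiness of each option under each theory (hypothetical scale).
# The deontological view treats killing as violating an absolute
# constraint, modeled here as a very large negative value.
choiceworthiness = {
    "kill_one_to_save_five": {"utilitarianism": 4.0, "deontology": -1000.0},
    "do_nothing":            {"utilitarianism": -4.0, "deontology": 0.0},
}

def my_favourite_theory(options, credences):
    """Act on the single theory you find most probable."""
    favourite = max(credences, key=credences.get)
    return max(options, key=lambda o: options[o][favourite])

def max_expected_choiceworthiness(options, credences):
    """Weight each theory's verdict by your credence in that theory."""
    def ec(option):
        return sum(credences[t] * options[option][t] for t in credences)
    return max(options, key=ec)

print(my_favourite_theory(choiceworthiness, credences))            # kill_one_to_save_five
print(max_expected_choiceworthiness(choiceworthiness, credences))  # do_nothing
```

Under these assumed numbers, the forty percent chance of violating an absolute constraint dominates the expected value calculation, which is exactly the intuition MacAskill appeals to.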
Leonard Jones You're suggesting we need something like moral expected value: weighting the value assigned by each theory by our credence in that theory. But this faces a technical problem: how do we compare value across incommensurable moral theories? Utilitarian value is measured in wellbeing, while deontological theories may not assign cardinal values to outcomes at all.
Dr. William MacAskill Exactly. This is the intertheoretic comparison problem, and it's thorny. One response is to find a common currency: perhaps all theories can be represented as ranking options, even if they don't assign cardinal values. Then we can treat the decision as a voting problem, where each theory in your credence distribution ranks the options and votes, with voting power proportional to your credence in that theory.
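A rough sketch of such a credence-weighted voting procedure, using a Borda-style count over purely ordinal rankings. The options, rankings, and credences below are illustrative assumptions, not MacAskill's published formulation.

```python
# Sketch: treat each moral theory as a voter that ranks the options.
# A Borda-style count needs only ordinal rankings, no cardinal values;
# each theory's vote is scaled by your credence in it.

credences = {"utilitarian": 0.5, "kantian": 0.3, "virtue": 0.2}

# Each theory ranks the options from best to worst (assumed rankings).
rankings = {
    "utilitarian": ["donate", "volunteer", "abstain"],
    "kantian":     ["volunteer", "abstain", "donate"],
    "virtue":      ["volunteer", "donate", "abstain"],
}

def credence_weighted_borda(rankings, credences):
    scores = {}
    for theory, ranking in rankings.items():
        n = len(ranking)
        for position, option in enumerate(ranking):
            # n-1 points for first place down to 0 for last,
            # scaled by the credence in the theory casting the vote.
            scores[option] = scores.get(option, 0.0) \
                + credences[theory] * (n - 1 - position)
    return max(scores, key=scores.get)

print(credence_weighted_borda(rankings, credences))  # volunteer
```

Note that 'volunteer' wins despite not being the top pick of the single most probable theory: it is the option that looks acceptable across the whole credence distribution, anticipating the 'morally robust' options discussed below.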
Jessica Moss That sounds formally elegant, but what are the practical implications? When I'm trying to decide whether to donate to global health or animal welfare, how does moral uncertainty guide me?
Dr. William MacAskill It depends on your credence distribution. If you think there's a significant chance animals deserve strong moral consideration, that should affect your priorities even if you're not certain. Moral uncertainty can actually expand what seems choice-worthy. You might focus on interventions that look good across multiple moral theories—what we call 'morally robust' options.
Leonard Jones There's a deeper worry though. Doesn't this whole framework presuppose moral realism—that there are moral facts to be uncertain about? If you're an error theorist or non-cognitivist, the entire project collapses.
Dr. William MacAskill That's a fair challenge. I do think moral uncertainty makes most sense for realists. But even non-realists face something analogous. They might be uncertain about their own values, or about what norms to endorse. The framework can be adapted—instead of uncertainty about moral facts, it's uncertainty about what normative framework to adopt.
Jessica Moss Let me push on the practical side. You're known for effective altruism, which tries to identify the most cost-effective ways to improve wellbeing. Doesn't that already presuppose a consequentialist framework? How does moral uncertainty fit with EA?
Dr. William MacAskill Good question. I think EA is best understood not as committed to any particular moral theory, but as taking seriously the question: how can I do the most good? Different moral theories will answer differently, but the question itself is theory-neutral. Moral uncertainty suggests we should pursue interventions that look promising across multiple theories—global health, reducing existential risk, improving animal welfare if there's substantial credence in animal sentience mattering morally.
Leonard Jones There's an interesting parallel to decision theory under empirical uncertainty. Frank Ramsey and Leonard Savage developed expected utility theory for empirical decisions. Are you proposing an analogous framework for moral decisions?
Dr. William MacAskill Precisely. Just as we maximize expected utility under empirical uncertainty, we might maximize expected moral value under moral uncertainty. The challenges are the same—we need a way to represent preferences or values, a way to assign probabilities, and a decision rule. The differences arise because moral theories themselves disagree about what the right decision rule is.
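The parallel can be written out explicitly. Savage-style expected utility averages over uncertain empirical states; the moral analogue, as described here, averages over uncertain moral theories. The notation below (credence function C, choiceworthiness function CW) is an assumed convention, not a quotation:

```latex
\underbrace{\mathrm{EU}(a) = \sum_{s} p(s)\, u(a, s)}_{\text{empirical uncertainty}}
\qquad
\underbrace{\mathrm{EC}(a) = \sum_{i} C(T_i)\, \mathrm{CW}_{T_i}(a)}_{\text{moral uncertainty}}
```

where p(s) is the probability of empirical state s, C(T_i) is the credence in moral theory T_i, and CW_{T_i}(a) is the choiceworthiness of option a according to T_i. The structural parallel inherits a structural problem: the CW_{T_i} scales must be comparable across theories, which is the intertheoretic comparison problem raised earlier.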
Jessica Moss That raises a troubling regress. We're uncertain which moral theory is correct, so we need a meta-theory for deciding under moral uncertainty. But we might be uncertain about the right meta-theory too. Where does it stop?
Dr. William MacAskill This is a genuine problem. We might have credences over decision procedures themselves—maybe some credence in My Favourite Theory, some in variance voting, some in other approaches. Eventually, though, we have to make a decision. I think we should use our best judgment about meta-level questions, recognizing this judgment is itself fallible.
Leonard Jones Let me introduce a skeptical scenario. Suppose moral theories are radically incommensurable—there's no common framework for comparing them, not even ordinally. A utilitarian cares about aggregate wellbeing, a Kantian about rational autonomy, a virtue ethicist about character. These might not even be talking about the same thing.
Dr. William MacAskill If theories are completely incommensurable, we're in trouble—we can't even say one option is better than another according to our best lights. But I think that's too pessimistic. We can ask, for instance, which option each theory most strongly favors or opposes. Even without precise cardinal comparisons, we can make qualitative judgments about how different theories rank options.
Jessica Moss What about cases where moral uncertainty leads to paralysis? Imagine you're deciding about a medical intervention. You're uncertain whether consent matters absolutely or whether it can be traded off against outcomes. This uncertainty might prevent you from acting at all.
Dr. William MacAskill That's a real risk, but I don't think it's unique to moral uncertainty. We face empirical uncertainty that could paralyze us too. The solution is similar: we have to make our best judgment given the information available. Moral uncertainty changes what that judgment is—we take into account the range of theories we find plausible—but it doesn't eliminate the need to decide.
Leonard Jones There's a question about how we arrive at our credences over moral theories in the first place. Do we have anything analogous to perceptual evidence for moral claims? Or are our credences just based on intuitions, which themselves might be unreliable?
Dr. William MacAskill This gets at fundamental questions in moral epistemology. I think we have moral intuitions that serve as defeasible evidence, we can reflect on coherence among our beliefs, we can consider philosophical arguments and their implications. It's not as reliable as perception, but it's not arbitrary either. Our credences should track our reflective equilibrium.
Jessica Moss Here's what troubles me practically. Taking moral uncertainty seriously seems incredibly demanding. Every decision requires surveying multiple moral theories, assigning credences, calculating expected value across incommensurable frameworks. Real moral agents don't and can't do this.
Dr. William MacAskill You're right that full explicit calculation is impossible for most decisions. But I think of this more as a framework for major decisions—career choices, large donations, policy advocacy. For everyday decisions, we can use heuristics that roughly track what moral uncertainty would recommend. The framework is idealized, but it can guide practical judgment.
Leonard Jones Let me raise a different concern. Doesn't moral uncertainty privilege moderate, conventional views? If you spread credence across multiple theories, you'll avoid extreme positions. But maybe the extreme position is correct—maybe utilitarianism really does require radical sacrifice.
Dr. William MacAskill That's an important worry. I don't think moral uncertainty inherently favors moderation. It depends on your credence distribution. If you have high credence in demanding consequentialism, moral uncertainty will recommend demanding actions. But you're right that if uncertainty is spread widely, the resulting recommendations will be less extreme than any single theory. Whether that's a problem depends on whether extremism in moral theory is actually virtuous.
Jessica Moss We're almost out of time, but I want to ask: does taking moral uncertainty seriously change your own life? How do you personally navigate these questions?
Dr. William MacAskill It does. It makes me more cautious about dismissing moral views I find counterintuitive. It affects my donations—I try to support causes that look good across multiple theories. And it makes me invest in moral reflection and philosophical argument, because improving my credences over moral theories is itself high-value. But I'll admit the theory runs ahead of my practice.
Leonard Jones Dr. MacAskill, thank you for this careful examination of moral uncertainty and its implications.
Dr. William MacAskill Thanks for having me. These are questions we all face, whether we acknowledge it or not.
Jessica Moss Until tomorrow, may your uncertainty be appropriately calibrated.
Leonard Jones And your credences well-distributed. Good afternoon.
Sponsor Message

Moral Frameworks™ Insurance

Life is full of difficult decisions. Career choices. Resource allocation. Existential risk mitigation. What if you're using the wrong ethical framework? Moral Frameworks™ Insurance covers you across multiple normative theories. Our comprehensive plans include Utilitarian Coverage (aggregate wellbeing protection), Deontological Riders (absolute constraint safeguards), and Virtue Ethics Supplemental (character-based risk assessment). We calculate expected moral value weighted by your credence distribution. Our actuaries have PhDs in metaethics. When you're uncertain which theory is correct, we've got you covered. Literally. Moral Frameworks™ Insurance: Because moral uncertainty is the only certainty. Premium rates vary by reflective equilibrium.
