Here’s a pretty simple argument, which I wasn’t able to find in the literature,1 but I might well have missed something.
Background
Assume you have a utility function that is either unbounded, or bounded with bounds that are very large in magnitude relative to the utility you expect from “normal” actions.2 (For the purposes of this post, take “infinite” to mean either literally infinite or the upper bound, if one exists.)
Let’s call the god posited by Pascal’s wager God A, whose policy is to infinitely reward you iff the only god you worship is God A. One classic response to Pascal’s wager is the “many gods objection”: Maybe there’s some other god (God B) who infinitely rewards you iff the only god you worship is God B. So we can’t say, “No matter how low the probability of God A’s existence, as long as it’s not infinitesimal, we should worship God A.”
And a classic counter-objection is that some gods are more likely than others. It seems very intuitive that you should break ties between infinite expectations in favor of the higher probability. Say you posit some God C, who infinitely rewards you iff you’re skeptical of the supernatural. Or God D, who is more likely to infinitely reward you the more effective philanthropy you engage in (which is inconsistent with spending some time worshiping gods). It doesn’t seem obvious that God C/D is more likely than God A or B. (After all, no religions have formed around God C/D!) So it seems that Pascal’s wager plausibly still tells us to worship some god.
The argument
However, the counter-objection presumes we have precise probabilities over gods! Suppose you don’t have just one probability distribution but several, and, per the standard arguments for imprecise probabilities, you wouldn’t endorse aggregating these distributions into one. (Arguing for this position is a task for another post. I am not claiming “imprecise probabilities help you avoid Pascal’s wager, so you should have imprecise probabilities.” Though if you like doing philosophy that way, be my guest.)
Then it wouldn’t make sense to talk about “maximizing” “expected utility” or “probability of infinite reward”. Rather, the most natural alternative to expected utility maximization we’re left with is this: any option is rational as long as it’s not dominated, i.e., as long as your distributions don’t unanimously agree that some alternative is better.
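To state that rule slightly more formally (a sketch only; the notation is mine, and this is just one standard way of cashing out non-domination, sometimes called “maximality”): let $\mathcal{P}$ be the set of distributions you find plausible, and let $\mathbb{E}_P[u(a)]$ be the expected utility of option $a$ under a distribution $P$. Then

$$a \text{ is permissible} \iff \text{there is no option } b \text{ such that } \mathbb{E}_P[u(b)] > \mathbb{E}_P[u(a)] \text{ for every } P \in \mathcal{P}.$$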
Suppose that instead:
According to one probability distribution that I find plausible, God A is more likely than God C/D. (E.g., because lots of people historically have believed in God A.)
But another distribution that I find plausible says the opposite. (E.g., “divine hiddenness” seems to be stronger evidence for God C than A: God A would want to reveal their existence, while God C would want not to, assuming they want humans to be saved. Or, God D might seem more likely because their reward policy seems closer to “fair.”)
And I don’t know how much weight to put on the first distribution vs. the second. These weights seem, well, made up. In particular:
My views on this are very unstable and depend on how salient different arguments are to me. How the heck do you reason about the incentives of almighty beings?
The probabilities of all the gods seem so tiny that I don’t trust my intuitions about their ratios.
I could just give the distributions above equal weight in some sense, but why exactly is this a better response than saying “I don’t know”? And even if I do that, I face some arbitrary choices: What are the exact numbers defining each distribution? If my sense of those numbers is vague, it seems like I won’t have a definitive range of numbers to average out — and small differences in these ranges might matter a lot, since again, we’re comparing very tiny probabilities!
So I just don’t have a precise belief that God A is more likely than C, or less likely; the relative likelihoods are indeterminate. Worshiping God A (or any other god) therefore doesn’t dominate not-worshiping.
On this view, then, I’m not rationally required to worship some god, even if my utility function is unbounded.
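To make the non-domination point concrete, here’s a minimal sketch in Python. Every number in it is made up purely for illustration (the tiny probabilities, the payoff table, and the huge finite stand-in for an “infinite” reward are all assumptions, not figures I’d defend):

```python
# Toy illustration of the non-domination rule above. All numbers are made up
# for the example; HUGE is a large finite stand-in for an "infinite" reward.

HUGE = 10**12

# Two distributions over which god (if any) exists, which I decline to
# aggregate into one. The first leans toward God A, the second toward God C.
dist_a_likelier = {"god_A": 1e-9, "god_C": 1e-11, "no_god": 1 - 1e-9 - 1e-11}
dist_c_likelier = {"god_A": 1e-11, "god_C": 1e-9, "no_god": 1 - 1e-9 - 1e-11}

# Payoff of each action in each state: God A rewards worshiping only A,
# God C rewards skepticism, and worship costs a little if no god exists.
payoffs = {
    "worship_A": {"god_A": HUGE, "god_C": 0, "no_god": -1},
    "stay_skeptical": {"god_A": 0, "god_C": HUGE, "no_god": 0},
}

def expected_utility(action, dist):
    return sum(p * payoffs[action][state] for state, p in dist.items())

def is_dominated(action, dists):
    """True iff some alternative beats `action` under *every* distribution."""
    return any(
        all(expected_utility(alt, d) > expected_utility(action, d) for d in dists)
        for alt in payoffs if alt != action
    )

dists = [dist_a_likelier, dist_c_likelier]
for action in payoffs:
    print(action, "dominated?", is_dominated(action, dists))
# Neither action is dominated: each distribution ranks them differently,
# so both worshiping and not worshiping come out as rationally permissible.
```

Note that if I were willing to average the two distributions into one, a single option would typically come out strictly ahead; declining that aggregation step is exactly what blocks the dominance argument.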
I think this perspective captures what makes Pascal’s wager so counterintuitive in the first place. (Hence it’s a more appropriate “solution” given the nature of the problem than, e.g., ad hoc discounting of small probabilities to zero or bounded utility functions.3) Imagine having, for all possible gods and all the morally relevant consequences of worshiping them or not, a list of probabilities that we knew were well-grounded / epistemically trustworthy — as solid as “the chance this coin will land heads is 50/50.” I could totally see myself biting the bullet on Pascal’s wager in that case. But that’s not our situation. The probabilities of gods we have in our heads are pulled out of thin air, and there doesn’t seem to be any reason to think we can tell whether God A is more or less likely than C.
I think this also explains why many people are squeamish about following arguments like “yeah there’s just a 0.00000001% chance this long-term intervention has the intended benefit, but if it does, man, the stakes are just so huge!” Probabilities that tiny seem especially hard to non-arbitrarily pin down with our intuitions. Why take that gamble if there could just as well be a chance in the same ballpark of making things much worse?
Hiller and Hasan (2023) make a related observation: You can reject Pascal’s muggings not just because of the possibility of “many muggers,” but also because the salience to you of the particular mugger who confronts you is an arbitrary reason to favor paying that particular mugger.
Many rationalists don’t like unbounded utilities, but I think unboundedness is a live possibility that’s difficult to give up, and I’m puzzled by how common it is for people to be very confident that they don’t endorse unbounded utilities. Also, even if unboundedness violates some important principles, the bounds might just be very large.
Fwiw, I just stumbled upon this: https://philpapers.org/archive/RUSPFP-3.pdf. I've only skimmed it, but it seems to make a vaguely similar point, focusing on being wary of "mistakes" rather than on indeterminacy/imprecision.
Great post! :)
> What are the exact numbers defining each distribution? If my sense of those numbers is vague, it seems like I won’t have a definitive range of numbers to average out — and small differences in these ranges might matter a lot, since again, we’re comparing very tiny probabilities!
Say I tell you I have a [0, 99]% credence that God A is more likely than God C. Would you say that my beliefs are "not indeterminate enough" and that I should just say [0, 100]% or "I have no idea" instead? Or would you say that a [0, 99]% credence doesn't qualify as indeterminate to begin with?