Consider absolute-level satisficing consequentialism:
ALSC: “There is a number, n, such that: An act is morally right iff either (i) it has a utility of at least n, or (ii) it maximizes utility” (Bradley 2006, 101).
ALSC, like many other versions of satisficing consequentialism, permits agents to go out of their way to prevent some good state of affairs from coming about for absolutely no good reason. To illustrate, consider the following scenario:
Let U(x) = the overall utility that is produced if S does x, and let’s suppose that n = 100. Assume that a1 is the act of minding one’s own business while sitting on the couch watching TV, that a2 is the act of dissuading nine others from donating money to Oxfam, that a3 is the act of dissuading six others from donating money to Oxfam, and that a4 is the act of giving someone who is easily convinced by Singer-type arguments a copy of “Famine, Affluence, and Morality.” Let’s suppose that someone is going to pay you handsomely for doing a4 and that, therefore, it is in your self-interest to give this person a copy of “Famine, Affluence, and Morality.” Also, suppose that dissuading others from donating money to Oxfam is not something that you enjoy doing and is, therefore, detrimental to your self-interest. So, in terms of your self-interest, the acts rank as follows: a4 > a1 > a3 > a2. Now here are the utilities of these acts along with what ALSC implies regarding their deontic statuses:
act    U(x)    deontic status (per ALSC, with n = 100)
a1     +180    permissible
a2     +90     impermissible
a3     +120    permissible
a4     +190    permissible
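These verdicts can be checked mechanically. Here is a minimal Python sketch of ALSC on the example above; the dictionary encoding of the utilities and the function name are mine, not part of the view itself:

```python
# ALSC on the example: n = 100, utilities as in the table above.
N = 100
utility = {"a1": 180, "a2": 90, "a3": 120, "a4": 190}

def alsc_permissible(act):
    # ALSC: an act is right iff (i) it has a utility of at least n,
    # or (ii) it maximizes utility.
    return utility[act] >= N or utility[act] == max(utility.values())

for act in sorted(utility):
    print(act, "permissible" if alsc_permissible(act) else "impermissible")
    # a1 permissible, a2 impermissible, a3 permissible, a4 permissible
```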
As this case illustrates, ALSC has implausible implications. Clearly, a3 should be impermissible, as there are alternatives that would be both morally and self-interestedly superior. It’s impermissible to make a sacrifice when this will make things not only morally worse, but also worse for you.
Two other versions of satisficing consequentialism avoid the implausible implication that a3 is permissible. The first is bird-in-hand satisficing consequentialism:
BIHSC: There is a number, n, such that: An act is morally right (i.e., permissible) iff (a) it has a utility that is at least as great as the utility of the “act” of doing nothing, and (b) either (i) it has a utility of at least n, or (ii) it maximizes utility. (I’m not sure how to precisely explicate the notion of ‘doing nothing’, but I want to hold that intentionally standing perfectly still so that dust will collect on some trigger mechanism and thereby kill someone is not doing nothing.)
The second is Cullity’s self-sacrificing absolute-level satisficing consequentialism:
CSSALSC: “There is a number, n, such that: An act, a, performed by agent S, is morally right iff either (i) a has a utility of at least n, and any better alternative is worse for S than a; or (ii) a maximizes utility” (Bradley 2006, 107).
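On the example above, the two views can be compared with a short sketch. The numeric encoding of the self-interest ranking (a4 > a1 > a3 > a2) and the choice of a1 as the “doing nothing” act are mine, read off the case description:

```python
N = 100
utility = {"a1": 180, "a2": 90, "a3": 120, "a4": 190}
# Higher number = better for S: a4 > a1 > a3 > a2.
self_interest = {"a4": 4, "a1": 3, "a3": 2, "a2": 1}
do_nothing = "a1"  # minding one's own business on the couch

def maximizes(a):
    return utility[a] == max(utility.values())

def bihsc(a):
    # (a) at least as good as doing nothing, and (b) at least n or maximal.
    return utility[a] >= utility[do_nothing] and (utility[a] >= N or maximizes(a))

def cssalsc(a):
    # (i) at least n and every better alternative is worse for S, or (ii) maximal.
    better = [b for b in utility if utility[b] > utility[a]]
    clause_i = utility[a] >= N and all(
        self_interest[b] < self_interest[a] for b in better
    )
    return clause_i or maximizes(a)

print([a for a in sorted(utility) if bihsc(a)])    # ['a1', 'a4']
print([a for a in sorted(utility) if cssalsc(a)])  # ['a4']
```

BIHSC permits both a1 and a4, whereas CSSALSC permits only a4, making a4 required on CSSALSC.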
Which is better? Well, on CSSALSC, but not on BIHSC, a4 is morally required, and this seems to me to be the right result, for the next best alternative, a1, involves forgoing a personal benefit when forgoing this benefit will only make things morally worse. (But, of course, BIHSC’s permitting a1 isn’t as bad as ALSC’s permitting a3, for ALSC permits making a personal sacrifice when doing so will only make things morally worse.) So it seems to me that CSSALSC is the more plausible of the two. Now, here’s a question: When it comes to constructing a theory of rationality as opposed to a theory of morality, what would the analogues of CSSALSC and BIHSC look like, and which would be the more plausible?
Well, bird-in-hand satisficing rationality would, I think, look like this:
BIHSR: There is a number, n, such that: An act, x, performed by agent S, is rationally permissible iff (a) x has an S-utility that is at least as great as the S-utility of the act in which S does nothing and (b) either (i) x has an S-utility of at least n, or (ii) x maximizes S-utility. (An act’s S-utility is the amount of utility that S would have were S to perform that act.)
But is there any analogue of CSSALSC for a theory of rationality? The problem, of course, is that an alternative that is better in terms of S-utility cannot, by definition, be worse for S. The analogue of clause (i) would thus be satisfied only when no better alternative exists, that is, only by acts that maximize S-utility, and the view would collapse into maximizing rationality. So it’s difficult to construct any obvious analogue. Is bird-in-hand satisficing, then, the most plausible version of rational satisficing? I should note that the idea of bird-in-hand satisficing is owed to Pat Greenspan’s “Sensible Satisficing?” (manuscript).
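The collapse can be made vivid with a sketch (the encoding is mine): once the only utility in play is S-utility, the condition that every better alternative be worse for S is never met when a better alternative exists, so the CSSALSC-style clause (i) picks out only S-utility-maximizing acts.

```python
# Attempted rational analogue of CSSALSC on the example's S-relevant acts.
N = 100
s_utility = {"a1": 180, "a3": 120, "a4": 190}  # hypothetical S-utilities

def cssalsc_analogue(a):
    better = [b for b in s_utility if s_utility[b] > s_utility[a]]
    # "Worse for S" now just means lower S-utility, which a better
    # alternative, by definition, never has; so clause (i) holds only
    # when 'better' is empty, i.e., only for S-utility maximizers.
    clause_i = s_utility[a] >= N and all(
        s_utility[b] < s_utility[a] for b in better
    )
    return clause_i or s_utility[a] == max(s_utility.values())

print([a for a in sorted(s_utility) if cssalsc_analogue(a)])  # ['a4']
```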