Welcome to our highly anticipated discussion of Joe Horton’s “Aggregation, Risk, and Reductio.” The paper is published in the most recent issue of Ethics; you can find it here. Johann Frick’s critical précis is immediately below. Please join the discussion!
Johann Frick writes:
It is a great pleasure to kick off our discussion of Joe Horton’s extremely rich and thought-provoking article “Aggregation, Risk, and Reductio” (Ethics, 2020). I will begin with a brief synopsis of some of Joe’s main claims, followed by a few critical comments.
Consider the following two questions:
QUESTION 1: Is there some number n such that I should save n people from a substantial burden, such as paralysis, rather than saving one person from a severe burden, such as death?
QUESTION 2: Is there some number m such that I should save m people from a minor burden, such as a migraine, rather than saving one person from a severe burden, such as death?
Proponents of full aggregation answer both questions in the affirmative. Proponents of full anti-aggregation answer both questions in the negative. Proponents of partial aggregation (PA) seek to justify an affirmative answer to the first and a negative answer to the second question.
The stated objective of Joe’s paper is nothing if not ambitious. He aims to offer a reductio trilemma for proponents of any partially aggregative view. (Though Joe does not say this, it seems to me that, if his arguments are sound, they would likewise cut against all versions of full anti-aggregation. So, the potential implications of Joe’s discussion are even more far-reaching than he advertises).
To be acceptable, Joe maintains, a partially aggregative moral view must satisfy three desiderata:
D1: It must avoid implausible implications in individual cases.
D2: It must avoid implications that are in tension with the intuitions that incline people toward partially aggregative views.
D3: It must avoid inconsistent implications across cases that are in all morally relevant respects equivalent.
However, Joe argues, no partially aggregative view can satisfy all three desiderata. Hence, no partially aggregative view is acceptable.
Joe seeks to demonstrate this by examining the verdicts that a PA view would render in the following trio of cases:
VILLAIN 1: A villain has kidnapped A and B. He will either (1) inflict a migraine on A, or (2) inflict a one-in-a-zillion chance of death on B. You must choose which.
VILLAIN 2: A villain has kidnapped ten zillion X people and ten zillion Y people. He will either (1) inflict a migraine on each X person, or (2) randomly select and kill ten Y people. You must choose which.
VILLAIN 3: A villain has kidnapped ten zillion X people and ten zillion Y people. He pairs each X person with a Y person. For each pair, the villain will either (1) inflict a migraine on the X person, or (2) give the Y person a ticket for a lottery with ten zillion tickets. You must choose between these options for each pair in turn. You know that, after you have chosen for each pair, the villain will randomly select ten tickets and kill anyone who has a corresponding ticket.
What verdicts should a PA view give about each of these three cases? Joe reasons as follows:
Lest it violate the first desideratum, a PA view must imply that we should (or at least can permissibly) choose (2) in Villain 1. Any other answer would be simply implausible. As Joe puts it, “we frequently impose tiny chances of death on some people as a side effect of sparing others from minor burdens, and this behavior seems clearly permissible.” (p. 516)
With regard to Villain 2, Joe argues that a PA view must imply that you should choose (1). Denying this, he thinks, would violate his second desideratum: “It would be bizarre to hold that, in cases involving certainty, there is no number of people you should save from migraines rather than saving one person from death, and yet also hold that, in some cases involving risk, you should save a huge number of people from migraines rather than saving ten people from death.” (p. 517).
But finally, in Villain 3, Joe thinks that a proponent of PA is committed to choosing (2) for each pair. After all, he reasons, what you confront in Villain 3 is a series of choices, each of which is exactly like the choice you faced in Villain 1. And in that case, we said, a proponent of PA would choose (2). So, this is also what they should (or at least permissibly can) choose for each pair in Villain 3.
But now notice that this judgment, combined with the verdict about Villain 2 above, seems to lead to a violation of Joe’s third desideratum: “[C]hoosing (2) for each pair in Villain 3 is choosing that the villain randomly select and kill ten Y people rather than inflicting a migraine on each X person. And that is the same choice PA will condemn in Villain 2. So, PA will be inconsistent, in the sense that it will have different implications across cases that are in all morally relevant respects equivalent.” (p. 517). Of course, if a proponent of PA tried to avoid this problem by revising their judgments about either Villain 1 or Villain 2, this would bring them into conflict with one of the other two desiderata instead. Hence, there is no partially aggregative view that is acceptable, because there is no partially aggregative view that satisfies all three of Joe’s desiderata.
Joe considers a way in which a proponent of PA might attempt to justify a different verdict about Villain 3. Perhaps, the proposal goes, we shouldn’t think of the choice you must make for each pair of X and Y people as analogous to the decision you had to make for A and B in Villain 1. This is because the right way to think of the choices in Villain 3 is not as individual acts, but as parts of a sequence of acts. And it is at this level that partial aggregation applies:
[A]s some proponents of partially aggregative views have suggested, PA could be a view that applies not to individual acts, but rather to sequences of acts. In Villain 3, if you choose (2) for each pair, you perform a sequence of acts that you know will result in ten deaths. If PA applies to sequences of acts, it could condemn this sequence. Suppose PA does apply to sequences of acts, and that it forbids you from choosing (2) for each pair in Villain 3. What will it imply you should do in this case? There are two possibilities. PA will imply either that you should choose (1) for every pair, or that you should choose (2) for some number of pairs and then choose (1) for the others. (p. 518)
For ease of future reference, I will dub this the “Sequence Proposal”. Such a proposal has indeed been put forward in the literature. Joe’s article mentions discussions by Lazar (2018), Tadros (2019) and Lazar and Lee-Stronach (2019). The most explicit recent discussion of this idea is Alec Walen’s, in his article “Risks and Weak Aggregation: Why Different Models of Risk Suit Different Types of Cases” (forthcoming in the next issue of Ethics). Walen writes, in response to Joe:
If the serial version of the case really is morally equivalent to the holistic version of the case, then [the agent] can modify what she would do in the individual cases (…). She could do that in two ways: she could decide that the villain should [inflict a migraine on each X person], or, better, she could decide that there is some acceptable risk of death that she could inflict for the sake of preventing migraines and give out lottery tickets [to Y people] until she reached that risk of death. (…) After that, [the agent] would have to choose that the villain [inflict migraines on the remaining X people].
Joe accepts that applying PA not to individual acts but to sequences of acts would permit the proponent of PA to avoid the third horn of his trilemma. But this is a Pyrrhic victory, Joe contends, because the Sequence Proposal will have implausible implications in other cases, thereby violating Joe’s first desideratum instead. One such case, Joe contends, is
LONG LIFE: You will live an extremely long time—zillions of years. As you look ahead at your long life, you know there will be frequent opportunities to spare some people from minor burdens, or give them minor benefits, by acting in ways that expose others to tiny chances of death. Given the extreme length of your life, it is a statistical certainty that, if you take these opportunities, eventually you will kill someone.
As Joe writes: “If PA applies to sequences of acts, it will forbid you from taking these risky opportunities. It will imply either that you should never take these opportunities, or that you should take some and then refuse to take any more. Both implications are implausible.” (p. 518).
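The “statistical certainty” claim in Long Life is just the arithmetic of compounding independent small risks: if each risky act of beneficence carries an independent probability p of killing someone, then over k acts the probability of at least one death is 1 − (1 − p)^k, which approaches 1 as k grows. A minimal sketch (the numerical values are illustrative stand-ins of my own; “zillion” is of course not a real quantity):

```python
# Illustrative only: stand-in values, since "zillion" is not a number.
# Each risky act of beneficence carries an independent tiny chance p of
# killing someone; over k such acts, the chance that at least one death
# occurs is 1 - (1 - p)**k, which tends to 1 as k grows.

def prob_at_least_one_death(p, k):
    """Probability that at least one of k independent acts,
    each with death probability p, results in a death."""
    return 1 - (1 - p) ** k

p = 1e-9  # stand-in for a "one-in-a-zillion" chance of death per act
for k in (10**6, 10**9, 10**10):
    print(k, prob_at_least_one_death(p, k))
```

With p fixed, the cumulative probability is negligible for modest k but climbs toward certainty once k is large relative to 1/p, which is exactly the situation an extremely long life creates.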
Having laid out his reductio trilemma, Joe proceeds to review a number of extant PA views from the literature (put forward by Alex Voorhoeve, Mike Otsuka, Sophia Moreau, and Seth Lazar), seeking to demonstrate that they all find themselves impaled on at least one of the horns of his trilemma. While this part of Joe’s paper is interesting and impressive, I won’t discuss it here. Instead, it is my hope that some of those targeted by Joe’s critique will jump in in the comments to defend themselves. 😉
Suppose all of this is right — what follows? The most plausible way out of his trilemma, Joe argues, is to swallow the second horn and accept that you should choose (2) in Villain 2. And the most natural explanation for why this is the case is that the aggregate of ten zillion migraines morally outweighs ten deaths. But if we accept this explanation, Joe argues, we should accept a fully aggregative view.
So much for my quick synopsis of Joe’s paper. I will now present a few critical comments about Joe’s argument, to get the discussion started.
A persuasive argument for full aggregation or an impossibility result?
Let us begin in a maximally concessive spirit. Let us suppose, for the sake of argument, that Joe is right and his trilemma is indeed inescapable for any partial (or fully) aggregative moral view. Even granting this, should we agree with Joe’s further claim that full aggregation emerges as the victor?
This isn’t clear to me. I am not convinced that embracing full aggregation would constitute a satisfactory way of resolving his trilemma either, by Joe’s own lights. For, given the highly counterintuitive implications that full aggregation has in many cases (TRANSMITTER ROOM; LOLLIPOPS FOR LIVES, etc.), it seems that a fully aggregative view cannot avoid violating Joe’s first desideratum. This matters, because avoiding implausible implications about cases surely is a desideratum for any moral view, not just for proponents of partial aggregation.
If this is right, then even if Joe’s other arguments are sound, they should perhaps be construed, not as establishing that we must accept full aggregation, but instead as suggesting an impossibility result. If Joe is right, there is no way of fully satisfying his three desiderata.
But are all of Joe’s other arguments sound? It seems to me that there are at least three different ways (not all of them mutually consistent) in which proponents of PA views might seek to challenge Joe’s trilemma argument:
CHALLENGE 1: Would a view that told us to choose (2) in Villain 2 necessarily be in tension with the intuitions that incline people to accept partially aggregative moral views, as Joe claims?
CHALLENGE 2: Is Joe right that the Sequence Proposal, which promises to avoid the third horn of his trilemma, would commit us to unacceptable verdicts about cases like Long Life (thereby impaling us on the first horn instead)?
CHALLENGE 3: Contra Joe, is there perhaps some morally relevant difference between Villain 2 and Villain 3, such that choosing (1) in the former case and (2) in the latter case would not, in fact, violate Joe’s third desideratum?
In what follows, I will briefly discuss the first of these challenges and will then spend a little more time developing the second. I will not here examine the third challenge, though I flag it as a dialectical possibility, in the hope that someone might take it up in the discussion.
Consider a view such as Seth Lazar’s (2018), which tells us to choose (2) in Villain 2. Would such a view necessarily be in tension with the intuitions that incline people to accept partially aggregative moral views, as Joe maintains? Recall how Joe argues for this claim:
“It would be bizarre to hold that, in cases involving certainty, there is no number of people you should save from migraines rather than saving one person from death, and yet also hold that, in some cases involving risk, you should save a huge number of people from migraines rather than saving ten people from death.” (p. 517).
This reasoning strikes me as unpersuasive, since it seems to clearly overgeneralize. Consider the following pair of cases:
MIGRAINES VS DEATH (CERTAINTY): We can either (1) save 10 known people from death or (2) save 1 zillion known people from a migraine.
MIGRAINES VS DEATH (RISK): We can either (1) withhold our aspirin tablets. Then 1 zillion people will suffer a migraine, but no-one dies as a result of taking aspirin. Or we can (2) distribute our aspirin tablets to whoever needs them. Then approximately 10 people will die as a result of taking aspirin (we can’t know who), but 1 zillion people are spared a migraine.
Many people, myself included, judge that you should choose (1) in the former case, but (2) in the latter. Indeed, Migraines vs Death (Risk) is simply a stylized version of reality: aspirin tablets cure many people of migraines, but sometimes have fatal side-effects. Yet we don’t think it would be impermissible for a public health authority to distribute aspirins, even to a very large population.
Would it be a problem for an account of partial aggregation if it supported this combination of views (as the above passage from Joe suggests)? On the contrary. This would be a virtue, not a vice, of a partially aggregative view. There seems no tension in our judgments because, at least in cases like Migraines vs Death (Risk), the presence of risk clearly does seem to make a morally relevant difference: It is one thing to expose a zillion people to a tiny risk of death (even foreseeing that ten of them will die), when doing so is in their own interest ex ante, because it promises to cure them of a migraine. It is quite another to expect 10 people to accept the severe burden of certain death, so that others are spared the comparatively trivial burden of a migraine. While the kind of moral outlook that endorses partial aggregation should balk at the latter action, it need detect no problem with the former.
Admittedly, Villain 2 is a different sort of case. Whereas the interests of the parties in Migraines vs Death (Risk) are not “competitive ex ante” (in the terminology of my 2015 paper), in Villain 2 there are two groups of people, the X’s and Y’s, whose interests are in conflict from the get-go. Given this, many (myself included) are less sure about Lazar’s claim that choosing option (2) in this case, too, would indeed be the right thing to do, all things considered.
If our main misgiving, here, is simply one of intuitive fit, then this is a problem that Lazar shares with proponents of full aggregation (such as Joe), who reach the same verdict albeit for different reasons. But perhaps one can dig deeper and find further reasons why a commitment to choosing option (2) in Villain 2 should be especially hard to stomach for proponents of partial aggregation.
My point, however, is that the specific argument to this effect which Joe has given us is not compelling. Rejecting a view like Lazar’s on the grounds that it would in general be ‘bizarre’ for proponents of partial aggregation to draw a sharp moral distinction between cases involving certainty and some cases involving risk seems mistaken. For there are pairs of cases, such as Migraines vs Death (Certainty) and Migraines vs Death (Risk), where this seems like exactly the thing to say.
Let us turn now to the second challenge.
We sketched above the Sequence Proposal, which (at least in some cases) applies partial aggregation not to individual acts, but rather to sequences of acts. As we saw, this idea promises a way of escaping the third horn of Joe’s trilemma. Joe’s objection, however, is that this proposal would have unacceptable implications for other cases, such as Long Life.
But is this so? Joe is surely right in what he says about Long Life: it would be implausible to always refrain from risky acts of beneficence, or indeed to put a limit on how many such actions you can perform over your lifetime. The question is whether accepting the Sequence Proposal for cases like Villain 3 would commit its proponents, as a matter of consistency, to these implausible conclusions about Long Life. In what follows, I will try to make a case, on their behalf, that this is not so. Despite superficial similarities, the choice situations in Villain 3 and Long Life are importantly different, in ways that make it plausible for proponents of the Sequence Proposal to apply it to the former case but not the latter.
In Villain 3, you first decide how many Y people to put at risk of death by choosing (2) rather than (1) for their pair. Let n be the number of times that you choose to pursue the risky option. Then, once all these choices have been made, the villain’s lottery determines the fate of all Y-individuals whom you have placed in jeopardy. The structure of this case thus instantiates the following type of pattern:
PATTERN 1: Choice 1; Choice 2; Choice 3; etc. –> Lottery.
Notice that while the risk to each particular Y person of being killed is never greater than one-in-a-zillion, no matter the size of n, the risk that some person(s) will be selected to be killed increases with n. According to one family of partially aggregative views – “ex post” views – this fact is of inherent moral significance. It matters, according to such views, that the risk that some Y person(s) are killed, in pursuit of moderate benefits for X people, be held within acceptable bounds. For proponents of ex post views, it is therefore natural to consider the relevant object of moral assessment in a case like Villain 3 to be the totality of the agent’s decisions prior to the lottery, not each individual decision considered in isolation. To ensure that the risk that some Y person(s) will be killed remains within acceptable bounds, the agent must make it the case that n remains below a certain threshold over the sequence of decisions. (That threshold may, in some cases, be 0; in other cases, it will be some positive integer.) As a result, the permissibility of enrolling a given Y person in the villain’s lottery cannot be settled in isolation. Rather, we must know how often in the sequence we have already made (or will make) this type of risky choice.
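The contrast the ex post view cares about can be made concrete with a little hypergeometric arithmetic. Suppose (purely for illustration, with a stand-in value replacing “ten zillion”) the villain will draw ten winning tickets from N, and you have enrolled n Y people with one ticket each. Each enrolled person’s own risk is 10/N whatever n is, but the probability that some enrolled person or other is killed grows with n:

```python
from math import comb

def p_some_enrolled_dies(N, drawn, n):
    """The villain draws `drawn` winning tickets out of N; you have
    enrolled n Y people, one ticket each. Returns the probability that
    at least one enrolled person's ticket is among those drawn."""
    return 1 - comb(N - n, drawn) / comb(N, drawn)

N, drawn = 10**7, 10        # stand-ins for "ten zillion" tickets, ten drawn
individual_risk = drawn / N  # each enrolled person's risk: constant in n
for n in (1, 10**3, 10**6):
    print(n, p_some_enrolled_dies(N, drawn, n))
```

The per-person risk stays fixed at drawn/N, while the aggregate probability of a death climbs with n — which is why, for the ex post view, the totality of the enrolment decisions, rather than each decision taken singly, is the natural object of assessment.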
The choice situation in Long Life, however, is crucially different. Your life will contain many points at which you have to decide whether to spare someone from a minor burden (or give them a minor benefit), by acting in ways that expose others to tiny chances of death. If you decide to take the risk, a ‘natural lottery’ takes place, which determines whether the risk for those endangered by this decision materializes or not. Over the subsequent course of your life, further opportunities for risky beneficence present themselves, and the story repeats. (Absent special circumstances, we can think of the different natural lotteries as probabilistically independent). The choice structure of this case thus instantiates the following pattern:
PATTERN 2: Choice 1 –> Lottery 1; Choice 2 –> Lottery 2; etc.
I submit that to proponents of an ex post view this difference in the choice structures of Villain 3 and Long Life should make an important moral difference.
To see why, consider first a modification of Long Life that would make it relevantly like Villain 3, by the lights of the Sequence Proposal:
LONG LIFE*: At the start of your life, you have to enter a binding pre-commitment concerning whether or not you will take various opportunities to benefit others that will arise over the course of your long life. Each act of beneficence will come with a tiny risk of killing a third party. Let n be the number of times that you precommit to perform such risky acts of beneficence over the course of your life. You have to settle on n before any of the corresponding natural lotteries have played out.
This choice situation could be represented as follows:
PATTERN 1*: Choice 1; Choice 2; etc. –> Lottery 1; Lottery 2; etc.
In the morally relevant respects, this is like the choice situation in Villain 3. (That there is a single lottery in Villain 3, which settles the outcomes of all risky decisions at once, whereas the outcomes of the risky decisions in Long Life* are settled by different, probabilistically independent, lotteries is of no consequence, I believe. What matters is that, in both cases, all decisions have to be taken before the lottery/lotteries which resolve them take place). In Long Life*, as in Villain 3, the greater you make n, the higher the likelihood that you will kill someone over the course of your life. Hence, as in Villain 3, a proponent of an ex post view will think of the totality of your decisions as the appropriate object of moral assessment, and will instruct you to keep the value of n below a certain threshold. (To fix ideas, let’s suppose that you ought to make it the case that n < 50).
By contrast, I believe that taking an analogous approach to the original Long Life case (by, say, following a rule that instructs you to keep your number of risky acts of beneficence below 50) would be implausible, even by the lights of the ex post view.
To see this, suppose you have already helped 49 times so far, and nothing has gone wrong. Can you help some more or have you ‘used up’ your permissible occasions for helping? Clearly, it is permissible to go on helping. After all, the probability that someone will be killed as a result of your helping a further time is no higher at this point than at any earlier point in the sequence. The probabilities ‘reset’ after each lottery. “The dice have no memory”, as the gambler’s adage goes. Hence, how often you have already gambled and been lucky is irrelevant to what you are permitted to do now.
Likewise, suppose that you have been unlucky early on. Tragically, the very first time you chose to perform a risky act of beneficence, things went wrong and someone was killed as a result. Should we conclude that you are now “done with helping” for the rest of your life? Again, the answer is clearly ‘no’. It would be a mistake, akin to the sunk cost fallacy, to treat the fact that past gambles haven’t paid off as being, in itself, a reason to refrain from taking (probabilistically independent) gambles in the future. Whether a further act of risky beneficence is permissible depends solely on facts about the future (does the promise of moderately benefiting someone justify the tiny risk of severely harming some third party?), not on what has already happened up to this point.
The general lesson is this: unlike situations that instantiate Pattern 1 or 1*, in cases that instantiate Pattern 2 your past decisions to engage in risky beneficence are irrelevant to the permissibility of deciding to do so again, even by the lights of the ex post view. This is why such cases naturally fall outside the scope of the Sequence Proposal. Your actions, at each choice point, should be assessed individually and not as part of a sequence. Given this, there would be no objection to your always opting for the risky beneficent option.
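The “dice have no memory” point is just the probabilistic independence of the lotteries in Pattern 2, and it can be checked with a quick Monte Carlo sketch (the per-act risk p = 0.2 is absurdly large purely so the simulation is readable; nothing hangs on the value):

```python
import random

# In Pattern 2 each choice is resolved by its own independent lottery,
# so the chance that the NEXT risky act kills someone is the same
# whether or not your past gambles have all come off.

random.seed(0)
p, trials = 0.2, 200_000

next_kills_overall = 0
lucky_runs = next_kills_after_luck = 0
for _ in range(trials):
    past_safe = all(random.random() >= p for _ in range(5))  # 5 past acts, all lucky?
    next_kill = random.random() < p                          # outcome of the next act
    next_kills_overall += next_kill
    if past_safe:
        lucky_runs += 1
        next_kills_after_luck += next_kill

print(next_kills_overall / trials)         # close to p
print(next_kills_after_luck / lucky_runs)  # also close to p
```

Conditioning on a run of lucky past lotteries leaves the next act’s risk unchanged, which is why, in Pattern 2 cases, the permissibility of the next act does not depend on the history of the sequence.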
A proponent of the ex post view could thus embrace the Sequence Proposal as a plausible way of analyzing cases like Villain 3, without being committed to endorsing the implausible implications that such a proposal would have for cases like Long Life. If this is correct, then perhaps the Sequence Proposal represents a way of avoiding the force of Joe’s trilemma.
In his paper, Joe seems to anticipate – albeit in highly compressed form – some of the moves I have just sketched. However, he does not appear to believe that they can fundamentally blunt the force of his trilemma. He suggests that his trilemma argument could just as well be presented in a way that substitutes for Villain 3 a modified version of this case that makes it, in crucial respects, more like Long Life, namely
VILLAIN 3*: A villain has kidnapped ten zillion X people and ten zillion Y people. He pairs each X person with a Y person. For each pair, the villain will either (1) inflict a migraine on the X person, or (2) inflict a one-in-a-zillion chance of death on the Y person. You must choose between these options for each pair in turn.
But I am not so sure. As I read this case, the decision structure now conforms to Pattern 2, i.e. it is
Choice 1 –> Lottery 1; Choice 2 –> Lottery 2; etc.
This would indeed make the case relevantly more like Long Life than the original Villain 3, and therefore suggests that choosing the risky option (2) for each pair now ought to be permissible, since our choices for each pair should now be assessed individually and not as part of a sequence.
What I question is that this produces any kind of problematic inconsistency with our judgments about Villain 2 (assuming that we agree with Joe that a proponent of PA should choose (1) in that case). For our judgments across cases are only inconsistent if the cases are indeed alike in morally relevant respects. But this is precisely what a proponent of the Sequence Proposal ought to deny. To the extent that Villain 3* has indeed become more like Long Life than the original Villain 3 case was, it has become less like Villain 2. So the divergence of our judgments across these cases need not trouble us.
This concludes my critical précis of Joe’s excellent paper. Hearty thanks for having given us all so much to chew on. I very much look forward to the discussion!