Even though I’m not a real expert on his work (and his new book is way too expensive), I’m a huge fan of Timothy Williamson. The part of his work I know best is his commentary on the realism vs. anti-realism debates of the Dummett and Wright kind. I want to reconstruct his margin-for-error argument for realism (i.e., the anti-luminosity argument) and apply it in the moral realm to argue for moral realism. I then want to ask how we should react to this argument.
Let us begin by assuming that there is a sorites series of a thousand lies. The first lie, lie1, is definitely bad – one lies to a friend about whether an electric wire is live in order to laugh at the funny way in which he gets fried. The last lie, lie1000, is definitely at least morally neutral and perhaps even good – one lies to a friend about her outfit in order to make her feel good. In between these lies, there are 998 lies in which the circumstances of the lies change ever so slightly, so that the difference in badness between any two consecutive lies is imperceptible in appearance.
Assume also that there is moral knowledge. Furthermore, assume that we know that lie1 is bad (and that lie1000 is not bad). Assume also, perhaps more controversially, that knowing that lie1 is bad requires that one is ‘reliably right’ about the badness of lies. This reliability, furthermore, requires that in all relevantly similar lying-cases that could easily arise and which ‘one could easily fail to discriminate from the given case, it is true that’ these lies are also bad. Thus, reliable knowers in a given domain have an outright belief that p in that domain only when it is true that p.
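Schematically, and in my own notation (which is not Williamson’s exact formulation: K is a knowledge operator, B(n) abbreviates ‘lien is bad’, and n ∼ n+1 says that consecutive lies are indiscriminable in appearance), this reliability requirement amounts to a margin-for-error principle:

```latex
% Margin for error (my labelling): if I know that lie_n is bad, and
% lie_{n+1} is indiscriminable in appearance from lie_n, then
% lie_{n+1} must also be bad -- otherwise my belief would not be reliable.
(\mathrm{ME}) \quad \forall n:\ \big( K\,B(n) \;\wedge\; n \sim n{+}1 \big) \;\rightarrow\; B(n{+}1)
```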
Let realism be ‘the claim that the alethic status of propositions is determined independently of the intentional attitudes taken by actual or ideal persons towards such propositions or the states they represent’ (Shafer-Landau, ‘Vagueness, Borderline Cases, and Moral Realism’). Moral realism is then realism about morals. It seems that if there are moral truths which no one could even in principle know, the alethic status of moral propositions would have to be independent of our intentional attitudes towards those propositions. Thus, unknowable moral truths would be sufficient (but not necessary) for moral realism, whereas it would be necessary (but not sufficient) for moral anti-realism that moral truths be evidentially constrained.
Now, given the earlier assumptions, I know that lie1 is bad. Because the difference in badness between lie1 and lie2 is imperceptible in appearance, I am disposed to believe that lie2 is also bad. These two facts, together with the fact that knowledge requires reliability (believing that p only when it is true that p), mean that it must be true that lie2 is bad. Otherwise, I would not be reliable. For a reductio, let us assume that moral anti-realism is true: there are no unknowable moral truths. This, together with the fact that it is true that lie2 is bad, implies that I must be in a position to know that lie2 is bad. Assuming that I am considering whether lie2 is bad, I will then know that lie2 is bad.
We can then repeat the argument. I now know that lie2 is bad. Given that it and lie3 do not differ in appearance, I will also believe that lie3 is bad. Given that knowledge of the badness of lie2 requires reliability, it must be true that lie3 is bad. And, again, given anti-realism, I will know that it is. Repeating this argument eventually gets us to my knowing that lie1000 is bad. And thus we end with a contradiction – we knew from the start that it isn’t bad. We have reached a sorites paradox.
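The whole derivation can be summarised schematically (the labels and notation are my own: K is a knowledge operator, B(n) abbreviates ‘lien is bad’, and n ∼ n+1 says that consecutive lies are indiscriminable in appearance):

```latex
\begin{align*}
&\text{(MK)}\quad K\,B(1), \qquad \neg B(1000)
    && \text{moral knowledge, plus the stipulated endpoints}\\
&\text{(ME)}\quad \forall n:\ \big(K\,B(n) \wedge n \sim n{+}1\big) \rightarrow B(n{+}1)
    && \text{margin for error / reliability}\\
&\text{(AR)}\quad B(n) \rightarrow K\,B(n)
    && \text{anti-realism, given that the question is considered}\\[4pt]
&K\,B(1) \;\xrightarrow{\text{(ME)}}\; B(2) \;\xrightarrow{\text{(AR)}}\; K\,B(2)
  \;\xrightarrow{\text{(ME)}}\; B(3) \;\rightarrow\; \cdots \;\rightarrow\; K\,B(1000)
\end{align*}
```

The conclusion K B(1000) contradicts ¬B(1000), so at least one of the three premises must go.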
Williamson’s own way of stopping this argument is to deny anti-realism. If realism is true, then along the sequence there can be lies of which we cannot know whether they are bad or not. Thus, I can, for instance, still know that lie345 is bad. I will still be disposed to believe that lie346 is bad, and, by reliability, lie346 must indeed be bad. But, given realism, I cannot know that it is. Once we reach a lie that is perceptibly different from lie345 – say, lie347 – it no longer has to be the case that the lie is bad, even if I am disposed to have some degree of belief that it is. And given that I have only some degree of belief that lie347 is bad, even if that lie is not bad, this does not take away my reliability as a knower of the fact that lie345 is bad.
Now, I’ve been thinking of the possible reactions we could have to Williamson’s argument. Here are the options I have thought about:
1. The argument never works because there are global problems with its premises (and thus it fails as an argument for moral realism). One could argue for instance that knowing the truth of some proposition does not require reliability in other cases (see, for instance, Sosa on animal knowledge).
2. The argument does not work here because there are local problems with its premises in the moral case. One could deny that there is moral knowledge and thus that we know that the lie1 is bad. One could also deny that one can construct sorites sequences in the moral case. This seems difficult. As Shafer-Landau argues, moral properties ‘are multidimensional, i.e., depend on the satisfaction of a number of distinct constitutive criteria for their instantiation’. These constitutive criteria are based on the non-moral properties of the actions. Given that we can construct sorites sequences for these properties, we should be able to construct sorites sequences also for the moral properties.
3. Anti-realism can accommodate unknowable moral truths. Thus, the argument cannot establish moral realism even if it does establish that there are unknowable moral truths. So, assume for example that whether an action is bad depends on whether a fully rational agent who knew all the relevant non-moral truths and had a maximally coherent set of motivations would advise against the action. On this view, whether an action has a moral property depends on the propositional attitudes of an ideal person. This implies moral anti-realism. But, given that we cannot know everything that a person idealised to this degree would advise, we could not know all moral truths even if anti-realism were true.
4. It seems that we can accept that knowledge requires reliability in other cases, but only in other ‘good’ cases – as long as the beliefs in the remaining resembling cases are not faultily mistaken. So, assume some sort of response-dependence view about badness: x is bad iff all normal persons in normal circumstances judge it to be bad. Assume that all normal persons (add your favourite description of them) judge lie1 to be bad. Therefore, it is bad. At some point in the series, there is a first lie on which the opinions of the normal judges diverge. This lie is the first one that is not definitely bad (even if it is not definitely not bad either). Now, as a normal judge, I can be disposed to believe that this lie is bad. But, given that it is not true that this lie is not bad, I can do so faultlessly. Thus, my status as a reliable knower of the badness of the previous lie is not in question.
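A rough sketch of this escape route, again in my own notation (B(n) for ‘lien is bad’; D marks definite truth on the response-dependence view):

```latex
\begin{align*}
&\text{(RD)}\quad D\,B(n) \;\leftrightarrow\; \text{all normal judges, in normal circumstances, judge lie}_n\ \text{bad}\\
&\text{Let } k \text{ be the first lie on which the normal judges diverge. Then:}\\
&\qquad \neg D\,B(k) \;\wedge\; \neg D\,\neg B(k) \qquad \text{(lie}_k\ \text{is a borderline case)}\\
&\text{Believing } B(k) \text{ is then not \emph{faultily} mistaken, since } \neg B(k) \text{ is not definitely true,}\\
&\text{so the belief does not undermine my reliability about } B(k{-}1).
\end{align*}
```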
5. We should take it seriously that there is a case for moral realism on the basis of Williamson’s argument.
Any thoughts about where we should go?