Last time, I claimed that on a forward-looking account of the reactive attitudes, sympathy can play a role in mollifying or attenuating blame (as a reactive attitude). This goes against two lines of thinking: (1) the idea that sympathy shouldn’t mollify our reactive attitudes, even if those attitudes have some kind of affective content (resentment, indignation), and (2) the idea that the reactive attitudes don’t have any affective content at all. The second claim is stronger, and it is a form of cognitivism. Cognitivists hold that responsibility attributions are just judgments, devoid of affective content (see Zimmerman 1998, Haji 1998). I prefer to see the reactive attitudes as cognitive-affective dispositions, in part because I think that holding someone responsible is different from judging someone responsible: we can judge a painting to be bad without taking a blaming attitude toward it. When we blame someone, we do more than just judge that person to have negative characteristics.
There is a precedent in philosophy for thinking that emotions have no place not only in moral attribution, but in moral reasoning, period. I think there is a very natural cognitivist reading of all three major moral frameworks: Kantianism, utilitarianism, and virtue ethics.
Kant famously said that only actions that stem from duty and go against inclination are ‘truly morally worthy.’ These aren’t the only worthy actions, but they’re the best kind. This suggests a cognitivist interpretation of Kantian moral reasoning. But not every Kantian is a cognitivist. In fact, Joshua Greene and colleagues (2001) found that subjects who make ‘Kantian’ decisions in trolley problems (letting five people die) exhibit more emotional activation than subjects who make ‘utilitarian’ decisions (killing one person). But this describes what Kantian reasoners do, not necessarily what Kant thought they ought to do. Kant’s view could very well be interpreted to imply that when we make universalizing (Kantian) judgments, we do so by an application of ‘pure reason’ devoid of affective content.
Greene takes his experiments to show that Kantianism is wrong: emotions distort moral reasoning, so the utilitarian judgment must be the correct one. But this conclusion assumes without argument that cognitivism is right! First we have to establish that emotions are distorting, as opposed to truth-tracking, and Greene nowhere does this.
There is likewise a very natural cognitivist interpretation of utilitarianism, on which good utilitarians are rational utility-maximizers who make completely impartial and unemotional judgments. Classic utilitarianism forbids you from favouring your loved ones, which suggests that good utilitarians ignore or suppress their emotions. There are more recent versions of ‘pluralistic utilitarianism’ that eschew this commitment to impartiality, but the traditional ‘monistic’ approach seems to support Greene’s view that unemotional moral judgments are better: emotions just get in the way of utilitarian reasoning.
Aristotle could be construed as a cognitivist inasmuch as he believed that knowledge is required for virtue (as well as responsibility). Specifically, he thought that you could be virtuous only if you knew that your action was virtuous. Julia Driver argues against this cognitivist interpretation in ‘Uneasy Virtue’ (2001). She claims that modesty is a virtue that not only permits ignorance, but requires it: a person can’t be modest while knowing his real worth. That would just be a false show of modesty. The important thing, for my purposes, is that this is a re-interpretation of virtue ethics, and the original view was a cognitivist one. (Arpaly similarly offers a revisionary non-cognitivist account of praiseworthiness.)
So cognitivism has a long history in philosophy. Maybe that’s why we’re reluctant to think that emotions can influence moral reasoning, including the reactive attitudes, in a non-distorting way.
We shouldn’t pretend that emotions never distort moral reasoning – anger can make us dogmatic, for instance – but are they necessarily distorting?
Some theorists think that emotional content necessarily figures in good moral reasoning. Shaun Nichols (2004; with Mallon, 2010), for instance, holds that the problem with psychopaths is that they are incapable of drawing the moral/conventional distinction: they see core moral violations (like murder) as equivalent in seriousness to conventional violations (like talking with your mouth full), and this is because they lack emotional sensitivity. They know what the moral rules are but don’t care about them. By contrast, ordinary people tend to make immediate moral judgments on the basis of affect or intuition alone, in a phenomenon called ‘moral dumbfounding’ (Haidt 2001). They’re the opposite of psychopaths. If this is right, then moral reasoning not only does have emotional content, but must have emotional content. This is what makes it moral reasoning (as opposed to conventional reasoning). It reflects the appropriate degree of ‘seriousness.’
If something like this picture is correct, then maybe instead of trying to expunge emotional content from our moral judgments (which would probably be impossible, and might just bring us closer to psychopaths), we should try to refine our emotional reactions. That is, we should try to hone them to make them more fitting for the context. Maybe this would require higher-order reflection or finding the right situations – I’m not sure.
I’m actually going to take this up in a new post, because it digresses somewhat from the current discussion, though it’s closely related. It ties this discussion to philosophical method: how we should do philosophy – in a ‘cognitivist’ way or a ‘non-cognitivist’ way.