Responsibility, Epistemic Confidence, and Trust


In my last post, I argued that severe deficits of epistemic confidence can undermine responsible agency by undermining a person’s ability to form resolutions and have a deep self. In this post, I want to discuss a related notion: trust. In writing about epistemic confidence, Miranda Fricker (2007) says that people who conspicuously lack epistemic confidence are perceived as less competent and less trustworthy. Being seen as less trustworthy undermines a person’s epistemic confidence, which in turn undermines the person’s agency or competency. Trust, epistemic confidence, and agency are thus related in a positive feedback loop. This is illustrated by the well-known expectancy-effect experiment, in which certain students were randomly designated as academically gifted, and their teachers’ trust in those students’ academic competency actually improved that competency (as measured by test scores) over the course of the year (Rosenthal & Jacobson 1996, cited by Fricker 2007: 56).

In this post, I want to look more closely at trust and its relation to responsible agency.

Victoria McGeer also writes about trust. She argues that ‘substantial trust’—trust that goes beyond the evidence and abjures strategic judgment—enhances the trustee’s responsible agency (2008).[1] Substantial trust ‘goes beyond the evidence’ in the sense that it embodies a belief in the trustee’s moral worth that isn’t supported by the balance of evidence; and it ‘abjures strategic judgment’ in that it entails a refusal to evaluate the trustee’s worth on the basis of evidence. That is, trustors don’t meticulously scrutinize the evidence regarding their friend’s moral qualities; they take a leap of faith in favour of the friend’s potential to be good. To illustrate this epistemic state: if my friend is accused of bribery, I exhibit substantial trust if I’m biased in favour of her innocence, in spite of any evidence to the contrary. When we substantially trust someone, we refuse to judge her on evidential grounds.

A central element of substantial trust on McGeer’s view is hope: in trusting a friend, we hope the person will live up to our optimistic expectations of her moral worth, but we don’t know if she will. Yet substantial trust can’t (or shouldn’t) be delusory: if the evidence confirms our friend’s guilt beyond doubt, we shouldn’t trust in the person’s innocence; but it would still be reasonable in this case to trust in our friend’s capacity to improve. In this way, substantial trust is relatively resistant to disappointment: even if a friend fails several times, we can continue to trust in the person’s basic capacity to live up to our hopes. We trust that the person can gain new capacities or build on existing capacities to embody our ideal. Only in the face of repeated disappointment does substantial trust become irrational. ‘Irrational’ trust on McGeer’s view is pointless; it doesn’t reliably contribute to the trustee’s agency.

Substantial trust enhances the trustee’s agency because it “has a galvanizing effect on how trustees see themselves, as trustors avowedly do, in the fullest of their potential” (McGeer 2008: 252). That is, our trust inspires confidence in the trustee, who begins to believe in herself.

This picture of trust as agency-enhancing interests me for three reasons, which I’ll elaborate briefly here.

  1. Epistemic confidence: the mediating variable between trust and responsible agency

McGeer’s account helps to explain how epistemic confidence is related to responsible agency: substantial trust (when assimilated to Fricker’s moral epistemology) inspires epistemic confidence, which (in the right degree) facilitates responsible agency. The right degree, as per my last post, is midway between epistemic insecurity and epistemic arrogance; it’s neither too much nor too little self-regard. Epistemic confidence, then, is the mediating variable between trust and responsible agency. McGeer doesn’t explicitly mention ‘epistemic confidence,’ but she’s interested in elucidating the psychological mechanism whereby trust enhances responsibility. She rejects Pettit’s theory (1995) that trust incites a desire for approval, as this isn’t a ‘morally decent’ motive befitting the trust relationship (2008: 252). Instead, McGeer proposes that trust ‘galvanizes’ the trustee to see herself in a more positive light—through the trustor’s eyes. The resultant state—let’s call it positive self-regard—motivates the trustee to aspire to a higher standard of conduct.

Positive self-regard can be seen as a weak form of epistemic confidence—an aspirational kind. Whereas epistemic confidence is a positive belief in one’s merit or abilities, self-regard (in McGeer’s sense) appears to be faith in one’s (as yet unproven) merits and abilities. But positive self-regard and epistemic confidence are of a kind: one is just firmer than the other. So, we can see positive self-regard as a weak form of epistemic confidence, and both states as intermediary between two epistemic defects: epistemic insecurity and epistemic arrogance. These epistemic virtues—positive self-regard and epistemic confidence—are positively correlated with responsible agency, in the following sense: they enhance the trustee’s confidence in herself, and thus her ability to have firm beliefs and values (or convictions) about herself, and to act on those states. Having convictions prevents people from being ‘wantons,’ akratics, and irresolute people—paradigms of irresponsibility or weak responsibility. Responsibility is enhanced by belief in oneself, and this belief tends to confer self-control, willpower, and resilience—competencies implicated in or constitutive of fully responsible agency.

These related virtues—positive self-regard and epistemic confidence—might serve slightly different purposes; specifically, positive self-regard might be particularly adaptive in adverse circumstances where a positive outcome is unlikely (but possible), whereas epistemic confidence might be more fitting when success is reasonably probable; but both states facilitate responsibility. Trust is fitting, therefore, when it’s likely to enhance responsibility by either of these means. In other words, we’re rational to trust someone when our trusting attitude reliably confers agency-conducive epistemic virtues. This allows us to say (consistent with McGeer’s view) that trust is a ‘rational’ attitude even if it goes against the evidence, insofar as it tends to foster agency in the trustee. Trusting in someone ‘irrationally’ would mean trusting in someone who can’t reasonably be expected to live up to our ideal; in that case, we’re merely wishing (not trusting) that the person could be better. Trust is also irrational if the trustee is overconfident, since in that case, our trust is either wasted or positively harmful: it’s likely to increase the person’s epistemic narcissism.

On this (basically functionalist) account of trust, epistemic confidence is counterfactually dependent on trust in the following sense: it wouldn’t exist without some initial investment of trust, but it can become increasingly self-sustaining and self-perpetuating over time. That is, people who never receive trust probably (as a matter of statistical probability) won’t develop epistemic confidence, but people who do receive trust may become increasingly self-trusting and self-sufficient. This claim is based in part on facts about ordinary human psychology: trust tends to confer epistemic confidence in psychologically normal humans, which enhances responsibility as a measure of resoluteness, willpower, and resilience. This psychological picture is suggested (though not explicitly articulated) by McGeer and Fricker, who cite developmental and child education studies showing that trust from an adult inspires confidence and competency in children. (This is sometimes called the ‘Pygmalion effect.’) Fricker cites the famous teacher expectation study (Rosenthal & Jacobson 1996), and McGeer cites research in developmental psychology showing that children who receive support from parents—‘parental scaffolding,’ as she calls it (2008: 249)—develop stronger powers of agency than deprived and neglected children. This research suggests that agency typically, in ordinary humans, depends on positive self-regard, which depends on a non-trivial investment of trust, especially during a person’s formative years. Subsequent trusting relationships, however, can compensate for deficits in childhood, as other research indicates—for example, research on therapy showing how positive therapeutic relationships can remediate symptoms of childhood trauma (Pearlman & Saakvitne 1995). This is how I suggest we conceive of the trust–epistemic confidence relationship: epistemic confidence is counterfactually dependent on a non-trivial investment of trust in psychologically normal people, but can eventually become relatively (though not completely) self-sustaining; epistemic virtues inculcated by trust typically confer strong(er) agency.

This discussion suggests a particular taxonomy of epistemic states related to trust and agency. Specifically, I’ve said that trust catalyses three closely related epistemic virtues: positive self-regard, epistemic confidence, and epistemic courage. These states are increasingly robust epistemic virtues, which support our ability to form resolutions, exercise willpower, and act resiliently. At either end of this spectrum is an epistemic defect: on one side, epistemic insecurity (a paucity of epistemic confidence), and on the other side, epistemic arrogance (a superabundance of epistemic confidence). These defects undermine agency for different reasons: epistemic insecurity undermines our ability to form and act on convictions, and epistemic arrogance undermines our ability to adequately consider evidence for and against our beliefs, inciting us to favour our prior assumptions come what may. (That is, it spurs self-serving bias and confirmation bias.) These vices thus undermine our ability to have a deep self and to exercise moderate control over our deep self, respectively.

This is one possible epistemic framework for responsible agency—the one that I’ve settled on. I think that more work can be done here, viz., at the intersection of responsibility and epistemology (especially social/feminist epistemology, which is relational in nature). We can call this intersection ‘the epistemology of moral responsibility.’ This is a promising area for future research, I think, and it may be of interest to neuroscientifically-inclined philosophers, inasmuch as these epistemic states are amenable to neuroscientific description.

  2. Responsibility as ‘external’ or ‘distributed’

I’m also interested in McGeer’s account because (I think) it poses a challenge to classic theories of responsible agency that are relatively ‘atomistic’ (Vargas 2013) or ‘internalist’ (Hurley 2011). Classic accounts include Frankfurt’s (1971), on which responsibility is a matter of being able to form higher-order volitions consistent with one’s lower-order desires, and Fischer’s (2006, 2011), on which responsibility is a matter of being moderately responsive to reasons. These are different types of theory (one is character-based and the other is control-based, as typically construed), but they both emphasize the internal properties of agents to a greater extent than McGeer’s theory of trust, and so they can be regarded as comparatively ‘internalistic.’ (I’ve adopted aspects of these theories here—the idea that responsible agency is a function of deep-selfhood and reasons-responsiveness—but I’m going to suggest that these capacities are more ‘extended’ than classic accounts imply.)

Internalism should be seen as a matter of degree: most theories of responsibility treat some background factors as responsibility-relevant—for example, neuroscientific intervention (Mele 1995). But classic theorists usually think that exogenous factors are only relevant insofar as they intervene on the ‘actual sequence’ of the agent’s deliberation. For example, Fischer holds that clandestine brainwashing impairs responsibility because it operates on the agent’s actual motivational profile, dramatically altering it; but a ‘counterfactual device’ that would have intervened had the agent deliberated differently is ‘bracketed’ as irrelevant (for more on this, see Levy 2008). Frankfurt, too, sees these counterfactual conditions as irrelevant.

McGeer’s theory is comparatively ‘externalistic’ in that it (implicitly, at least) construes counterfactual interveners as relevant to responsibility (qua trust-fittingness). We can’t, on her view, ‘bracket’ these counterfactual conditions when considering whether someone is trustworthy. This is because when we substantially trust someone, we (implicitly) judge the person by what she could be in a nearby possible world—one in which she’s better than she is. This is implied by the hopeful optimism intrinsic to substantial trust—we don’t see the trustee as she is (at least, in paradigm cases), but rather as she would be if she succeeded in translating our trust into ideal self-regard. Moreover, when someone fails to live up to our optimistic expectations, we don’t immediately withdraw our trust, since substantial trust is inherently resilient. Trust, then, doesn’t always track a person’s real-world capacity for control or real-world quality of will; it sometimes tracks the person’s potential to improve, not based on evidence but on hopeful optimism. Trust, on this view, is a form of responsibility (a reactive attitude) that isn’t constrained by considerations about a person’s real-world or actual-sequence capacities at the time of action—when the trustee did something good or bad. It considers the person as she is in a nearby possible world or as she may become in the future.

This sets McGeer’s account apart from classic ‘actual-world’ or ‘actual sequence’ theories, because substantial trust treats counterfactual possibilities—in which the agent has a different kind of self-regard—as morally relevant. The trust relationship itself can be seen as a ‘counterfactual enabler’ in Levy’s terms (2008), in that it enables the trustee to gain a capacity, if the person succeeds in internalizing the proffered trust. But these transformative effects aren’t countenanced as legitimate considerations on classic views of responsibility. Also importantly, the trust relationship is distributed between two people, not intrinsic to the trustee; if it’s withdrawn at a critical stage of development, it undermines the cultivation of positive self-regard and agency. This is another ‘externalist’ aspect to trust: it implicates two or more people’s agencies. So trust is ‘externalistic’ in at least these two aspects: it depends upon counterfactual scenarios and it implicates two agents.

  3. Responsibility as care-based (non-retributive) and forward-looking

Substantial trust also challenges two other familiar approaches to responsibility: the retributive view and the backward-looking view.

Retributivism is, in very simple terms, the view that those who commit a wrongful action deserve punitive attitudes (blame, disapprobation, resentment) and those who perform an excellent action deserve rewards (praise, approbation). (I won’t consider more complex versions of retributivism: this one will be my only target). This is a very natural way of thinking about the reactive attitudes, and it seems to be Strawson’s understanding. He implies that those who fail to conform to reasonable social expectations deserve punitive attitudes, unless there’s an excusing or exempting condition (e.g., hypnosis, severe psychosis).

Substantial trust challenges this neat binary by holding that a person who falls short of our aspirational norms still ‘deserves’ trust, if trust is likely to instil positive self-regard across a reasonable time scale. That is, continuance of trust is fitting when someone makes a “one-off” mistake, as substantial trust is an “on-going activity” that’s resilient in the face of moderate setbacks (McGeer 2008: 247). Hence, we can’t simply say that someone who surpasses our expectations thereby warrants praise and someone who breaches our trust thereby warrants blame, as per the standard desert-based picture. This doesn’t capture the essence of trust. Rather, we withdraw or modify our trusting disposition only when someone repeatedly or catastrophically disappoints us, rendering trust pointless and irrational. Since substantial trust is aspirational at its core, substandard conduct on the trustee’s part doesn’t compel us to automatically withdraw our trust and assume a retributive stance: we’re licensed to suspend blame in the hope that the person will improve.

This is related to the fact that substantial trust is a forward-looking attitude. Most theories of responsibility are backward-looking, meaning that they attribute responsibility (praise/blame) on the basis of an agent’s capacities at the time of action, i.e., some time in the past. Frankfurt’s and Fischer’s views are like this: if someone had (a) a certain motivational structure, or (b) reasons-responsiveness when performing a certain action A, the person is thereby responsible for A. Trust, however, isn’t deployed solely on the basis of someone’s past motivational psychology and conduct; it’s also deployed on the basis of the trustee’s ongoing and fluid potential: we can trust someone who doesn’t (presently) have the capacity to improve. Trust, that is, outstrips the trustee’s current capacities at any given time.

As McGeer points out, we don’t (paradigmatically) invest trust in someone on a calculated judgment that the person will ‘earn’ our trust (as Pettit thinks), as this would be perverse and ‘manipulative’ (2008: 252). Rather, we trust someone as a way of empowering the person. Another way of putting this, I think, is to say that we trust someone for that person’s own sake. This interpretation of trust has affinities with Claudia Card’s (1996) care-based approach to responsibility, on which responsibility serves the function of expressing care to the target agent. It also resembles Vargas’ agency-cultivation model (2013), which reflects a concern for the target’s wellbeing (at least, it’s amenable to this reading). This care-based orientation is very different from the retributive rationale, and it’s also not backward-looking: responsibility attributions are meant to enhance or empower the recipient, not to punish her for past misdeeds. McGeer’s account of trust thus fits better with consequentialist theories than with retributive ones, and it seems to embody a care ethos—trust is an essentially caring attitude. It seems to be essential to trust that it be care-based—or at least forward-looking; any other interpretation is simply conceptually mistaken.

I think that this is the correct way to think about responsibility in general (i.e., as consequentialist); but even if this isn’t the whole story (arguably there are many incommensurable but correct theories of responsibility—see Doris 2015 on ‘pluralism’), this seems to be a necessary way of seeing at least one facet of responsibility: trust. This means (at a minimum) that not all of our responsibility-constitutive reactive attitudes are retributive.



[1] McGeer says that substantial trust fosters ‘more responsible and responsive trustworthy behaviour’ (2008). I’m just going to say that it fosters ‘responsible agency,’ and I’ll make a case for this more general claim in this post. It’s not hard to see how trust can enhance responsible agency: if we trust in our potential to achieve a desired outcome, we’re better able to achieve that outcome (under success-conducive circumstances, which I’ll leave vague).

Responsibility and epistemic confidence


Today I’m going to argue that responsibility can be undermined by lack of epistemic confidence.

Many responsibility theorists are interested in specifying the capacities implicated in responsible agency, and there are many fairly coarse-grained proposals, such as reasons-responsiveness (Fischer 2006, 2012), sanity (Wolf 1986), and consciousness (Levy 2014). These theorists also write about deficits that may (or may not) impair responsibility, such as addictions, compulsions, and psychological disorders. There’s no consensus on whether any of these deficits suffices to impair responsibility full stop, thereby underwriting an ‘excuse’ for misconduct. But those who discount these deficits often assume an all-or-nothing notion of responsibility.

I like to think of responsibility as a spectrum concept rather than a threshold condition. If responsibility lies on a spectrum, deficits like addictions and psychological disorders may impair responsibility to one degree or another, depending on the particularities of the case. This seems to me to be a useful and illuminating way of thinking about responsibility. I’m also open to thinking that these ‘deficits’ can enhance responsible agency in certain cases, following Nietzsche’s adage that ‘what doesn’t kill you makes you stronger’; but I think that in many cases these conditions are responsibility-undermining, if for no other reason than that they can make it difficult to navigate a society that doesn’t provide adequate accommodations.

In any case, today I’m going to write about a pretty fine-grained capacity that I think is required for the highest degree of responsibility, and the absence of which impairs responsibility to some extent—perhaps a very great extent, depending on the case. This capacity is epistemic confidence. I’m not going to select a specific notion of responsibility; instead I’ll simply follow Frankfurt (1987) and Strawson (1963) in supposing that responsibility is closely tied to personhood concepts (particularly reasoning competency and dignity), and with this in mind, I’ll discuss how severe deficits of epistemic confidence can impair our capacity to be persons in the fullest sense.

What is epistemic confidence? Miranda Fricker (2007) describes it as the capacity to be a knower: someone who has knowledge. When we know something (psychologically speaking), we not only have a belief, but also confidence in that belief. Descartes famously described knowing as conditional upon absolute certainty or indubitability: confidence is the psychological marker of genuine knowledge, the metacognitive state that distinguishes knowledge from mere belief. While indubitability might seem like an exorbitant standard, it’s reasonable to think that some sense of certainty or resolution is definitive of the experience of knowing. Bernard Williams (also speaking psychologically) describes the process whereby beliefs become knowledge as a “steadying [of] the mind,” in which we solidify our beliefs, not only by reflecting on and endorsing them (as Descartes thought), but by conferring with other people and receiving corroboration or validation (2002: 192, cited in Fricker 2007: 52).

The acquisition of knowledge is thus an interpersonal process. When others validate our beliefs, it helps raise them to the status of knowledge, and when they dismiss our beliefs (and this dismissal resonates with us), it undermines our confidence in them. If people dismiss too many of our beliefs, this can give rise to a global confidence deficit, undercutting our ability to make resolutions and construct an identity for ourselves, in Frankfurt’s sense of ‘identity.’ Frankfurt saw personal identity as the set of a person’s decisive endorsements—the mental states that we identify with and can stand behind.

So lack of epistemic confidence threatens not only the strength of our convictions, but the strength of our self. It makes us susceptible to fickleness, indecision, and cognitive dissonance—unpleasant and disorienting states. It might also lead to what Michael Stocker calls ‘moral schizophrenia’ (1976)—alienation from our moral beliefs—since, if we don’t have convictions, we’re easily swayed from our moral beliefs and attitudes. While there’s something to be said for ‘unprincipled virtues’ (Arpaly 2002), if these virtues lack robustness, they’re too flimsy to support a moral identity. So robustness of sentiments is needed as well.

This shows how lack of epistemic confidence—‘epistemic insecurity,’ let’s call it—can undermine personhood. Fricker identifies two harms of ‘epistemic injustice’ (i.e., the unfair discrediting of a speaker’s testimony). The first is practical: the target can lose her social standing or civil rights or job. The second is epistemic: the person can lose knowledge and epistemic confidence.

The practical harm is largely a matter of public perception: epistemically insecure people are seen as incompetent and insincere, and so they are not treated with the dignity and respect accorded to others (Fricker 2015: 45). The second harm is intrinsic and existential: the agent loses knowledge, lapsing back into mere belief and opinion, in the reverse of the ‘steadying of the mind’ that follows from external validation. The agent’s mind becomes increasingly unstable as she loses epistemic confidence in the face of people’s demeaning treatment. She can lose her very ability to reason—to make a decision on the basis of relevant considerations—inasmuch as it’s difficult to weigh competing considerations when no particular consideration has any special resonance or ‘rational authority’ by the agent’s own lights. Any ‘decision’ arrived at will be a tentative selection, as opposed to a well-reasoned choice. The agent can never achieve a reflective equilibrium in her reasoning.

The connection between reasoning competency and personhood has deep roots in Western philosophy: Kant tied personhood to rationality; Aristotle tied it to practical wisdom; and Rawls rooted it in the ability to make rational decisions about social justice. People who lack epistemic confidence, and thus reasoning ability, don’t live up to these popular standards of personhood, or related notions of basic human dignity.

This isn’t to say that people lacking in epistemic confidence aren’t dignified—every human being has a basic, inviolable moral dignity, to be sure. Fricker’s point is that when we impugn people’s credibility without warrant (subjecting them to ‘epistemic injustice’), we harm them in a particularly egregious and unforgivable way: we deny them the ability to live up to their potential as persons—rational agents, competent decision-makers, free and equal citizens. This is the central harm of epistemic injustice.

Fricker is particularly concerned with epistemic injustice as a result of identity prejudice, such as gender, racial, and class bias; but epistemic injustice can also target a person’s more peripheral commitments—academic commitments, for example. When we dismiss someone’s area of specialization, this can undermine the person’s epistemic confidence and capacity to be a knower relative to that body of knowledge. Inasmuch as people identify with their epistemic labour and their epistemological community (other researchers in the field), being dismissed on the basis of one’s disciplinary commitments can undermine one’s knowledge in that field and the validation one derives from belonging to a particular research community. This isn’t as damaging as paradigmatic cases of identity prejudice, but it can still be very injurious, especially if the person has invested deeply in her research.

Epistemic injustice can decrease a person’s knowledge, but it can also prevent someone from gaining knowledge in the first place. Fricker cites a famous study in which researchers randomly selected 20% of a group of elementary school students to be described to their teacher as ‘academically gifted’; at the end of the year, the ‘gifted’ children had made significantly greater gains in IQ than their peers (Rosenthal & Jacobson 1996, cited in Fricker 2007: 56). The reason, on Fricker’s hypothesis, is that the randomly selected children were seen as more competent by their teachers, and thus received more nurturance and support. This made their success a self-fulfilling prophecy. The other students, meanwhile, suffered from a relative lack of concerted cultivation, and thus failed to live up to their epistemic potential. These students may also have suffered from a kind of stereotype threat, the tendency to conform to stereotypes or low expectations. In this way, epistemic injustice can prevent people from gaining knowledge that they otherwise could have had.

I hope that this explains the relationship between epistemic confidence and responsibility in a way that people can understand. I think this is a relationship that philosophers can appreciate more deeply than most people, inasmuch as we’re both more committed to our role as epistemic agents (i.e., we deeply and explicitly value knowledge), and we’re also subject to an inordinate amount of criticism, in the form of constant referee reports, performance evaluations, and casual judgments, not all of which are confidence-boosting. As some people have publicly attested, not every referee is suitably charitable; some are downright disparaging and hurtful, and this can be a blow to one’s epistemic confidence relative to one’s specialization. This kind of domain-specific confidence deficit can easily ‘creep’ into other domains, generating a broader loss of confidence. That is, if someone lacks confidence in one area central to her identity, she can more easily be persuaded that she’s incompetent in other areas. This is simply because insecurity is susceptible to ‘epistemic creep’: if you think you’re terrible at your job, you’re more easily convinced that you’re worthless in your personal life, your hobbies, and finally, as a person.

I suspect that early career researchers are more vulnerable to epistemic insecurity than more established researchers, but it’s notable that even the most distinguished philosophers are not immune. Fricker relates a (somewhat heartbreaking) memoir by Beauvoir about how her relationship with Sartre so damaged her confidence in her philosophical ability that she decided to retire from the profession. As Beauvoir writes,

“Day after day, and all day long I measured myself against Sartre, and in our discussions I was simply not in his class. One morning in the Luxembourg Gardens, near the Medici fountain, I outlined for him the pluralist morality which I had fashioned to justify the people I liked but did not wish to resemble: he ripped it to shreds. I was attached to it, because it allowed me to take my heart as the arbiter of good and evil; I struggled with him for three hours. In the end I had to admit I was beaten; besides, I had realized, in the course of our discussion, that many of my opinions were based only on prejudice, bad faith or thoughtlessness, that my reasoning was shaky and my ideas confused. ‘I’m no longer sure what I think, or even if I think at all,’ I noted, completely thrown” (Beauvoir 1959: 344, cited in Fricker 2007: 51).

This self-deprecating comment, according to Fricker, marked “the turning point in [Beauvoir’s] intellectual development at which she decides that philosophy is not really for her, and that she is destined instead for the life of a writer” (2007: 51). This decision was a tragedy for philosophy, since Beauvoir was one of the field’s most important contributors to existential thought and phenomenology, in no small part because she had privileged insight (by virtue of her gender) into how a person’s social position—in particular, her gender identity—can affect her existential possibilities. This insight prompted Beauvoir (as I recall) to depart from Sartre in drawing a distinction between ‘ontological freedom,’ which is utterly unconstrained, and ‘ethical freedom,’ which is conditioned by social vectors. This is, I think, a much more nuanced account of freedom than Sartre came up with, not to mention more politically potent. So it’s a tragedy (if Fricker is right) that Beauvoir’s relationship with Sartre so crippled her epistemic confidence (partly due to stereotype threat, no doubt, but also due to a lack of adequate support) that she was compelled to quit the profession and denigrate the value of her own work.

Interestingly, Fricker observes that Beauvoir “is rendered unsure whether she thinks at all” (2007: 51). I think that this is an important and overlooked aspect of epistemic insecurity: it doesn’t just undermine your performance, leaving your competency intact (which is what happens when stereotype threat is activated); it undermines your capacity to know certain things, and in extreme cases, to know anything at all. It’s a bit like gaslighting in this regard: it makes you question your grasp of reality. Not knowing anything—not having confidence in your basic beliefs—makes you feel insane, to put it bluntly.

I’ve experienced very grave epistemic insecurity myself, which made me not only incapable of defending my convictions, but (at the time) of having any convictions at all. Like Beauvoir, I didn’t know what I thought. This is a very disorienting and spiritually damaging experience. It’s an experience that resembles insanity in the following sense: though it might not cause insanity in legal terms, i.e., decisional incapacity, it can cause a milder type of decision-making deficit: decisional incompetence, or the inability to come to a decision on the basis of reasons and believe in it, much less defend it. Epistemic insecurity can prevent you from favouring any reasons at all. And this undercuts your potential to be a full-fledged person—someone capable of decision-making.

It’s interesting that one of the more popular writing guidebooks, ‘Writing Your Journal Article in Twelve Weeks’ by Wendy Laura Belcher (2009), is more psychological aid than writing guide. The Introduction immediately states that the main goal of the workbook is to “help you develop the habits of productivity that lead to confidence, the kind of confidence that it takes to send out into the world a journal article that you have written” (2009: xi, emphasis mine). Belcher recognizes that one of the main obstacles—perhaps the main obstacle—to academic success is not incompetency per se, but lack of epistemic confidence. (There’s probably a mutually reinforcing feedback loop between the two, but epistemic insecurity is a crucial factor, as the expectancy-effect experiment showed.)

I can’t count the number of terminal PhD students I’ve met who haven’t submitted a single article to a journal. This is potentially career-undermining if you’re not from a top-tier department. Belcher’s book is designed exactly for this type of person. Her first chapter is “Understanding feelings about writing,” which provides strategies for managing your feelings of epistemic inadequacy—feelings that can lead to crippling writer’s block. Belcher says to write for 15 minutes per day no matter what—a difficult feat for someone with epistemic insecurity; and she addresses common self-defeating thoughts, such as, “I can’t write because my idea sucks” (33), which she defuses by saying that everyone has something worthy to say, and if you write you’ll get better at saying it. The idea that everyone has valuable ideas isn’t just an optimistic platitude; it’s supported by feminist standpoint epistemology, the view that knowledge is situated, and so we can (in principle) learn from each other’s embodied experiences. Denying someone the confidence to articulate her experiences accomplishes nothing, and wastes a potential source of shared cultural knowledge.

On a related note, I recently read an interesting novel about a PhD student struggling in her academic career due to epistemic insecurity, written by a local (Sydney) author named Mia Farlane (2009). (It was somewhat oddly titled ‘Footnotes to Sex.’) The synopsis on the back cover reads, “My name is May Woodlea and I am writing hoping to begin writing my PhD the proposal for my PhD. Very soon. Soon… My name is May Woodlea and I have failed to achieve anything of significance with my life.” I’m sure a lot of PhD students can relate to this sentiment.

This synopsis—which captures the spirit of the book very pithily—interests me for two reasons. First, it illustrates how epistemic insecurity can undermine one’s ability to form a single coherent thought—how it can undermine not only the overt presentation of a thought, but the ability to even think a coherent thought—to engage in philosophical reflection on the most basic level. Epistemic insecurity can, in other words, gum up the reasoning system’s mechanics, resulting in incoherent, or at least very tenuous, outputs.

Throughout the book, the protagonist (May) diligently reads reams of literature on her dissertation topic, and yet never manages to synthesize this information into a single original thought—an idea representative of her self. And the reason is that she doesn’t have a self: she’s a cluster of incomplete thoughts, inklings, and speculations, but nothing that could be called a conviction, much less a thesis. Ironically, the more she mires herself in research, the less capable she is of producing an original thought, because she becomes less and less confident in her ability, and more and more dependent upon her source material. She’s not lazy or uninformed; she just doesn’t believe in herself. She’s the perfect reader for Belcher’s book.

The other interesting thing about the book’s synopsis is that it illustrates how a relatively narrow epistemic insecurity (about part of one’s identity) can cascade into a global lack of confidence: May quickly moves from doubting her ability to complete her PhD on time—a fairly common worry—to doubting her ability to ‘achieve anything of significance with [her] life.’ This resembles Beauvoir’s doubts about her ability to ‘think at all.’ May becomes dysfunctional in her academic role, and increasingly dysfunctional in her personal life: her lack of confidence pervades her non-academic activities. This is the ‘epistemic creep’ that I was talking about—the slide from domain-specific insecurity to global self-doubt and (potentially) inertia. It’s not hard to see how extreme self-doubt can lead to depression.

It’s a reasonable conjecture that people who get tenure in this job market have a good deal of epistemic confidence, which helped them to persevere in the face of the ordinary deluge of criticism and performance evaluations in academia. Many probably also have a related, superlative virtue: “epistemic courage, the virtue of not backing down in one’s convictions too quickly in response to challenge” (Fricker 2007: 49). This is a virtue because it’s basically epistemic confidence plus the courage to resist pressure. Epistemic courage helps us defend correct but unpopular positions.

Epistemic confidence and courage, as interpersonal competencies, are fostered by supportive people—they don’t come from nowhere. Unfortunately, too much support—fanaticism and sycophancy, in particular—can give rise to too much epistemic confidence—that is, epistemic arrogance. Fricker doesn’t discuss epistemic arrogance, but it can be seen as, in a sense, the opposite of testimonial injustice (i.e., the tendency to underrate a speaker’s testimony): it’s the tendency to overrate one’s own epistemic competency.

Like the epistemically insecure person, the epistemically arrogant person can also lose knowledge, but for very different reasons, viz., because she doesn’t entertain appropriate criticisms, which undermines the evidential warrant of her beliefs. Whereas epistemic insecurity is, in ordinary cases, an epistemic deficit precipitated by environmental conditions, particularly other people’s demeaning judgments, epistemic arrogance is best construed as an epistemic vice, since it’s the predictable result of related character defects, such as neglect, lack of vigilance, egoism, and narcissism.

Perhaps certain types of epistemic arrogance can be excused—specifically, pathological cases, in which the agent played no role or very little role. But it’s reasonable to construe typical cases as blameworthy. Epistemic insecurity, by contrast, undermines the agent’s responsibility, compelling us to suspend or modify our normal reactive attitudes. If anything, the epistemically insecure person deserves sympathy and epistemic nurturance, as a means of remedying the problem. (This is, in effect, what Belcher’s book aims to do, though in an indirect way).

In sum: severe deficits in epistemic confidence can undermine responsibility as a feature of personhood—roughly, the capacity to engage in higher-order reasoning and form convictions through this process. People who lack epistemic confidence are deficient in these critical capacities. Epistemic insecurity in one domain can easily ‘creep’ into other domains, undermining general confidence, and in the worst cases, leading to depression. Affected people have responsibility deficits. Epistemic arrogance is a different epistemic flaw—an epistemic vice that reflects poorly on the agent’s character, unless it’s (somehow) the result of a pathological condition.


Romantic love, perceptions of responsibility, and attribution-self-representation biases



I recently read this article by Alain de Botton on romantic love and why you’ll marry the wrong person, and it reminded me of Alain Badiou’s book, ‘In Praise of Love.’ It struck me that romantic love, as described by these philosophers, shares certain phenomenological qualities that conflict with dominant philosophical conceptions of responsibility: namely, romantic love, by its very nature, is ‘chancy,’ unpredictable, and out of our control. It emerges out of a “chance encounter,” and can’t be chosen on the basis of a rational decision procedure (Badiou & Truong 2012: 17). It’s a leap of faith that we make for another person who is, in effect, a stranger (de Botton). And it’s a transformative experience, one that changes us from a single person to an inherent duality (Badiou & Truong 2012: 17). Oddly, while falling in love on this picture has phenomenological features incompatible with ‘responsibility’ as a theoretical construct (loss of control and psychological continuity), it doesn’t challenge our felt sense of being responsible. Indeed, it seems to enhance it. We feel enriched and empowered in love, not undermined and violated. How is this possible? If we were kidnapped and brainwashed, we would feel less responsible as a result; but, while romantic love resembles these manipulation cases (see Mele 1995) in salient respects, it doesn’t disrupt our sense of being responsible in the least.

This got me thinking about the opposite of falling in love – breaking up with someone. Unwelcome break-ups – the kind that seemingly come out of nowhere – can have the same phenomenological features as falling in love. We might have no say in the break-up; it might be precipitated by events beyond our control or by irreconcilable differences that we didn’t anticipate and couldn’t have foreseen. And then, whether we like it or not, we have to revert to being a single entity. Break-ups, then, can be transformative in a distinctively unsettling way, and can impinge on our free will. Why is there this discrepancy between the experience of falling in love and the experience of abruptly breaking up with someone, if both experiences have the same responsibility-undermining features? In the positive case these features are embraced as exhilarating and enhancing, while in the negative case they’re experienced as nefarious, alien impositions, similar to the brainwashing case.

Here’s one hypothesis. We tend to take more responsibility for positive events than negative events. At least, most people do. Since romantic love is subjectively positive, and abrupt break-ups are subjectively negative, we find it easier to ‘feel responsible’ in the former scenario than the latter, even if both events are antagonistic to theoretical conditions of responsibility (control and continuity). This hypothesis is consistent with attribution-self-representation theory, as described by Richard Bentall (2011). On our best evidence, ordinary people tend to attribute positive events to their own agency and negative events to external causes. That is, they exhibit a moderate optimistic attribution bias. In addition, ordinary people have moderately positive self-representation schemas, or beliefs about their worth and abilities. These attribution biases and self-representation schemas hang together in an “attribution-self-representation cycle,” or equilibrium, with each element supporting the others (Bentall 2011: 5294). An ordinary person’s attribution-self-representation system reliably gives rise to self-preserving biases – biases like the perception of being more responsible in a romantic relationship than in the aftermath of a break-up. These positive experiences also boost a person’s self-representation beliefs, reinforcing optimistic attribution bias in a positive feedback loop.

Now, not everyone has moderate self-serving biases. Depressed people have the opposite disposition: they tend to take less responsibility for positive events than negative events. Some people might also have excessive optimistic biases and self-representation schemas. These people are narcissists: they think they’re way better than they really are, and way better than other people. These different attribution styles (normal, depressive, narcissistic) might affect how people experience romantic love. Maybe depressed people are less capable of bonding with others, if they can’t ‘take responsibility’ for positive events like romantic relationships. Maybe they can’t assimilate romantic relationships into their self-conception. Narcissists might not be able to bond with anyone, inasmuch as they’re practical solipsists. They just can’t value another person. These biases come in degrees, of course, but we can see severe depression and severe narcissism as opposite ends of a continuum. The ordinary attribution bias is the ‘healthy’ range: it promotes subjective happiness and social functioning. People on either extreme tend to be less happy and/or less functional. (This is somewhat speculative but I think it’s a reasonable conjecture about romantic bonding based on attribution-self-representation theory.)

Can we draw any conclusions from this discussion about responsibility as a theoretical construct? Or is all this talk about subjective perceptions just descriptive? Another way of putting the question is, are subjective perceptions of responsibility relevant to how we should think about responsibility in objective terms?

I think they are. More specifically, I think that our attribution biases and self-representation schemas can be relevant to responsibility as a theoretical construct, on a certain picture. Here’s one that seems to fit. Let’s suppose that responsibility is an interpersonal practice in which we praise and blame people, à la P. F. Strawson (2003). And let’s suppose further that praise and blame function to enhance people’s moral agency, à la Manuel Vargas (2013). It seems to follow that, when praising and blaming people, we should tailor our reactive attitudes (praise and blame) to fit with the target agent’s attribution style. In other words, we should blame depressed people less than normal, and praise narcissists less than normal, as a way of ‘nudging’ their attribution-self-representation biases toward the normal range. This is because the normal amount of optimistic bias promotes social functioning, and social functioning enhances moral agency. (It’s hard to be a moral agent if you’re suffering from subjective distress and social and occupational dysfunction, and/or if you don’t care that much about other people). So, if we’re concerned with enhancing agency, we should praise and blame people differentially depending on their attribution-self-representation style. (Arguably, we implicitly do this already because we evolved to respond differentially to different attribution biases; but even if this isn’t the case, it’s reasonable to think that we should try to be sensitive to people’s attribution biases, since it’s good for the person and good for society).

Going back to romantic love, this approach implies that we shouldn’t praise or blame people for making moderately self-serving judgments of their role in romantic relationships and break-ups, since this attribution style has survival value, and may enable people to have functional romantic relationships and recover from break-ups, whereas other attribution styles may promote insecure attachment and isolation.

I’m not going to discuss how attribution theory fits with other accounts of responsibility. I suspect that it doesn’t fit quite as nicely, but I won’t go into that here.

Thanks for your time.


Badiou, A., & Truong, N. 2012. In praise of love. Profile Books.

Bentall, R. P. 2004. Madness explained: Psychosis and human nature. Penguin UK.

De Botton, A. 2016, May 28. Why you will marry the wrong person. The New York Times.

Mele, A. 1995. Autonomous agents. New York: Oxford University Press.

Strawson, P. F. 2008. Freedom and resentment and other essays. Routledge.

Vargas, M. 2013. Building better beings: A theory of moral responsibility. OUP Oxford.

Blame and Brock Turner



This is *very* rough and there’s a lot going on, but here you go.

Trigger warning: this post contains information about sexual assault and/or violence that may be triggering to survivors.

Introductory Statements

I’m going to write about a sensitive topic with some reservations and apprehensions, but I think that it’s a case worth addressing from a philosophical perspective. I believe it’s a case on which philosophers can shed light, and a case that we can learn from. As I’m sure you know, Brock Turner raped a 23-year-old woman whom he met at a party. He was convicted of three charges of felony sexual assault and sentenced to six months’ jail time and three years’ probation by Judge Aaron Persky. This case has elicited a very vocal blaming response from the public, which I think is eminently appropriate. But it raises some questions for responsibility theorists. The main one that I want to address is the relationship between individual and collective responsibility, or more specifically, the responsibility that an individual bears when he is a member of a broader social group and his actions reflect the (implicit or explicit) values of the group. I also plan to reconsider what can count as responsibility-relevant group membership, to include participation in ‘mere aggregates’ such as rape culture. I think that assessing Turner through the lens of collective responsibility explains the force of our shared reaction in a way that wouldn’t otherwise be possible. That is, I think that there is general consensus that Turner is blameworthy in a particularly strong way, and we can explain this reaction by considering his social position – specifically, his implicit affiliation with particular (loose) social groups.

Without saying too much about this, I take this analysis to be consistent with certain contemporary projects in responsibility theory, particularly Vargas’ emphasis on the ‘moral ecology’ (2013) – the social structures that enhance or limit moral agency. I don’t think that it’s possible to assess a person’s responsibility status without considering the moral ecology, since responsibility, properly understood, is a function not just of a person’s internal properties, but of the dynamic interaction between those properties and the world. Individual responsibility, then, can’t be judged independently of a person’s social context. While social factors can impair responsibility by cutting off deliberative possibilities (consider Wolf’s famous JoJo example [1986], or Beauvoir’s example of the cloistered sex slave living in a harem [1964]), I argue that social structures can also amplify a person’s blameworthiness if those structures enable antisocial behaviour by creating conditions of privilege.

Before proceeding, a few clarifications.

I favour the view of blame on which blame is more than just a judgment to the effect that someone is blameworthy or that blame is fitting; I take it to be a cognitive state (i.e., a judgment) plus a conative or affective response – a disposition to rebuke the target or feel negative reactive emotions under blame-conducive circumstances, or something along those lines. I won’t defend this picture here, but in any case, I think that what I have to say is probably compatible with different conceptions of blame. You can adopt your preferred view.

I also want to state in advance that in what follows I’ll be discussing ‘rape culture’ and a ‘culture of White privilege’ (CWP for short), by which I mean to denote an aggregate of persons that could be called a loose ‘collective’ or ‘social group,’ though there is no shared intentionality amongst its members (i.e., ‘intentional agency’), or well-ordered decision procedures, unlike structured organizations (e.g., Walmart, the US military). I think that these particular aggregates are ‘social groups’ in a morally-relevant sense (i.e., a sense that confers responsibility on group members) because they share a set of implicit attitudes that motivate characteristic antisocial behaviours – most importantly, misogynistic attitudes, and an assumption of racial superiority and relative immunity from legal and moral sanctions, respectively. Members of these groups implicitly hold these attitudes, though they probably would not explicitly avow them, nor would they self-identify with these groups – they may not even know that these groups exist. Nonetheless, I hold that people who harbour and act on these types of implicit attitudes form a responsibility-relevant collective, and their group-typical individual behaviours are inflected by their group membership. Specifically, they can be responsible for harms committed by the group, even if these harms outstrip their individual causal contributions to the group.

With this in mind, I submit that Turner is responsible for an individual act of rape as well as participation in rape culture and CWP, and this group participation makes his individual act more blameworthy, because it is both more wrong (in deontic or aretaic terms – take your pick) and more harmful (in its effects). To be clear, I’m not just saying that Turner is blameworthy on three counts: for committing rape, for participating in rape culture, and for advancing CWP; I’m saying that his act of rape is more blameworthy as an act of rape on account of his participation in these social groups. This makes his act different from similar acts committed in different contexts. Some people benefit from membership in privileged social groups without committing rape, and some people who commit rape don’t belong to these types of groups; the unique confluence of rape and membership in these privileged social groups has normative implications for members’ responsibility status. Now, even if we were to deny that group membership impacts on individual responsibility in this aggregative way, it would still be useful and informative to characterise Turner as responsible for not only an individual act of rape, but also implicit participation in these groups, which requires attention to Turner’s moral ecology. Yet I think it’s more accurate and informative still to see his action as part of a collective activity, and thus imbued with an ‘extra’ layer of moral significance which makes it more blameworthy.

I will also argue that Turner was a particularly active, albeit implicit, member of these groups, which prevents us from seeing his membership as in any way coerced (contrary to what he would have us believe). His participation in these groups was, properly understood, fully voluntary, given that he (tacitly) promoted the implicit values of these groups.

This construal of the situation poses a challenge to some popular ideas about responsibility: for example, the idea that collectives must have shared intentional agency or formal decision-making procedures to be collectives in a morally-relevant sense, and to confer responsibility onto individual members; the idea that individuals can’t be responsible for implicit biases and unwitting social affiliations; and the idea that unwitting group membership can have only excusing effects, never inculpating ones.

This is a lot to take in, but for the rest of this post I’m going to focus on three key claims: (1) Turner is an active albeit unwitting participant in rape culture and CWP; (2) rape culture and CWP are responsibility-relevant collectives in spite of failing to meet the standard criteria (i.e., intentional agency and/or formal decision procedures); and (3) Turner’s membership in these collectives intensifies his blameworthiness, because it makes his action both inherently worse and more harmful. Then I’m going to explain how this construal of the situation challenges some popular assumptions about moral responsibility.

Note: To avoid confusion, I want to clarify that even if Turner’s action hadn’t been part of a collective harm, it would still have been utterly blameworthy and offensive in its own right. But while admitting this, I want to draw attention to features of the case that have received less attention in the media, that are generally overlooked by philosophers, and that might challenge some of our assumptions, especially concerning individual responsibility and the relevance of collectives.

(1) Active implicit participation

There are different degrees of group participation, and correspondingly different degrees of responsibility. Most theorists agree that resisters are not responsible for the activities of the group, and reluctant participants may be less-than-fully responsible. Passive participants, who neither contribute to the group’s aims nor resist, may be somewhat responsible. By contrast, active participants, who advance the group’s aims, are more responsible than any other type of group member. It may seem paradoxical to say that an active participant can also be an unwitting participant – oblivious to the group’s aims – but I think that this is intelligible if we consider tacit promotion of the group’s aims to count as ‘active participation.’ When someone explicitly denies group membership but exemplifies adherence to the group’s values through his actions, we can consider the person a willing participant. This allows us to say that someone might be a member of a collective even if the person wouldn’t recognize the collective as such: all that matters is whether the person embodies the group’s values, even if they are only implicitly represented in the person’s motivational system. The person wouldn’t count as ‘decisively endorsing’ his values in Frankfurt’s sense (1971), but he ‘endorses’ them at the level of overt behaviour – either bodily actions or speech acts that tacitly invoke the group’s values.

I think that this is true of Turner, inasmuch as he tacitly invoked salient features of rape culture and CWP repeatedly in his court statement. For instance, he claimed that, ‘coming from a small town in Ohio,’ he ‘had never really experienced celebrating or partying that involved alcohol.’ These are salient features of rape culture, and when invoked as a defence against rape, they function in a stereotypical way – to obscure the normative import of misogyny and violence against women, and to protect group members against harsh legal and social sanctions. It’s worth noting that Turner never confessed to rape, nor was he ever convicted of rape; he was convicted of felony sexual assault on (what I would describe as) a technicality – because the prosecution couldn’t prove to the court’s satisfaction that Turner raped his victim with his own ‘sexual organ,’ as California law requires. This law can be seen as another artefact of rape culture, promoting the idea that the penis is special, imbued with some magical power to transform a woman’s moral and legal status in a unique way. (Otherwise why treat it differently than any other penetrative object?)

Next, Turner appealed to ‘the stress of school and swimming’ as an apparent excuse for his ‘lapse of judgment’ (which, again, he refused to describe as rape). This can be seen as an invocation of White privilege: White people disproportionately participate in varsity swimming and comprise a majority of Stanford’s student and faculty population, and disproportionately take this to reflect special moral standing, which can potentially ‘offset’ antisocial behaviour in their private lives – like a carbon tax for ‘white-collar crime.’ Turner’s father’s testimony reinforces this notion, citing Turner’s swimming record as an excuse for his behaviour. (A commentator cleverly edited this letter to show what a red herring these details are; but it would be wrong to see them as mere non sequiturs, rather than characteristic appeals to salient features of CWP. Seen in this light, they’re not morally irrelevant – they reveal flaws in the speaker’s moral sense.)

The judge responded by giving Turner a lenient sentence – much less than the maximum of 14 years in prison – on grounds that “a prison sentence would have a severe impact” on the defendant. It might not be a coincidence that the judge himself went to Stanford and was the captain of the lacrosse team. It’s not implausible to think that he, too, is a member of CWP, and that he sympathized with Turner as a compatriot rather than with the plaintiff, who pleaded for a harsher sentence. In any case, the point of this section is that salient aspects of Turner’s testimony suggest that he is a spokesperson for rape culture and CWP, even if he would deny it.

One might object here that a lenient sentence is required by liberalism and a commitment to rehabilitation – and perhaps this is the right attitude to take toward crime in general, which is often committed by underprivileged members of society (especially in economically polarized countries like the United States); but when you construe the judge’s verdict as a concession to Turner’s invocation of White privilege and rape culture, it can no longer be seen as an innocuous defence of rehabilitative justice: it begins to look more like a defence of White privilege, rape culture, and the misogynistic attitudes that flow from the intersection of these subcultures. While some lenient sentences might legitimately rest on rehabilitative principles, Persky’s verdict problematically legitimates the defendant’s ludicrous excuses, and ignores the plaintiff’s request for a harsher sentence. This reinforces the pervasive social narratives of rape culture and CWP. And while rehabilitative justice is meant to equalize social inequality, this verdict does the opposite: it actually reinforces social inequality by favouring the interests of the most well-off.



It’s also worth noting here that Turner denied responsibility for rape, absurdly insisting, both before and after the trial, that his interaction with the plaintiff was ‘consensual.’ I don’t think it’s implausible to say that Turner didn’t know that he was committing rape, but surely he should have known, and (relatedly) he could have known – he could have learned what rape means. Turner says, and perhaps genuinely believes, that he was ‘coerced’ by Stanford’s party culture (into committing an ‘indiscretion’), and this is supposed to excuse him. But he wasn’t coerced in any meaningful sense of the word. He could have sought out a different peer group, but he chose not to. He’s not like Wolf’s famous lone psychopath (JoJo). He’s an exceptionally privileged member of a liberal democracy with unimpeded access to many forms of life, and so he had as much free choice as any living person could want. He was thus ‘free’ in any meaningful compatibilist sense.

(2) Responsibility in collective contexts

One of the controversies in the collective responsibility literature is whether responsibility for participation in a collective harm can transcend the contributions of individual members, such that the collective harm is greater than the contribution of each member – whether the sum can be greater than its parts. For example, there is a question about whether genocide can be worse than multiple individual acts of anti-Semitic murder. I think that it can, but we need to deny some classic assumptions about collective responsibility. Some theorists (methodological individualists) think that collective action, and thus collective responsibility, is metaphysically impossible, but I see this as a collectivist version of what Strawson called ‘panicky metaphysics,’ and I don’t want to get bogged down in metaphysical concerns, so I won’t. I’m concerned with developing a practical evaluation of the situation that makes sensible distinctions amongst people. Other theorists are worried about treating individuals unfairly by lumping them together, which might seem to contravene Rawls’ principle of the ‘separateness of persons.’ This may be why H. D. Lewis referred to collective responsibility as a kind of “barbarism” (1948). But this worry seems to jump the gun; if there is a compelling reason to see individual participation in a collective as (morally) different from individual action simpliciter, then we should treat them differently. This isn’t barbarism, it’s realism.

Of theorists who think that collective responsibility is coherent, many take this to be the case only if the collective has either organizational mechanisms (decision-procedures) or collective intentionality – especially shared intentional aims (e.g., French 1984). Others have defended a more permissive notion of collective responsibility, on which group membership can be based on shared attitudes; yet these attitudes are typically construed as reflective. Marilyn Friedman and Larry May (1985), for instance, hold that ethnic groups, such as White men, can bear collective responsibility, but they offer the following three conditions of group membership: self-identification, continuous primary relationships with members of that group, and a shared cultural heritage. I doubt that rape culture and CWP meet any of these criteria. Certainly ‘group identity’ (in the typical sense) is lacking, since involvement in rape culture tends to be implicit, and would be explicitly disavowed by most members if asked. Nor do members bear ‘primary relations’ to one another; they usually meet only sporadically, if ever. Nor do members share something that could be called, in substantive terms, a shared cultural heritage – at least, not the kind of well-established heritage that most ethnic groups share. Yet I still want to say that these groups – rape culture and CWP – commit collective harms, and members can be held responsible for these harms (depending on their degree of participation). Furthermore, the wrongness and harmfulness of collective harms outstrips the contribution of any individual member, yet each member bears a degree of blameworthiness for the collective harm – that is, the individual’s blameworthiness for his group-typical behaviours – such as rape – is intensified by virtue of his relation to the group. This is true whether or not the person knows that his action is part of a broader harm, and whether or not he knows that he belongs to the group at all.
This is also different from being part of an ethnic group per se because rape culture and CWP are smaller subcultures, making it relatively easy to avoid them. And on top of this, their core members are relatively privileged and autonomous, and can fairly easily use their privilege and self-determination to choose more pro-social peer groups.

These thoughts, however, go against the standard constraints on collective responsibility, which hold that group membership only confers responsibility if shared intentional agency or organizational mechanisms are present, rendering membership ‘voluntary.’ On my proposal, group membership can be a matter of shared group-typical implicit biases, in addition to the standard criteria. (This expands the definition of ‘collective.’) This modification fits most closely, of all the views I can think of, with Larry May’s notion of ‘group intentions’ as “pre-reflective intentions,” which are “not yet reflected upon by each of the members of the group” (May 1987: 64); yet this description suggests that there is counterfactual reflective endorsement, i.e., that group intentions would be avowed upon adequate reflection. Implicit biases aren’t like this; they resist reflective access. Perhaps May’s view could be adjusted to accommodate implicit attitudes; but either way, I have a different reason for rejecting the condition of shared intentionality – the ‘intentionality constraint.’ The reason, very simply, is that we shouldn’t even have an intentionality constraint on individual responsibility, let alone collective responsibility.

Responsibility theorists who specialize in individual responsibility are increasingly moving away from the intentionality requirement (also referred to as the ‘epistemic condition,’ ‘knowledge condition,’ or ‘reflective condition’). For example, George Sher argues that we can be responsible for unconscious omissions such as forgetting about a beloved pet in the backseat of a hot car, falling asleep on duty in a combat zone, or crashing an airplane due to lack of proper attention (2010: 24). Angela Smith (2005) argues that we can be responsible for forgetting about a friend’s birthday. Nomy Arpaly (2014) says that we can be praiseworthy for doing the right thing unwittingly, i.e., ‘inverse akrasia.’ She cites Huck Finn as an example, for helping his friend Jim escape from slavery in spite of naively thinking that slavery is justified. These examples illustrate the idea that people can be responsible for their behaviour even if they fail to grasp the normative force of that behaviour, provided that their actions are ‘characteristic’ in some sense. (Most of these theorists believe that actions have to be suitably connected to some subset of an agent’s motivational system – the person’s moral personality – to confer responsibility.) Some of these arguments rest on the intuitive force of the examples, while others marshal theoretical arguments. Arpaly, for instance, contends that reflective judgments are not, as Kant believed, “non-accidentally” connected to the normative features of right action, such that they reliably produce right action (2014: 145); hence, non-reflective actions can be praiseworthy – and presumably, they can also be blameworthy. If this is right, then maybe we should reject the intentionality constraint.

Another rationale for rejecting this constraint is that empirical research indicates that many of our characteristic choices and actions are not the result of reflection. Yet these choices and actions seem to define us as persons – even as moral agents. For example, people are more likely to donate to an honour box if there are eyes posted next to it (Bateson 2006). Although this is a non-reflective (automatic) choice, it might reflect a person’s agency (characteristic values and beliefs). John Doris (2015) takes this kind of research to show that agency doesn’t require reflective control – it’s at most loosely connected with reflection. Extrapolating from this, we can infer that responsibility, too, doesn’t require reflection. Extrapolating further, we can infer that implicit biases, if expressed in a person’s characteristic behaviours, can be responsibility-imputing albeit non-reflective.

By rejecting the intentionality constraint, we allow that people’s implicit biases might be responsibility-conferring, if those implicit biases are manifested in overt behaviour. We also allow, by the same token, that a person’s implicit participation in a social group can be responsibility-conferring, and further, that this participation can amplify the person’s responsibility for his individual implicit biases and related actions. I turn to this thought next.

(3) Blame amplification through group participation

Although I am not aware of an exact precedent for defining groups in terms of implicit biases, there are precedents in the collective responsibility literature for discussing individual responsibility in collective contexts, and we can modify them to fit our purposes.

I hold that a person’s action can be especially blameworthy if the person is a member of a harmful collective, even if the person doesn’t self-identify with the group. I’ll unpack this claim in four steps. This will partially summarize, and partially build on, what has already been said.

(1) A person is responsible for participation in a group harm if this participation is voluntary either in the classic sense, i.e., there are decision procedures or shared intentional agency, or in the sense that the members share group-typical implicit biases and manifest those biases in their overt behaviour. It is also relevant whether there are alternative possibilities within the person’s cultural environment: a lack of alternatives can undermine the voluntariness requirement by practically necessitating a certain outcome. In liberal democracies, it’s relatively easy for most people to move from one subculture to another.

(2) A person’s action is more blameworthy when the person is part of an antisocial collective (and the person meets condition 1), because the action is more wrong (deontically or aretaically), inasmuch as it embodies rational flaws, moral and epistemic vices, or both, over and above the person’s individual flaws; and it is more harmful, inasmuch as it is part of a group harm, which outstrips the individual’s causal contribution to the group. Rape culture and CWP embody misogynistic and racist attitudes, and they perpetrate harms not just against individuals, but (directly) against whole groups – women and racial minorities. They might even be construed as committing harms against everyone, inasmuch as they promote pernicious cultural myths that normalise misogyny and racism and make it harder for ordinary people to see them for what they are. Yet individual members might not know or appreciate what they are doing. Nonetheless, I think that it’s reasonable to see these agents as responsible for these outcomes, even if they don’t explicitly intend or endorse them. And I also think it’s plausible to see members who promote the core values of the group as responsible for peripheral values that they don’t implicitly hold, inasmuch as promoting the core features of an ideological system supports the perpetuation of the system as a whole. So group members can, I think, be responsible for promoting values that they neither implicitly nor explicitly hold. In this way, group members who promote core ideological tenets can be responsible for more than their own implicit biases – they can be responsible for unwittingly promoting additional values; and they can be responsible for more than their individual direct contribution to the group – they can be responsible for additional indirect harms.

(3) I don’t just want to say that members of antisocial groups who commit a group-typical harm are responsible for their individual action, in addition to their membership in the group – although this is an interesting claim in its own right, and it helps to counterbalance the tendency to ignore the moral relevance of context. But I want to say more than this, i.e., that a person’s individual action is coloured by his membership in the group. Turner, specifically, is responsible not just for an act of rape, but for an act of rape as an act of group-based misogyny and White privilege. The act itself has several moral ‘layers.’

This is where there’s a helpful precedent in the collectivist literature, and I’m thinking specifically of Tracy Isaacs (2011). She says that when an individual’s action makes a causal contribution to a collective harm, that action (in effect) inherits a layer of normative significance from its relationship to that broader harm. So for instance, when someone murders a Jewish person in a Holocaust context, this isn’t just an act of murder, or even an act of anti-Semitic murder; it’s an act of genocide. The action is thus transformed from a strictly individual action to part of a collective harm, and this has significance for the individual’s responsibility status – how blameworthy he is. This theory leans on previous accounts of agency (Williams’ and Davidson’s), but adds a new dimension to them. On Bernard Williams’ theory of evaluative concepts (1985), a single action can be described variably in ‘thinner’ or ‘thicker’ terms. For instance, saving a drowning child can be described more ‘thickly’ as an act of courage. On Davidson’s theory of reasons, actions admit of intentional and non-intentional description; the act of flipping a light switch can be redescribed (intentionally) as an act of turning on a light or (unintentionally) as alerting a prowler lurking outside that someone is inside (1963). These accounts exemplify ‘the accordion effect’ in Sheffler’s sense – action descriptions ‘expand’ and ‘contract’ to reveal different layers of meaning. What Isaacs adds to these ‘accordion’ views is the idea that an individual action can have collective significance – a thicker type of significance. This explains the Nazi example: the individual act of murder becomes an act of genocide when the agent is part of a Holocaust.

Isaacs, however, is only talking about organisations and goal-oriented collectives (like military regiments), which satisfy the group intentionality constraint. But if you drop this requirement, her view can be made to suit my purposes: we can describe individual actions in terms of implicit group membership, yielding a ‘thicker’ description. So Turner’s act of rape is, properly understood, also an act of misogynistic violence and White privilege. These collective descriptions are part of the meaning of his action, and inform his moral status. Note that individual and collective descriptions don’t attach to different actions; they co-describe the self-same action. The description of rape as, in part, an endorsement of rape culture and CWP may not be as succinct or epigrammatic as the description ‘an act of genocide,’ but I don’t think that semantic clumsiness should bother us; we happen to have a convenient word for a genocidal action, but not for other types of group harms. Yet I think that there are many group harms beyond the classic compendium, and they are sufficiently analogous, although they may be more complex and multifaceted. If we fail to see Turner’s action as part of a group harm in the relevant sense, this may be due to a pervasive individualist bias.


(4) When I say that Turner’s action is worse because it causally contributes to these two collectives – rape culture and CWP – I mean that it’s worse than an equivalent action performed outside of these groups, ceteris paribus. The reason this action is worse is that the collectives are especially morally flawed and especially harmful. Rape culture represents a set of misogynistic ideas, including that it’s permissible to treat women as objects, that sex doesn’t require explicit consent, and so on. If someone holds a core subset of these values, the person implicitly promotes the whole evaluative framework – not just these attitudes, but connected ones closer to the periphery. These attitudes also become more plausible to people, by virtue of their relation to the ideological system. Furthermore, the group perpetuates a set of harms that no individual would be capable of perpetrating on his own, yet every member’s contribution helps to sustain the group. For these reasons, actions that contribute to the group are especially wrong and especially harmful.

People who commit wrongs individually often don’t bear the same degree of responsibility as group members, especially particular types of group members. For example, a lone psychopath might commit a heinous offence, but either the person had no choice due to severe, inalterable cognitive deficits, or the psychopath wasn’t causally implicated in a harmful collective, capable of causing collective harms and promoting pernicious and false beliefs and attitudes; he acted on his own psychopathic ‘reasons.’ In either case, the psychopath’s responsibility is determined by his own internal properties. A child soldier, like Ishmael Beah, might commit atrocities as part of a group, yet not be responsible because the person was a child, indoctrinated into a pernicious ideology by force; or the person might have lacked alternative possibilities – Beah, after all, was captured by the Sierra Leone government army during a civil war, drugged, and basically brainwashed. Members of privileged groups in liberal democracies aren’t like this. They’re not psychopaths for the most part, and they’re not captured, forced, or coerced in the literal sense.

This explanation poses some challenges to standard accounts of responsibility, which I’ll address next.

Responsibility as theory and practice

Here are some theoretical and practical commitments that are challenged by my claims. I should be careful here because I suspect that many theorists would reject these commitments, but I think that there is a natural way of construing certain views such that they imply these commitments, and I also think that commonsense morality might be committed to some of them. To avoid controversy, I’ll avoid citing anyone unless there is a clear connection.

(1) What is a collective?

First, I challenge the idea that responsibility-relevant collectives must have clear decision-making procedures and/or explicit shared intentionality. This broadens the scope of responsibility-relevant collectives, but not so much that pervasive, practically inescapable collectives count. I don’t want to say that whole cultures are collectives in the responsibility-conferring sense. Subcultures seem like more apt candidates for group status.

(2) Atomism

I’ve argued that we can’t adequately describe an individual’s action without considering the individual’s relation to collectives, since certain collectives can confer moral significance onto their members’ actions. This goes against the atomistic view, on which only individual actions have moral significance. While people can be responsible for an individual action, and for participating in a certain collective, atomists don’t allow that group membership can affect the significance of the individual’s action in a qualitative way.

(3) Groups as excusing

There’s a pervasive line of thinking on which group membership is excusing, because it induces ignorance and undermines control. Milgram’s famous experiments (1963) purported to show that anyone could have been a Nazi under the wrong circumstances, which reinforces a ‘there but for the grace of God go I’ mentality. If people do bad things only because they didn’t know better and couldn’t have done otherwise, then they aren’t blameworthy. But there are degrees of control, and some people have more of it than others. Turner invoked lack of control in his defence: he suggested that when he committed ‘a lapse of judgment’ (in his words), he was under duress from ‘party culture.’ Reinforcing this narrative is dangerous, and I think that the narrative itself is false. Turner had as much freedom and autonomy as anyone in this world could want; if he isn’t free, then who is? Appeals to coercion are legitimate in some cases (maybe Ishmael Beah’s, for example), but not everyone can legitimately appeal to them. There are morally relevant differences between getting drunk at a party and raping someone, on the one hand, and being kidnapped, drugged, and brainwashed during a civil war as a child, on the other. These differences, I think, are sufficient to say that one person is responsible and the other isn’t (or at least, that one person is significantly more blameworthy than the other).

(4) Implicit attitudes aren’t responsibility-imputing

There’s also a pervasive line of thinking on which implicit attitudes are not blameworthy (e.g., Levy 2014; H. Smith 2014). (I say blameworthy because the kinds of implicit attitudes that philosophers are generally interested in are morally problematic). But if we have to define a person’s responsibility status on the basis of either a person’s explicitly avowed commitments or the person’s conflicting implicit attitudes, where the latter are expressed in the person’s overt behavioural patterns, then it makes sense to go with the manifested implicit attitudes. This is especially true of people who implicitly promote certain groups by invoking or embodying their normative features, since their contribution to the group’s aims can intelligibly be construed as a kind of (implicit) endorsement of these aims. This is different from Frankfurt’s notion of ‘decisive endorsement,’ since ‘decisiveness’ is conspicuously lacking, but it better captures what Watson (2006) described as a person’s values.

(5) The Searchlight View

There’s yet another prominent line of thinking on which a person can’t be responsible for unwitting infractions. Sher (2010) calls this the ‘searchlight view,’ and says that it can be attributed to Kant as well as commonsense morality. In some cases, this logic seems to make sense: if someone does something under posthypnotic suggestion, he’s not responsible because he didn’t know what he was doing and couldn’t have done otherwise. But as we saw, Sher and other modern deep-self theorists reject this view, on grounds that unconscious infractions might reflect our moral personality more than our reflective beliefs do. While I wouldn’t go so far as to say that consciousness and control don’t matter at all, I think that in many cases of ‘unwitting wrongdoing,’ there is an element of control, albeit in a very indirect sense: specifically, the agent could have made different choices in the past, which could have conferred a stronger capacity for control. This is certainly true of Turner (on a very natural interpretation): he could have chosen different peers, different social groups, different ideological perspectives, different books, different movies. In any case, unwitting infractions seem responsibility-relevant, either because they reflect a person’s moral personality, or because the person could have made better choices.

(6) Actual-sequence control

Fischer (2012) famously defends a view of responsibility on which responsibility for an action A requires ‘actual-sequence control’ over A, i.e., a person must have had control over A in the actual sequence of his deliberation. If the person could have exercised control over A only under different, counterfactual circumstances, the person isn’t responsible for A. It’s hard to apply this view to practical cases, because it’s hard to know when anyone has actual-sequence control over a particular choice, and this has led to an extended debate between Vargas (2005) and Fischer (2012). I think there’s a case to be made that Turner didn’t have actual-sequence control when he committed rape, because he was closed off to certain moral considerations, and couldn’t entertain or imagine them; but if that’s the case, then I think that this shows what’s wrong with the actual-sequence control model. If this isn’t the case and Turner did have actual-sequence control (sufficient to underwrite responsibility), then the actual-sequence control model might be viable, but I would still worry that it artificially cuts off relevant parts of the moral ecology which might inflect a person’s responsibility, as I’ve argued in previous posts. But I won’t take this up here.


Conclusion: TBC.

Please comment in the ‘comments’ section above.

Moral enhancements 2


In my last post on moral enhancements, I considered whether there is a duty to enhance oneself and others, and correspondingly, whether one can be blameworthy for failing to fulfil this duty. I said that this is a complicated question, but that it depends to a great extent on whether the intervention is susceptible of prior informed consent, which in turn hangs on whether there are likely to be unknown (especially potentially adverse) side-effects.

Here, I want to consider whether intended moral enhancements – those intended to induce pro-moral effects – can, somewhat paradoxically, undermine responsibility. I say ‘intended’ because, as we saw, moral interventions can have unintended (even counter-moral) consequences. This can happen for any number of reasons: the intervener can be wrong about what morality requires (imagine a Nazi intervener thinking that anti-Semitism is a pro-moral trait); the intervention can malfunction over time; the intervention can produce traits that are moral in one context but counter-moral in another (which seems likely, given that traits are highly context-sensitive, as I mentioned earlier); and so on – I won’t give a complete list. Even extant psychoactive drugs – which can count as a type of passive intervention – typically come with adverse side-effects; but the risk of unintended side-effects for futuristic interventions of a moral nature is substantially greater and more worrisome, because the technology is new, it operates on complicated cognitive structures, and it specifically operates on those structures constitutive of a person’s moral personality. Since intended moral interventions do not always produce their intended effects (pro-moral effects), I’ll discuss these interventions under two guises: interventions that go as planned and induce pro-moral traits (effective cases), and interventions that go awry (ineffective cases). I’ll also focus on the most controversial case of passive intervention: involuntary intervention, without informed consent.

One of my reasons for wanting to home in on this type of case is that there is already a pretty substantial body of literature on passive non-consensual interventions, or ‘manipulation cases,’ in which a futuristic neuroscientist induces certain motives or motivational structures in a passive victim. We can tweak these examples to make the interventions unambiguously moral (the intervener is tampering with the victim’s moral personality), to derive conclusions about passive moral interventions and how they affect responsibility. My analysis isn’t going to be completely derivative of the manipulation cases, however, because theorists differ in their interpretations of these cases, and specifically on whether the post-manipulation agent is responsible for her induced traits and behaviours. I want to offer a new gloss on these cases (at least, compared to those I will consider here), and argue that the victim’s responsibility typically increases post-manipulation, as the agent gains authentic moral traits and capacities by the operation of her own motivational structures. (Except in the case of single-choice induction, where there are no long-term effects.) I will also say some words on the responsibility status of the intervening neuroscientist.

I’m going to assess three kinds of case, each of which we can find in the literature. First, Frankfurt-type cases (in fact, Frankfurt’s original case), in which an intervener induces a single choice. Second, ‘global manipulation’ cases like Alfred Mele’s (2006), in which an intervener implants a whole new moral personality. And finally, partial manipulation cases, in which the intervener replaces part (let’s say half) of a person’s moral personality. I’m not aware of philosophical ‘partial manipulation’ cases per se, but there have been discussions of partial personality breakdowns, as in (e.g.) split-brain patients, which we can use as illustrative examples. Partial manipulation cases are like that, only instead of generic cognitive deficits that impair performance, there are moral deficits (due to internal conflict) that may undermine the agent’s ability to make and carry out moral plans.

Let’s consider each of these manipulation cases in turn.

1. Induced choice (minimal manipulation)

In the original Frankfurt-type case (1969), a futuristic neuroscientist named Black secretly implants a counterfactual device in Jones’ brain, which would compel Jones to vote for a certain political candidate if he were to choose otherwise, but the device is never activated. It’s causally inert, so it doesn’t affect Jones’ responsibility status, according to Frankfurt. (Jones is responsible because he acts on his own decisive desire.) This example has to be modified to fit our purposes. First, let’s imagine that the implant is causally efficacious as opposed to merely counterfactual, as this is a more interesting kind of case. And second, let’s suppose that the intervention is designed to induce a moral effect – say, to make Jones donate to charity. Finally, let’s consider two types of case – one in which the intervention produces the intended effect, and one in which it produces a counter-moral effect.

I’ll assess these cases after discussing global and partial manipulation scenarios.

2. Global manipulation 

Alfred Mele (2006) offers a case of ‘global manipulation,’ in which a person’s entire motivational system is changed. He asks us to imagine two philosophy professors, Anne and Beth, and to suppose that Anne is more dedicated to the discipline whereas Beth is more laid back. The Dean wants Beth to be more productive so he hires a futuristic neuroscientist to implant Anne’s values in Beth, making her Anne’s psychological double. Is Beth responsible for her new psychological profile and downstream behaviours? According to Mele, no, because Beth’s new values are practically unsheddable: they cannot be disavowed or attenuated under anything but extraordinary circumstances (e.g., a second neuroscientific intervention). That is, Beth lacks control over her implanted values.

This is again not obviously a moral example, so let’s imagine that Anne is a saintly person and Beth is a jerk, and the Dean doesn’t like working with jerks, so he hires a futuristic neuroscientist to implant Anne’s saintly values in Beth. The intervention works and Beth becomes Anne’s saintly doppelgänger. This is a moral intervention and it’s a causally efficacious one. In this example, Beth’s moral personality is radically transformed.

If Mele is right, Beth is not responsible for her saintly values (and downstream behaviours) because these values are practically unsheddable. There are two natural ways of explaining this judgment. (1) Beth lacks control over her new values because they are firmly inalterable (Mele’s explanation). And (2) Beth is no longer herself – she has become someone else (Haji’s preferred explanation [2010]). These interpretations exemplify the control view and the character view – competing views in the literature that I described earlier. But in this case they converge on the same conclusion – Beth is not responsible – though they provide different (albeit overlapping) grounds. On one account, Beth lacks control over her desires, and on the other, her desires are not authentic, because they did not emerge through a process of continuous reflective deliberation (diachronic control). The two criteria are related, but come apart (as Mele emphasises [2006]): we can imagine implanted desires that are susceptible to revision but not the agent’s ‘own’; and we can imagine authentic desires that are relatively fixed and impervious to control (e.g., think of Martin Luther saying, ‘Here I stand; I can do no other’). Yet on a historical picture of control, the two conditions substantively overlap: a person’s mental states are not authentic if they were not amenable to continuous reflective deliberation. At least, this is the case for passive intervention cases, which we are considering here. If we consider a third account of responsibility – Vargas’ agency cultivation model (2013) – we find still more convergence: it’s typically not agency-enhancing to hold someone responsible if she lacked diachronic control and wasn’t herself at a particular point in time. These accounts do not always converge, but in the case of passive intervention, there is substantive agreement. So, conveniently, we don’t need to adjudicate between them. Yet there is still room for debate about whether a manipulation victim is responsible on any of these pictures, since there is room for debate about how to treat this person. Vargas holds that a post-intervention agent is responsible for her behaviour, but is a new person, contra Haji and Mele.

I’ll return to this question after considering a third case: partial manipulation.

3. Partial manipulation

Imagine that instead of implanting Anne’s values in Beth, making her Anne’s double, the manipulator had only implanted half of Anne’s values, or a substantial portion, leaving part of Beth’s personality intact. This is a partial manipulation case. Now Beth is internally fragmented. She has some saintly desires, but she also has a lot of non-saintly desires. Is she responsible for only her pre-implantation desires, or also for her post-implantation desires, or for none of her desires (and downstream effects)? This is a more complicated scenario (not that the other two are simple!). We can perhaps compare this situation to neurocognitive disorders that involve internal fragmentation, which have also drawn attention from responsibility theorists – examples like psychosis, alien hand syndrome, and split-brain (callosotomy) patients. Perhaps by assessing whether these individuals are responsible (to any degree), we can determine whether partial-manipulation subjects are responsible. (Split-brain surgery, which partially or wholly severs the corpus callosum connecting the two hemispheres, induces a similar split-personality effect.)

Let’s look at these cases separately, beginning with global manipulation.


1. Global

If Mele is right, then Beth is not responsible in the global manipulation case because she lacked control, and if Haji is right, she is not responsible because she lacked authenticity. These accounts have some degree of intuitive plausibility, surely. But Vargas (2013) offers a different interpretation. Vargas suggests that we should see post-global-manipulation agents as different people, but responsible people. So post-manipulation Beth is Beth2, and she is responsible for any actions that satisfy relevant conditions of responsibility for her. (Vargas’ preferred condition is a consequentialist one, but we can remain neutral). Since Beth2 can control her post-intervention desires (as much as any normal person), and they reflect her character, and it seems efficient to hold Beth2 responsible for these actions, we ought to regard her as responsible. Beth, on the other hand, is dead, and responsible for nothing. This view, it must be admitted, also has a degree of plausibility. But it seems to ignore the fact that Beth2, in very significant ways, is not like the rest of us.

I’m going to suggest a middle-of-the-road view, and it’s a view that emerges from a focus on how we become moral agents – a historical view. It is somewhat indebted to Haji’s [2010] analysis of psychopaths, who, somewhat like Beth2, have peculiar personal histories. According to Haji, psychopaths are not responsible agents because, unlike ordinary people, from childhood onward they lack the capacity to respond to reasons due to myriad cognitive deficits (which we do not need to get into here). Children are not (full) moral agents: they lack certain rational capacities, yet they have certain emotional capacities that psychopaths lack, which allow them to draw the moral/conventional distinction, at least from age five onward. Still, their rational deficits impair their moral agency. It is not only psychopaths who lack moral agency: Haji suggests that anyone who is barred from acquiring moral competency from childhood (due to congenital or involuntarily acquired deficits) is not a moral agent (a ‘normative agent,’ in his terms), because such people are, in effect, still children. (This includes severely neglected and abused children who develop psychopathic traits not of their own choosing.) These individuals lack control over their motives, as well as anything that could be called a moral personality. If we want to say that children are not full moral agents, we must grant that these individuals, who suffer from arrested moral development, also are not full moral agents.

Now consider Beth2, who has been globally manipulated to have a different moral personality. Beth2 has zero (authentic) personal history. She is, in this respect, like Davidson’s ‘swamp-person’ (though different in other salient respects) – she emerges ex nihilo, fully formed. In lacking a personal history, Beth2 is like a newborn baby – a paradigm case of non-responsibility. Yet unlike the newborn baby, Beth2 has intact moral capacities, albeit not her own moral capacities – they were implanted. Beth2, then, is not responsible in either an authenticity sense or a diachronic control sense. Nonetheless, her extant moral capacities (though not her own) allow her to reflect on her motivational set, explore the world, interact with other people, and live a relatively normal life after the intervention. In this regard, Beth2 differs from psychopaths, who can never acquire such capacities, and are in a permanent baby-fied state, morally speaking. Moreover, as time goes by, Beth2 will become increasingly different from Anne, warranting the judgment that Beth2 is a separate person – her own person. So over time, it becomes more and more reasonable to say that Beth2 is an independent moral agent with her own moral personality. Although Beth2 cannot completely overhaul her motivational system, she can make piecemeal changes over time, and these changes are attributable to her own choices and experiences.

With this in mind, I submit that we treat Beth2 as not responsible immediately post-intervention (since she lacks any authentic motives at that time), but increasingly responsible thereafter (since she has the capacity to acquire authentic moral motives and capacities over time, unlike psychopaths). This doesn’t mean that Beth2 will ever be as responsible as an ordinary (non-manipulated) person, but she is certainly more responsible than newborn babies and psychopaths, and increasingly responsible over time.

Another problem with saying that Beth2 is fully responsible for her post-manipulation behaviour is that this leaves no room for saying that the clandestine manipulator – the Dean or the futuristic neuroscientist or whoever – is responsible for Beth2’s behaviour, especially in the case that the moral intervention goes wrong and induces counter-moral effects.

Suppose that Beth2 goes on a killing spree immediately after being manipulated: is she responsible for this effect? Surely the intervener is to blame. One could say, like Frankfurt (1969), that two people can be fully responsible for a certain action, but this seems like a problematic case of overdetermination. Surely blame must be distributed, or displaced onto the one who preempted the effects. Indeed, Frankfurt’s proposal doesn’t fit with how we ordinarily treat coercion. Consider entrapment: if a law enforcement agent induces someone to commit a crime, the officer is responsible, not the victim. This might be because the victim lacked adequate control (due to coercion), acted against character, or because it’s not useful to blame victims – entrapment fits with all of these explanations. Shoemaker seems to appeal to the last consideration when he says of entrapment that we blame the public official and not the victim because “we think the government shouldn’t be in the business of encouraging crime” (1987: 311) – that is, we don’t want to encourage government corruption. But Shoemaker also appeals to fairness, which is tied to control and character: it’s not fair to blame victims who lacked sufficient control or weren’t themselves when they acted (which is also why it’s not useful to blame them). So on any standard criterion of responsibility, it’s not clear why we would blame a manipulation victim.

Now, suppose that the intervention worked and the Dean makes Beth2 a moral saint. If I am right, Beth2 isn’t praiseworthy for the immediate effects of the intervention because they’re not her own (on various construals). The intervener’s moral status is more complicated. While he might seem prima facie praiseworthy for Beth2’s pro-moral traits, we also have to consider the fact that he’s patently blameworthy for intervening without informed consent or any semblance of due process (viz. decisional incapacity protocols), and this might vitiate or cancel out his prima facie praiseworthiness. If we consider praise and blame as competing attitudes, it makes sense to see the Dean as blameworthy on balance.

2. Partial

Next, let’s consider a partial manipulation case. Let’s imagine that the Dean replaces half of Beth’s personality with Anne’s, creating Beth3. This is trickier than the global manipulation case inasmuch as Beth3 has some authentic mental states, but they are in competition with implanted states that are not her own. So we can’t say that Beth3 lacks authenticity or diachronic control entirely, yet she is deficient in both respects compared to an ordinary person. We might compare Beth3 to examples of neurocognitive disorders that cause internal fragmentation, such as alien hand syndrome and split-brain effects, which have attracted a lot of philosophical interest. These subjects have ‘alien’ motives like Beth3, but unlike split-brain patients, Beth3 can presumably enhance her control and motivational congruence over time – assuming that none of her implanted states are pathological. So Beth3 is somewhere between split-brain patients and non-pathological individuals.

There are three ways that we can go in assessing Beth3’s responsibility. (1) We can hold that Beth3 is not responsible (period) because she has insufficient control and insufficient depth of character to ground responsibility. (2) We can say that Beth3 is actually two agents, not one, and each agent is responsible for its own motives and downstream behaviours. King and Carruthers (2012) seem to suggest that, if responsibility exists (on which they seem unsure), commissurotomy patients and alien hand patients must be responsible for their aberrant behaviours, since the former have two “unified and integrated” personalities (205), and alien-hand gestures reflect a person’s attitudes, albeit unconscious ones. Furthermore, they think that consciousness cannot be a requirement of responsibility since most motives aren’t conscious anyway. If this is right, then perhaps Beth3 is two people, each responsible for ‘her’ own motives. This, to me, seems hopelessly impracticable, because we can only address responsibility attributions to one self-contained agent. We cannot try to reason with one side of Beth3’s personality at a time, because Beth3 doesn’t experience herself as two people. She won’t respond to targeted praise and blame, aimed at adjacent but interconnected motivational hierarchies. And she might even find this kind of moral address baffling and/or offensive. So this won’t do, practically speaking. But there’s also a good case to be made that commissurotomy patients and the like lack control and character in the normal sense, given that they have significant cognitive deficits. So it’s reasonable to see them as impaired in their responsibility compared to cognitively normal people. And likewise for Beth3.

This leads to the third possibility, which I favour: the middle-of-the-road proposal. Beth3 is somewhat responsible immediately after implantation, and increasingly responsible thereafter. This is because Beth3 is subject to control-undermining motivational conflict and disorientation following the clandestine intervention, but she nonetheless has some intact moral capacities (the relevant rational, emotional, and motivational capacities), which differentiate her from the psychopath, and which should allow her to regain psychological congruence over time, enhancing her control and authenticity. So Beth3 should be seen as increasingly responsible over time (under ordinary circumstances). That said, she will likely never be as responsible as someone who had an ordinary learning history, since most people never suffer from this degree of fragmentation. So she may have relatively diminished responsibility for a long time, even if she becomes more apt for praise and blame.

Once again, this analysis can be applied to effective interventions and ineffective ones. If Beth3 acquires pro-moral traits (as per the Dean’s intention), she is not immediately responsible for them, but she gains responsibility for any induced traits that persist and amplify over time – indeed, for all of her post-intervention traits. Regarding the intervener, he is not necessarily praiseworthy for the pro-moral effects of the intervention, inasmuch as he might be blameworthy for intervening without consent or due process, and this might outweigh any praise that he might otherwise warrant.

Worse still, if the Dean inadvertently implants counter-moral traits in Beth3, he is blameworthy both for intervening without consent and for the effects of the botched intervention.

3. Minimal

Finally, let’s consider the induced-choice subject. Call her Beth4. Suppose that Beth4 has been inculcated with a desire to give to charity, and she acts on this desire. (Note: presumably, to be causally efficacious a desire must be combined with a supporting motivational structure, but for simplicity I’ll refer to this bundle, following Frankfurt’s example, simply as a desire. Assume that an induced desire comes with supporting motivational states.) Is Beth4 praiseworthy? On my view, no, because her desire is not her own. But the intervener also is not praiseworthy, insofar as he is blameworthy for intervening without consent or due process, which (I think) cancels out the import of any good intentions he may have had. (The road to hell is paved with good intentions, as they say). (Note that I am construing praise and blame as competing reactive attitudes; other people hold alternative conceptions of ‘responsibility,’ which I will conveniently ignore).

Next, suppose that the Dean intends to induce a pro-moral choice in Beth4, but inadvertently induces her to commit a murder. The Dean, I think, is blameworthy for the murder, because he is responsible for implanting the relevant desire in Beth4 without her consent. This can be construed as an omission case, in which the Dean was ignorant of the likely consequences of his choice, but is responsible because he failed to exercise proper discretion. (Compare this to Sher’s [2010] example of a soldier who falls asleep on duty; the Dean failed in his duties as a person, a citizen, a Dean… he failed on many levels. He is, as it were, a spectacular failure as a moral agent). The Dean acted in character, and on a suitably reasons-responsive mechanism, when he chose to surgically intervene on Beth4 without her consent, and he bears responsibility for this infraction and the fallout of his choice.



Moral enhancements & moral responsibility


I’m going to write a somewhat lengthy but off-the-cuff entry on moral enhancements, because I have to present something on them soon, in Portugal.

I’m going to write about (1) moral enhancement and moral responsibility, and how these things intersect (we might be responsible for failing to use moral interventions); (2) the possibility of duties to enhance oneself and others; and (3) passive moral interventions – the most controversial type – and whether we have a duty to use, submit to, or administer them.

A qualification: I don’t know very much about moral enhancement, so this is going to be a bit sketchy.

Moral enhancements and moral responsibility

Here’s what I know. A ‘moral enhancement’ is a “deliberate intervention [that] aims to improve an existing capacity that almost all human beings typically have, or to create a new capacity” (Buchanan 2011: 23). These interventions can be traditional (e.g., education and therapy), or neuroscientific (e.g., biomedical and biotechnological interventions). The latter are more controversial for various reasons. Examples include transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), deep brain stimulation (DBS), and psychoactive drugs, along with futuristic ‘Clockwork Orange’-type methods that we can only imagine. (Typical science fiction examples are dystopic, but we shouldn’t let this prejudice the discussion). These enhancements are moral insofar as they aim to enhance the subject’s moral-cognitive capacities, howsoever construed. (I’ll leave this vague).

Some philosophers hold that the distinction between traditional and modern interventions is that one is ‘direct’ while the other is ‘indirect’: TMS, for instance, directly stimulates the brain with electric currents, whereas education ‘indirectly’ influences cognitive processing. Focquaert and Schermer argue that the distinction is better characterised as one between ‘active’ and ‘passive’ interventions: traditional interventions typically require ‘active participation’ on the part of the subject, whereas neuroscientific interventions work ‘by themselves’ (2015: 140-141). More precisely, passive interventions work quickly and relatively unconsciously, bypassing reflective deliberation. The authors acknowledge that this is a rough and imperfect distinction, and that interventions lie on a continuum; but they think that this distinction nicely captures a normative difference between the two methods. Passive interventions, they say, pose a disproportionate threat to autonomy and identity because (1) they limit continuous rational reflection and autonomous choice, (2) they may cause abrupt narrative identity changes, and (3) they may cause concealed narrative identity changes. These effects undermine the agent’s sense of self, causing self-alienation and subjective distress (2015: 145-147).

For this reason, the authors recommend that we introduce safeguards to minimise the negative effects of passive intervention – safeguards like informed consent and pre- and post-intervention counselling, which help subjects integrate their newfound moral capacities into their moral personality, or come to terms with their new moral personality in cases of radical transformation. So although passive interventions have a higher threshold of justification, they can be justified if their adverse effects are managed and minimised.

(Some objections to this view, with responses, can be found here).

Now for a few words on moral responsibility. In earlier posts, I’ve discussed three models of responsibility, two backward-looking (character and control theory) and one forward-looking (the agency-cultivation model). These theories have something in common: they hold agents responsible (blameworthy) for omissions. On character theory, we are responsible for failing to act virtuously, if a reasonable person would have done better in our circumstances (see Sher 2010). So someone can be blameworthy for forgetting about her friend’s birthday. On control theory, we are ‘indirectly’ responsible for failures to exercise reasons-responsiveness responsibly, in accordance with people’s ‘reasonable’ expectations. As Levy remarks when discussing ‘indirect’ responsibility for implicit bias, a person can “be fully morally responsible for [a] behaviour [resulting from implicit bias], because it was reasonable to expect her to try to change her implicit attitudes prior to t” (2014; see also Holroyd 2012). Since we can be responsible for omissions on both models, it stands to reason that we can be responsible for failures to enhance our moral agency, if this is what a reasonable person would have done or expected of us in the circumstances. In fact, remediating measures for implicit bias can be seen as ‘moral enhancements,’ though they are closer to education than biomedical interventions. But the point is that there are grounds for blaming someone for eschewing moral enhancements of either type – active or passive – depending on the particularities of the case.

On the agency-cultivation model (ACM), it’s even more obvious that a person might be blameworthy for failing to use moral interventions, since blame functions to enhance moral agency. If blaming someone for not using a moral intervention would entice the person to use it, thus enhancing the person’s agency, blame is a fitting response. And if forcibly administering the intervention would enhance the person’s agency, this might be fitting, too.

All of these theories, as I have presented them, invite questions about when blame for failing to use moral enhancements, or for using them inappropriately, is apt. But this leads somewhat away from responsibility (as an agential capacity or cluster of traits that warrant blame and praise), to the domain of duties: from the question of what we are responsible for, to what we are responsible to do. These two things are related insofar as we can be responsible (blameworthy) for omitting to fulfil our responsibilities (duties) on all three models. But they are distinct enough that they necessitate separate treatments. First we figure out what our duty is, and then we determine if we are blameworthy for falling short of it.

Moral duties and moral perfection

What are our duties with respect to moral enhancements? We can approach this question from two directions: our individual duty to use or submit to moral interventions, and our duty to provide or administer them to people with moral deficits. This might seem to suggest a distinction between self-regarding duties and other-regarding duties, but this is a false dichotomy because the duty to enhance oneself is partly a duty to others – a duty to equip oneself to respect other people’s rights and interests. So both duties have an other-regarding dimension. The distinction I’m talking about is between duties to enhance oneself and duties to enhance other people: self-directed duties and other-directed duties.

These two duties also cannot be neatly demarcated, because given finite time and resources, my duty to enhance myself in some way might be outweighed by my duty to foster the capabilities of another person. So we need to work out a proper balance, and different normative frameworks will provide different answers. All frameworks, however, seem to support these two kinds of duties, though they balance them differently. For Kant, we have an absolute (perfect) duty to abstain from using other people as mere means, so we have a stringent duty to mitigate deficits in our own moral psychology that cause this kind of treatment; and we also have a weaker (imperfect) duty to foster other people’s capabilities. On a consequentialist picture, we have to enhance the capabilities of all in such a way as to maximise some good: our duty to enhance ourselves is no greater or lesser than our duty to foster others’ capabilities. Aristotle, too, believed that we have a duty to enhance ourselves and foster others’ capabilities by making good choices and forming virtuous relationships. Rather than arbitrate amongst these frameworks, we can just note that they all support the idea that we have duties to enhance ourselves and others. The question is how stringent these duties are, not whether they exist. Of course, in a modern context these questions take on new significance, because the duty to enhance oneself and others no longer just means reflecting well and providing social assistance; it could potentially mean taking and distributing psychoactive drugs, TMS, etc.

One pertinent question here is how much enhancement any individual morally ought to undergo. How much enhancement is enough enhancement? We can identify at least four accounts of the threshold of functioning we ought to strive for: (1) the übermensch view: we must enhance our own and others’ capabilities to the greatest extent possible; (2) prioritarianism: we should prioritise the capabilities of the least well-off or the most morally impaired; (3) sufficientarianism: we should enhance people’s capabilities until they reach a sufficient minimal threshold of functioning, after which enhancement is a matter of personal discretion; and (4) pure egalitarianism: we should foster everyone’s capabilities equally.

No matter which of these accounts we favour, they all face potential objections, and the more stringent forms (the übermensch view) are more susceptible to them than the less stringent forms (sufficientarianism). If we are considering enhancing our own capabilities, the scope for criticism is minimised, but we might still worry that our attempts to enhance ourselves morally might go awry, that they might interfere with our ability to foster others’ capabilities, that they might impel us to neglect valuable non-moral traits, or (relatedly) that they might prevent us from becoming well-rounded people, capable of living flourishing lives (see Susan Wolf’s ‘Moral Saints’ 1982). When considering whether we should foster others’ capabilities, there are still more worries. Namely, in seeking to enhance other people – particularly if we do this through institutional mechanisms – we risk insulting and demeaning those deemed ‘morally defective,’ stigmatising people, jeopardising pluralism (which may be inherently valuable), undermining people’s autonomy, or fostering the wrong kinds of traits due to human error. That is, by intervening to enhance moral agency, we might inadvertently harm people.

The most controversial type of moral intervention – passive – is the most vulnerable to these objections, and others that I haven’t mentioned. From now on, I’ll just talk about this type. I’ll consider Focquaert and Schermer’s cautionary advice, and then offer some suggestions.

Passive intervention, autonomy, and identity

The authors argue that passive moral interventions are particularly morally problematic because they disproportionately jeopardise autonomy and identity, insofar as they limit continuous rational reflection and precipitate abrupt and concealed narrative identity changes. To minimise these adverse effects, interveners should provide informed consent procedures and pre- and post-intervention counselling. Let’s suppose that this is right, and also that we have a duty to enhance ourselves and others, as per the above discussion. This seems to have the following implications. First, a duty to submit to a passive moral intervention obtains only on condition that informed consent and pre- and post-intervention counselling are available and reasonably effective. So, a person can be blameworthy for failing to undergo a passive moral intervention (if at all) only if the intervention came with those provisions. Secondly, there is (at minimum) a duty to provide these safeguards when administering a passive intervention; so, if an intervener fails to do so, the person is thereby blameworthy. This is the upshot of holding that there is a duty to enhance oneself and others, and that this duty includes providing harm-minimising safeguards.

That said, there are other considerations that might vitiate the perceived duty to enhance one’s own and others’ moral capabilities. I think that the most forceful objection, which may underlie other objections to these interventions, is that we can’t be sure of their effects. We can call this the knowledge constraint. If we can’t know, in medium-grained terms, the probable effects of a passive intervention, we can’t obtain informed consent in a strong sense, since we can’t adequately assess whether the effects will be good or bad. Let’s reconsider Focquaert and Schermer’s worries and tie them to this concern. They say that (1) passive interventions threaten continuous rational reflection and autonomy. When someone undergoes TMS, for instance, the person doesn’t have the capacity to endorse or reject the effects of the magnetic stimulus. But why is this a problem? I don’t think it suffices to say that it undermines autonomy, because many autonomy-enhancing procedures undermine the capacity for continuous reflection: I forfeit this capacity when I undergo a root canal under general anaesthesia, but I can still give prior informed consent unproblematically. This isn’t just because a root canal is localised to the mouth (although this is a related issue). It’s mainly because I know precisely what the effects of the root canal will be, so I can consent to them. If dental procedures induced unforeseeable side-effects, informed consent for them would be much more problematic. This suggests that the unforeseeability of a procedure’s side-effects, as opposed to the reflection-undermining nature of the procedure per se, is what undermines autonomy and has moral import.

With neuroscientific interventions, we can’t very accurately predict the immediate or long-term effects, because these interventions function coarsely, across a broad range of cognitive systems and processes, and their mechanisms are not fully understood. The brain itself is immensely complex, and the interactions between intervention and brain are correspondingly murky. For these reasons, any targeted (intended) effect of a neurointervention comes with a range of possible side-effects. If we just focus on the immediate side-effects of TMS, they include fainting, seizures, and pain, as well as transient hypomania, hearing loss, impairment of working memory, and unspecified cognitive changes. As for long-term effects, NICE found that there is insufficient evidence to evaluate the safety of long-term and frequent use. That is, not enough is known to predict the long-range effects in a scientifically responsible way. More generally, TMS is used for multiple conditions; its action is quite unspecific, targeting various brain regions that serve various functions. It can’t be used to, say, ‘cure schizophrenia’; it acts on multiple brain regions, bringing about a cluster of effects, some intended and others unintended. All neurointerventions work this way: they induce clusters of effects. Subjects can’t select just one desired effect: undesired side-effects are very likely, if not inevitable. Given these considerations, it’s far from clear that anyone has an obligation to submit to passive interventions that act directly on the brain, and even less clear that anyone has an obligation to impose them on other people.

So, it seems that the threat to continuous rational reflection is a problem because we don’t know, in medium-grained terms, what the effects of the procedure will be in advance, which threatens prior informed consent. If we could predict the effects relatively accurately, the loss of continuous rational reflection would not be a problem.

(2) Passive interventions cause abrupt narrative identity changes. Again, I think that this is a problem only because the end result of the passive intervention is impossible to predict in medium-grained terms. If we could provide someone with an accurate specification of the effects of a proposed intervention and gain the person’s consent, the ‘abruptness’ of the narrative identity change wouldn’t be an obstacle; it would be an advantage. If I could get a root canal ‘abruptly’ rather than slowly, I would choose the abrupt method – but only if I understood the consequences. If I could quit smoking ‘abruptly’ as a result of voluntary hypnosis, I would choose that option. Abrupt changes are problematic only if we can’t predict the outcomes of those changes.

(3) Passive interventions cause concealed narrative identity changes. This, too, is a problem only because we can’t predict the effects of the intervention. If we could, we might even prefer that the procedure be concealed. I might want to have a ‘concealed’ root canal – a root canal that I completely forget about after the fact, but with my prior consent – but not if the procedure induces unwanted side-effects that I can’t detect until after the fact. If side-effects are an issue, I want to be conscious so that I can do what I can to mitigate them.

The fact that passive moral interventions impinge on the brain rather than the body (not that this is a hard and fast distinction) is also problematic because our brain is the seat of our personal identity, and changes to our personal identity are especially jarring. Still, the analogy between cognitive interventions and bodily interventions is informative. If I sign up for an appendectomy and wake up with a third arm, this is a violation of my autonomy. Waking up a different, or radically altered, person is all the more autonomy-undermining. In both cases, the lack of prior knowledge about probable outcomes is what violates the person’s autonomy: an unforeseen transformation is an affront to the self. (This is true even if the transformation is an accident: if a piano player’s hand is crushed by a boulder, this is autonomy-undermining, even if no one is to blame [see Oshana 2008]).

Post-intervention counselling might mitigate some of these concerns, because it allows the agent to ‘consent’ retroactively. But this is problematic for the following reasons.

(1) For no other procedure is retroactive consent permissible. If my teeth are rotting and someone kidnaps me and performs involuntary dental work while I’m unconscious, this is a moral infraction. (Note that when I was discussing clandestine changes above, I meant that they were permissible only with prior informed consent). Even if I’m pleased with the dental work, the dental intervener could have performed the procedure less invasively – with my consent – and to this extent, he’s blameworthy: he chose the most invasive intervention instead of the least invasive one. Now, when moral enhancements are at issue, the intervention may be prima facie justified by the potential positive consequences, but we have to consider (1) the strength of the benefits, (2) the probability of adverse side-effects, and (3) whether a less invasive intervention was available. If passive interventions are used as a first response, this is clearly impermissible. If they are used as a ‘last resort,’ when no alternative is available or all alternatives have been tried, we have to consider the moral import of possible side-effects. Consider Alex from ‘A Clockwork Orange’: the movie is a horror story because the Ludovico technique unintentionally made him miserable and dysfunctional, and he was not informed beforehand of the risks. Interveners are, I believe, blameworthy for using interventions that may cause severe adverse side-effects, because there is a duty to protect people from these kinds of risks. This is not to say that passive interventions are never permissible, but they are not permissible if they carry severe or unknown risks to wellbeing.

(2) Because moral interventions target a person’s moral identity, there is an argument to be made that post-intervention ‘consent’ is not consent at all: the agent on whom the intervention acted – the pre-intervention agent – no longer exists, or is so drastically altered that consent is no longer possible for that agent. Moral interventions have the ability to ‘kill off’ the pre-intervention agent. This places a massive burden of justification on those who would intervene. We need to consider whether it is permissible to ‘kill off’ unwanted moral personalities, or radically alter them in ways that preclude obtaining consent. When a person is radically altered by an intervention without consent, the person cannot consent after the treatment, so any post-intervention consent given is a different person’s. This is a radical departure from the standard model of informed consent, and this is something we need to consider. We also need to weigh whether society should be choosing which moral personalities have value, as this might threaten pluralism and self-determination.

I should make a qualifying statement here. I don’t want to stigmatize the use of TMS and other existing passive interventions. Some people voluntarily use TMS for personal (prudential) reasons, which I would deem morally unproblematic. I’m specifically concerned with whether people have a duty to use passive interventions – not just existing ones, but also interventions that may arise in the future – to enhance their moral capabilities; and whether society has a duty to impose these interventions on people without their consent. I’m saying that it’s much harder to justify a categorical duty (as opposed to a subjective desire) to undergo passive interventions, and still harder to justify a duty to impose (as opposed to merely offer) interventions on people deemed morally deficient.

Back to responsibility

Here’s how this bears on responsibility. On a simplistic reading, it might seem that we have a duty to use moral interventions to enhance ourselves and others to a substantial (sufficientarian, if not übermenschean) degree. And maybe this is true for traditional moral enhancements, which we can reflectively choose and consent to with full information. But it’s not clear that this is the case for passive moral interventions, simply because we can’t tell whether those interventions are going to have overall positive consequences. And this is because neurointerventions target clusters of traits, some of which may be adaptive while others may be maladaptive. So, in trying to morally enhance someone, we might inadvertently harm the person and even introduce new counter-moral traits. If so, we might be blameworthy for the consequences of those interventions. And if there is no clear duty to submit to passive moral interventions that might produce harmful and even counter-moral side-effects, then a person might not be blameworthy for refusing them.

Consider stories like ‘A Clockwork Orange’ and ‘Brave New World.’ They’re about attempts at moral enhancement and social engineering gone horribly awry. And this is a legitimate worry. I’ve suggested that when technology is morally problematic, it’s not because it threatens rational reflection and narrative identity per se. Rather, it’s because it has unpredictable side-effects, and these side-effects might be morally problematic. This isn’t meant to imply that we have no duty to enhance ourselves and others, but rather that when trying to enhance our capabilities with modern methods, we should proceed with caution.


Implicit Bias: The limits of control/character: Continued.


This post continues from the last one. I was saying that if we need to trace responsibility back to a suitable prior moment (t-1) at which the agent could have foreseen the consequences of his choices (following Vargas’s 2005 description of control theory), then we need to assess not only that agent’s internal capacities at t-n, but also the agent’s ‘moral ecology,’ and the relationships between the agent and the moral ecology. This follows from the fact that ‘indirect control’ is a matter of whether an agent could have exercised or acquired a capacity using available resources in her local environment. (Indeed, these two things are related: we acquire capacities in part by exercising more basic capacities: by taking piano lessons, I learn how to play piano well, i.e., I acquire piano-playing reasons-responsiveness. This is a result of exercising a more basic capacity – the basic human capacity to master a symbolic system with a combination of tutelage and practice). According to Holroyd, if someone is implicitly biased but could have avoided becoming implicitly biased or expressing an implicit bias, the person may be blameworthy. I am suggesting that to determine a person’s blameworthiness, we must evaluate, not just the individual person at t-n, but the person’s location in the social ecology.

This picture suggests a different kind of objection to Fischer’s notion of control – the dominant model. Fischer explicitly states that to appraise a person’s responsibility status, we must home in on the ‘actual sequence’ of deliberation, ignoring counterfactual circumstances in which the agent would have deliberated differently. That is, control for Fischer is ‘actual-sequence control.’ This strategy is (I believe) meant to undercut incompatibilist and nihilistic objections to control: objections to the effect that no one is responsible for anything because no one is capable of exercising ultimate control (being an unmoved mover) in a deterministic universe (G. Strawson 1986), or because control is irrelevant in light of moral luck: whether you’re capable of exercising compatibilist control is just a matter of luck, not agency (Levy 2008). I think this last view places too much importance on the moral ecology and not enough on agency, but we can return to this later. By restricting control to the actual sequence, we cut off counterfactual circumstances in which the agent is metaphysically determined, as well as circumstances in which all agents are equally capable of control – distant possible worlds. But there is a case to be made that even if we include some counterfactual circumstances as morally relevant – and thus, some possible worlds – we do not need to include all counterfactual possibilities. Even if the buck doesn’t stop at actual-sequence control, it might stop in the next nearest possible world, cutting off the kind of slippery-slope objections that lead to nihilism.

I’m going to make this case in a moment, but first consider an existing objection to actual-sequence control, from Levy (2008). Levy argues that, while counterfactual disabling circumstances superficially appear to be irrelevant (consider, for example, Frankfurt-type cases in which the counterfactual device is never activated), it is possible to construct a counterfactual enabling circumstance that seems to make a difference. An enabling circumstance is one in which the agent gains a capacity, in contrast to the standard disabling scenarios, in which the counterfactual device would prompt the agent to go against the demands of morality if it were activated. Here’s one of Levy’s examples:


Jillian is walking along the beach when she notices a child drowning. Jillian is a good swimmer, but she is pathologically afraid of deep water. She is so constituted that her phobia would prevent her from rescuing the child were she to attempt to; she would be overcome by feelings of panic. Nevertheless, she is capable of trying to rescue the child, and she knows that she is capable of trying. Indeed, though she knows that she has the phobia, she does not know just how powerful it is; she thinks (wrongly) that she could effect a rescue. Unbeknownst to Jillian, a good-hearted neurosurgeon has implanted her with a chip with which he monitors Jillian’s neural states, and through which he can intervene if he desires to. Should Jillian decide (on her own) to rescue the child, the neurosurgeon will intervene to dampen her fear; she will not panic and will succeed, despite her anxiety, in swimming out to the child and rescuing her. (2008: 170, 2008b: 234).

Levy actually presents this scenario variously as an objection to control theory and as an objection to character theory. Suppose that Jillian decides not to rescue the child, in spite of (falsely) believing that she can. In the first place, it seems as if Jillian is responsible (says Levy) because she believes that she can rescue the child, and fails to act on this belief. But if Fischer is right, then Jillian isn’t responsible, because she can’t succeed by her own (independent) means. Without the help of the benevolent intervener, success is impossible, and the intervener is external to Jillian’s motivational set. To vindicate the intuition that Jillian is responsible, we need to include the counterfactual intervener; so counterfactual scenarios seem relevant. (At least, this is my understanding). In the second place, if we include the counterfactual scenario as relevant, we have to accept that Jillian’s motivational set, and thus her character, includes this scenario as a component part, and so Jillian’s character is “smeared” across time and space (Levy 2008: 179). Hence, locational externalism (i.e., the extended mind hypothesis) is true. These proposals call into question the viability of actual-sequence control and of character as traditionally conceived. I take it that this is supposed to support responsibility nihilism, i.e., the idea that responsibility (in a desert-entailing sense) doesn’t exist, as per Levy’s thesis in ‘Hard Luck.’

I think that some of Levy’s claims are accurate and others need to be modified. Here’s what I endorse and what I dispute. I agree that counterfactual circumstances matter – at least, some counterfactual circumstances (specifically, those in which the agent could have intervened and succeeded with some kind of help) – but I disagree with the ‘intuition’ that Jillian is responsible for an omission in this particular case. The reason is that if someone, for bizarre reasons, thinks that she can save someone from drowning, but there is good objective reason to think that she can’t, the person doesn’t have a duty to intervene. Arguably, if we are not lifeguards, we don’t have a standing duty to save someone from drowning in any circumstance, because the risk of drowning ourselves in the attempt is too high, even if we are excellent swimmers by ordinary (non-lifeguard) standards. So there’s no reason to think that Jillian ought to act on her belief that she can help. Normal people don’t have a standing obligation to risk their lives to save drowning victims. What we ought to do is alert the lifeguard, call emergency medical services, or look for an indirect means of intervening that doesn’t risk our own safety. So if we don’t have an obligation to risk our lives to save drowning victims, then people with water-related anxiety disorders certainly don’t, even if they have bizarre beliefs about their capacities and their moral duties.

I raise this point in part because this is the response I usually get when I present this case to other people. We could adjust the scenario by stipulating that the child is drowning in a wading pool. Normal adults have a duty to save children from drowning in wading pools, surely. But Jillian is not a psychologically normal adult, so the worry recurs. Just as it is unreasonable for a normal adult to think that she has a duty to save a drowning victim in open water, it may be unreasonable for someone with a water phobia that could endanger her safety to think that she has a duty to save a wading-pool victim. The worry is that the sacrifice required of the person is (objectively) too great to ground the existence of a duty, even if the person thinks that she has a duty for bizarre, subjective reasons.

But now consider a case in which it’s more obvious that a moral duty obtains. I’ll try to construct it to resemble the Jillian scenario, i.e., to include a protagonist who cannot achieve some end using her own (internal) capacities, but could succeed with external support. Suppose that Jack can’t contain his anger towards women, and regularly berates his female employees, family members, servers at restaurants, and so on. Jack (falsely) believes that he can control his misogynistic anger using willpower alone, and resolves to do so – but he has much less willpower than he imagines. Unbeknownst to Jack, a local therapist specialises in anger-management problems, and would have been able to help him if he had sought out her help. But Jack never does. So Jack tries to exercise his willpower and fails, and continues to demean women on a daily basis.

If we home in on Jack’s internal capacity for control, we have to excuse Jack, since he lacked the internal resources to suppress his misogynistic urges. But if we consider the counterfactual scenario (in which Jack visits the therapist) as relevant to Jack’s responsibility status, then we can hold him responsible. It seems very reasonable to say that Jack had the capacity, in a very basic sense, to look for resources to control his misogynistic anger. But Jack does not have unassisted actual-sequence control over his anger.

I think that this scenario presents a plausible argument for the idea that, although Jack lacks actual-sequence control over his misogynistic anger, he has counterfactual-sequence control over it, and this counterfactual control is relevant to Jack’s responsibility status. Jack would not be risking his life by seeking out therapy. In fact, he wouldn’t be sacrificing anything of moral value. And if psychiatric counselling is free, as it is in Canada and some of the more socialist countries, he isn’t even sacrificing anything of prudential value. By not seeking help, he’s not exercising his (basic human) capacities in the way he should. And because he’s not exercising these capacities responsibly, he has a character flaw.

Now, assuming that all of this is plausible, there’s an argument to be made that the proposed counterfactual-control model initiates a slippery slope into responsibility nihilism. Once we allow that counterfactuals are relevant, we have to admit that determinism precludes agency. But that’s only true if we hold all counterfactual possibilities – or at least, very many counterfactual possibilities – to be morally relevant. Yes, in a deterministic universe no one is incompatibilist-responsible for anything. But why go back to the metaphysical basis of reality – the metaphysical underpinnings of human behaviour? Why engage in ‘panicky metaphysics’ at all, when we can stop at the moral ecology? All I’m suggesting in constructing the above example is that some counterfactual possibilities matter – the ones that the agent could have availed himself of relatively easily, without sacrificing anything of moral (and in this case, even prudential) value.

Recall that in my last post I noted that control theorists typically espouse an implicit ‘reasonableness’ constraint in their conception of ‘indirect responsibility’: they hold an agent responsible for omissions for which it is reasonable to complain against the agent. As Levy says when commenting on implicit bias, an agent might “be fully morally responsible for [a] behaviour [resulting from implicit bias], because it was reasonable to expect her to try to change her implicit attitudes prior to t” (2014). This implies that only counterfactual circumstances that were reasonably available to the agent are morally relevant – scenarios in which the sacrifice demanded of the agent is not overly stringent. I have argued that in the Jillian scenario, the moral demand is too high, but in the Jack case, it’s not. This gives us a foundation on which to say that someone can be responsible for failing to exercise counterfactual control if doing so was reasonable. Of course, ‘reasonableness’ is a vague concept, and I won’t precisify it here, but it’s a concept that makes intuitive sense, and one that we regularly rely on without clarification, to constrain the scope of normative concepts. (Consider Scanlon’s account of moral principles as those that no suitably-motivated person could reasonably reject; we get the point without delving into the semantics).

Character theorists of Sher’s stripe similarly hold that reasonableness is critical to responsibility. A person is responsible for an omission just in case a reasonable person with relevantly similar capacities would have done better. So once again, we are to judge the agent’s responsibility status by what it would be reasonable to expect of her, in light of certain counterfactual possibilities – what the agent could have achieved under conditions C.

If this is right, then counterfactual circumstances do matter morally. But not all, or just any, counterfactual circumstances are relevant. Just those that were reasonably available to the person, at relatively low personal cost. This goes back to what I was saying about the moral ecology, and about tracing. Counterfactual scenarios are part of the moral ecology, external to the person’s material brain. So when evaluating a person’s responsibility status, we have to consider the person’s relevant brain states, and the reasonably available counterfactual circumstances supported by the agent’s moral ecology. We have to look at the person’s capacities and the person’s moral ecology and the potential interaction between those two variables, to see if they support the possibility of agency cultivation. And with regard to tracing, we need to trace responsibility to that possibility. We need to assess whether Jack, for example, had the (general) capacity to acquire the (specific) capacity to remediate or suppress his misogynistic anger, given the resources of his moral ecology.

This suggests the following revisions to control theory and character theory. We need to see control as more than actual-sequence control, to account for the possibility of indirect responsibility for omissions that were reasonably avoidable. Specifically, we need to include, as morally relevant, counterfactual circumstances in which the agent could have interacted with the moral ecology in such a way as to bring about a new capacity, or (more precisely) to leverage an existing basic (undifferentiated) capacity into a more specific (specialised) capacity. And it suggests that we ought to regard character as diffuse or locally extended, i.e., co-constituted with agency-supporting or agency-enhancing social supports. (This is not a revision to Sher’s view, in fact, but it emphasises that aspect of it). And finally, it suggests that when ‘tracing’ back to control, we need to trace beyond the agent’s actual sequence, to reasonably available aspects of the agent’s moral ecology, which would have enhanced the agent’s capacities if the agent had taken the right kind of initiative. Similarly with character theory: we need to trace character to relevant features of the local ecology, to determine if the agent is using those features to the best of her ability. If she is not, she may be culpably indifferent. (In this way, tracing applies to character theory as well, though its reach is more limited – we don’t need to trace as far back).

These considerations strike against any theory that is too narrow in its conception of responsible agency, particularly the actual-sequence control model. And they suggest that control theory and character theory are perhaps more similar than they may initially appear, in that control theory admits a greater scope for blame under the category of ‘indirect responsibility,’ properly understood. These considerations build on Levy’s objections to actual-sequence control and character internalism. But I recruit them to show that there is a broader scope for blame than we tend to think, and he does the opposite – he recruits them in support of responsibility nihilism. Our views are technically compatible, though, because he’s refuting a desert-based notion of responsibility – a relatively harsh form of desert-based responsibility, on which blame (1) is justified by reference to an agent’s actual sequence of deliberation or internal traits, and (2) entails fairly punitive sanctions. I also reject this notion of responsibility, because it combines a metaphysically tenuous conception of agency with dubious assumptions about the kind of thing blame is (punishment) and its proportionality (harshly punitive). I think that the same objections can be taken to support a modification of responsibility rather than a rejection of it.

Here’s one substantive alternative to the actual-sequence control model – the ‘limited counterfactual-sequence control model.’ People are responsible for (1) intentional infractions (like explicit bias), (2) failures to properly exercise control (e.g., manifestations of implicit bias that could have been avoided by suitable reflection), and (3) failures to enhance the capacity for control (e.g., failing to search for remediating measures available in the local moral ecology, when it would be reasonable to do this). And here’s a viable version of character theory: people are responsible for character defects just in case those defects could have been remediated with reasonable effort, using the resources of the local ecology. In case it’s not obvious, here’s why the local ecology matters for character theory. If Smith is a misogynist because he lives in 1950s middle America and doesn’t have access to good examples of egalitarian behaviour, while Jones is a misogynist in present-day New York just because he hates women, Jones has worse character than Smith: he’s not only a misogynist; he’s also indifferent to women’s interests despite ready access to egalitarian examples. That is, Jones exhibits a greater degree of indifference than Smith (see my 2015 paper and my 2013 paper for lengthier examples, and see also Fricker 2012). Some theorists assume that tracing doesn’t apply to character theory, but that’s false. We have to trace the causal source of a character defect, to see if the character defect is amplified by indifference to available reasons.

These are two viable versions of control theory and character theory that present plausible alternatives to responsibility nihilism. But there’s a third option that may seem to be a better fit with what I’ve said so far. It’s a consequentialist approach, along the lines of Vargas’ agency cultivation model (2013). On that view, we’re responsible to the extent that praise or blame is likely to enhance our agency (very crudely put). Here’s how this view works. Suppose that Smith is a misogynist who berates all the women in his life, but Smith is still a moral agent (not a full-blown, unresponsive psychopath or something of that nature; he has some vestige of the capacity to respond to reasons). Blame might function to enable or enhance Smith’s capacity to respect women (and it might function that way for all misogynists – let’s suppose that this is its general effect). So Smith is blameworthy. This approach, because it is forward-looking, might seem to eliminate the need for tracing – a desideratum, since tracing is hard. But I don’t think it does. First, we need to know if Smith is a misogynist, as opposed to, say, a foreigner who doesn’t know he’s using a misogynistic slur, or a brainwashing victim, or someone whose family is being held hostage on condition that he demean his female acquaintances, etc. I artificially stipulated that Smith is a misogynist above, but in real life, we need to discover things about a person’s circumstances to make correct moral appraisals, so we need to get to know people and inquire into their lives. Second, we might want to consider whether Smith had opportunities to develop a more egalitarian sensibility – control-based considerations. The point is, even on a forward-looking account, we need to know things about an agent’s capacities and environment, and so we need to do some non-negligible amount of tracing. We can’t just guess what someone is like on the basis of one time-slice. People are notorious for jumping to conclusions, but to be responsible in our responsibility attributions, we need to be committed to giving fair consideration to relevant data.

The upshot is that there are convincing arguments against narrow versions of control theory and character theory, but they don’t force us down a slippery slope to responsibility nihilism. There are viable (extended) versions of control theory and character theory that we can adopt; and consequentialism is also an option. But we should not, I think, assume that we can do away with tracing on any of these alternatives. If anything, once we grant that the moral ecology is relevant to responsibility – more relevant than we might have previously thought – we have to extend the scope of tracing beyond the agent’s material brain. But I think that we implicitly do this anyway (in our ordinary judgments of praise and blame), which is why we consider people from past times and foreign cultures to be less blameworthy for certain infractions. Yet some accounts of responsibility don’t adequately explain this kind of contextual assessment – they don’t sufficiently appreciate the significance of context. Circumstances matter because they co-constitute, enable, and support – or conversely, impair – the capacities that underwrite responsible agency.


Here’s how all of this relates back to implicit bias. As I said in my first post on implicit bias, it’s not at all clear how wide the scope of control has to be for responsibility to obtain. If we think that reasonably-available counterfactual circumstances are morally relevant, we can hold people responsible for exhibiting implicit biases if they failed to use remediating measures that were locally available, provided that this was a reasonable expectation. People with special duties – people on hiring committees, for example – have stronger reasons to use these measures, and thus are more susceptible to blame for relevant omissions. And this is true even if they lack responsiveness to such measures now, provided that they could have acquired suitable patterned sensitivity at some time in the past, by a reasonable effort. Ordinary people can be blamed if we think that it was within their ability to avoid manifesting implicit bias through some reasonable act of will. The case for blame is stronger if we admit counterfactual circumstances into the equation, because then we have grounds for saying that someone is (indirectly) responsible for an omission, just in case a counterfactual enabling circumstance was within reach. This brings the view somewhat closer to modern character theory in its scope for attributing blame.


Implicit bias: The limits of control/character



I’m going to write an entry pursuant to my previous post on responsibility and implicit bias. (Find it here). There, I noted that there is no agreed-upon definition of ‘control’ (see Fischer 2012, Levy 2014). And there’s also no agreed-upon definition of character (see, e.g., Angela Smith 2005, Holly Smith 2014, David Shoemaker 2014, and George Sher 2010 for different definitions of character). And this is likely why there are debates about the scope of moral responsibility within each camp, as well as between camps. Some control theorists think that we can be responsible for the manifestation of implicit bias and others think that we can’t. And ditto for character theorists. It is debatable whether implicit biases, which are neither explicitly valued by the agent nor under the agent’s direct control, are blameworthy.

Let’s start with control, which is, I think, the easier metric. How ‘narrow’ or ‘internal’ is control? That is, is control over motive M merely a matter of whether we have conscious access to M? Certainly not, since we lack access to a great many mental states that are patently blameworthy. We take it that Smith can be responsible for killing a pedestrian when driving under the influence, because Smith was in a sound state of mind when he started drinking. This is a ‘benighted omission,’ in which Smith lacks control at t only because he impaired his reasons-responsive capacity at t-1. In this case, Smith bears ‘indirect responsibility’ for hitting the pedestrian because he had ‘indirect’ (prior time) control. The only thing that would excuse Smith would be if he were drunk only because he had been subjected to clandestine brainwashing by an evil neuroscientist who induced him to drink, or something of that nature – or so people tend to assume. (There are innumerable examples of ‘Frankfurt-type’ brainwashing scenarios, including in Fischer 2006, 2012).

The point is that direct conscious control at the time of action is not needed. Virtually every theorist agrees on this point. Only some kind of ‘indirect control’ is necessary. But it is still an open question how broad the scope of indirect control should be. According to Levy (2014), we aren’t directly responsible for the effects of implicit bias because we have at most ‘patchy’ control over these states, but we might be indirectly responsible for their effects. He doesn’t say much about indirect responsibility, however, which is a shame because this is arguably the more interesting question. If Smith selects a White male job candidate due to implicit bias, did he have indirect control over his choice? As Holroyd points out, we have indirect control over implicit biases through the use of strategies such as implementation intentions, counterstereotypical exposure, explicit belief formation, and so on. So in principle, Smith could be held responsible on this basis. But much of this debate hangs on how reasonable it is to expect someone to preemptively use these strategies. As Levy states,

“an agent may lack direct responsibility for an action caused by their implicit attitudes, because given what her implicit attitudes were at t, it would not be reasonable to expect her to control her behavior, or to recognize its moral significance, or what have you. But the agent might nevertheless be fully morally responsible for the behavior, because it was reasonable to expect her to try to change her implicit attitudes prior to t” (2014).

So indirect responsibility ends up being tied to reasonable expectations. Is it reasonable to expect someone to use implementation intentions, etc., prior to taking a certain action? It’s natural in our current climate to expect employers to take steps to remediate implicit bias, but is it reasonable to expect this of ordinary people? Suppose that Jones, an ordinary person with no special responsibilities, makes a sexist joke, thinking that he’s being witty. He has no idea that he’s being sexist and is explicitly committed to egalitarianism. Is he responsible because he could have used remediating measures against implicit bias in the past?

If so, the scope of direct responsibility turns out to be very slight, while the scope of indirect responsibility turns out to be enormous. In fact, the control theory veritably collapses into character theory! Character theorists (for the most part) hold that we can be responsible for unconscious states such as implicit biases, provided that those mental states are caused by certain parts of our motivational set (as opposed to extraneous factors). So a character theorist might hold that someone is not responsible for acting under hypnosis (because this is an ‘alien’ motivation, external to the agent), but responsible for deeply entrenched implicit biases that reliably cause patterns of behaviour. Well, that’s what control theorists would hold someone indirectly responsible for, too.

Consider for a moment Sher’s version of character theory (2010). He says that we’re responsible for an omission if it’s one that a reasonable person in our position would not have committed, provided that the omission stems from our own motivational psychology. So, we’re responsible for omissions for which a reasonable person could complain against us. That sounds a lot like the control view as I have just described it. On the latter, we’re responsible for omissions if we could have acquired the capacity to avoid them by taking reasonable measures – by doing what a reasonable person would do in the circumstances. Construed this way, the control view and the character view turn out to be very similar.

Moving forward, let’s assume that a person can be responsible for A just in case it would have been reasonable to expect the person to acquire the capacity to control A at some point in the past. This somewhat circumscribes the scope of control, forbidding acquisition measures like killing someone, or severely injuring oneself, to enhance control. What’s ‘reasonable’ is still very debatable, but I won’t settle it here – let’s just grant that it rules out killing, death, and severe injury. Still, this doesn’t circumscribe control very much, because we still need to determine whether someone had the capacity within reason to acquire a better capacity for control, and doing this still requires careful analysis.

These remarks are somewhat speculative, because control theorists haven’t said much about indirect control. Fischer and Levy write a lot about direct responsibility, but not much about indirect responsibility, except to note its existence. Yet direct responsibility, I think, turns out to be pretty irrelevant. Does it matter if we’re directly responsible for something, versus being indirectly responsible? If someone acts on implicit bias only because she failed to critically reflect on this mental state, is she more blameworthy than someone who fails to use remediating measures? Not obviously. Levy says that ‘indirect responsibility can underwrite a great deal of praise and blame; it is not necessarily a lesser kind of responsibility.’ We don’t have an argument to the effect that indirect responsibility is a lesser kind of responsibility just because it is a more remote kind. Maybe direct responsibility matters because it tells us something about a person’s capacities: someone who is unaware of her implicit biases might be less capable of suppressing those biases. But even this seems irrelevant, since the critical question is not ‘is S reasons-responsive now?’ but ‘was S capable of becoming reasons-responsive at some suitable prior time (to quote Vargas 2005)?’ We must trace reasons-responsiveness back – maybe way back – to prior times when S had the opportunity to improve herself. If indirect responsibility is a thing, the buck no longer stops at the agent’s direct capacity for control; we need to see if she was ‘indirectly,’ at some prior time, capable of acquiring direct control.

For this reason, I don’t really understand why we need to be concerned with consciousness or direct responsibility at all. Correct me if I’m wrong here.

Indirect responsibility is epistemically slippery. Here’s why: it requires that we ‘trace responsibility’ back to a ‘suitable prior time’ at which the agent could have foreseen the long-range consequences of her choices. This is what Vargas calls ‘the trouble with tracing’ (2005). Fischer subsequently responds, basically, that tracing isn’t really a problem, because we can generally foresee in ‘coarse-grained’ terms what the consequences of our choices will be (2012). But I doubt this. I think we’re pretty terrible at predicting the future, and we’re terrible because we’re susceptible to all kinds of cognitive biases, including but not limited to implicit bias. (See also: self-serving bias, confirmation bias, confabulation, rationalisation, and the like). So, we’re generally bad at forecasting outcomes.

This is one dimension of the epistemological problem, but I’ve just adverted to another one: we not only need to determine whether someone satisfies the forecasting condition directly, but we also need to determine whether the person could have acquired a better forecasting capacity at some ‘suitable prior time.’ So we need to trace back beyond the person’s immediate capacity for forecasting (and beyond the person’s immediate capacity for control more generally), to see if the person had a prior opportunity to refine that capacity (or those capacities), in light of her circumstances. (Our circumstances are the resources that we use to acquire and hone our capacities, so they’re relevant here). Suppose that Jeff, the middle-aged middle manager, is a jerk, and that Vargas is right that Jeff could not foresee in his halcyon youth that he would one day become a jerk. This doesn’t settle the question of whether Jeff is all-things-considered responsible, because we need to trace back further still, to see if Jeff could have acquired the capacity to foresee that he would become a jerk. Because if he could have, then he may be indirectly responsible for failing to acquire or hone that capacity, and thus indirectly-indirectly responsible for becoming a jerk. He’s responsible at n degrees of remoteness for his jerkiness.

If this is right, then control entails even more tracing than Vargas’ 2005 paper suggests. What I mean is, we can’t just home in on the agent’s brain at time t-n (some time in the agent’s personal history), and assess whether the agent had a particular capacity at that time. We need to abstract away from the agent’s capacity, to see if the agent could have enhanced, modified, or refined that capacity, using the resources of his environment. The capacity for control is not just a matter of an agent’s internal (physical, psychological, cognitive) capacities, but the relationship between those capacities and the world. So we can’t just trace back and look at the agent’s endogenous capacities at t-n. We have to look at those capacities, and the local environment, and the relationship between those two things.

Maybe Vargas intends for tracing to mean exactly this. In his later book (2013), he places a lot of emphasis on ‘the moral ecology’ – the conditions that support moral agency. This view is a corollary of his agency-cultivation model. But I think that it’s informative to frame this perspective as a response to control theory, since it implies that, while control is relevant to responsibility, control is not an endogenous capacity in the agent’s mind-brain, i.e., something that we can evaluate, in principle, with a brain scanner. Control is a relational property, holding between an agent and the environment.

Similarly with character: if character is a reliably manifested property of an agent, then it’s a feature of the interaction between the agent as an individual and the environment. Sher’s view captures this insight: he sees character as a set of physical and psychological states that interact to produce an agent’s “characteristic patterns of thought and action” (2010: 124), and he describes this account as ‘suitably interpersonal’ (2010: 125). But insofar as a person is responsible for only ‘characteristic patterns,’ we need to identify the cause of a person’s action, to see if it is part of his character or not. If Jeff is curt with an employee, is it because he’s a jerk or just because he’s sick? If the latter, this isn’t part of his character. So character theory also requires tracing, though to a much more limited degree.

I’m going to continue this train of thought in the next post.

Non-cognitivism as philosophical method?


*By non-cognitivism I just mean a view on which emotions play a role in reasoning.

This post continues from the last one, which was on cognitivism and moral reasoning. I’m going to make some personal statements and some controversial statements, which are meant to be friendly and constructive suggestions, and I hope they’re taken in the right spirit.


It might be instructive to reflect on where the philosophical ideal of un-emotional (purely cognitive) reasoning comes from. In philosophy, we write dispassionately. We don’t include emotional arguments, or even personal anecdotes for the most part. The problem with personal stories is not just that they’re anecdotal, but that they’re emotionally evocative: they persuade without giving ‘reasons.’ They elicit our unreflective sympathy and agreement. If someone writes about her harrowing experience as a rape victim to frame an argument about the epistemic value of intuitions, as Karyn L. Freedman did (2006), we might worry that the anecdote does a lot of the argumentative lifting. But why shouldn’t it? We assume the validity of cognitivism when we assume that emotionally resonant narratives can’t confer epistemic warrant – the very position Freedman is disputing! Maybe the cognitivist assumption is right, but for the most part, as a community, we assume it without argument, and that’s a dogmatic position. If we’re all going to agree that this is the right way to do philosophy, we’d better have a long discussion about it. And I’m not aware that many discussions about the authority of cognitivism are taking place; if anything, they’re buried in the history of philosophical discourse.

I find personal narratives to be some of the most compelling arguments in philosophical writing, although I don’t come across them very often, and when I do, they’re usually in ‘newer’ branches of philosophy: transgender theory, mad studies (basically, philosophy of psychiatry from the service user’s perspective), critical disability studies, and sometimes feminist philosophy. Anecdotes arise more often in these fields because the fields themselves represent the perspectives of underrepresented groups who have relevant experiences – experiences that challenge the assumptions of the majority and make the field more objective. (See Sandra Harding 2015 for an account of objectivity as diversity – I can’t recommend it enough.)

I write about my own experiences all the time, even in this blog. But I write about them under the guise of philosophical abstraction and objectivity. I don’t say, ‘this is my experience, which I’m refracting through the lens of philosophical convention to make it sound more professional.’ But that’s what I do. I’m constantly writing about myself while presenting it as an argument about some philosophical construct, from a disembodied perspective – ‘the view from nowhere’ – as if it had nothing to do with my personal experiences. That’s what Freedman could have done – she could have written about epistemic warrant without talking about being raped. But she broke with convention, and maybe that’s what more of us should do. Maybe it’s better and more intellectually honest to ‘lay your cards on the table’ rather than pretend there aren’t any cards.

I think that responsibility theory is a very fruitful space for marginalized groups to write about their experiences, because if you think about it, responsibility is about those groups: it’s about people who had abusive and neglectful childhoods, people with psychological disorders, people who have been oppressed by sexism and racism and homophobia and transphobia. These are the people we’re writing about when we write about ‘unfortunate formative circumstances’ and ‘psychologically abnormal individuals’ and addictions and personal identity and psychological congruence and coercion and duress, etc., etc., etc. And sometimes when we write about those groups we’re really writing about ourselves, only we’re not saying so.

What better opportunity for marginalized groups to make a dent in entrenched philosophical issues?

But sometimes I worry that by presenting philosophical problems as abstractions – as problems about what to do with people and how to handle them, about what specific cognitive states are implicated in ideal responsible agency, etc. – we alienate the people who are in the best position to contribute to the discourse: the people who can give us the objectivity-as-diversity that we so desperately need in this field. We risk alienating people, I think, when we represent philosophical problems as abstract puzzles rather than as questions that deeply affect us on a personal level and connect with our lives.

I want to clarify in closing that I’m not saying that philosophy as it’s currently done is rubbish, or anything close to that. I just want to suggest that there might be other ways of doing philosophy – ways that seem less ‘philosophical’ only because of a cognitivist bias.