Hate as a type of prejudice: Luke Roelofs.

Hi. I’m going to take this opportunity to promote Luke Roelofs’ philosophy blog:

What kind of thing is ‘hate’?

His latest post analyzes ‘hate’ as a discriminatory attitude. He describes hate – specifically, the type involved in misogyny, homophobia, racism, and so on – as a holistic pattern of de-valuation that targets a group of people.

He says that hate on this description can encompass both hate as a feature of individual psychology and hate as a set of institutions and practices that function to devalue historically disenfranchised groups (or something to this effect). He also says that this description is compatible with seeing hate as both a psychological state and a matter of consequences. Finally, this definition explains why ‘reverse racism’ isn’t real: because racism as a feature of individual psychology can only exist against a background of systemic racism.

This is a compelling proposal, though not immune from criticism. One might argue that there can be real instances of inconsequential hate – for example, someone making a racist remark that no one hears. Still, the proposal can accommodate this by re-defining racism in terms of an action’s tendency to cause harmful consequences to the targeted group under certain conditions. One might also argue that a person who doesn’t embody the psychological features of hate can still commit a hateful act, defined as such by its consequences. Relatedly, one might argue that the psychological definition of hate is redundant: once you’ve defined hate as a feature of institutions, any action that contributes to discriminatory institutional arrangements is hateful as such, regardless of the agent’s psychological profile.

That said, I think that the psychological-systemic definition is useful because it illustrates how systems of relations are embodied in intentional agents, the metaphysical bedrock of those institutions. This dual description also covers cases of white ignorance and indifference, in which an agent’s moral character is defined by what it lacks – suitable responsiveness to morally salient contours of the environment. If we see moral psychology as a set of patterned responses (both conscious and unconscious) to morally-salient conditions, then hateful people lack patterned sensitivity to conditions of inequality and injustice. Hate, in other words, is a disposition to respond insensitively to morally significant social cues – a flaw in the structure of the agent’s moral psychology. And this attitude contributes to, and partly constitutes, a broader system of social relations. As such, a person can be hateful (racist, misogynist, etc.) without knowing it, and can be hateful without causing harm – e.g., if he is stranded on a desert island – though his hateful disposition would be harmful under ordinary (i.e. unjust) social conditions.

This, at least, fits with some influential accounts of moral responsibility and its lack (e.g., Fischer 2012, Sher 2010), of ignorance and insensitivity (Mills 2007, Medina 2012, Fricker 2007), and of implicit bias (e.g., Levy 2016).

On this view, ‘reverse racism’ is a myth because one cannot be insensitive to systemic discrimination against white people, since this kind of discrimination doesn’t exist – ‘reverse racism,’ that is, only makes sense on a grossly distorted social ontology. However, one can be reasonably wary of white ignorance and of patterned indifference to the plight of Black people; this wariness is a rational attitude, not an instance of discrimination. This is why racism and ‘reverse racism’ are not equivalent – indeed, reverse racism is impossible.

Responsibility, Epistemic Confidence, and Trust


In my last post, I argued that severe deficits of epistemic confidence can undermine responsible agency by eroding a person’s ability to form resolutions and have a deep self. In this post, I want to discuss a related notion: trust. In writing about epistemic confidence, Miranda Fricker (2007) says that people who conspicuously lack epistemic confidence are perceived as less competent and less trustworthy. Being seen as less trustworthy undermines a person’s epistemic confidence, which in turn undermines the person’s agency or competency. Trust, epistemic confidence, and agency are thus related in a positive feedback loop. This is illustrated by the classic expectancy-effect experiment, in which certain students were randomly designated as academically gifted, and the teacher’s trust in the students’ academic competency actually improved their competency (as measured by test scores) over the course of the year (Rosenthal & Jacobson 1996, cited by Fricker 2007: 56).

In this post, I want to look more closely at trust and its relation to responsible agency.

Victoria McGeer also writes about trust. She argues that ‘substantial trust’—trust that goes beyond the evidence and abjures strategic judgment—enhances the trustee’s responsible agency (2008).[1] Substantial trust ‘goes beyond the evidence’ in the sense that it embodies a belief in the trustee’s moral worth that isn’t supported by the balance of evidence; and it ‘abjures strategic judgment’ in that it entails a refusal to evaluate the trustee’s worth on the basis of evidence. That is, trustors don’t meticulously scrutinize the evidence regarding their friend’s moral qualities; they take a leap of faith in favour of the friend’s potential to be good. To illustrate this epistemic state: if my friend is accused of bribery, I exhibit substantial trust if I’m biased in favour of her innocence, in spite of any evidence to the contrary. When we invest substantial trust in someone, we refuse to judge her on evidential grounds.

A central element of substantial trust on McGeer’s view is hope: in trusting a friend, we hope the person will live up to our optimistic expectations of her moral worth, but we don’t know if she will. Yet substantial trust can’t (or shouldn’t) be delusory: if the evidence confirms our friend’s guilt beyond doubt, we shouldn’t trust in the person’s innocence; but it would still be reasonable in this case to trust in our friend’s capacity to improve. In this way, substantial trust is relatively resistant to disappointment: even if a friend fails several times, we can continue to trust in the person’s basic capacity to live up to our hopes. We trust that the person can gain new capacities or build on existing capacities to embody our ideal. Only in the face of repeated disappointment does substantial trust become irrational. ‘Irrational’ trust on McGeer’s view is pointless; it doesn’t reliably contribute to the trustee’s agency.

Substantial trust enhances the trustee’s agency because it “has a galvanizing effect on how trustees see themselves, as trustors avowedly do, in the fullest of their potential” (McGeer 2008: 252). That is, our trust inspires confidence in the trustee, who begins to believe in herself.

This picture of trust as agency-enhancing interests me for three reasons, which I’ll elaborate briefly here.

  1. Epistemic confidence: the mediating variable between trust and responsible agency

McGeer’s account helps to explain how epistemic confidence is related to responsible agency: substantial trust (when assimilated to Fricker’s moral epistemology) inspires epistemic confidence, which (in the right degree) facilitates responsible agency. The right degree, as per my last post, is midway between epistemic insecurity and epistemic arrogance; it’s neither too much nor too little self-regard. Epistemic confidence, then, is the mediating variable between trust and responsible agency. McGeer doesn’t explicitly mention ‘epistemic confidence,’ but she’s interested in elucidating the psychological mechanism whereby trust enhances responsibility. She rejects Pettit’s theory (1995) that trust incites a desire for approval, as this isn’t a ‘morally decent’ motive befitting the trust relationship (2008: 252). Instead, McGeer proposes that trust ‘galvanizes’ the trustee to see herself in a more positive light—through the trustor’s eyes. The resultant state—let’s call it positive self-regard—motivates the trustee to aspire to a higher standard of conduct.

Positive self-regard can be seen as a weak form of epistemic confidence—an aspirational kind. Whereas epistemic confidence is a positive belief in one’s merit or abilities, positive self-regard (in McGeer’s sense) appears to be faith in one’s (as yet unproven) merits and abilities. But self-esteem and epistemic confidence are of a kind: one is just firmer than the other. Both states, then, are intermediary between two epistemic defects: epistemic insecurity and epistemic arrogance. These epistemic virtues—self-esteem and epistemic confidence—are positively correlated with responsible agency, in the following sense: they enhance the trustee’s confidence in herself, and thus her ability to have firm beliefs and values (or convictions) about herself, and to act on those states. Having convictions prevents people from being ‘wantons,’ akratics, and irresolute people—paradigms of irresponsibility or weak responsibility. Responsibility is enhanced by belief in oneself, and this belief tends to confer self-control, willpower, and resilience—competencies implicated in or constitutive of fully responsible agency.

These related virtues—positive self-regard and epistemic confidence—might serve slightly different purposes; specifically, self-esteem might be particularly adaptive in adverse circumstances where a positive outcome is unlikely (but possible), whereas epistemic confidence might be more fitting when success is reasonably probable; but both states facilitate responsibility. Trust is fitting, therefore, when it’s likely to enhance responsibility by either of these means. In other words, we’re rational to trust someone when our trusting attitude reliably confers agency-conducive epistemic virtues. This allows us to say (consistent with McGeer’s view) that trust is a ‘rational’ attitude even if it goes against the evidence, insofar as it tends to foster agency in the trustee. Trusting in someone ‘irrationally’ would mean trusting in someone who can’t reasonably be expected to live up to our ideal; in that case, we’re merely wishing (not trusting) that the person could be better. Trust is also irrational if the trustee is overconfident, since in that case, our trust is either wasted or positively harmful: it’s likely to increase the person’s epistemic narcissism.

On this (basically functionalist) account of trust, epistemic confidence is counterfactually dependent on trust in the following sense: it wouldn’t exist without some initial investment of trust, but it can become increasingly self-sustaining and self-perpetuating over time. That is, people who never receive trust probably won’t develop epistemic confidence, but people who do receive trust may become increasingly self-trusting and self-sufficient. This claim is based in part on facts about ordinary human psychology: trust tends to confer epistemic confidence in psychologically normal humans, which enhances responsibility as a measure of resoluteness, willpower, and resilience. This psychological picture is suggested (though not explicitly articulated) by McGeer and Fricker, who cite developmental and child education studies showing that trust from an adult inspires confidence and competency in children. (This is sometimes called the ‘Pygmalion effect’). Fricker cites the famous teacher expectation study (Rosenthal & Jacobson 1996), and McGeer cites research in developmental psychology showing that children who receive support from parents—‘parental scaffolding,’ as she calls it (2008: 249)—develop stronger powers of agency than deprived and neglected children. This research suggests that agency typically, in ordinary humans, depends on positive self-regard, which depends on a non-trivial investment of trust, especially during a person’s formative years. Subsequent trusting relationships, however, can compensate for deficits in childhood, as other research indicates—for example, research on therapy showing how positive therapeutic relationships can remediate symptoms of childhood trauma (Pearlman & Saakvitne 1995). This is how I suggest we perceive the trust-epistemic confidence relationship: epistemic confidence is counterfactually dependent on a non-trivial investment of trust in psychologically normal people, but can eventually become relatively (though not completely) self-sustaining; epistemic virtues inculcated by trust typically confer strong(er) agency.
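To make this counterfactual-dependence claim vivid, here is a minimal toy simulation in Python. The numeric ‘confidence’ and ‘trust’ scales, the update rules, and the coefficients are all my own invented assumptions—nothing that McGeer or Fricker quantify—but the sketch captures the intended dynamic: confidence requires an initial investment of trust, and then becomes largely (though not completely) self-sustaining.

```python
# Toy model of the trust -> epistemic confidence feedback loop.
# All scales, update rules, and coefficients are illustrative assumptions.

def confidence_trajectory(initial_trust: float, steps: int = 10) -> list[float]:
    """Simulate an agent's epistemic confidence over time.

    Received trust confers confidence; established confidence partly
    sustains itself (self-trust) and, in turn, invites further trust
    (the 'Pygmalion effect' loop). Without any initial investment of
    trust, confidence never gets off the ground.
    """
    confidence, trust = 0.0, initial_trust
    trajectory = []
    for _ in range(steps):
        # Trust confers confidence; prior confidence mostly persists
        # (the 0.9 factor models 'relatively, not completely, self-sustaining').
        confidence = min(1.0, 0.9 * confidence + 0.3 * trust)
        # Growing competency elicits further trust from others.
        trust = min(1.0, trust + 0.1 * confidence)
        trajectory.append(round(confidence, 2))
    return trajectory

print(confidence_trajectory(0.5))  # trusted agent: confidence climbs and stabilises
print(confidence_trajectory(0.0))  # never-trusted agent: confidence stays at zero
```

The persistence factor is what makes the dependence counterfactual rather than constant: once confidence is established, withdrawing trust diminishes it only gradually, whereas an agent who never received any trust has nothing to sustain.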

This discussion suggests a particular taxonomy of epistemic states related to trust and agency. Specifically, I’ve said that trust catalyses three closely-related epistemic virtues: positive self-regard, epistemic confidence, and epistemic courage. These states are increasingly robust epistemic virtues, which support our ability to form resolutions, exercise willpower, and act resiliently. At either end of this spectrum is an epistemic defect: on one side, epistemic insecurity (a paucity of epistemic confidence), and on the other side, epistemic arrogance (a superabundance of epistemic confidence). These defects undermine agency for different reasons: epistemic insecurity undermines our ability to form and act on convictions, and epistemic arrogance undermines our ability to adequately consider evidence for and against our beliefs, inciting us to favour our prior assumptions come what may. (That is, it spurs self-serving bias and confirmation bias). These vices thus undermine our ability to have a deep self and to exercise moderate control over our deep self, respectively.

This is one possible epistemic framework for responsible agency—the one that I’ve settled on. I think that more work can be done here, viz., at the intersection of responsibility and epistemology (especially social/feminist epistemology, which is relational in nature). We can call this intersection ‘the epistemology of moral responsibility.’ This is a promising area for future research, I think, and it may be of interest to neuroscientifically-inclined philosophers, inasmuch as these epistemic states are amenable to neuroscientific description.

  2. Responsibility as ‘external’ or ‘distributed’

I’m also interested in McGeer’s account because (I think) it poses a challenge to classic theories of responsible agency that are relatively ‘atomistic’ (Vargas 2013) or ‘internalist’ (Hurley 2011). Classic accounts include Frankfurt’s (1971), on which responsibility is a matter of being able to form higher-order volitions consistent with one’s lower-order desires, and Fischer’s (2006, 2011), on which responsibility is a matter of being moderately responsive to reasons. These are different types of theory (one is character-based and the other is control-based, as typically construed), but they both emphasize the internal properties of agents to a greater extent than McGeer’s theory of trust, and so they can be regarded as comparatively ‘internalistic.’ (I’ve adopted aspects of these theories here—the idea that responsible agency is a function of deep-selfhood and reasons-responsiveness—but I’m going to suggest that these capacities are more ‘extended’ than classic accounts imply).

Internalism should be seen as a matter of degree: most theories of responsibility treat some background factors as responsibility-relevant—for example, neuroscientific intervention (Mele 1995). But classic theorists usually think that exogenous factors are only relevant insofar as they intervene on the ‘actual sequence’ of the agent’s deliberation. For example, Fischer holds that clandestine brainwashing impairs responsibility because it operates on the agent’s actual motivational profile, dramatically altering it; but a ‘counterfactual device’ that would have intervened had the agent deliberated differently is ‘bracketed’ as irrelevant (for more on this, see Levy 2008). Frankfurt, too, sees these counterfactual conditions as irrelevant.

McGeer’s theory is comparatively ‘externalistic’ in that it (implicitly, at least) construes counterfactual interveners as relevant to responsibility (qua trust-fittingness). We can’t, on her view, ‘bracket’ these counterfactual conditions when considering whether someone is trustworthy. This is because when we substantially trust someone, we (implicitly) judge the person by what she could be in a nearby possible world—one in which she’s better than she is. This is implied by the hopeful optimism intrinsic to substantial trust—we don’t see the trustee as she is (at least, in paradigm cases), but rather as she would be if she succeeded in translating our trust into ideal self-regard. Moreover, when someone fails to live up to our optimistic expectations, we don’t immediately withdraw our trust, since substantial trust is inherently resilient. Trust, then, doesn’t always track a person’s real-world capacity for control or real-world quality of will; it sometimes tracks the person’s potential to improve, not based on evidence but on hopeful optimism. Trust, on this view, is a form of responsibility (a reactive attitude) that isn’t constrained by considerations about a person’s real-world or actual-sequence capacities at the time of action—when the trustee did something good or bad. It considers the person as she is in a nearby possible world or as she may become in the future.

This sets McGeer’s account apart from classic ‘actual-world’ or ‘actual sequence’ theories, because substantial trust treats counterfactual possibilities—in which the agent has a different kind of self-regard—as morally relevant. The trust relationship itself can be seen as a ‘counterfactual enabler’ in Levy’s terms (2008), in that it enables the trustee to gain a capacity, if the person succeeds in internalizing the proffered trust. But these transformative effects aren’t countenanced as legitimate considerations on classic views of responsibility. Also importantly, the trust relationship is distributed between two people, not intrinsic to the trustee; if it’s withdrawn at a critical stage of development, it undermines the cultivation of positive self-regard and agency. This is another ‘externalist’ aspect to trust: it implicates two or more people’s agencies. So trust is ‘externalistic’ in at least these two aspects: it depends upon counterfactual scenarios and it implicates two agents.

  3. Responsibility as care-based (non-retributive) and forward-looking

Substantial trust also challenges two other familiar approaches to responsibility: the retributive view and the backward-looking view.

Retributivism is, in very simple terms, the view that those who commit a wrongful action deserve punitive attitudes (blame, disapprobation, resentment) and those who perform an excellent action deserve rewards (praise, approbation). (I won’t consider more complex versions of retributivism: this one will be my only target). This is a very natural way of thinking about the reactive attitudes, and it seems to be Strawson’s understanding. He implies that those who fail to conform to reasonable social expectations deserve punitive attitudes, unless there’s an excusing or exempting condition (e.g., hypnosis, severe psychosis).

Substantial trust challenges this neat binary by holding that a person who falls short of our aspirational norms still ‘deserves’ trust, if trust is likely to instil positive self-regard across a reasonable time scale. That is, continuance of trust is fitting when someone makes a “one-off” mistake, as substantial trust is an “on-going activity” that’s resilient in the face of moderate set-backs (McGeer 2008: 247). Hence, we can’t simply say that someone who surpasses our expectations thereby warrants praise and someone who breaches our trust thereby warrants blame, as per the standard desert-based picture. This doesn’t capture the essence of trust. Rather, we withdraw or modify our trusting disposition only when someone repeatedly or catastrophically disappoints us, rendering trust pointless and irrational. Since substantial trust is aspirational at its core, substandard conduct on the trustee’s part doesn’t compel us to automatically withdraw our trust and assume a retributive stance: we’re licensed to suspend blame in the hope that the person will improve.

This is related to the fact that substantial trust is a forward-looking attitude. Most theories of responsibility are backward-looking, meaning that they attribute responsibility (praise/blame) on the basis of an agent’s capacities at the time of action, i.e., some time in the past. Frankfurt’s and Fischer’s views are like this: if someone had (a) a certain motivational structure, or (b) reasons-responsiveness when performing a certain action A, the person is thereby responsible for A. Trust, however, isn’t deployed solely on the basis of someone’s past motivational psychology and conduct; it’s also deployed on the basis of the trustee’s ongoing and fluid potential: we can trust someone who doesn’t (presently) have the capacity to improve. Trust, that is, outstrips the trustee’s current capacities at any given time.

As McGeer points out, we don’t (paradigmatically) invest trust in someone on a calculated judgment that the person will ‘earn’ our trust (as Pettit thinks), as this would be perverse and ‘manipulative’ (2008: 252). Rather, we trust someone as a way of empowering the person. Another way of putting this, I think, is to say that we trust someone for that person’s own sake. This interpretation of trust has affinities with Claudia Card’s (1996) care-based approach to responsibility, on which responsibility serves the function of expressing care to the target agent. It also resembles Vargas’ agency-cultivation model (2013), which reflects a concern for the target’s wellbeing (at least, it’s amenable to this reading). This care-based orientation is very different from the retributive rationale, and it’s also not backward-looking: responsibility attributions are meant to enhance or empower the recipient, not to punish her for past misdeeds. McGeer’s account of trust thus fits better with consequentialist theories rather than retributive ones, and it seems to embody a care ethos—trust is an essentially caring attitude. It seems to be essential to trust that it be care-based—or at least forward-looking; any other interpretation is simply conceptually mistaken.

I think that this is the correct way to think about responsibility in general (i.e., as consequentialist); but even if this isn’t the whole story (arguably there are many incommensurable but correct theories of responsibility—see Doris 2015 on ‘pluralism’), this seems to be a necessary way of seeing at least one facet of responsibility: trust. This means (at a minimum) that not all of our responsibility-constitutive reactive attitudes are retributive.

 

*****

[1] McGeer says that substantial trust fosters ‘more responsible and responsive trustworthy behaviour’ (2008). I’m just going to say that it fosters ‘responsible agency,’ and I’ll make a case for this more general claim in this post. It’s not hard to see how trust can enhance responsible agency: if we trust in our potential to achieve a desired outcome, we’re better able to achieve that outcome (under success-conducive circumstances, which I’ll leave vague).

Moral enhancements 2


In my last post on moral enhancements, I considered whether there is a duty to enhance oneself and others, and correspondingly, whether one can be blameworthy for failing to fulfil this duty. I said that this is a complicated question, but it depends to a great extent on whether the intervention is susceptible of prior informed consent, which in turn hangs on whether there are likely to be unknown (especially potentially adverse) side-effects.

Here, I want to consider whether intended moral enhancements – those intended to induce pro-moral effects – can, somewhat paradoxically, undermine responsibility. I say ‘intended’ because, as we saw, moral interventions can have unintended (even counter-moral) consequences. This can happen for any number of reasons: the intervener can be wrong about what morality requires (imagine a Nazi intervener thinking that anti-Semitism is a pro-moral trait); the intervention can malfunction over time; the intervention can produce traits that are moral in one context but counter-moral in another (which seems likely, given that traits are highly context-sensitive, as I mentioned earlier); and so on – I won’t give a complete list. Even extant psychoactive drugs – which can count as a type of passive intervention – typically come with adverse side-effects; but the risk of unintended side-effects for futuristic interventions of a moral nature is substantially greater and more worrisome, because the technology is new, it operates on complicated cognitive structures, and it specifically operates on those structures constitutive of a person’s moral personality. Since intended moral interventions do not always produce their intended effects (pro-moral effects), I’ll discuss these interventions under two guises: interventions that go as planned and induce pro-moral traits (effective cases), and interventions that go awry (ineffective cases). I’ll also focus on the most controversial case of passive intervention: involuntary intervention, without informed consent.

One of my reasons for wanting to home in on this type of case is that there is already a pretty substantial body of literature on passive non-consensual interventions, or ‘manipulation cases,’ in which a futuristic neuroscientist induces certain motives or motivational structures in a passive victim. We can tweak these examples to make the interventions unambiguously moral (the intervener is tampering with the victim’s moral personality), to derive conclusions about passive moral interventions and how they affect responsibility. My analysis isn’t going to be completely derivative of the manipulation cases, however, because theorists differ in their interpretations of these cases, and specifically on whether the post-manipulation agent is responsible for her induced traits and behaviours. I want to offer a new gloss on these cases (at least, compared to those I will consider here), and argue that the victim’s responsibility typically increases post-manipulation, as the agent gains authentic moral traits and capacities by the operation of her own motivational structures. (Except in the case of single-choice induction, where there are no long-term effects). I will also say some words on the responsibility status of the intervening neuroscientist.

I’m going to assess three kinds of case, each of which we can find in the literature. First, Frankfurt-type cases (in fact, Frankfurt’s original case), in which an intervener induces a single choice. Second, ‘global manipulation’ cases like Alfred Mele’s (2006), in which an intervener implants a whole new moral personality. And finally, partial manipulation cases, in which the intervener replaces part (let’s say half) of a person’s moral personality. I’m not aware of philosophical ‘partial manipulation’ cases per se, but there have been discussions of partial personality breakdowns, as in (e.g.) split-brain patients, which we can use as illustrative examples. Partial manipulation cases are like that, only instead of generic cognitive deficits that impair performance, there are moral deficits (due to internal conflict) that may undermine the agent’s ability to make and carry out moral plans.

Let’s consider each of these manipulation cases in turn.

1. Induced choice (minimal manipulation)

In the original Frankfurt-type case (1969), a futuristic neuroscientist named Black secretly implants a counterfactual device in Jones’ brain, which would compel Jones to vote for a certain political candidate if he were to choose otherwise, but the device is never activated. It’s causally inert, so it doesn’t affect Jones’ responsibility status, according to Frankfurt. (Jones is responsible because he acts on his own decisive desire). This example has to be modified to fit our purposes. First, let’s imagine that the implant is causally efficacious as opposed to merely counterfactual, as this is a more interesting kind of case. And second, let’s suppose that the intervention is designed to induce a moral effect – say, to make Jones donate to charity. Finally, let’s consider two types of case – one in which the intervention produces the intended effect, and one in which it produces a counter-moral effect.

I’ll assess these cases after discussing global and partial manipulation scenarios.

2. Global manipulation 

Alfred Mele (2006) offers a case of ‘global manipulation,’ in which a person’s entire motivational system is changed. He asks us to imagine two philosophy professors, Anne and Beth, and to suppose that Anne is more dedicated to the discipline whereas Beth is more laid back. The Dean wants Beth to be more productive so he hires a futuristic neuroscientist to implant Anne’s values in Beth, making her Anne’s psychological double. Is Beth responsible for her new psychological profile and downstream behaviours? According to Mele, no, because Beth’s new values are practically unsheddable: they cannot be disavowed or attenuated under anything but extraordinary circumstances (e.g., a second neuroscientific intervention). That is, Beth lacks control over her implanted values.

This is again not obviously a moral example, so let’s imagine that Anne is a saintly person and Beth is a jerk, and the Dean doesn’t like working with jerks, so he hires a futuristic neuroscientist to implant Anne’s saintly values in Beth. The intervention works and Beth becomes Anne’s saintly doppelgänger. This is a moral intervention and it’s a causally efficacious one. In this example, Beth’s moral personality is radically transformed.

If Mele is right, Beth is not responsible for her saintly values (and downstream behaviours) because these values are practically unsheddable. There are two natural ways of explaining this judgment. (1) Beth lacks control over her new values because they are firmly inalterable (Mele’s explanation). And (2) Beth is no longer herself – she has become someone else (Haji’s preferred explanation [2010]). These interpretations exemplify the control view and the character view – competing views in the literature that I described earlier. But in this case they converge on the same conclusion – Beth is not responsible – though they provide different (albeit overlapping) grounds. On one account, Beth lacks control over her desires, and on the other, her desires are not authentic, because they did not emerge through a process of continuous reflective deliberation (diachronic control). The two criteria are related, but come apart (as Mele emphasises [2006]): we can imagine implanted desires that are susceptible to revision but not the agent’s ‘own’; and we can imagine authentic desires that are relatively fixed and impervious to control (e.g., think of Martin Luther saying, ‘Here I stand; I can do no other’). Yet on a historical picture of control, the two conditions substantively overlap: a person’s mental states are not authentic if they were not amenable to continuous reflective deliberation. At least, this is the case for passive intervention cases, which we are considering here. If we consider a third account of responsibility – Vargas’ agency cultivation model (2013) – we find still more convergence: it’s typically not agency-enhancing to hold someone responsible if she lacked diachronic control and wasn’t herself at a particular point in time. These accounts do not always converge, but in the case of passive intervention, there is substantive agreement. So, conveniently, we don’t need to arbitrate between them. Yet there is still room for debate about whether a manipulation victim is responsible on any of these pictures, since there is room for debate about how to treat this person. Vargas holds that a post-intervention agent is responsible for her behaviour, but is a new person, contra Haji and Mele.

I’ll return to this question after considering a third case: partial manipulation.

3. Partial manipulation

Imagine that instead of implanting Anne’s values in Beth, making her Anne’s double, the manipulator had only implanted half of Anne’s values, or a substantial portion, leaving part of Beth’s personality intact. This is a partial manipulation case. Now Beth is internally fragmented. She has some saintly desires, but she also has a lot of non-saintly desires. Is she responsible for only her pre-implantation desires, or also for her post-implantation desires, or none of her desires (and downstream effects)? This is a more complicated scenario (not that the other two are simple!). We can perhaps compare this situation to neurocognitive disorders that involve internal fragmentation, which have also drawn attention from responsibility theorists – examples like psychosis, alien hand syndrome, and split-brain (callosotomy) patients. Perhaps by assessing whether these individuals are responsible (to any degree), we can determine whether partial-manipulation subjects are responsible. (Split-brain surgery, which partially or wholly severs the corpus callosum connecting the two hemispheres, induces a similar split-personality effect).

Let’s look at these cases separately, beginning with global manipulation.

Analysis

  1. Global

If Mele is right, then Beth is not responsible in the global manipulation case because she lacked control, and if Haji is right, she is not responsible because she lacked authenticity. These accounts have some degree of intuitive plausibility, surely. But Vargas (2013) offers a different interpretation. Vargas suggests that we should see post-global-manipulation agents as different people, but responsible people. So post-manipulation Beth is Beth2, and she is responsible for any actions that satisfy relevant conditions of responsibility for her. (Vargas’ preferred condition is a consequentialist one, but we can remain neutral). Since Beth2 can control her post-intervention desires (as much as any normal person), and they reflect her character, and it seems efficient to hold Beth2 responsible for these actions, we ought to regard her as responsible. Beth, on the other hand, is dead, and responsible for nothing. This view, it must be admitted, also has a degree of plausibility. But it seems to ignore the fact that Beth2, in very significant ways, is not like the rest of us.

I’m going to suggest a middle-of-the-road view, and it’s a view that emerges from a focus on how we become moral agents – a historical view. It is somewhat indebted to Haji’s [2010] analysis of psychopaths, who, somewhat like Beth2, have peculiar personal histories. According to Haji, psychopaths are not responsible agents because, unlike ordinary people, from childhood onward they lack the capacity to respond to reason due to myriad cognitive deficits (which we do not need to get into here). Children are not (full) moral agents: they lack certain rational capacities, yet they have certain emotional capacities that psychopaths lack, which allow them to grasp the moral/conventional distinction, at least from the age of five onward. Still, their rational deficits impair their moral agency. It is not only psychopaths who lack moral agency: Haji suggests that anyone who is barred from acquiring moral competency from childhood (due to congenital or involuntarily acquired deficits) is not a moral agent (a ‘normative agent,’ in his terms), because such people are, in effect, still children. (This includes severely neglected and abused children who develop psychopathic traits not of their own choosing). These individuals lack control over their motives, as well as anything that could be called a moral personality. If we want to say that children are not full moral agents, we must grant that these individuals, who suffer from arrested moral development, also are not full moral agents.

Now consider Beth2, who has been globally manipulated to have a different moral personality. Beth2 has zero (authentic) personal history. She is, in this respect, like Davidson’s ‘swamp-person’ (though different in other salient respects) – she emerges ex nihilo, fully formed. In lacking a personal history, Beth2 is like a newborn baby – a paradigm case of non-responsibility. Yet unlike the newborn baby, Beth2 has intact moral capacities, albeit not her own moral capacities – they were implanted. Beth2, then, is not responsible in either an authenticity sense or a diachronic control sense. Nonetheless, her extant moral capacities (though not her own) allow her to reflect on her motivational set, explore the world, interact with other people, and live a relatively normal life after the intervention. In this regard, Beth2 differs from psychopaths, who can never acquire such capacities, and are in a permanent baby-fied state, morally speaking. Moreover, as time goes by, Beth2 will become increasingly different from Anne, warranting the judgment that Beth2 is a separate person – her own person. So over time, it becomes more and more reasonable to say that Beth2 is an independent moral agent with her own moral personality. Although Beth2 cannot completely overhaul her motivational system, she can make piecemeal changes over time, and these changes are attributable to her own choices and experiences.

With this in mind, I submit that we treat Beth2 as not responsible immediately post-intervention (since she lacks any authentic motives at that time), but increasingly responsible thereafter (since she has the capacity to acquire authentic moral motives and capacities over time, unlike psychopaths). This doesn’t mean that Beth2 will ever be as responsible as an ordinary (non-manipulated) person, but she is certainly more responsible than newborn babies and psychopaths, and increasingly responsible over time.

Another problem with saying that Beth2 is fully responsible for her post-manipulation behaviour is that this leaves no room for saying that the clandestine manipulator – the Dean or the futuristic neuroscientist or whoever – is responsible for Beth2’s behaviour, especially in the case that the moral intervention goes wrong and induces counter-moral effects.

Suppose that Beth2 goes on a killing spree immediately after being manipulated: is she responsible for this effect? Surely the intervener is to blame. One could say, like Frankfurt (1969), that two people can be fully responsible for a certain action, but this seems like a problematic case of overdetermination. Surely blame must be distributed, or displaced onto the intervener who pre-empted the victim’s agency. Indeed, Frankfurt’s proposal doesn’t fit with how we ordinarily treat coercion. Consider entrapment: if a law enforcement agent induces someone to commit a crime, the officer is responsible, not the victim. This might be because the victim lacked adequate control (due to coercion), acted against character, or because it’s not useful to blame victims – entrapment fits with all of these explanations. Shoemaker seems to appeal to the last consideration when he says of entrapment that we blame the public official and not the victim because “we think the government shouldn’t be in the business of encouraging crime” (1987: 311) – that is, we don’t want to encourage government corruption. But Shoemaker also appeals to fairness, which is tied to control and character: it’s not fair to blame victims who lacked sufficient control or weren’t themselves when they acted (which is also why it’s not useful to blame them). So on any standard criterion of responsibility, it’s not clear why we would blame a manipulation victim.

Now, suppose that the intervention worked and the Dean makes Beth2 a moral saint. If I am right, Beth2 isn’t praiseworthy for the immediate effects of the intervention because they’re not her own (on various construals). The intervener’s moral status is more complicated. While he might seem prima facie praiseworthy for Beth2’s pro-moral traits, we also have to consider the fact that he’s patently blameworthy for intervening without informed consent or any semblance of due process (viz. decisional incapacity protocols), and this might vitiate or cancel out his prima facie praiseworthiness. If we consider praise and blame as competing attitudes, it makes sense to see the Dean as blameworthy on balance.

2. Partial

Next, let’s consider a partial manipulation case. Let’s imagine that the Dean replaces half of Beth’s personality with Anne’s, creating Beth3. This is trickier than the global manipulation case inasmuch as Beth3 has some authentic mental states, but they are in competition with implanted states that are not her own. So we can’t say that Beth3 lacks authenticity or diachronic control entirely, yet she is deficient in both respects compared to an ordinary person. We might compare Beth3 to examples of neurocognitive disorders that cause internal fragmentation, such as alien hand syndrome and split-brain effects, which have attracted a lot of philosophical interest. These subjects have ‘alien’ motives like Beth3, but unlike split-brain patients, Beth3 can presumably enhance her control and motivational congruence over time – assuming that none of her implanted states are pathological. So Beth3 is somewhere between split-brain patients and non-pathological individuals.

There are three ways that we can go in assessing Beth3’s responsibility. (1) We can hold that Beth3 is not responsible (period) because she has insufficient control and insufficient depth of character to ground responsibility. (2) We can say that Beth3 is actually two agents, not one, and each agent is responsible for its own motives and downstream behaviours. King and Carruthers (2012) seem to suggest that, if responsibility exists (on which they seem unsure), commissurotomy patients and alien hand patients must be responsible for their aberrant behaviours, since the former have two “unified and integrated” personalities (205), and alien-hand gestures reflect a person’s attitudes, albeit unconscious ones. Furthermore, they think that consciousness cannot be a requirement of responsibility since most motives aren’t conscious anyway. If this is right, then perhaps Beth3 is two people, each responsible for ‘her’ own motives. This, to me, seems hopelessly impracticable, because we can only address responsibility attributions to one self-contained agent. We cannot try to reason with one side of Beth3’s personality at a time, because Beth3 doesn’t experience herself as two people. She won’t respond to targeted praise and blame, aimed at adjacent but interconnected motivational hierarchies. And she might even find this kind of moral address baffling and/or offensive. So this won’t do, practically speaking. But there’s also a good case to be made that commissurotomy patients and the like lack control and character in the normal sense, given that they have significant cognitive deficits. So it’s reasonable to see them as impaired in their responsibility compared to cognitively normal people. And likewise for Beth3.

This leads to the third possibility, which I favour: the middle-of-the-road proposal. Beth3 is somewhat responsible immediately after implantation, and increasingly responsible thereafter. This is because Beth3 is subject to control-undermining motivational conflict and disorientation following the clandestine intervention, but she nonetheless has some intact moral capacities (the relevant rational, emotional, and motivational ones), which differentiate her from the psychopath, and which should allow her to regain psychological congruence over time, enhancing her control and authenticity. So Beth3 should be seen as increasingly responsible over time (under ordinary circumstances). That said, she will likely never be as responsible as someone who had an ordinary learning history, since most people never suffer from this degree of fragmentation. So she may have relatively diminished responsibility for a long time, even if she becomes more apt for praise and blame.

Once again, this analysis can be applied to effective interventions and ineffective ones. If Beth3 acquires pro-moral traits (as per the Dean’s intention), she is not immediately responsible for them, but she gains responsibility for any induced traits that persist and amplify over time – indeed, for all of her post-intervention traits. Regarding the intervener, he is not necessarily praiseworthy for the pro-moral effects of the intervention, inasmuch as he might be blameworthy for intervening without consent or due process, and this might outweigh any praise that he might otherwise warrant.

Worse still, if the Dean inadvertently implants counter-moral traits in Beth3, he is blameworthy for intervening without consent as well as for the effects of the botched intervention.

3. Minimal

Finally, let’s consider the induced-choice subject. Call her Beth4. Suppose that Beth4 has been inculcated with a desire to give to charity, and she acts on this desire. (Note: presumably, to be causally efficacious, a desire must be combined with a supporting motivational structure, but for simplicity I’ll refer to this bundle, following Frankfurt’s example, simply as a desire. Assume that an induced desire comes with supporting motivational states). Is Beth4 praiseworthy? On my view, no, because her desire is not her own. But the intervener also is not praiseworthy, insofar as he is blameworthy for intervening without consent or due process, which (I think) cancels out the import of any good intentions he may have had. (The road to hell is paved with good intentions, as they say). (Note that I am construing praise and blame as competing reactive attitudes; other people hold alternative conceptions of ‘responsibility,’ which I will conveniently ignore).

Next, suppose that the Dean intends to induce a pro-moral choice in Beth4, but inadvertently induces her to commit a murder. The Dean, I think, is blameworthy for the murder, because he is responsible for implanting the relevant desire in Beth4 without her consent. This can be construed as an omission case, in which the Dean was ignorant of the likely consequences of his choice, but is responsible because he failed to exercise proper discretion. (Compare this to Sher’s [2010] example of a soldier who falls asleep on duty; the Dean failed in his duties as a person, a citizen, a Dean… he failed on many levels. He is, as it were, a spectacular failure as a moral agent). The Dean acted in character, and on a suitably reasons-responsive mechanism, when he chose to surgically intervene on Beth4 without her consent, and he bears responsibility for this infraction and the fallout of his choice.

*****

 

Moral enhancements & moral responsibility


I’m going to write a somewhat lengthy but off-the-cuff entry on moral enhancements, because I have to present something on them soon, in Portugal.

I’m going to write about (1) moral enhancement and moral responsibility, and how these things intersect (we might be responsible for failing to use moral interventions); (2) the possibility of duties to enhance oneself and others, and (3) passive moral interventions – the most controversial type – and whether we have a duty to use, submit to, or administer them.

A qualification: I don’t know very much about moral enhancement, so this is going to be a bit sketchy.

Moral enhancements and moral responsibility

Here’s what I know. A ‘moral enhancement’ is a “deliberate intervention [that] aims to improve an existing capacity that almost all human beings typically have, or to create a new capacity” (Buchanan 2011: 23). These interventions can be traditional (e.g., education and therapy), or neuroscientific (e.g., biomedical and biotechnological interventions). The latter are more controversial for various reasons. Examples include transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), deep brain stimulation (DBS), and psychoactive drugs, along with futuristic ‘Clockwork Orange’-type methods that we can only imagine. (Typical science fiction examples are dystopic, but we shouldn’t let this prejudice the discussion). These enhancements are moral insofar as they aim to enhance the subject’s moral-cognitive capacities, howsoever construed. (I’ll leave this vague).

Some philosophers hold that the distinction between traditional and modern interventions is that one is ‘direct’ while the other is ‘indirect’: TMS, for instance, directly stimulates the brain with electric currents, whereas education ‘indirectly’ influences cognitive processing. Focquaert and Schermer argue that the distinction is better characterised as one between ‘active’ and ‘passive’ interventions: traditional interventions typically require ‘active participation’ on behalf of the subject, whereas neuroscientific interventions work ‘by themselves’ (2015: 140-141). More precisely, passive interventions work quickly and relatively unconsciously, bypassing reflective deliberation. The authors acknowledge that this is a rough and imperfect distinction, and that interventions lie on a continuum; but they think that this distinction nicely captures a normative difference between the two methods. Passive interventions, they say, pose a disproportionate threat to autonomy and identity because (1) they limit continuous rational reflection and autonomous choice, (2) they may cause abrupt narrative identity changes, and (3) they may cause concealed narrative identity changes. These effects undermine the agent’s sense of self, causing self-alienation and subjective distress (2015: 145-147).

For this reason, the authors recommend that we introduce safeguards to minimise the negative effects of passive intervention – safeguards like informed consent and pre- and post-intervention counselling, which help subjects integrate their newfound moral capacities into their moral personality, or come to terms with their new moral personality, in case of radical transformation. So although passive interventions have a higher threshold of justification, they can be justified if their adverse effects are managed and minimised.

(Some objections to this view, with responses, can be found here).

Now for a few words on moral responsibility. In earlier posts, I’ve discussed three models of responsibility, two backward-looking (character and control theory) and one forward-looking (the agency-cultivation model). These theories have something in common: they hold agents responsible (blameworthy) for omissions. On character theory, we are responsible for failing to act virtuously, if a reasonable person would have done better in our circumstances (see Sher 2010). So someone can be blameworthy for forgetting about her friend’s birthday. On control theory, we are ‘indirectly’ responsible for failures to exercise reasons-responsiveness responsibly, in accordance with people’s ‘reasonable’ expectations. As Levy remarks when discussing ‘indirect’ responsibility for implicit bias, a person can “be fully morally responsible for [a] behaviour [resulting from implicit bias], because it was reasonable to expect her to try to change her implicit attitudes prior to t” (2014; see also Holroyd 2012). Since we can be responsible for omissions on both models, it stands to reason that we can be responsible for failures to enhance our moral agency, if this is what a reasonable person would have done or expected of us in the circumstances. In fact, remediating measures for implicit bias can be seen as ‘moral enhancements,’ though they are closer to education than biomedical interventions. But the point is that there are grounds for blaming someone for eschewing moral enhancements of either type – active or passive – depending on the particularities of the case.

On the agency-cultivation model (ACM), it’s even more obvious that a person might be blameworthy for failing to use moral interventions, since blame functions to enhance moral agency. If blaming someone for not using a moral intervention would entice the person to use it, thus enhancing the person’s agency, blame is a fitting response. And if forcibly administering the intervention would enhance the person’s agency, this might be fitting, too.

All of these theories, as I have presented them, invite questions about when blame for failing to use moral enhancements, or for using them inappropriately, is apt. But this leads somewhat away from responsibility (as an agential capacity or cluster of traits that warrant blame and praise), to the domain of duties: from the question of what we are responsible for, to what we are responsible to do. These two things are related insofar as we can be responsible (blameworthy) for omitting to fulfil our responsibilities (duties) on all three models. But they are distinct enough that they necessitate separate treatments. First we figure out what our duty is, and then we determine if we are blameworthy for falling short of it.

Moral duties and moral perfection

What are our duties with respect to moral enhancements? We can approach this question from two directions: our individual duty to use or submit to moral interventions, and our duty to provide or administer them to people with moral deficits. This might seem to suggest a distinction between self-regarding duties and other-regarding duties, but this is a false dichotomy because the duty to enhance oneself is partly a duty to others – a duty to equip oneself to respect other people’s rights and interests. So both duties have an other-regarding dimension. The distinction I’m talking about is between duties to enhance oneself, and duties to enhance other people: self-directed duties and other-directed duties.

These two duties also cannot be neatly demarcated because we might need to weigh self-directed duties against other-directed duties to achieve a proper balance. That is, given finite time and resources, my duty to enhance myself in some way might be outweighed by my duty to foster the capabilities of another person. So we need to work out a proper balance, and different normative frameworks will provide different answers. All frameworks, however, seem to support these two kinds of duties, though they balance them differently. For Kant, we have an absolute (perfect) duty to abstain from using other people as mere means, so we have a stringent duty to mitigate deficits in our own moral psychology that cause this kind of treatment; and we also have a weaker (imperfect) duty to foster other people’s capabilities. On a consequentialist picture, we have to enhance the capabilities of all in such a way as to maximise some good. Our duty to enhance ourselves is no greater or lesser than our duty to foster others’ capabilities. Aristotle, too, believed that we have a duty to enhance ourselves and foster others’ capabilities by making good choices and forming virtuous relationships. Rather than arbitrate amongst these frameworks, we can just note that they all support the idea that we have duties to enhance ourselves and others. The question is how stringent these duties are, not whether they exist. Of course, in a modern context these questions take on new significance, because the duty to enhance oneself and others no longer just means reflecting well and providing social assistance; it could potentially mean taking and distributing psychoactive drugs, TMS, etc.

One pertinent question here is how much enhancement any individual morally ought to undergo. How much enhancement is enough enhancement? We can identify at least four accounts of the threshold of functioning we ought to strive for: (1) the Übermensch view: we must enhance our own and others’ capabilities to the greatest extent possible; (2) prioritarianism: we should prioritise the capabilities of the least well-off or the most morally impaired; (3) sufficientarianism: we should enhance people’s capabilities until they reach a sufficient minimal threshold of functioning, after which it is a matter of personal discretion; and (4) pure egalitarianism: we should foster everyone’s capabilities equally. (The four rules are contrasted in the toy sketch below).
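To sharpen the contrast between these accounts, here is a minimal Python sketch that treats each as a rule for deciding who receives the next unit of enhancement. The 0-to-1 ‘capability’ scale, the unit-budget framing, and the 0.6 sufficiency threshold are all my own illustrative assumptions, not features of the views themselves.

```python
# Crude formalization of the four threshold views as selection rules.
# Scales, thresholds, and tie-breaking are invented for illustration.

def next_recipient(capabilities, view, threshold=0.6):
    """Return the index of the agent the given view would enhance next,
    or None if the view generates no further duty to enhance."""
    n = range(len(capabilities))
    if view == "ubermensch":
        # Enhance capabilities to the greatest extent possible:
        # anyone short of the ceiling remains a candidate.
        candidates = [i for i in n if capabilities[i] < 1.0]
    elif view == "prioritarian":
        # Always prioritise the worst-off (most morally impaired).
        return min(n, key=lambda i: capabilities[i])
    elif view == "sufficientarian":
        # A duty exists only toward those below the sufficiency
        # threshold; above it, enhancement is personal discretion.
        candidates = [i for i in n if capabilities[i] < threshold]
    elif view == "egalitarian":
        # Foster capabilities equally: level up whoever lags behind.
        candidates = [i for i in n if capabilities[i] < max(capabilities)]
    else:
        raise ValueError(f"unknown view: {view}")
    return candidates[0] if candidates else None

agents = [0.7, 0.4, 0.9]  # three agents' (invented) capability levels
for view in ("ubermensch", "prioritarian", "sufficientarian", "egalitarian"):
    print(view, "->", next_recipient(agents, view))

# Once everyone clears the threshold, the sufficientarian rule stops
# generating duties, while the other rules still demand enhancement.
print(next_recipient([0.7, 0.65, 0.9], "sufficientarian"))  # None
```

The point of the sketch is just that the views come apart in their stopping conditions: the Übermensch view never stops short of perfection, sufficientarianism stops at the threshold, and the other two are sensitive to the shape of the distribution.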

No matter which of these accounts we favour, they all face potential objections. The more stringent forms (the Übermensch view) are more susceptible to these objections than the less stringent forms (sufficientarianism). If we are considering enhancing our own capabilities, the scope for criticism is minimised, but we might still worry that our attempts to enhance ourselves morally might go awry, they might interfere with our ability to foster others’ capabilities, they might impel us to neglect valuable non-moral traits, or (relatedly) they might prevent us from becoming well-rounded people, capable of living flourishing lives (see Susan Wolf’s ‘Moral Saints’ 1982). When considering whether we should foster others’ capabilities, there are still more worries. Namely, in seeking to enhance other people – particularly if we do this through institutional mechanisms – we risk insulting and demeaning those deemed ‘morally defective,’ stigmatising people, jeopardising pluralism (which may be inherently valuable), undermining people’s autonomy, or fostering the wrong kinds of traits due to human error. That is, by intervening to enhance moral agency, we might inadvertently harm people.

The most controversial type of moral intervention – passive intervention – is the most vulnerable to these objections, and to others that I haven’t mentioned. From now on, I’ll focus on this type. I’ll consider Focquaert and Schermer’s cautionary advice, and then offer some suggestions.

Passive intervention, autonomy, and identity

The authors argue that passive moral interventions are particularly morally problematic because they disproportionately jeopardise autonomy and identity, insofar as they limit continuous rational reflection and precipitate abrupt and concealed narrative identity changes. To minimise these adverse effects, interveners should provide informed consent procedures and pre- and post-intervention counselling. Let’s suppose that this is right, and also that we have a duty to enhance ourselves and others, as per the above discussion. This seems to have the following implications. First, a duty to submit to a passive moral intervention obtains only on condition that informed consent and pre- and post-intervention counselling are available and reasonably effective. So, a person can be blameworthy (if at all) for failing to undergo a passive moral intervention only if the intervention came with those provisions. Secondly, there is (at minimum) a duty to provide these safeguards when administering a passive intervention; so, if an intervener fails to do so, the intervener is thereby blameworthy. This is the upshot of holding that there is a duty to enhance oneself and others, and that this duty includes providing harm-minimising safeguards.

That said, there are other considerations that might vitiate the perceived duty to enhance one’s own and others’ moral capabilities. I think that the most forceful objection, which may underlie other objections to these interventions, is that we can’t be sure of their effects. We can call this the knowledge constraint. If we can’t know, in medium-grained terms, the probable effects of a passive intervention, we can’t obtain informed consent in a strong sense, since we can’t adequately assess whether the effects will be good or bad. Let’s reconsider Focquaert and Schermer’s worries, and tie them to this concern. They say that (1) passive interventions threaten continuous rational reflection and autonomy. When someone undergoes TMS, for instance, the person doesn’t have the capacity to endorse or reject the effects of the magnetic stimulus. But why is this a problem? I don’t think it suffices to say that it undermines autonomy, because many autonomy-enhancing procedures undermine the capacity for continuous reflection: I forfeit this capacity when I undergo a root canal under general anaesthesia, but I can still give prior informed consent unproblematically. This isn’t just because a root canal is localised to the mouth (although this is a related issue). It’s mainly because I know precisely what the effects of the root canal will be, so I can consent to them. If dental procedures induced unforeseeable side-effects, informed consent for them would be much more problematic. This suggests that it is the unforeseeability of a procedure’s side-effects, as opposed to the reflection-undermining nature of the procedure per se, that undermines autonomy and has moral import.

With neuroscientific interventions, we can’t very accurately predict the immediate or long-term effects, because these interventions function coarsely, acting across a broad range of cognitive systems and processes, and their mechanisms are not fully understood. The brain itself is also very complex and incompletely understood, so the interactions between intervention and brain are murky. For these reasons, any targeted (intended) effect of a neurointervention comes with a range of possible side-effects. The immediate side-effects of TMS alone include fainting, seizures, pain, transient hypomania, hearing loss, impairment of working memory, and unspecified cognitive changes. As for long-term effects, NICE (the UK’s National Institute for Health and Care Excellence) found that there is insufficient evidence to evaluate safety for long-term and frequent uses. That is, not enough is known to predict the long-range effects in a scientifically responsible way. Speaking more generally, TMS is used for multiple conditions; its effects are quite unspecific, targeting various brain regions that serve various functions. It can’t be used to, say, ‘cure schizophrenia’; it acts on multiple brain regions, bringing about a cluster of effects, some intended and others unintended. All neurointerventions work this way: they induce a cluster of effects. Subjects can’t select just one desired effect: undesired side-effects are very likely, if not inevitable. Given these considerations, it’s far from clear that anyone is obligated to submit to passive interventions that act directly on the brain, and even less clear that anyone is obligated to impose them on other people.

So, it seems that the threat to continuous rational reflection is a problem because we don’t know, in medium-grained terms, what the effects of the procedure will be in advance, which threatens prior informed consent. If we could predict the effects relatively accurately, the loss of continuous rational reflection would not be a problem.

(2) Passive interventions cause abrupt narrative identity changes. Again, I think that this is a problem only because the end result of the passive intervention is impossible to predict in medium-grained terms. If we could provide someone with an accurate specification of the effects of a proposed intervention and gain the person’s consent, the ‘abruptness’ of the narrative identity change wouldn’t be an obstacle; it would be an advantage. If I could get a root canal ‘abruptly’ rather than slowly, I would choose the abrupt method – but only if I understood the consequences. If I could quit smoking ‘abruptly’ as a result of voluntary hypnosis, I would choose that option. Abrupt changes are problematic only if we can’t predict the outcomes of those changes.

(3) Passive interventions cause concealed narrative identity changes. This, too, is a problem only because we can’t predict the effects of the intervention. If we could, we might even prefer that the procedure be concealed. I might want to have a ‘concealed’ root canal – a root canal that I completely forget about after the fact, but with my prior consent – but not if the procedure induces unwanted side-effects that I can’t detect until after the fact. If side-effects are an issue, I want to be conscious so that I can do what I can to mitigate them.

The fact that passive moral interventions impinge on the brain rather than the body (not that this is a hard and fast distinction) is also problematic because our brain is the seat of our personal identity, and changes to our personal identity are especially jarring. Still, the analogy between cognitive interventions and bodily interventions is informative. If I sign up for an appendectomy and wake up with a third arm, this is a violation of my autonomy. Waking up a different, or radically altered, person is that much more autonomy-undermining. In both cases, the lack of prior knowledge about probable outcomes is what violates the person’s autonomy: an unforeseen transformation is an affront to the self. (This is true even if the transformation is an accident: if a piano player’s hand is crushed by a boulder, this is autonomy-undermining even if no one is to blame [see Oshana 2008].)

Post-intervention counselling might mitigate some of these concerns, because it allows the agent to ‘consent’ retroactively. But this is problematic for the following reasons.

(1) For no other procedure is retroactive consent permissible. If my teeth are rotting and someone kidnaps me and performs involuntary dental work while I’m unconscious, this is a moral infraction. (Note that when I was discussing clandestine changes above, I meant that they were permissible only with prior informed consent.) Even if I’m pleased with the dental work, the dental intervener could have performed the procedure less invasively – with my consent – and to this extent, he’s blameworthy: he chose the most invasive intervention instead of the least invasive one. Now, when moral enhancements are at issue, the intervention may be prima facie justified by the potential positive consequences, but we have to consider (a) the strength of the benefits, (b) the probability of adverse side-effects, and (c) whether a less invasive intervention was available. If passive interventions are used as a first response, this is clearly impermissible. If they are used as a ‘last resort,’ when no alternative is available or all alternatives have been tried, we have to consider the moral import of possible side-effects. Consider Alex (shown above) from ‘A Clockwork Orange’: the movie is a horror story because the Ludovico technique unintentionally made him miserable and dysfunctional. And he was not informed beforehand of the risks. Interveners are, I believe, blameworthy for using interventions that may cause severe adverse side-effects, because there is a duty to protect people from these kinds of risks. This is not to say that passive interventions are never permissible, but they are not permissible if they carry severe or unknown risks to wellbeing.

(2) Because moral interventions target a person’s moral identity, there is an argument to be made that post-intervention ‘consent’ is not consent at all: the agent on which the intervention acted – the pre-intervention agent – no longer exists, or is so drastically altered that consent is no longer possible for that agent. Moral interventions have the ability to ‘kill off’ the pre-intervention agent. This places a massive burden of justification on those who would intervene. We need to consider whether it is permissible to ‘kill off’ unwanted moral personalities, or radically alter them in ways that preclude obtaining consent. When a person is radically altered by an intervention without consent, the person cannot consent after the treatment, so any post-intervention consent given is a different person’s. This is a radical departure from the standard model of informed consent, and it is something we need to consider. We also need to weigh whether society should be choosing which moral personalities have value, as this might threaten pluralism and self-determination.

I should make a qualifying statement here. I don’t want to stigmatise the use of TMS and other existing passive interventions. Some people voluntarily use TMS for personal (prudential) reasons, which I would deem morally unproblematic. I’m specifically concerned with whether people have a duty to use passive interventions – not just existing ones, but also interventions that may arise in the future – to enhance their moral capabilities; and whether society has a duty to impose these interventions on people without their consent. I’m saying that it’s much harder to justify a categorical duty (as opposed to a subjective desire) to undergo passive interventions, and still harder to justify a duty to impose (as opposed to merely offer) interventions on people deemed morally deficient.

Back to responsibility

Here’s how this bears on responsibility. On a simplistic reading, it might seem that we have a duty to use moral interventions to enhance ourselves and others to a substantial (sufficientarian, if not Übermenschian) degree. And maybe this is true for traditional moral enhancements, which we can reflectively choose and consent to with full information. But it’s not clear that this is the case for passive moral interventions, precisely because we can’t tell whether those interventions are going to have overall positive consequences. And this is because neurointerventions target clusters of traits, some of which may be adaptive while others may be maladaptive. So, in trying to morally enhance someone, we might inadvertently harm the person and even introduce new counter-moral traits. If so, we might be blameworthy for the consequences of those interventions. And if there is no clear duty to submit to passive moral interventions that might produce harmful and even counter-moral side-effects, then a person might not be blameworthy for refusing them.

Consider stories like ‘A Clockwork Orange’ and ‘Brave New World.’ They’re about attempts at moral enhancement and social engineering gone horribly awry. And this is a legitimate worry. I’ve suggested that when technology is morally problematic, it’s not because it threatens rational reflection and narrative identity per se. Rather, it’s because it has unpredictable side-effects, and these side-effects might be morally problematic. This isn’t meant to imply that we have no duty to enhance ourselves and others, but rather that when trying to enhance our capabilities with modern methods, we should proceed with caution.

*****

Implicit Bias: The limits of control/character: Continued.


This post continues from the last one. I was saying that if we need to trace responsibility back to a suitable prior moment (t-1) at which the agent could have foreseen the consequences of his choices (following Vargas’s 2005 description of control theory), then we need to assess not only that agent’s internal capacities at t-n, but also the agent’s ‘moral ecology,’ and the relationships between the agent and the moral ecology. This follows from the fact that ‘indirect control’ is a matter of whether an agent could have exercised or acquired a capacity using available resources in her local environment. (Indeed, these two things are related: we acquire capacities in part by exercising more basic capacities. By taking piano lessons, I learn how to play piano well, i.e., I acquire piano-playing reasons-responsiveness. This is a result of exercising a more basic capacity – the basic human capacity to master a symbolic system through a combination of tutelage and practice.) According to Holroyd, if someone is implicitly biased but could have avoided becoming implicitly biased or expressing an implicit bias, the person may be blameworthy. I am suggesting that to determine a person’s blameworthiness, we must evaluate not just the individual person at t-n, but the person’s location in the social ecology.

This picture suggests a different kind of objection to Fischer’s notion of control – the dominant model. Fischer explicitly states that to appraise a person’s responsibility status, we must home in on the ‘actual sequence’ of deliberation, ignoring counterfactual circumstances in which the agent would have deliberated differently. That is, control for Fischer is ‘actual-sequence control.’ This strategy is (I believe) meant to undercut incompatibilist and nihilistic objections to control: objections to the effect that no one is responsible for anything because no one is capable of exercising ultimate control (being an unmoved mover) in a determinist universe (G. Strawson 1986), or because control is irrelevant in light of moral luck – whether you’re capable of exercising compatibilist control is just a matter of luck, not agency (Levy 2008). I think this last view places too much importance on the moral ecology and not enough on agency, but we can return to this later. By restricting control to the actual sequence, we cut off counterfactual circumstances in which the agent is metaphysically determined, as well as circumstances in which all agents are equally capable of control – distant possible worlds. But there is a case to be made that even if we include some counterfactual circumstances as morally relevant – and thus, some possible worlds – we do not need to include all counterfactual possibilities. Even if the buck doesn’t stop at actual-sequence control, it might stop in the next nearest possible world, cutting off the kind of slippery-slope objections that lead to nihilism.

I’m going to make this case in a moment, but first consider an existing objection to actual-sequence control, from Levy (2008). Levy argues that, while counterfactual disabling circumstances superficially appear to be irrelevant (consider, for example, Frankfurt-type cases in which the counterfactual device is never activated), it is possible to construct a counterfactual enabling circumstance that seems to make a difference. An enabling circumstance is one in which the agent gains a capacity, in contrast to the standard disabling scenarios, in which the counterfactual device would prompt the agent to go against the demands of morality if it were activated. Here’s one of Levy’s examples:

Phobia:

Jillian is walking along the beach when she notices a child drowning. Jillian is a good swimmer, but she is pathologically afraid of deep water. She is so constituted that her phobia would prevent her from rescuing the child were she to attempt to; she would be overcome by feelings of panic. Nevertheless, she is capable of trying to rescue the child, and she knows that she is capable of trying. Indeed, though she knows that she has the phobia, she does not know just how powerful it is; she thinks (wrongly) that she could effect a rescue. Unbeknownst to Jillian, a good-hearted neurosurgeon has implanted her with a chip with which he monitors Jillian’s neural states, and through which he can intervene if he desires to. Should Jillian decide (on her own) to rescue the child, the neurosurgeon will intervene to dampen her fear; she will not panic and will succeed, despite her anxiety, in swimming out to the child and rescuing her. (2008: 170, 2008b: 234).

Levy actually presents this scenario variously as an objection to control theory and as an objection to character theory. Suppose that Jillian decides not to rescue the child, in spite of (falsely) believing that she can. In the first place, it seems as if Jillian is responsible (says Levy) because she believes that she can rescue the child, and fails to act on this belief. But if Fischer is right, then Jillian isn’t responsible, because she can’t succeed by her own (independent) means. Without the help of the benevolent intervener, success is impossible, and the intervener is external to Jillian’s motivational set. To vindicate the intuition that Jillian is responsible, we need to include the counterfactual intervener; so counterfactual scenarios seem relevant. (At least, this is my understanding.) In the second place, if we include the counterfactual scenario as relevant, we have to accept that Jillian’s motivational set, and thus her character, include this scenario as a component part, and so Jillian’s character is “smeared” across time and space (Levy 2008: 179). Hence, locational externalism (i.e., the extended mind hypothesis) is true. These proposals call into question the viability of actual-sequence control and of character as traditionally conceived. I take it that this is supposed to support responsibility nihilism, i.e., the idea that responsibility (in a desert-entailing sense) doesn’t exist, as per Levy’s thesis in ‘Hard Luck.’

I think that some of Levy’s claims are accurate and others need to be modified. Here’s what I endorse and what I dispute. I agree that counterfactual circumstances matter – at least, some counterfactual circumstances (specifically, those in which the agent could have intervened and succeeded with some kind of help) – but I disagree with the ‘intuition’ that Jillian is responsible for an omission in this particular case. The reason is that, if someone, for bizarre reasons, thinks that she can save someone from drowning, but there is good objective reason to think that she can’t, the person doesn’t have a duty to intervene. Arguably, if we are not lifeguards, we don’t have a standing duty to save someone from drowning in any circumstance, because the risk of drowning ourselves in the attempt is too high, even if we are excellent swimmers by ordinary standards. So there’s no reason to think that Jillian ought to act on her belief that she can help. Normal people don’t have a standing obligation to risk their lives to save drowning victims. What we ought to do is alert the lifeguard, call emergency medical services, or look for an indirect means of intervening that doesn’t risk our own safety. So if we don’t have an obligation to risk our lives to save drowning victims, then people with water-related anxiety disorders certainly don’t, even if they have bizarre beliefs about their capacities and their moral duties.

I raise this point in part because this is the response I usually get when I present this case to other people. We could adjust the scenario by stipulating that the child is drowning in a wading pool. Normal adults have a duty to save children from drowning in wading pools, surely. But Jillian is not a psychologically-normal adult, so the worry recurs. Just as it is unreasonable for a normal adult to think that she has a duty to save a drowning victim in open water, it may be unreasonable for someone with a water-induced phobia, which could endanger her safety, to think that she has a duty to save a wading-pool victim. The worry is that the sacrifice required of the person is (objectively) too great to ground the existence of a duty, even if the person thinks that she has a duty for bizarre, subjective reasons.

But now consider a case in which it’s more obvious that a moral duty obtains. I’ll try to construct it to resemble the Jillian scenario, i.e., to include a protagonist who cannot achieve some end using her own (internal) capacities, but could succeed with external support. Suppose that Jack can’t contain his anger towards women, and regularly berates his female employees, family members, servers at restaurants, and so on. Jack (falsely) believes that he can control his misogynistic anger using willpower alone, and decides to exercise his willpower. But this is a false belief – he has much less willpower than he imagines. Unbeknownst to Jack, a local therapist specialises in anger-management problems, and would have been able to help him if he had sought out her help. But Jack never does. So Jack tries to exercise his willpower and fails, and continues to demean women on a daily basis.

If we home in on Jack’s internal capacity for control, we have to excuse Jack, since he lacked the internal resources to suppress his misogynistic urges. But if we consider the counterfactual scenario (in which Jack visits the therapist) as relevant to Jack’s responsibility status, then we can hold him responsible. It seems very reasonable to say that Jack had the capacity, in a very basic sense, to look for resources to control his misogynistic anger. But Jack does not have unassisted actual-sequence control over his anger.

I think that this scenario presents a plausible argument for the idea that, although Jack lacks actual-sequence control over his misogynistic anger, he has counterfactual-sequence control over it, and this counterfactual control is relevant to Jack’s responsibility status. Jack would not be risking his life by seeking out therapy. In fact, he wouldn’t be sacrificing anything of moral value. And if psychiatric counselling is free, as it is in Canada and some of the more socialist countries, he isn’t even sacrificing anything of prudential value. By not seeking help, he’s not exercising his (basic human) capacities in the way he should. And because he’s not exercising these capacities responsibly, he has a character flaw.

Now, assuming that all of this is plausible, there’s an argument to be made that the proposed counterfactual-control model initiates a slippery slope into responsibility nihilism. Once we allow that counterfactuals are relevant, we have to admit that determinism precludes agency. But that’s only true if we hold all counterfactual possibilities – or at least, very many counterfactual possibilities – to be morally relevant. Yes, in a deterministic universe no one is incompatibilist-responsible for anything. But why go back to the metaphysical basis of reality – the metaphysical underpinnings of human behaviour? Why engage in ‘panicky metaphysics’ at all, when we can stop at the moral ecology? All I’m suggesting in constructing the above example is that some counterfactual possibilities matter – the ones that the agent could have availed himself of relatively easily, without sacrificing anything of moral (and in this case, even prudential) value.

Recall that in my last post I noted that control theorists typically espouse an implicit ‘reasonableness’ constraint in their conception of ‘indirect responsibility’: they hold an agent responsible for omissions for which it is reasonable to complain against the agent. As Levy says when commenting on implicit bias, an agent might “be fully morally responsible for [a] behaviour [resulting from implicit bias], because it was reasonable to expect her to try to change her implicit attitudes prior to t” (2014). This implies that only counterfactual circumstances that were reasonably available to the agent are morally relevant – scenarios in which the sacrifice demanded of the agent is not overly stringent. I have argued that in the Jillian scenario, the moral demand is too high, but in the Jack case, it’s not. This gives us a foundation on which to say that someone can be responsible for failing to exercise counterfactual control if doing so was reasonable. Of course, ‘reasonableness’ is a vague concept, and I won’t precisify it here, but it’s a concept that makes intuitive sense, and one that we regularly rely on without clarification, to constrain the scope of normative concepts. (Consider Scanlon’s account of moral principles as those that no suitably-motivated person could reasonably reject; we get the point without delving into the semantics).

Character theorists of Sher’s stripe similarly hold that reasonableness is critical to responsibility. A person is responsible for an omission just in case a reasonable person with relevantly similar capacities would have done better. So once again, we are to judge the agent’s responsibility status by what it would be reasonable to expect of her, in light of certain counterfactual possibilities – what the agent could have achieved under conditions C.

If this is right, then counterfactual circumstances do matter morally. But not all, or just any, counterfactual circumstances are relevant. Just those that were reasonably available to the person, at relatively low personal cost. This goes back to what I was saying about the moral ecology, and about tracing. Counterfactual scenarios are part of the moral ecology, external to the person’s material brain. So when evaluating a person’s responsibility status, we have to consider the person’s relevant brain states, and the reasonably available counterfactual circumstances supported by the agent’s moral ecology. We have to look at the person’s capacities and the person’s moral ecology and the potential interaction between those two variables, to see if they support the possibility of agency cultivation. And with regard to tracing, we need to trace responsibility to that possibility. We need to assess whether Jack, for example, had the (general) capacity to acquire the (specific) capacity to remediate or suppress his misogynistic anger, given the resources of his moral ecology.

This suggests the following revisions to control theory and character theory. We need to see control as more than actual-sequence control to account for the possibility of indirect responsibility for omissions that were reasonably avoidable. Specifically, we need to include as morally relevant those counterfactual circumstances in which the agent could have interacted with the moral ecology in such a way as to bring about a new capacity, or (more precisely) to leverage an existing basic (undifferentiated) capacity into a more specific (specialised) capacity. And it suggests that we ought to regard character as diffuse or locally extended, i.e., co-constituted with agency-supporting or agency-enhancing social supports. (This is not a revision to Sher’s view, in fact, but it emphasises that aspect of it.) And finally, it suggests that when ‘tracing’ back to control, we need to trace beyond the agent’s actual sequence, to reasonably available aspects of the agent’s moral ecology, which would have enhanced the agent’s capacities if the agent had taken the right kind of initiative. And similarly with character theory: we need to trace character to relevant features of the local ecology, to determine if the agent is using those features to the best of her ability. If she is not, she may be culpably indifferent. (In this way, tracing applies to character theory as well, though its reach is more limited – we don’t need to trace as far back.)

These considerations strike against any theory that is too narrow in its conception of responsible agency, particularly the actual-sequence control model. And they suggest that control theory and character theory are perhaps more similar than they may initially appear, in that control theory admits a greater scope for blame under the category of ‘indirect responsibility,’ properly understood. These considerations build on Levy’s objections to actual-sequence control and character internalism. But I recruit them to show that there is a broader scope for blame than we tend to think, and he does the opposite – he recruits them in support of responsibility nihilism. Our views are technically compatible, though, because he’s refuting a desert-based notion of responsibility, and a relatively harsh form of desert-based responsibility at that, on which blame (1) is justified by reference to an agent’s actual sequence of deliberation or internal traits, and (2) entails fairly punitive sanctions. I also reject this notion of responsibility, because it combines a metaphysically tenuous conception of agency with dubious assumptions about what kind of thing blame is (punishment) and about proportionality (harshly punitive). I think that the same objections can be taken to support a modification of responsibility rather than a rejection of it.

Here’s one substantive alternative to the actual-sequence control model – the ‘limited counterfactual-sequence control model.’ People are responsible for (1) intentional infractions (like explicit bias), (2) failures to properly exercise control (e.g., manifestations of implicit bias that could have been avoided by suitable reflection), and (3) failures to enhance the capacity for control (e.g., failing to search for remediating measures available in the local moral ecology, when it would be reasonable to do this). And here’s a viable version of character theory: people are responsible for character defects just in case those defects could have been remediated with reasonable effort, using the resources of the local ecology. In case it’s not obvious, here’s why the local ecology matters for character theory. If Smith is a misogynist because he lives in 1950s middle America and doesn’t have access to good examples of egalitarian behaviour, while Jones is a misogynist in present-day New York just because he hates women, Jones has worse character than Smith, because he’s not only a misogynist, he’s also indifferent to women’s interests. That is, Jones exhibits a greater degree of indifference than Smith (see my 2015 paper and my 2013 paper for lengthier examples, and see also Fricker 2012). Some theorists assume that tracing doesn’t apply to character theory, but that’s false. We have to trace the causal source of a character defect, to see if the character defect is amplified by indifference to available reasons.

These are two viable versions of control theory and character theory that present plausible alternatives to responsibility nihilism. But there’s a third option that may seem to be a better fit with what I’ve said so far. It’s a consequentialist approach, along the lines of Vargas’ agency cultivation model (2013). On that view, we’re responsible to the extent that praise or blame is likely to enhance our agency (very crudely put). Here’s how this view works. Suppose that Smith is a misogynist who berates all the women in his life, but Smith is still a moral agent (not a full-blown, unresponsive psychopath or something of that nature; he has some vestige of the capacity to respond to reasons). Blame might function to enable or enhance Smith’s capacity to respect women (and it might function that way for all misogynists – let’s suppose that this is its general effect). So Smith is blameworthy. Because it is forward-looking, this approach might seem to eliminate the need for tracing, which would be a desideratum, since tracing is hard. But I don’t think it does. First, we need to know if Smith is a misogynist, as opposed to, say, a foreigner who doesn’t know he’s using a misogynistic slur, or a brainwashing victim, or someone whose family is being held hostage on condition that he demean his female acquaintances, etc. I artificially stipulated that Smith is a misogynist above, but in real life, we need to discover things about a person’s circumstances to make correct moral appraisals, so we need to get to know people and inquire into their lives. Second, we might want to consider whether Smith had opportunities to develop a more egalitarian sensibility – control-based considerations. The point is, even on a forward-looking account, we need to know things about an agent’s capacities and environment, and so we need to do some non-negligible amount of tracing. We can’t just guess what someone is like on the basis of one time-slice. People are notorious for jumping to conclusions, but to be responsible in our responsibility attributions, we need to be committed to giving fair consideration to relevant data.

The upshot is that there are convincing arguments against narrow versions of control theory and character theory, but they don’t force us down a slippery slope to responsibility nihilism. There are viable (extended) versions of control theory and character theory that we can adopt; and consequentialism is also an option. But we should not, I think, assume that we can do away with tracing on any of these alternatives. If anything, once we grant that the moral ecology is relevant to responsibility – more relevant than we might have previously thought – we have to extend the scope of tracing beyond the agent’s material brain. But I think that we implicitly do this anyway (in our ordinary judgments of praise and blame), which is why we consider people from past times and foreign cultures to be less blameworthy for certain infractions. Yet some accounts of responsibility don’t adequately explain this kind of contextual assessment – they don’t sufficiently appreciate the significance of context. Circumstances matter because they co-constitute, enable, and support – or conversely, impair – the capacities that underwrite responsible agency.

*****

Here’s how all of this relates back to implicit bias. As I said in my first post on implicit bias, it’s not at all clear how wide the scope of control has to be for responsibility to obtain. If we think that reasonably-available counterfactual circumstances are morally relevant, we can hold people responsible for exhibiting implicit biases if they failed to use remediating measures that were locally available, provided that this was a reasonable expectation. People with special duties – people on hiring committees, for example – have stronger reasons to use these measures, and thus are more susceptible to blame for relevant omissions. And this is true even if they lack responsiveness to such measures now, provided that they could have acquired suitable patterned sensitivity at some time in the past, by a reasonable effort. Ordinary people can be blamed if we think that it was within their ability to avoid manifesting implicit bias through some reasonable act of will. The case for blame is stronger if we admit counterfactual circumstances into the equation, because then we have grounds for saying that someone is (indirectly) responsible for an omission, just in case a counterfactual enabling circumstance was within reach. This brings the view somewhat closer to modern character theory in its scope for attributing blame.

 

Implicit bias: The limits of control/character


 

I’m going to write an entry following up on my previous post on responsibility and implicit bias. (Find it here). There, I noted that there is no agreed-upon definition of ‘control’ (see Fischer 2012, Levy 2014). And there’s also no agreed-upon definition of character (see, e.g., Angela Smith 2005, Holly Smith 2014, David Shoemaker 2014, and George Sher 2010 for different definitions of character). And this is likely why there are debates about the scope of moral responsibility within each camp, as well as between camps. Some control theorists think that we can be responsible for the manifestation of implicit bias and others think that we can’t. And ditto for character theorists. It is debatable whether implicit biases, which are neither explicitly valued by the agent nor under the agent’s direct control, are blameworthy.

Let’s start with control, which is, I think, the easier metric. How ‘narrow’ or ‘internal’ is control? That is, is control over motive M merely a matter of whether we have conscious access to M? Certainly not, since we lack access to a great many mental states that are patently blameworthy. We take it that Smith can be responsible for killing a pedestrian when driving under the influence, because Smith was in a sound state of mind when he started drinking. This is a ‘benighted omission,’ in which Smith lacks control at t only because he impaired his reasons-responsive capacity at t-1. In this case, Smith bears ‘indirect responsibility’ for hitting the pedestrian because he had ‘indirect’ (prior time) control. The only thing that would excuse Smith would be if he were drunk only because he had been subjected to clandestine brainwashing by an evil neuroscientist who induced him to drink, or something of that nature – or so people tend to assume. (There are innumerable examples of ‘Frankfurt-type’ brainwashing scenarios, including in Fischer 2006, 2012).

The point is that direct conscious control at the time of action is not needed. Virtually every theorist agrees on this point. Only some kind of ‘indirect control’ is necessary. But it is still an open question how broad the scope of indirect control should be. According to Levy (2014), we aren’t directly responsible for the effects of implicit bias because we have at most ‘patchy’ control over these states, but we might be indirectly responsible for their effects. He doesn’t say much about indirect responsibility, however, which is a shame because this is arguably the more interesting question. If Smith selects a White male job candidate due to implicit bias, did he have indirect control over his choice? As Holroyd points out, we have indirect control over implicit biases through the use of strategies such as implementation intentions, counterstereotypical exposure, explicit belief formation, and so on. So in principle, Smith could be held responsible on this basis. But much of this debate hangs on how reasonable it is to expect someone to preemptively use these strategies. As Levy states,

“an agent may lack direct responsibility for an action caused by their implicit attitudes, because given what her implicit attitudes were at t, it would not be reasonable to expect her to control her behavior, or to recognize its moral significance, or what have you. But the agent might nevertheless be fully morally responsible for the behavior, because it was reasonable to expect her to try to change her implicit attitudes prior to t” (2014).

So indirect responsibility ends up being tied to reasonable expectations. Is it reasonable to expect someone to use implementation intentions, etc., prior to taking a certain action? It’s natural in our current climate to expect employers to take steps to remediate implicit bias, but is it reasonable to expect this of ordinary people? Suppose that Jones, an ordinary person with no special responsibilities, makes a sexist joke, thinking that he’s being witty. He has no idea that he’s being sexist and is explicitly committed to egalitarianism. Is he responsible because he could have used remediating measures against implicit bias in the past?

If so, the scope of direct responsibility turns out to be very slight, while the scope of indirect responsibility turns out to be enormous. In fact, the control theory veritably collapses into character theory! Character theorists (for the most part) hold that we can be responsible for unconscious omissions such as implicit biases, provided that those mental states are caused by certain parts of our motivational set (as opposed to extraneous factors). So a character theorist might hold that someone is not responsible for acting under hypnosis (because this is an ‘alien’ motivation, external to the agent), but responsible for deeply-entrenched implicit biases that reliably cause patterns of behaviour. Well, that’s what control theorists would hold someone indirectly responsible for, too.

Consider for a moment Sher’s version of character theory (2010). He says that we’re responsible for an omission if it’s one that a reasonable person in our position would not have committed, provided that the omission stems from our own motivational psychology. So, we’re responsible for omissions for which a reasonable person could complain against us. That sounds a lot like the control view as I have just described it. On the latter, we’re responsible for omissions if we could have acquired the capacity to avoid them, by taking reasonable measures – by doing what a reasonable person would do in the circumstances. Construed this way, the control view and the character view turn out to be very similar.

Moving forward, let’s assume that a person can be responsible for A just in case it would have been reasonable to expect the person to acquire the capacity to control A at some point in the past. This somewhat circumscribes the scope of control, forbidding acquisition measures like killing someone, or severely injuring oneself, to enhance control. What’s ‘reasonable’ is still very debatable, but I won’t settle it here – let’s just grant that it rules out killing, death, and severe injury. Still, this doesn’t circumscribe control very much, because we still need to determine whether someone had the capacity within reason to acquire a better capacity for control, and doing this still requires careful analysis.

These remarks are somewhat speculative, because control theorists haven’t said much about indirect control. Fischer and Levy write a lot about direct responsibility, but not much about indirect responsibility, except to note its existence. Yet direct responsibility, I think, turns out to be pretty irrelevant. Does it matter if we’re directly responsible for something, versus being indirectly responsible? If someone acts on implicit bias only because she failed to critically reflect on this mental state, is she more blameworthy than someone who fails to use remediating measures? Not obviously. Levy says that ‘indirect responsibility can underwrite a great deal of praise and blame; it is not necessarily a lesser kind of responsibility.’ We don’t have an argument to the effect that indirect responsibility is a lesser kind of responsibility just because it is a more remote kind. Maybe direct responsibility matters because it tells us something about a person’s capacities: someone who is unaware of her implicit biases might be less capable of suppressing those biases. But now this seems irrelevant, since the critical question is not ‘is S reasons-responsive now?’ but ‘was S capable of becoming reasons-responsive at some “suitable prior time” (to quote Vargas 2005)?’ We must trace reasons-responsiveness back – maybe way back – to prior times when S had the opportunity to improve herself. If indirect responsibility is a thing, the buck no longer stops at the agent’s direct capacity for control; we need to see if she was ‘indirectly’ capable, at some prior time, of acquiring direct control.

So for this reason, I don’t really understand why we need to be concerned with consciousness or direct responsibility at all. Correct me if I’m wrong here.

Indirect responsibility is slippery. It’s epistemically slippery. Here’s why: it requires that we ‘trace responsibility’ back to a ‘suitable prior time’ at which the agent could have foreseen the long-range consequences of her choices. This is what Vargas calls ‘the trouble with tracing’ (2005). Fischer subsequently responds, basically, that tracing isn’t really a problem, because we can generally foresee in ‘coarse-grained’ terms what the consequences of our choices will be (2012). But I doubt this. I think we’re pretty terrible at predicting the future, and we’re terrible because we’re susceptible to all kinds of cognitive biases, including but not limited to implicit bias. (See also: self-serving bias, confirmation bias, confabulation, rationalisation, and the rest.) So, we’re generally bad at forecasting outcomes.

This is one dimension of the epistemological problem, but I’ve just adverted to another one: we not only need to determine whether someone satisfies the forecasting condition directly, but we also need to determine whether the person could have acquired a better forecasting capacity at some ‘suitable prior time.’ So we need to trace back beyond the person’s immediate capacity for forecasting (and beyond the person’s immediate capacity for control more generally), to see if the person had a prior opportunity to refine that capacity (or those capacities), in light of her circumstances. (Our circumstances are the resources that we use to acquire and hone our capacities, so they’re relevant here.) Suppose that Jeff, the middle-aged middle manager, is a jerk, and that Vargas is right that Jeff could not foresee in his halcyon youth that he would one day become a jerk. This doesn’t settle the question of whether Jeff is all-things-considered responsible, because we need to trace back further still to see if Jeff could have acquired the capacity to foresee that he would become a jerk. Because if he could have, then he may be indirectly responsible for failing to acquire or hone that capacity, and thus indirectly-indirectly responsible for becoming a jerk. He’s responsible at n degrees of remoteness for his jerkiness.

If this is right, then control entails even more tracing than Vargas’ 2005 paper suggests. What I mean is, we can’t just home in on the agent’s brain at time t-n (some time in the agent’s personal history), and assess whether the agent had a particular capacity at that time. We need to abstract away from the agent’s capacity, to see if the agent could have enhanced, modified, or refined that capacity, using the resources of his environment. The capacity for control is not just a matter of an agent’s internal (physical, psychological, cognitive) capacities, but the relationship between those capacities and the world. So we can’t just trace back and look at the agent’s endogenous capacities at t-n. We have to look at those capacities, and the local environment, and the relationship between those two things.

Maybe Vargas intends for tracing to mean exactly this. In his later book (2013), he places a lot of emphasis on ‘the moral ecology’ – the conditions that support moral agency. This view is a corollary of his agency-cultivation model. But I think that it’s informative to frame this perspective as a response to control theory, since it implies that, while control is relevant to responsibility, control is not an endogenous capacity in the agent’s mind-brain, i.e., something that we could evaluate, in principle, with a brain scanner. Control is a relational property, holding between an agent and the environment.

Similarly with character: if character is a reliably-manifested property of an agent, then it’s a feature of the interaction between the agent as an individual and the environment. Sher’s view captures this insight: he sees character as a set of physical and psychological states that interact to produce an agent’s “characteristic patterns of thought and action” (2010: 124), and he describes this account as ‘suitably interpersonal’ (2010: 125). But insofar as a person is responsible only for ‘characteristic patterns,’ we need to identify the cause of a person’s action, to see if it is part of his character or not. If Jeff is curt with an employee, is it because he’s a jerk or just because he’s sick? If the latter, this isn’t part of his character. So character theory also requires tracing, though to a much more limited degree.

I’m going to continue this train of thought in the next post.

Responsibility and sympathy?


Tonight I was thinking about the relationship between responsibility and sympathy, and more specifically, whether there is one.

I could be wrong, but I don’t think that a lot of people have written about whether sympathy should affect our reactive attitudes. A lot of people have written about whether sympathy (or empathy) is required for someone to be a responsible agent – that is, for someone to be susceptible to moral address at all; but not much has been said about whether, when contemplating someone who is unambiguously a moral agent and has done wrong, sympathy ought to mollify our blame, indignation, or resentment toward that person.

It might be natural to think that sympathy ought to mollify blame, but this isn’t obvious at all. On a strict retributivist view, a person deserves blame just in case she has done wrong and possesses certain agential capacities (rationality, reasons-responsiveness, second-order reflection, or what have you); whether the person has suffered pitiable hardships (which have not undermined her agency) has no bearing on whether she deserves blame. On a strict retributivist view, if you know that Smith is a moral agent and Smith has committed a murder, and you find out that Smith has had a wretchedly abusive childhood, and you feel sorry for him on this account, this is no reason for you to revise your original moral stance. Your natural sympathy is just a meaningless emotional effluence. Smith deserves blame because he satisfies all the retributivist criteria for blame: he committed a violation and he’s a moral agent. Your feelings about his plight are irrelevant.

As I’ve noted in earlier posts, Strawson did think that ‘peculiarly unfortunate formative circumstances’ could attenuate blame, but not because they elicit our sympathy. (At least, he never talks about sympathy). His rationale seems to be that these kinds of circumstances can prevent someone from developing any moral personality at all, and from being responsive to the reactive attitudes. Such people don’t satisfy the agency conditions of retributive blame – they’re not moral agents. But here, I’m talking about people who are responsible agents, but who had really challenging lives. Not necessarily such bad circumstances that they turned out to be psychopaths, but still really hard lives.

Watson (1987) wrote an interesting paper on the reactive attitudes with a section called ‘sympathy and antipathy,’ where he notes that murderous psychopaths like Robert Harris (a famous case) elicit a mix of sympathy and antipathy in us. We feel ambivalent because we see Harris as both a victim and a villain. But Watson doesn’t take this to mean that sympathy should move us to excuse Harris. He thinks that whether we should blame him depends on whether he’s capable of responding to moral address, which is a lot like Strawson’s view. So it doesn’t seem like sympathy should play any role. All that matters is (a) whether the individual has done wrong, and (b) whether the individual is a moral agent. Sympathy is just an irrelevant moral remainder.

But what if we don’t take a retributive stance? What if, like Vargas (2013), we think that blame functions to enhance moral agency? Well, then I think that sympathy is relevant, because if we lack appropriate sympathy toward someone, this can damage the person’s moral agency. Suppose that Salima has had a wretched life, but is still, by all appearances, a moral agent. In fact, Salima is surprisingly sensitive. But Salima has committed an infraction. If we blame Salima unsympathetically, it might make her a worse person; it might alienate her further from the moral community. People who have had wretched lives are usually already alienated, since wretched lives typically have alienating features: neglect, abuse, etc. Being unsympathetic to such people can push them out of the fold. Towards traumatised people, we might decide to attenuate our blame, or suspend it altogether.

Should we adopt a stance of universal sympathy, and eschew the reactive attitudes altogether? Strawson seemed to think that this was impossible. On a forward-looking account, it might not be desirable, inasmuch as some people might benefit from blame. Maybe over-privileged people who are too self-entitled. Maybe people with ‘affluenza’ like Ethan Couch, who have never been properly held responsible. This doesn’t mean that they’re not moral agents, just that they’re very irresponsible moral agents. Perhaps, unlike people with wretched lives, they suffer from too much privilege, and blame would correct that.

This is obviously very speculative. Whether blame accomplishes anything is an empirical question. We need to study the effects of blame – blame as a reactive attitude – to see if it’s justified on an agency-cultivation model. This is a separate question from whether punishment accomplishes anything – punishment such as locking someone up in prison. I doubt that punishment is ever as effective as rehabilitation, and so I doubt that punishment is ever strongly justified. But theorists who write about the reactive attitudes aren’t talking about punishment, they’re talking about cognitive-conative states. It would be worth studying and writing about whether deploying these kinds of attitudes can be effective in certain types of cases, and if so, which types. Unfortunately I can’t do that here.

*****

Moral responsibility and implicit bias

 


Today I want to talk about moral responsibility and implicit bias. I have to confess that I’m not quite up-to-date on the implicit bias literature, including this new book, “Implicit Bias and Philosophy,” eds. M. Brownstein & Jennifer Saul (2016), so this will be an informal treatment.

As I noted in previous posts, some of the debates in moral philosophy centre on whether a person can be responsible for unconscious/implicit/system-1 states or processes. There are disagreements between the two major camps – character theory and control theory – and also within each camp. Here are some examples of the fault lines.

On the control theory side, Jules Holroyd (2012) argues that we cannot be responsible for having implicit biases, since we lack direct control over them, but we can be responsible for manifesting implicit biases insofar as we failed to put into place strategies for preventing them from influencing our behaviour – strategies like anonymising job applications. Neil Levy (2014, 2014b) similarly argues that we cannot be responsible for manifesting implicit biases because we don’t have personal-level control over them.

Amongst character theorists, Angela Smith (2005) thinks that we can be responsible for implicit biases because they can reflect our evaluative judgments – our character – whether we explicitly endorse them or not. On the other hand, Holly Smith (2014) (not to be confused with Angela) argues that we cannot be responsible for implicit biases because they don’t reflect our ‘full evaluative structure’: they’re recalcitrant, alien states that are outweighed by our explicit commitments. Levy similarly holds that implicit biases are not blameworthy on character theory because they’re ‘too alien to the self to ground responsibility.’

Obviously this doesn’t exhaust the possible responses to this question. I just wanted to point out some of the fault lines in the literature and then throw in my own two cents.

I feel like we can be responsible for acting on implicit biases, even if we don’t have direct reflective control over them, and even if they run afoul of our explicit commitments. I’m not going to make a theoretical case for this intuition: I’m just going to tell a kind of story.

When I was searching for pictures for my post on responsibility and attachment theory the other day, I did an image search for ‘attachment theory’ to find some stock photos. I had already decided that I wanted to use fairly generic, mostly cartoon-y, images for all of my posts. These are the first three images of human figures that I found, and they’re typical examples:

[Three stock images: two of White mothers holding White babies, and one of a White woman’s hand holding a White baby’s hand]

Notice anything? These are all pictures of White women with White babies. (The third picture is of a White woman’s hand holding a White baby’s hand – another typical image). The frequency of this particular type of image is troubling for several reasons, including the following. First, it marginalizes interracial families (like my own extended family). Second, when interpreted against the backdrop of western patriarchal colonialism, it implies that White people are more concerned to ensure that their (White) children grow up with a healthy attachment style: that this is a special concern for White mothers. And third, it suggests that women (not men) are primarily responsible for ensuring that their children develop secure attachment. And conversely, when children turn out badly, mothers are to blame.

The pictures of White dads and babies were way down the list. And the pictures of Black dads and babies were way, way, way down the list. No one curated this list, so I’m not saying that there’s a biased curator behind the results. But the list reflects the top hits of Google users, so it reflects a culture-wide implicit bias – a composite sketch of our implicit biases.

It’s worth noting that attachment theory was originally a theory about women’s parenting abilities. John Bowlby’s original formulation of the view was called ‘maternal attachment theory,’ and it posited a lack of proper bonding with the mother as the main source of later emotional problems in the child. In a report for the World Health Organization titled ‘Maternal Care and Mental Health,’ Bowlby wrote that “the infant and young child should experience a warm, intimate, and continuous relationship with his mother in which both find satisfaction and enjoyment” or else the child was at risk of serious mental health problems (1953). To be fair to Bowlby, women were obliged to do a majority of the care-taking in his time, but only because they lived in a patriarchal culture where men were generally unwilling to do their fair share. This doesn’t mean that women were exclusively, or primarily, blameworthy for their children’s emotional problems. Bowlby also lived at a time when it was natural to assume that parents formed heterosexual pair-bonds, which explains his assumption that there would even be a mother, and only one (cisgender) mother, paired with a (probably cisgender) father. In other words, the theory, in its original formulation, is archaic. That’s why people now call it just ‘attachment theory,’ and expunge all of the (implicitly) heteronormative, racist, and sexist parts of the original theory.

It’s problematic that in 2016, the Google image page for attachment theory still reflects Bowlby’s original version of the theory: the idea that women have a responsibility to foster secure attachment in their babies in the context of a heteronormative same-race family.

Now, here’s how this discussion is related to responsibility. Yesterday when I was browsing the pictures, I unreflectively chose the first image in the list: the first one posted above. I was prepared to insert it at the top of my entry, under the title. I wasn’t being very discriminating: I just needed stock images to divide my entries, to make it easier for the reader to distinguish one from the other. I didn’t much care about the image. Then I scrolled down and noticed that most of the pictures were of White mothers and White babies. And then I thought about Bowlby’s implicitly sexist ‘maternal attachment theory’ from the 1950s, and how those Google images implicitly reaffirm Bowlby’s problematic conception of the ‘healthily-attached American family.’ Then I decided not to use that picture, and instead chose a generic black-and-white image of two hands. Then I did a special search, and paired the original picture with a less ambiguous one of a Black man’s hand holding a child’s hand. I didn’t want to perpetuate Bowlby’s archaic conception of the family.

I think that if I had hastily posted a stereotypical stock image, it would have been a manifestation of implicit bias, because it would have reflected my tendency to see this kind of stereotypical image as unproblematic. If someone had said, ‘you chose a stereotypical image for your picture, and it reflects implicit bias on your part,’ I would have been inclined to agree. But even if I hadn’t been acting on implicit bias, I would have been making a morally problematic choice, something for which I might be blameworthy provided that I satisfy other conditions – having the capacity for control, being a certain kind of person.

On either interpretation, the choice is problematic.

Now, when I made my original selection, did I have control over my decision? In one sense, no. I didn’t realize at the time that I was making a criticizable choice. On the other hand, with more reflection, I could have made a better choice. (Fortunately, I did reflect more; but this was something of an accident. What if I had been tired, or hungry, or in a hurry?) The idea of control is elusive. Do we have control over a certain decision if we make the decision in a hurry, and we could have made a better decision with more time? Do we have control if we act on a bad decision, when we could have solicited advice from another person before acting? What if we solicit feedback from the wrong person, when we could have gotten better advice from someone else? Is this scenario too remote to say that we had ‘control’ over our decision? In other words, how context-bound is control? Is our capacity for control ‘narrow’ or ‘extended’?

Here’s one of the problems with the notion of control (reasons-responsiveness). Our reasoning capabilities are ‘bounded,’ i.e., limited by available information, available time, and the mind’s information-processing ability (H. A. Simon 1982). This is why people perform differently in different contexts, as situationist psychology shows: moral competency is situation-sensitive (e.g., Doris 2008). A person might be better able to notice and respond to moral reasons in context A than in context B. We might be in a better position to make a decision after we have taken a nap, or had a good meal, or consulted with a trustworthy advisor, or taken a class, and so on and so forth.

It’s instructive to consider that both food and sleep (amongst other creature comforts) affect reasoning ability. Judges tend to hand down more lenient rulings after food breaks (Danziger et al. 2011). Sleep enhances memory processing and emotional brain reactivity (Walker 2009), and facilitates creative problem solving (Mednick 2009). Robert Louis Stevenson claimed to have come up with the plot of ‘Dr. Jekyll and Mr. Hyde’ during a dream, and this is also how Mary Shelley conceived of Frankenstein’s monster. Perhaps people are more responsive to relevant reasons when their basic necessities have been met – when they are at peak cognitive performance. Situations also make a difference. A person might be unwittingly sexist until taking a university course in feminist philosophy. The person was, in a broad sense, ‘capable’ of responding to reasons not to be sexist all along, but he needed the right circumstance to realize that capacity. He wasn’t capable of responding appropriately using only his own cognitive faculties prior to taking the class.

The upshot of this discussion is that the correct notion of control – whether it should be narrow or extended – is debatable. Should we hold someone responsible for manifesting implicit bias only when her physical needs are met and she is at peak cognitive performance? Or also when she is hungry, tired, sick, etc.? Or also when she has failed to take an interest in the patriarchal, colonial, heteronormative history of her own culture?

I don’t think this question has been satisfactorily decided, and it might underlie some of the disputes in the literature.

This also connects with character theory. Is someone who acts on implicit biases – someone who would unreflectively post problematic content on social media, or unreflectively say problematic things – ultimately not responsible for those behaviours because they are at odds with her full evaluative structure? Or does her behaviour reflect a lack of appropriate concern? If the latter, there is a case to be made that her indifference makes her responsible for manifesting implicit bias, because her indifference is really what defines her evaluative structure. She’s more indifferent than she is concerned about her moral integrity.

I can’t pretend to be able to resolve these issues. This was mostly a reflective exercise for my own benefit, and a way of understanding some of the debates in the responsibility literature.

I am aware of some good work that makes inroads into this debate. First, Levy has a couple of articles that argue that an agent’s responsibility-relevant capacities are extended beyond the agent’s mind-brain (2008, 2014); and Doris has a book that argues that agency is socially-embedded and interpersonal (2015). We can see earlier precedents for these views in feminist philosophy, including relational accounts of autonomy (e.g., Code 1991, Friedman 2003, Oshana 2006). Unfortunately, feminist philosophy hardly ever gets cited in non-specialized journals. But a relational/extended account of responsibility owes a lot to feminism.

*****

Responsibility and attachment theory

[Images: a dad’s hand holding a baby’s hand; a second stock photo]

Lately I’ve been thinking about attachment theory and its implications for responsibility. I’m going to make a case for the idea that an attachment disorder can partially excuse certain attachment-related behaviours.

Attachment theory is rooted in studies on child developmental psychology, such as those performed by John Bowlby (1969) and Mary Ainsworth (1973). Bowlby posited that a child’s attachment relationship with her parent(s) would influence her adult attachment style – whether she can bond securely with other adults. He held that there is a critical developmental period (from 1 to 2 years) during which parent-child bonding is crucial, and that if this bond is inadequate, the child is likely to acquire lasting attachment deficits. Modern attachment theorists are less strict, in that they hold that attachment can be influenced by factors other than upbringing; but it is hard to deny that upbringing is a significant factor. Bowlby’s theory received some corroboration from Harry Harlow’s appalling (to be frank) experiments with rhesus macaque monkeys, which included giving monkeys surrogate ‘mothers,’ ranging from metal cones to ‘abusive’ mothers with sharp spikes. He also ran a trial in which he placed monkeys in an isolation chamber called ‘the pit of despair’ for up to a year. Predictably, the monkeys developed severe psychological disturbances. Those that were reunited with other monkeys for playtime were bullied, and two starved themselves to death (Blum 2002). All exhibited psychological distress and attachment problems.

The upshot of attachment theory is that some people have a ‘secure’ attachment style, and others have an ‘insecure’ attachment style. Within the insecure category, there are three sub-types: anxious attachment, avoidant attachment, and anxious-avoidant attachment. Anxious attachers are very emotionally invested in their partner – too invested. They bond quickly and jealously. They worry about being rejected. Avoidant attachers are independent – too independent. They’re uncomfortable with intimacy and avoid sharing their feelings. Anxious-avoidant attachers combine both traits: they bond anxiously and jealously, but withdraw intimacy at the least provocation. People with an insecure attachment style do not bond well with each other (no matter what type they are): it’s recommended that they seek out a secure attacher for a relationship. (To read about these attachment styles or to test yourself, see Levine & Heller’s ‘Attached’ [2010]).

How do attachment styles affect responsibility? It’s notable that in the responsibility literature, there are references to childhood (though not as many as you might expect). Perhaps most notably, Strawson includes in his list of exempting conditions ‘peculiarly unfortunate formative circumstances,’ along with being “warped or deranged or compulsive in behaviour” (1963). In the face of these deficits, we withdraw the reactive attitudes, and take the ‘objective perspective’: we treat the agent as an “object of social policy,” to be “managed or handled or cured,” but not praised, blamed, resented, etc. I have contested this stark distinction between the participant stance and the objective stance (and correlatively, between excusing and exempting) in previous posts, and have suggested that we treat people as responsible in a ‘local’ sense, in light of the person’s distinct capacities. (So someone can be responsible for X but not responsible for Y, due to a Y-relevant incapacity.) And I have suggested that we should see responsibility as a matter of degree, not an all-or-nothing proposition. So, we should resist ‘exempting’ anyone from responsibility. But this does not mean that local incapacities cannot provide an excuse. Perhaps if someone had a peculiarly unfortunate childhood, the person is pro tanto excused, to the extent that the person’s interpersonal (bonding) capacities are impaired. For example, if Jones is a shitty partner only because of his severely abusive childhood, he might be partially excused for his relationship deficits, without being exempt from blame altogether. Relatedly, we would not want to regard Jones as not a moral agent at all.

Susan Wolf (1986) makes a notable reference to childhood in her work on responsibility. She gives the now-famous example of JoJo, the son of an evil dictator, who grows up to enjoy torturing and killing his own citizens “on a whim,” just like his father. Wolf leads us to believe that JoJo’s father has, in effect, groomed his son to be like him, giving him a “special education” and bringing him along to “observe his daily routine.” So, she says, JoJo is not responsible for his vicious behaviour because he is a “deprived childhood victim” – a victim of his father’s grooming. This kind of childhood could be seen as a form of child abuse, inasmuch as JoJo’s dad has given him a deranged upbringing, training him to want to kill and torture innocent people. He has, in effect, turned him into a sociopath, indifferent to other people’s interests. This can be seen as a severe example of insecure bonding: JoJo was inculcated into an extreme attachment disorder through a peculiar and disturbing kind of child abuse. (At least, this is a reasonable construal of the situation.)

These examples, on my interpretation, suggest that someone could be (partially) excused of wrongdoing on account of an attachment disorder acquired in childhood. But Strawson’s and Wolf’s accounts don’t obviously support this conclusion, since they describe extreme, ‘peculiar’ cases of child abuse and deprivation, and attachment disorders can be relatively mild and mundane – lots of people have them. However, on a spectrum conception of responsibility, we can hold people responsible for problematic behaviour due to attachment disorders to a limited degree. Philosophers, however, seem resistant to this conclusion – though it is hard to say, since they rarely discuss childhood at all. Vargas (2013), for one, seems to think that formative circumstances are too historically remote to count as excusing, and Sher (2010) seems to think that they are simply irrelevant. (This is my best extrapolation from their books.) Yet, if peculiarly unfortunate formative circumstances can severely undermine responsibility, as Strawson says, then surely a lesser attachment disorder can undermine responsibility to a lesser extent.

 

I can’t provide a knock-down argument for this claim, but I’ll give an example that I think is intuitively compelling. Suppose that Smith has been avoiding his partner Karen. Karen is a secure attacher, and is not jealous of Smith, but she very reasonably feels neglected, since he has not been returning her phone calls. When Smith finally does call, Karen accuses him of being distant and insensitive. Smith replies that he has an attachment disorder due to an abusive childhood. His father abandoned him as a child and his mother had clinical depression, preventing her from expressing affection. Smith is trying to overcome his attachment disorder, but he thinks it will take time.

I think that Smith has a partial excuse. It would be different if Smith had responded that he’s sorry, but he would rather be at work than talk to Karen, because he cares more about work than the relationship, and this is his voluntary choice. Having an attachment disorder is (1) not fully voluntary, and (2) not necessarily reflective of a person’s moral personality – it may be a deeply disvalued part of a person’s motivational set. I have discussed these conditions elsewhere, and noted that they are not universally endorsed. Still, I think that it makes good sense to think that an attachment disorder has excusing force.

One more thing. If child abuse and child neglect are partially excusing, it would be useful to know how excusing they are, and what capacities they impair, so that we can refine our reactive attitudes accordingly. I’m aware of research showing that childhood abuse and neglect predictably give rise to “severe, deleterious short- and long-term effects on children’s cognitive, socio-emotional, and behavioural development,” and that the deleterious effects are more severe for neglect than for abuse (Hildyard & Wolfe 2002: 1). So neglect victims might be particularly excusable (ceteris paribus). The predictability of these effects – the strength of the correlation between child neglect/abuse and deleterious long-range socio-emotional and other deficits – suggests that the formative circumstances are coercive, in much the same way that water-boarding torture is coercive: most victims submit and almost none resist. These particular types of deficits are responsibility-relevant inasmuch as they underwrite our interpersonal competencies, and responsibility (for Strawsonians, at least) is an interpersonal practice – a matter of being able to respond appropriately to other people’s emotions. These studies are thus relevant to moral philosophers inasmuch as they show that childhood abuse and neglect can have a lasting negative effect on the capacities implicated in responsible agency.

(If you’re curious to see some of my publications on responsibility and unfortunate childhood circumstances, check out my 2015, 2014b, and 2014a.)

*****

Responsibility on a spectrum

 


In my last three entries, I have been arguing that blogs may function to buffer responsible agency against the rapid social changes that have emerged over the last 35 years, because blogs can reinforce character and control – two hallmarks of responsible agency. Millennials have witnessed rapid changes in the nature of their relationships and the workforce, and these social factors are (arguably) critical to responsible agency. If character and control are heavily dependent on underlying social structures, and these structures are in flux, we can expect a concomitant weakening of character and control in the general population. So it is a reasonable conjecture that Millennials in the aggregate have weaker agency than previous generations. However, they may be able to use technology to counteract these effects.

This is a very generic claim about agential capacities in general and in the aggregate. I realize that Millennials likely have enhanced agency in some respects and diminished agency in others. For example, they likely have enhanced agency in the sense that they are less susceptible to problematic implicit biases, in light of general knowledge of this phenomenon. Still, I think it is legitimate to say that, in general, agential capacities may be threatened by social instability, provided that we understand this as a very blunt statement.

Now, one might accept the gist of my proposal and yet argue that responsibility is an all-or-nothing proposition, not a comparative notion in the way that I am suggesting. Millennials, that is, have just as much responsible agency as their parents, because they are above the threshold for responsibility, and any additional capacities above this threshold are irrelevant. So we cannot say, for instance, that 30-year-old Smith is less responsible for a particular type of antisocial behaviour because he lacks a stable family structure, compared to 50-year-old Jones who has a strong support network, if Smith has enough social resources to count as a responsible agent. There are no grounds for drawing interpersonal comparisons, in other words, except between responsible agents and non-responsible agents. Interpersonal differences above the minimal threshold are irrelevant. Those below the threshold are not moral agents at all: they are outside of the participant perspective, “objects of social policy,” as Strawson put it (1963). This applies to children, non-human animals, severely mentally ill people, and others who lack basic moral competence.

It may seem intuitive to see responsible agency in this way. Some fall below the minimal line, and others are responsibility-apt. We can deploy the reactive attitudes toward this latter population. But it is no longer standard to see responsibility in this light. More and more, responsibility theorists are insisting that responsibility comes in degrees. One example is Faraci and Shoemaker (2010), who argue that Wolf’s famous example of JoJo would benefit from the addition of an explicit ‘spectrum’ conception of responsibility. They administered a survey in which subjects could rate JoJo’s degree of responsibility from 1 (least responsible) to 7 (most responsible). This spectrum notion of responsibility fits very nicely with Strawson’s account of the reactive attitudes, on reflection, since the reactive attitudes, inasmuch as they have emotional content, surely come in degrees. I can feel resentment very strongly or relatively weakly, depending on the precise quality of a person’s will and the consequences of the person’s action. By introducing degrees of responsibility, we can rate evil dictators like JoJo across varying situations, allowing that situations can modulate responsibility even if the agents are basically the same. So if JoJo1 is an evil dictator on a remote island, whereas JoJo2 is an evil dictator in a well-connected, modern country, JoJo1 is less responsible (blameworthy) than JoJo2.

Another example comes from Washington & Kelly (2015). They present three examples of implicitly biased people: one who lives in the distant past, one who lives in America circa 1980, and one who lives in America circa 2014. They say that, ceteris paribus, the third person is more blameworthy than the second, who is more blameworthy than the first, based on the social resources available to each person. Because the third agent has access to more remediating measures for implicit bias (e.g., IATs, counterstereotypical exposure, implementation intentions, and so on) than the other two, he is proportionally more blameworthy.
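
As a toy illustration of this comparative structure – my own sketch, not a model that Washington & Kelly themselves offer – one could let ceteris-paribus blameworthiness scale with the remediating resources available in the agent’s context. The resource lists and the linear scaling below are stipulated purely for the example.

```python
# A toy model of comparative blameworthiness: holding the agent's
# conduct fixed, blame scales with the remediating resources the
# agent's social context makes available. The resource lists and the
# linear scaling are illustrative assumptions, not Washington &
# Kelly's own formalism.

CONTEXTS = {
    "distant past":  [],
    "America, 1980": ["some public discussion of bias"],
    "America, 2014": ["IATs", "counterstereotypical exposure",
                      "implementation intentions"],
}

def blameworthiness(context: str, base: float = 1.0) -> float:
    """More available remediation -> more blame, ceteris paribus."""
    return base * (1 + len(CONTEXTS[context]))

for context in CONTEXTS:
    print(f"{context}: {blameworthiness(context):.1f}")
```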

I think that this is how we actually blame people when comparative examples and salient social information are available, and when we are being judicious judges (as we should be). Sometimes we make a hasty judgment without taking the context into account, but this is an incomplete and error-prone assessment. We make better responsibility judgments when we are more careful and mindful of contextual information. My initial judgment might be that 25-year-old Jack is a lazy, self-entitled loafer, but if I consider his context, and the fact that his generation is relatively economically disenfranchised compared to his parents’ generation, I may revise my judgment and hold him less accountable (or not accountable at all) for his lack of initiative. When I compare his situation against other contexts, I get a better picture of his quality of will and agential capacities. In light of my comparative judgment, my absolute judgment of Jack is modified: in realizing that he is less responsible for failing to contribute to society than the average Baby Boomer, I realize that he is less blameworthy in absolute terms – and perhaps not blameworthy at all.

In this way, the comparative examples offered in the recent literature lend credence to the view that we can make cross-generational and cross-cultural comparisons of responsible agency.

It might seem like a mystery why one would ever think of responsibility in all-or-nothing terms, but this view makes sense on certain pictures of responsibility. Some theorists, like Scanlon (2014), think of responsibility (in the accountability sense) as a relationship-modifying practice, with blame consisting of an attenuation or suspension of a relationship. On a strong reading, this may seem to suggest that responsibility is all-or-nothing: we can continue a relationship or end it. We can continue to support it or withdraw support. This might be fine for a certain conception of responsibility, but it does not work for a Strawsonian picture, since Strawson thinks of responsibility as a practice of deploying reactive attitudes like resentment and indignation, and these attitudes (surely) come in degrees. Scanlon’s view is arguably too blunt. Modifying a relationship is an extreme form of blame: we have recourse to a vast range of reactive attitudes first. If these don’t work, then we can end the relationship, but this is arguably a case of taking the objective perspective, i.e., withdrawing responsibility. In any case, if we endorse Strawson’s view for whatever reason, and we understand the reactive attitudes as cognitive-affective in nature, there is good reason to see responsibility, in its affective content, as a matter of degree.

It is worth noting that in some institutional contexts, it makes sense to see responsibility as all-or-nothing. For example, in determining whether someone is clinically mentally ill according to DSM-5 standards, we have to make a yes-or-no determination. Either the person should have access to mental-health resources or not. Similarly, in court, either the defendant is guilty or not guilty. Sentencing is proportional, but the verdict is either-or. This is not an objection to the spectrum conception of (moral) responsibility, though, since reactive attitudes are not like psychiatric diagnoses or criminal sentencing. They are reactions within interpersonal communication that can come in kinds (resentment, indignation, etc.) and degrees (strong, moderate, weak).
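
To make the contrast vivid, here is a minimal sketch of the difference between a graded practice and a binary determination. The 0-to-1 ‘responsibility score,’ the cutoffs, and the labels are all arbitrary placeholders of my own, not anything from the literature.

```python
# A toy contrast between graded moral responsibility and binary
# institutional determinations. The score scale and cutoffs are
# arbitrary placeholders chosen for illustration.

def reactive_attitude(score: float) -> str:
    """Graded: the strength of the attitude tracks degree of responsibility."""
    if score < 0.25:
        return "mild disappointment"
    elif score < 0.6:
        return "moderate resentment"
    else:
        return "strong indignation"

def verdict(score: float, threshold: float = 0.5) -> str:
    """Binary: institutions collapse the spectrum into a yes-or-no call."""
    return "guilty" if score >= threshold else "not guilty"

for s in (0.2, 0.5, 0.9):
    print(s, "->", reactive_attitude(s), "/", verdict(s))
```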

If this is right, then the story about responsibility, blogs, and social change is that much more plausible.