In my last post on moral enhancements, I considered whether there is a duty to enhance oneself and others, and correspondingly, whether one can be blameworthy for failing to fulfil this duty. I said that this is a complicated question, but it depends to a great extent on whether the intervention is susceptible of prior informed consent, which in turn hangs on whether there are likely to be unknown (especially potentially adverse) side-effects.
Here, I want to consider whether intended moral enhancements – those intended to induce pro-moral effects – can, somewhat paradoxically, undermine responsibility. I say ‘intended’ because, as we saw, moral interventions can have unintended (even counter-moral) consequences. This can happen for any number of reasons: the intervener can be wrong about what morality requires (imagine a Nazi intervener thinking that anti-Semitism is a pro-moral trait); the intervention can malfunction over time; the intervention can produce traits that are moral in one context but counter-moral in another (which seems likely, given that traits are highly context-sensitive, as I mentioned earlier); and so on – I won’t give a complete list. Even extant psychoactive drugs – which can count as a type of passive intervention – typically come with adverse side-effects; but the risk of unintended side-effects for futuristic interventions of a moral nature is substantially greater and more worrisome, because the technology is new, it operates on complicated cognitive structures, and it specifically operates on those structures constitutive of a person’s moral personality. Since intended moral interventions do not always produce their intended effects (pro-moral effects), I’ll discuss these interventions under two guises: interventions that go as planned and induce pro-moral traits (effective cases), and interventions that go awry (ineffective cases). I’ll also focus on the most controversial case of passive intervention: involuntary intervention, without informed consent.
One of my reasons for wanting to home in on this type of case is that there is already a pretty substantial body of literature on passive non-consensual interventions, or ‘manipulation cases,’ in which a futuristic neuroscientist induces certain motives or motivational structures in a passive victim. We can tweak these examples to make the interventions unambiguously moral (the intervener is tampering with the victim’s moral personality), to derive conclusions about passive moral interventions and how they affect responsibility. My analysis isn’t going to be completely derivative of the manipulation cases, however, because theorists differ in their interpretations of these cases, and specifically on whether the post-manipulation agent is responsible for her induced traits and behaviours. I want to offer a new gloss on these cases (at least, compared to those I will consider here), and argue that the victim’s responsibility typically increases post-manipulation, as the agent gains authentic moral traits and capacities through the operation of her own motivational structures (except in the case of single-choice induction, where there are no long-term effects). I will also say some words on the responsibility status of the intervening neuroscientist.
I’m going to assess three kinds of case, each of which we can find in the literature. First, Frankfurt-type cases (in fact, Frankfurt’s original case), in which an intervener induces a single choice. Second, ‘global manipulation’ cases like Alfred Mele’s (2006), in which an intervener implants a whole new moral personality. And finally, partial manipulation cases, in which the intervener replaces part (let’s say half) of a person’s moral personality. I’m not aware of philosophical ‘partial manipulation’ cases per se, but there have been discussions of partial personality breakdowns, as in (e.g.) split-brain patients, which we can use as illustrative examples. Partial manipulation cases are like that, only instead of generic cognitive deficits that impair performance, there are moral deficits (due to internal conflict) that may undermine the agent’s ability to make and carry out moral plans.
Let’s consider each of these manipulation cases in turn.
1. Induced choice (minimal manipulation)
In the original Frankfurt-type case (1969), a futuristic neuroscientist named Black secretly implants a counterfactual device in Jones’ brain, which would compel Jones to vote for a certain political candidate if he were to choose otherwise, but the device is never activated. It’s causally inert, so it doesn’t affect Jones’ responsibility status, according to Frankfurt. (Jones is responsible because he acts on his own decisive desire). This example has to be modified to fit our purposes. First, let’s imagine that the implant is causally efficacious as opposed to merely counterfactual, as this is a more interesting kind of case. And second, let’s suppose that the intervention is designed to induce a moral effect – say, to make Jones donate to charity. Finally, let’s consider two types of case – one in which the intervention produces the intended effect, and one in which it produces a counter-moral effect.
I’ll assess these cases after discussing global and partial manipulation scenarios.
2. Global manipulation
Alfred Mele (2006) offers a case of ‘global manipulation,’ in which a person’s entire motivational system is changed. He asks us to imagine two philosophy professors, Anne and Beth, and to suppose that Anne is more dedicated to the discipline whereas Beth is more laid back. The Dean wants Beth to be more productive so he hires a futuristic neuroscientist to implant Anne’s values in Beth, making her Anne’s psychological double. Is Beth responsible for her new psychological profile and downstream behaviours? According to Mele, no, because Beth’s new values are practically unsheddable: they cannot be disavowed or attenuated under anything but extraordinary circumstances (e.g., a second neuroscientific intervention). That is, Beth lacks control over her implanted values.
This is again not obviously a moral example, so let’s imagine that Anne is a saintly person and Beth is a jerk, and the Dean doesn’t like working with jerks, so he hires a futuristic neuroscientist to implant Anne’s saintly values in Beth. The intervention works and Beth becomes Anne’s saintly doppelgänger. This is a moral intervention and it’s a causally efficacious one. In this example, Beth’s moral personality is radically transformed.
If Mele is right, Beth is not responsible for her saintly values (and downstream behaviours) because these values are practically unsheddable. There are two natural ways of explaining this judgment. (1) Beth lacks control over her new values because they are firmly inalterable (Mele’s explanation). And (2) Beth is no longer herself – she has become someone else (Haji’s preferred explanation [2010]). These interpretations exemplify the control view and the character view – competing views in the literature that I described earlier. But in this case they converge on the same conclusion – Beth is not responsible – though they provide different (albeit overlapping) grounds. On one account, Beth lacks control over her desires, and on the other, her desires are not authentic, because they did not emerge through a process of continuous reflective deliberation (diachronic control). The two criteria are related, but come apart (as Mele emphasises [2006]): we can imagine implanted desires that are susceptible to revision but not the agent’s ‘own’; and we can imagine authentic desires that are relatively fixed and impervious to control (e.g., think of Martin Luther saying, ‘Here I stand; I can do no other’). Yet on a historical picture of control, the two conditions substantively overlap: a person’s mental states are not authentic if they were not amenable to continuous reflective deliberation. At least, this is the case for passive intervention cases, which we are considering here. If we consider a third account of responsibility – Vargas’ agency cultivation model (2013) – we find still more convergence: it’s typically not agency-enhancing to hold someone responsible if she lacked diachronic control and wasn’t herself at a particular point in time. These accounts do not always converge, but in the case of passive intervention, there is substantive agreement. So, conveniently, we don’t need to arbitrate them.
Yet there is still room for debate about whether a manipulation victim is responsible on any of these pictures, since there is room for debate about how to treat this person. Vargas holds that a post-intervention agent is responsible for her behaviour, but is a new person, contra Haji and Mele.
I’ll return to this question after considering a third case: partial manipulation.
3. Partial manipulation
Imagine that instead of implanting Anne’s values in Beth, making her Anne’s double, the manipulator had only implanted half of Anne’s values, or a substantial portion, leaving part of Beth’s personality intact. This is a partial manipulation case. Now Beth is internally fragmented. She has some saintly desires, but she also has a lot of non-saintly desires. Is she responsible for only her pre-implantation desires, or also for her post-implantation desires, or none of her desires (and downstream effects)? This is a more complicated scenario (not that the other two are simple!). We can perhaps compare this situation to neurocognitive disorders that involve internal fragmentation, which have also drawn attention from responsibility theorists – examples like psychosis, alien hand syndrome, and split-brain (callosotomy) patients. Perhaps by assessing whether these individuals are responsible (to any degree), we can determine whether partial-manipulation subjects are responsible. (Split-brain surgery, which partially or wholly severs the corpus callosum connecting the two hemispheres, induces a similar split-personality effect).
Let’s look at these cases separately, beginning with global manipulation.
Analysis
1. Global
If Mele is right, then Beth is not responsible in the global manipulation case because she lacked control, and if Haji is right, she is not responsible because she lacked authenticity. These accounts have some degree of intuitive plausibility, surely. But Vargas (2013) offers a different interpretation. Vargas suggests that we should see post-global-manipulation agents as different people, but responsible people. So post-manipulation Beth is Beth2, and she is responsible for any actions that satisfy relevant conditions of responsibility for her. (Vargas’ preferred condition is a consequentialist one, but we can remain neutral). Since Beth2 can control her post-intervention desires (as much as any normal person), and they reflect her character, and it seems efficient to hold Beth2 responsible for these actions, we ought to regard her as responsible. Beth, on the other hand, is dead, and responsible for nothing. This view, it must be admitted, also has a degree of plausibility. But it seems to ignore the fact that Beth2, in very significant ways, is not like the rest of us.
I’m going to suggest a middle-of-the-road view, and it’s a view that emerges from a focus on how we become moral agents – a historical view. It is somewhat indebted to Haji’s [2010] analysis of psychopaths, who, somewhat like Beth2, have peculiar personal histories. According to Haji, psychopaths are not responsible agents because, unlike ordinary people, from childhood onward they lack the capacity to respond to reasons due to myriad cognitive deficits (which we do not need to get into here). Children are not (full) moral agents: they lack certain rational capacities, yet they have certain emotional capacities that psychopaths lack, which allow them to draw the moral/conventional distinction, at least from around five years old onward. Still, their rational deficits impair their moral agency. It is not only psychopaths who lack moral agency: Haji suggests that anyone who is barred from acquiring moral competency from childhood (due to congenital or involuntarily acquired deficits) is not a moral agent (a ‘normative agent,’ in his terms), because such people are, in effect, still children. (This includes severely neglected and abused children who develop psychopathic traits not of their own choosing). These individuals lack control over their motives, as well as anything that could be called a moral personality. If we want to say that children are not full moral agents, we must grant that these individuals, who suffer from arrested moral development, also are not full moral agents.
Now consider Beth2, who has been globally manipulated to have a different moral personality. Beth2 has zero (authentic) personal history. She is, in this respect, like Davidson’s ‘swamp-person’ (though different in other salient respects) – she emerges ex nihilo, fully formed. In lacking a personal history, Beth2 is like a newborn baby – a paradigm case of non-responsibility. Yet unlike the newborn baby, Beth2 has intact moral capacities, albeit not her own moral capacities – they were implanted. Beth2, then, is not responsible in either an authenticity sense or a diachronic control sense. Nonetheless, her extant moral capacities (though not her own) allow her to reflect on her motivational set, explore the world, interact with other people, and live a relatively normal life after the intervention. In this regard, Beth2 differs from psychopaths, who can never acquire such capacities, and are in a permanent baby-fied state, morally speaking. Moreover, as time goes by, Beth2 will become increasingly different from Anne, warranting the judgment that Beth2 is a separate person – her own person. So over time, it becomes more and more reasonable to say that Beth2 is an independent moral agent with her own moral personality. Although Beth2 cannot completely overhaul her motivational system, she can make piecemeal changes over time, and these changes are attributable to her own choices and experiences.
With this in mind, I submit that we treat Beth2 as not responsible immediately post-intervention (since she lacks any authentic motives at that time), but increasingly responsible thereafter (since she has the capacity to acquire authentic moral motives and capacities over time, unlike psychopaths). This doesn’t mean that Beth2 will ever be as responsible as an ordinary (non-manipulated) person, but she is certainly more responsible than newborn babies and psychopaths, and increasingly responsible over time.
Another problem with saying that Beth2 is fully responsible for her post-manipulation behaviour is that this leaves no room for saying that the clandestine manipulator – the Dean or the futuristic neuroscientist or whoever – is responsible for Beth2’s behaviour, especially in the case that the moral intervention goes wrong and induces counter-moral effects.
Suppose that Beth2 goes on a killing spree immediately after being manipulated: is she responsible for this effect? Surely the intervener is to blame. One could say, like Frankfurt (1979), that two people can be fully responsible for a certain action, but this seems like a problematic case of overdetermination. Surely blame must be distributed, or displaced onto the one who preempted the effects. Indeed, Frankfurt’s proposal doesn’t fit with how we ordinarily treat coercion. Consider entrapment: if a law enforcement agent induces someone to commit a crime, the officer is responsible, not the victim. This might be because the victim lacked adequate control (due to coercion), acted against character, or because it’s not useful to blame victims – entrapment fits with all of these explanations. Shoemaker seems to appeal to the last consideration when he says of entrapment, we blame the public official and not the victim because “we think the government shouldn’t be in the business of encouraging crime” (1987: 311) – that is, we don’t want to encourage government corruption. But Shoemaker also appeals to fairness, which is tied to control and character: it’s not fair to blame victims who lacked sufficient control or weren’t themselves when they acted (which is also why it’s not useful to blame them). So on any standard criterion of responsibility, it’s not clear why we would blame a manipulation victim.
Now, suppose that the intervention worked and the Dean makes Beth2 a moral saint. If I am right, Beth2 isn’t praiseworthy for the immediate effects of the intervention because they’re not her own (on various construals). The intervener’s moral status is more complicated. While he might seem prima facie praiseworthy for Beth’s pro-moral traits, we also have to consider the fact that he’s patently blameworthy for intervening without informed consent or any semblance of due process (viz. decisional incapacity protocols), and this might vitiate or cancel out his prima facie praiseworthiness. If we consider praise and blame as competing attitudes, it makes sense to see the Dean as blameworthy on balance.
2. Partial
Next, let’s consider a partial manipulation case. Let’s imagine that the Dean replaces half of Beth’s personality with Anne’s, creating Beth3. This is trickier than the global manipulation case inasmuch as Beth3 has some authentic mental states, but they are in competition with implanted states that are not her own. So we can’t say that Beth3 lacks authenticity or diachronic control entirely, yet she is deficient in both respects compared to an ordinary person. We might compare Beth3 to examples of neurocognitive disorders that cause internal fragmentation, such as alien hand syndrome and split-brain effects, which have attracted a lot of philosophical interest. These subjects have ‘alien’ motives like Beth3, but unlike split-brain patients, Beth3 can presumably enhance her control and motivational congruence over time – assuming that none of her implanted states are pathological. So Beth3 is somewhere between split-brain patients and non-pathological individuals.
There are three ways that we can go in assessing Beth3’s responsibility. (1) We can hold that Beth3 is not responsible (period) because she has insufficient control and insufficient depth of character to ground responsibility. (2) We can say that Beth3 is actually two agents, not one, and each agent is responsible for its own motives and downstream behaviours. King and Carruthers (2012) seem to suggest that, if responsibility exists (about which they seem unsure), commissurotomy patients and alien hand patients must be responsible for their aberrant behaviours, since the former have two “unified and integrated” personalities (205), and alien-hand gestures reflect a person’s attitudes, albeit unconscious ones. Furthermore, they think that consciousness cannot be a requirement of responsibility since most motives aren’t conscious anyway. If this is right, then perhaps Beth3 is two people, each responsible for ‘her’ own motives. This, to me, seems hopelessly impracticable, because we can only address responsibility attributions to one self-contained agent. We cannot try to reason with one side of Beth3’s personality at a time, because Beth3 doesn’t experience herself as two people. She won’t respond to targeted praise and blame aimed at adjacent but interconnected motivational hierarchies. And she might even find this kind of moral address baffling and/or offensive. So this won’t do, practically speaking. But there’s also a good case to be made that commissurotomy patients and the like lack control and character in the normal sense, given that they have significant cognitive deficits. So it’s reasonable to see them as impaired in their responsibility compared to cognitively normal people. And likewise for Beth3.
This leads to the third possibility, which I favour: the middle-of-the-road proposal. Beth3 is somewhat responsible immediately after implantation, and increasingly responsible thereafter. This is because Beth3 is subject to control-undermining motivational conflict and disorientation following the clandestine intervention, but she nonetheless has some intact moral capacities – the relevant rational, emotional, and motivational capacities – which differentiate her from the psychopath, and which should allow her to regain psychological congruence over time, enhancing her control and authenticity. So Beth3 should be seen as increasingly responsible over time (under ordinary circumstances). That said, she will likely never be as responsible as someone who had an ordinary learning history, since most people never suffer from this degree of fragmentation. So she may have relatively diminished responsibility for a long time, even if she becomes more apt for praise and blame.
Once again, this analysis can be applied to effective interventions and ineffective ones. If Beth3 acquires pro-moral traits (as per the Dean’s intention), she is not immediately responsible for them, but she gains responsibility for any induced traits that persist and amplify over time – indeed, for all of her post-intervention traits. Regarding the intervener, he is not necessarily praiseworthy for the pro-moral effects of the intervention, inasmuch as he might be blameworthy for intervening without consent or due process, and this might outweigh any praise that he might otherwise warrant.
Worse still, if the Dean inadvertently implants counter-moral traits in Beth3, he is blameworthy for intervening without consent as well as the effects of the botched intervention.
3. Minimal
Finally, let’s consider the induced-choice subject. Call her Beth4. Suppose that Beth4 has been inculcated with a desire to give to charity, and she acts on this desire. (Note: presumably, to be causally efficacious, a desire must be combined with a supporting motivational structure, but for simplicity I’ll refer to this bundle, following Frankfurt’s example, simply as a desire. Assume that an induced desire comes with supporting motivational states). Is Beth4 praiseworthy? On my view, no, because her desire is not her own. But the intervener also is not praiseworthy, insofar as he is blameworthy for intervening without consent or due process, which (I think) cancels out the import of any good intentions he may have had. (The road to hell is paved with good intentions, as they say). (Note that I am construing praise and blame as competing reactive attitudes; other people hold alternative conceptions of ‘responsibility,’ which I will conveniently ignore).
Next, suppose that the Dean intends to induce a pro-moral choice in Beth4, but inadvertently induces her to commit a murder. The Dean, I think, is blameworthy for the murder, because he is responsible for implanting the relevant desire in Beth4 without her consent. This can be construed as an omission case, in which the Dean was ignorant of the likely consequences of his choice, but is responsible because he failed to exercise proper discretion. (Compare this to Sher’s [2010] example of a soldier who falls asleep on duty; the Dean failed in his duties as a person, a citizen, a Dean… he failed on many levels. He is, as it were, a spectacular failure as a moral agent). The Dean acted in character, and on a suitably reasons-responsive mechanism, when he chose to surgically intervene on Beth4 without her consent, and he bears responsibility for this infraction and the fallout of his choice.
*****