Moral enhancements & moral responsibility

[Image: Alex from ‘A Clockwork Orange’]

I’m going to write a somewhat lengthy but off-the-cuff entry on moral enhancements, because I have to present something on them soon, in Portugal.

I’m going to write about (1) moral enhancement and moral responsibility, and how these things intersect (we might be responsible for failing to use moral interventions); (2) the possibility of duties to enhance oneself and others; and (3) passive moral interventions – the most controversial type – and whether we have a duty to use, submit to, or administer them.

A qualification: I don’t know very much about moral enhancement, so this is going to be a bit sketchy.

Moral enhancements and moral responsibility

Here’s what I know. A ‘moral enhancement’ is a “deliberate intervention [that] aims to improve an existing capacity that almost all human beings typically have, or to create a new capacity” (Buchanan 2011: 23). These interventions can be traditional (e.g., education and therapy) or neuroscientific (e.g., biomedical and biotechnological interventions). The latter are more controversial for various reasons. Examples include transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), deep brain stimulation (DBS), and psychoactive drugs, along with futuristic ‘Clockwork Orange’-type methods that we can only imagine. (Typical science fiction examples are dystopian, but we shouldn’t let this prejudice the discussion). These enhancements are moral insofar as they aim to enhance the subject’s moral-cognitive capacities, howsoever construed. (I’ll leave this vague).

Some philosophers hold that the distinction between traditional and modern interventions is that one is ‘direct’ while the other is ‘indirect’: TMS, for instance, directly stimulates the brain with magnetic pulses, whereas education ‘indirectly’ influences cognitive processing. Focquaert and Schermer argue that the distinction is better characterised as one between ‘active’ and ‘passive’ interventions: traditional interventions typically require ‘active participation’ on the part of the subject, whereas neuroscientific interventions work ‘by themselves’ (2015: 140-141). More precisely, passive interventions work quickly and relatively unconsciously, bypassing reflective deliberation. The authors acknowledge that this distinction is rough and imperfect, and that interventions lie on a continuum; but they think it nicely captures a normative difference between the two methods. Passive interventions, they say, pose a disproportionate threat to autonomy and identity because (1) they limit continuous rational reflection and autonomous choice, (2) they may cause abrupt narrative identity changes, and (3) they may cause concealed narrative identity changes. These effects undermine the agent’s sense of self, causing self-alienation and subjective distress (2015: 145-147).

For this reason, the authors recommend that we introduce safeguards to minimise the negative effects of passive interventions – safeguards like informed consent and pre- and post-intervention counselling, which help subjects integrate their newfound moral capacities into their moral personality, or come to terms with their new moral personality in cases of radical transformation. So although passive interventions face a higher threshold of justification, they can be justified if their adverse effects are managed and minimised.

(Some objections to this view, with responses, can be found here).

Now for a few words on moral responsibility. In earlier posts, I’ve discussed three models of responsibility, two backward-looking (character theory and control theory) and one forward-looking (the agency-cultivation model). These theories have something in common: they hold agents responsible (blameworthy) for omissions. On character theory, we are responsible for failing to act virtuously if a reasonable person would have done better in our circumstances (see Sher 2010). So someone can be blameworthy for forgetting her friend’s birthday. On control theory, we are ‘indirectly’ responsible for failures to exercise reasons-responsiveness responsibly, in accordance with people’s ‘reasonable’ expectations. As Levy remarks when discussing ‘indirect’ responsibility for implicit bias, a person can “be fully morally responsible for [a] behaviour [resulting from implicit bias], because it was reasonable to expect her to try to change her implicit attitudes prior to t” (2014; see also Holroyd 2012). Since we can be responsible for omissions on both models, it stands to reason that we can be responsible for failures to enhance our moral agency, if this is what a reasonable person would have done or expected of us in the circumstances. In fact, remedial measures for implicit bias can be seen as ‘moral enhancements,’ though they are closer to education than to biomedical interventions. The point is that there are grounds for blaming someone for eschewing moral enhancements of either type – active or passive – depending on the particularities of the case.

On the agency-cultivation model (ACM), it’s even more obvious that a person might be blameworthy for failing to use moral interventions, since blame functions to enhance moral agency. If blaming someone for not using a moral intervention would entice the person to use it, thus enhancing the person’s agency, blame is a fitting response. And if forcibly administering the intervention would enhance the person’s agency, this might be fitting, too.

All of these theories, as I have presented them, invite questions about when blame for failing to use moral enhancements, or for using them inappropriately, is apt. But this leads somewhat away from responsibility (as an agential capacity or cluster of traits that warrants blame and praise) to the domain of duties: from the question of what we are responsible for, to the question of what we are responsible to do. These two things are related insofar as, on all three models, we can be responsible (blameworthy) for omitting to fulfil our responsibilities (duties). But they are distinct enough to warrant separate treatment. First we figure out what our duty is, and then we determine whether we are blameworthy for falling short of it.

Moral duties and moral perfection

What are our duties with respect to moral enhancements? We can approach this question from two directions: our individual duty to use or submit to moral interventions, and our duty to provide or administer them to people with moral deficits. This might seem to suggest a distinction between self-regarding duties and other-regarding duties, but that is a false dichotomy, because the duty to enhance oneself is partly a duty to others – a duty to equip oneself to respect other people’s rights and interests. So both duties have an other-regarding dimension. The distinction I have in mind is between duties to enhance oneself and duties to enhance other people: self-directed duties and other-directed duties.

These two duties also cannot be neatly demarcated, because we might need to weigh self-directed duties against other-directed duties to achieve a proper balance. That is, given finite time and resources, my duty to enhance myself in some way might be outweighed by my duty to foster the capabilities of another person. So we need to work out a proper balance, and different normative frameworks will provide different answers. All frameworks, however, seem to support these two kinds of duties, though they balance them differently. For Kant, we have an absolute (perfect) duty to abstain from using other people as mere means, so we have a stringent duty to mitigate deficits in our own moral psychology that cause this kind of treatment; and we also have a weaker (imperfect) duty to foster other people’s capabilities. On a consequentialist picture, we have to enhance the capabilities of all in such a way as to maximise some good: our duty to enhance ourselves is no greater or lesser than our duty to foster others’ capabilities. Aristotle, too, believed that we have a duty to enhance ourselves and foster others’ capabilities by making good choices and forming virtuous relationships. Rather than arbitrate amongst these frameworks, we can just note that they all support the idea that we have duties to enhance ourselves and others. The question is how stringent these duties are, not whether they exist. Of course, in a modern context these questions take on new significance, because the duty to enhance oneself and others no longer just means reflecting well and providing social assistance; it could potentially mean taking and distributing psychoactive drugs, undergoing TMS, and so on.

One pertinent question here is how much enhancement any individual morally ought to undergo. How much enhancement is enough? We can identify at least four accounts of the threshold of functioning we ought to strive for: (1) the Übermensch view: we must enhance our own and others’ capabilities to the greatest extent possible; (2) prioritarianism: we should prioritise the capabilities of the least well-off or the most morally impaired; (3) sufficientarianism: we should enhance people’s capabilities until they reach a sufficient minimal threshold of functioning, after which further enhancement is a matter of personal discretion; and (4) pure egalitarianism: we should foster everyone’s capabilities equally.
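To fix ideas, here is a rough formal gloss on these four views – my own schematic, not something drawn from the enhancement literature. Let $m_i$ stand for person $i$’s level of moral functioning and $T$ for a sufficiency threshold:

(1) Übermensch view: maximise $\sum_i m_i$, with no upper bound on any individual’s $m_i$;
(2) prioritarianism: maximise $\sum_i g(m_i)$, where $g$ is strictly concave, so that gains to the worst-off count for more;
(3) sufficientarianism: bring it about that $m_i \geq T$ for all $i$; above $T$, enhancement is discretionary;
(4) pure egalitarianism: minimise dispersion, e.g. minimise $\max_i m_i - \min_j m_j$.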

No matter which of these accounts we favour, they all face potential objections, and the more stringent forms (the Übermensch view) are more susceptible to them than the less stringent forms (sufficientarianism). If we are considering enhancing our own capabilities, the scope for criticism is minimised, but we might still worry that our attempts to enhance ourselves morally might go awry: they might interfere with our ability to foster others’ capabilities, impel us to neglect valuable non-moral traits, or (relatedly) prevent us from becoming well-rounded people, capable of living flourishing lives (see Susan Wolf’s ‘Moral Saints’, 1982). When considering whether we should foster others’ capabilities, there are still more worries. In seeking to enhance other people – particularly if we do this through institutional mechanisms – we risk insulting and demeaning those deemed ‘morally defective,’ stigmatising people, jeopardising pluralism (which may be inherently valuable), undermining people’s autonomy, or fostering the wrong kinds of traits through human error. That is, by intervening to enhance moral agency, we might inadvertently harm people.

The most controversial type of moral intervention – passive – is the most vulnerable to these objections, and others that I haven’t mentioned. From now on, I’ll just talk about this type. I’ll consider Focquaert and Schermer’s cautionary advice, and then offer some suggestions.

Passive intervention, autonomy, and identity

The authors argue that passive moral interventions are particularly morally problematic because they disproportionately jeopardise autonomy and identity, insofar as they limit continuous rational reflection and precipitate abrupt and concealed narrative identity changes. To minimise these adverse effects, interveners should provide informed consent procedures and pre- and post-intervention counselling. Let’s suppose that this is right, and also that we have a duty to enhance ourselves and others, as per the above discussion. This seems to have the following implications. First, a duty to submit to a passive moral intervention obtains only on condition that informed consent and pre- and post-intervention counselling are available and reasonably effective. So a person can be blameworthy (if at all) for failing to undergo a passive moral intervention only if the intervention came with those provisions. Second, there is (at minimum) a duty to provide these safeguards when administering a passive intervention; so an intervener who fails to do so is thereby blameworthy. This is the upshot of holding that there is a duty to enhance oneself and others, and that this duty includes providing harm-minimising safeguards.

That said, there are other considerations that might vitiate the putative duty to enhance one’s own and others’ moral capabilities. I think the most forceful objection, which may underlie other objections to these interventions, is that we can’t be sure of their effects. Call this the knowledge constraint. If we can’t know, in medium-grained terms, the probable effects of a passive intervention, we can’t obtain informed consent in a strong sense, since we can’t adequately assess whether the effects will be good or bad. Let’s reconsider Focquaert and Schermer’s worries and tie them to this concern. They say that (1) passive interventions threaten continuous rational reflection and autonomy. When someone undergoes TMS, for instance, the person doesn’t have the capacity to endorse or reject the effects of the magnetic stimulus as they occur. But why is this a problem? I don’t think it suffices to say that it undermines autonomy, because many autonomy-enhancing procedures undermine the capacity for continuous reflection: I forfeit this capacity when I undergo a root canal under general anaesthesia, but I can still give prior informed consent unproblematically. This isn’t just because a root canal is localised to the mouth (although this is a related issue). It’s mainly because I know precisely what the effects of the root canal will be, so I can consent to them. If dental procedures induced unforeseeable side-effects, informed consent for them would be much more problematic. This suggests that it is the unforeseeability of a procedure’s side-effects, rather than the reflection-undermining nature of the procedure per se, that undermines autonomy and has moral import.

With neuroscientific interventions, we can’t very accurately predict the immediate or long-term effects, because these interventions function coarsely, across a broad range of cognitive systems and processes, and their mechanisms are not fully understood. The brain itself is very complex and not well understood, so the interactions between an intervention and the brain’s mechanisms are murky. For these reasons, any targeted (intended) effect of a neurointervention comes with a range of possible side-effects. If we just focus on the immediate side-effects of TMS, they include fainting, seizures, and pain, as well as transient hypomania, hearing loss, impairment of working memory, and (unspecified) cognitive changes. As for long-term effects, NICE found that there is insufficient evidence to evaluate the safety of long-term and frequent use. That is, not enough is known to predict the long-range effects in a scientifically responsible way. Speaking more generally, TMS is used for multiple conditions; its effects are relatively non-specific, reaching brain regions that serve various functions. It can’t be used to, say, ‘cure schizophrenia’; it acts on multiple brain regions, bringing about a cluster of effects, some intended and others unintended. All neurointerventions work this way: they induce a cluster of effects. Subjects can’t select just one desired effect: undesired side-effects are very likely, if not inevitable. Given these considerations, it’s far from clear that anyone has an obligation to submit to passive interventions that act directly on the brain, and even less clear that anyone has an obligation to impose them on other people.

So, it seems that the threat to continuous rational reflection is a problem because we don’t know, in medium-grained terms, what the effects of the procedure will be in advance, which threatens prior informed consent. If we could predict the effects relatively accurately, the loss of continuous rational reflection would not be a problem.

(2) Passive interventions cause abrupt narrative identity changes. Again, I think this is a problem only because the end result of the passive intervention is impossible to predict in medium-grained terms. If we could provide someone with an accurate specification of the effects of a proposed intervention and gain the person’s consent, the ‘abruptness’ of the narrative identity change wouldn’t be an obstacle; it would be an advantage. If I could get a root canal ‘abruptly’ rather than slowly, I would choose the abrupt method – but only if I understood the consequences. If I could quit smoking ‘abruptly’ as a result of voluntary hypnosis, I would choose that option. Abrupt changes are problematic only if we can’t predict the outcomes of those changes.

(3) Passive interventions cause concealed narrative identity changes. This, too, is a problem only because we can’t predict the effects of the intervention. If we could, we might even prefer that the procedure be concealed. I might want to have a ‘concealed’ root canal – a root canal that I completely forget about after the fact, but with my prior consent – but not if the procedure induces unwanted side-effects that I can’t detect until after the fact. If side-effects are an issue, I want to be conscious so that I can do what I can to mitigate them.

The fact that passive moral interventions impinge on the brain rather than the body (not that this is a hard and fast distinction) is also problematic, because the brain is the seat of our personal identity, and changes to our personal identity are especially jarring. Still, the analogy between cognitive interventions and bodily interventions is informative. If I sign up for an appendectomy and wake up with a third arm, this is a violation of my autonomy. Waking up a different, or radically altered, person is all the more autonomy-undermining. In both cases, the lack of prior knowledge about probable outcomes is what violates the person’s autonomy: an unforeseen transformation is an affront to the self. (This is true even if the transformation is an accident: if a piano player’s hand is crushed by a boulder, this is autonomy-undermining even if no one is to blame [see Oshana 2008].)

Post-intervention counselling might mitigate some of these concerns, because it allows the agent to ‘consent’ retroactively. But this is problematic for the following reasons.

(1) For no other procedure is retroactive consent permissible. If my teeth are rotting and someone kidnaps me and performs involuntary dental work while I’m unconscious, this is a moral infraction. (Note that when I was discussing concealed changes above, I meant that they were permissible only if there was prior informed consent.) Even if I’m pleased with the dental work, the intervener could have performed the procedure less invasively – with my consent – and to this extent he’s blameworthy: he chose the most invasive intervention instead of the least invasive one. Now, when moral enhancements are at issue, the intervention may be prima facie justified by the potential positive consequences, but we have to consider (a) the strength of the benefits, (b) the probability of adverse side-effects, and (c) whether a less invasive intervention was available. If passive interventions are used as a first response, this is clearly impermissible. If they are used as a ‘last resort,’ when no alternative is available or all alternatives have been tried, we have to consider the moral import of possible side-effects. Consider Alex (shown above) from ‘A Clockwork Orange’: the movie is a horror story because the Ludovico technique unintentionally made him miserable and dysfunctional, and he was not informed beforehand of the risks. Interveners are, I believe, blameworthy for using interventions that may cause severe adverse side-effects, because there is a duty to protect people from these kinds of risks. This is not to say that passive interventions are never permissible, but they are not permissible if they carry severe or unknown risks to wellbeing.
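To make this weighing explicit, here is one way the condition might be sketched – my own schematic, not Focquaert and Schermer’s. Let $B(I)$ be the moral benefit of intervention $I$, $H(I)$ the harm from its side-effects, and $\lambda > 1$ a factor reflecting the extra weight we give to risks imposed on a non-consenting subject. Then, roughly: $I$ is permissible only if $\mathbb{E}[B(I)] > \lambda \, \mathbb{E}[H(I)]$ and no less invasive alternative to $I$ is available. On the knowledge constraint, the trouble with passive neurointerventions is that we currently cannot estimate $\mathbb{E}[H(I)]$ in medium-grained terms at all, so this condition cannot even be evaluated, let alone shown to be satisfied.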

(2) Because moral interventions target a person’s moral identity, there is an argument to be made that post-intervention ‘consent’ is not consent at all: the agent on which the intervention acted – the pre-intervention agent – no longer exists, or is so drastically altered that consent is no longer possible for that agent. Moral interventions have the ability to ‘kill off’ the pre-intervention agent. This places a massive burden of justification on those who would intervene. We need to consider whether it is permissible to ‘kill off’ unwanted moral personalities, or radically alter them in ways that preclude obtaining consent. When a person is radically altered by an intervention without consent, the person cannot consent after the treatment, so any post-intervention consent given is a different person’s. This is a radical departure from the standard model of informed consent, and it is something we need to consider. We also need to weigh whether society should be choosing which moral personalities have value, as this might threaten pluralism and self-determination.

I should make a qualifying statement here. I don’t want to stigmatize the use of TMS and other existing passive interventions. Some people voluntarily use TMS for personal (prudential) reasons, which I would deem morally unproblematic. I’m specifically concerned with whether people have a duty to use passive interventions – not just existing ones, but also interventions that may arise in the future – to enhance their moral capabilities; and whether society has a duty to impose these interventions on people without their consent. I’m saying that it’s much harder to justify a categorical duty (as opposed to a subjective desire) to undergo passive interventions, and still harder to justify a duty to impose (as opposed to merely offer) interventions on people deemed morally deficient.

Back to responsibility

Here’s how this bears on responsibility. On a simplistic reading, it might seem that we have a duty to use moral interventions to enhance ourselves and others to a substantial (sufficientarian, if not Übermenschean) degree. And maybe this is true for traditional moral enhancements, which we can reflectively choose and consent to with full information. But it’s not clear that this is the case for passive moral interventions, precisely because we can’t tell whether those interventions will have overall positive consequences. And this is because neurointerventions target clusters of traits, some of which may be adaptive while others may be maladaptive. So, in trying to morally enhance someone, we might inadvertently harm the person, and even introduce new counter-moral traits. If so, we might be blameworthy for the consequences of those interventions. And if there is no clear duty to submit to passive moral interventions that might produce harmful and even counter-moral side-effects, then a person might not be blameworthy for refusing them.

Consider stories like ‘A Clockwork Orange’ and ‘Brave New World.’ They’re about attempts at moral enhancement and social engineering gone horribly awry, and this is a legitimate worry. I’ve suggested that when this technology is morally problematic, it’s not because it threatens rational reflection and narrative identity per se; rather, it’s because it has unpredictable side-effects, and these side-effects might be morally problematic. This isn’t meant to imply that we have no duty to enhance ourselves and others, but rather that when trying to enhance our capabilities with modern methods, we should proceed with caution.

*****


2 thoughts on “Moral enhancements & moral responsibility”

  1. Thanks for this great post. I’m inclined to say that our limited knowledge of the effects of passive interventions towards moral enhancement could be handled in the way we handle knowledge of the effects of medical procedures: we do clinical trials to find out (up to a point) and ensure safety, then we disclose what we know to patients. But if this analogy holds, then we have a duty to undergo passive interventions towards moral enhancement to the same degree that we have a duty to undergo medical procedures. That seems to be a very limited duty at best, doesn’t it?


  2. No, I don’t think we necessarily have a moral duty to undergo ordinary medical procedures, like a blood transfusion, unless, e.g., we have dependents and our death would harm them. Getting a blood transfusion is a personal choice, barring considerations about dependents. Getting morally enhanced is an other-regarding choice – it makes us better able to respect other people’s rights and interests. So we have a prima facie duty to undergo moral enhancements; but we have to weigh that duty against possible side-effects, including possible counter-moral ones. And when deciding whether society ought to impose moral enhancements on people, there is again a prima facie obligation, but it has to be weighed against side-effects, as well as more general considerations about whether society ought to be in the business of enhancing people. I didn’t talk much about the latter point, but I think there’s good reason to worry about a loss of personal autonomy, self-direction, pluralism, and other social goods.

