Implicit bias: The limits of control/character



I’m going to write an entry following up on my previous post on responsibility and implicit bias. (Find it here.) There, I noted that there is no agreed-upon definition of ‘control’ (see Fischer 2012, Levy 2014), and there’s also no agreed-upon definition of character (see, e.g., Angela Smith 2005, Holly Smith 2014, David Shoemaker 2014, and George Sher 2010 for different definitions of character). This is likely why there are debates about the scope of moral responsibility within each camp, as well as between camps. Some control theorists think that we can be responsible for the manifestation of implicit bias and others think that we can’t. And ditto for character theorists. It remains debatable whether implicit biases, which are neither explicitly valued by the agent nor under the agent’s direct control, are blameworthy.

Let’s start with control, which is, I think, the easier metric. How ‘narrow’ or ‘internal’ is control? That is, is control over motive M merely a matter of whether we have conscious access to M? Certainly not, since we lack access to a great many mental states that are patently blameworthy. We take it that Smith can be responsible for killing a pedestrian while driving under the influence, because Smith was in a sound state of mind when he started drinking. This is a ‘benighted omission,’ in which Smith lacks control at t only because he impaired his reasons-responsive capacity at t-1. In this case, Smith bears ‘indirect responsibility’ for hitting the pedestrian because he had ‘indirect’ (prior-time) control. The only thing that would excuse Smith would be if he were drunk only because he had been subjected to clandestine brainwashing by an evil neuroscientist who induced him to drink, or something of that nature – or so people tend to assume. (There are innumerable examples of ‘Frankfurt-type’ brainwashing scenarios, including in Fischer 2006, 2012.)

The point is that direct conscious control at the time of action is not needed. Virtually every theorist agrees on this point. Only some kind of ‘indirect control’ is necessary. But it is still an open question how broad the scope of indirect control should be. According to Levy (2014), we aren’t directly responsible for the effects of implicit bias because we have at most ‘patchy’ control over these states, but we might be indirectly responsible for their effects. He doesn’t say much about indirect responsibility, however, which is a shame because this is arguably the more interesting question. If Smith selects a White male job candidate due to implicit bias, did he have indirect control over his choice? As Holroyd points out, we have indirect control over implicit biases through the use of strategies such as implementation intentions, counterstereotypical exposure, explicit belief formation, and so on. So in principle, Smith could be held responsible on this basis. But much of this debate hangs on how reasonable it is to expect someone to preemptively use these strategies. As Levy states,

“an agent may lack direct responsibility for an action caused by their implicit attitudes, because given what her implicit attitudes were at t, it would not be reasonable to expect her to control her behavior, or to recognize its moral significance, or what have you. But the agent might nevertheless be fully morally responsible for the behavior, because it was reasonable to expect her to try to change her implicit attitudes prior to t” (2014).

So indirect responsibility ends up being tied to reasonable expectations. Is it reasonable to expect someone to use implementation intentions etc., prior to taking a certain action? It’s natural in our current climate to expect employers to take steps to remediate implicit bias, but is it reasonable to expect this of ordinary people? Suppose that Jones, an ordinary person with no special responsibilities, makes a sexist joke, thinking that he’s being witty. He has no idea that he’s being sexist and is explicitly committed to egalitarianism. Is he responsible because he could have used remediating measures against implicit bias in the past?

If so, the scope of direct responsibility turns out to be very slight, while the scope of indirect responsibility turns out to be enormous. In fact, the control theory veritably collapses into character theory! Character theorists (for the most part) hold that we can be responsible for unconscious omissions such as implicit biases, provided that those mental states are caused by certain parts of our motivational set (as opposed to extraneous factors). So a character theorist might hold that someone is not responsible for acting under hypnosis (because this is an ‘alien’ motivation, external to the agent), but responsible for deeply-entrenched implicit biases that reliably cause patterns of behaviour. Well, that’s what control theorists would hold someone indirectly responsible for, too.

Consider for a moment Sher’s version of character theory (2010). He says that we’re responsible for an omission if it’s one that a reasonable person in our position would not have committed, provided that the omission stems from our own motivational psychology. So, we’re responsible for omissions for which a reasonable person could complain against us. That sounds a lot like the control view as I have just described it. On the latter, we’re responsible for omissions if we could have acquired the capacity to avoid them by taking reasonable measures – by doing what a reasonable person would do in the circumstances. Construed this way, the control view and the character view turn out to be very similar.

Moving forward, let’s assume that a person can be responsible for A just in case it would have been reasonable to expect the person to acquire the capacity to control A at some point in the past. This somewhat circumscribes the scope of control, forbidding acquisition measures like killing someone, or severely injuring oneself, to enhance control. What’s ‘reasonable’ is still very debatable, but I won’t settle it here – let’s just grant that it rules out killing, death, and severe injury. Still, this doesn’t circumscribe control very much, because we still need to determine whether someone could, within reason, have acquired a better capacity for control, and doing this still requires careful analysis.

These remarks are somewhat speculative, because control theorists haven’t said much about indirect control. Fischer and Levy write a lot about direct responsibility, but not much about indirect responsibility, except to note its existence. Yet direct responsibility, I think, turns out to be pretty irrelevant. Does it matter if we’re directly responsible for something, versus being indirectly responsible? If someone acts on implicit bias only because she failed to critically reflect on this mental state, is she more blameworthy than someone who fails to use remediating measures? Not obviously. Levy says that ‘indirect responsibility can underwrite a great deal of praise and blame; it is not necessarily a lesser kind of responsibility.’ We don’t have an argument to the effect that indirect responsibility is a special, lesser kind of responsibility just because it is a more remote kind. Maybe direct responsibility matters because it tells us something about a person’s capacities: someone who is unaware of her implicit biases might be less capable of suppressing those biases. But now this seems irrelevant, since the critical question is not, ‘is S reasons-responsive now,’ but, ‘was S capable of becoming reasons-responsive at some ‘suitable prior time’ (to quote Vargas 2005)?’ We must trace reasons-responsiveness back – maybe way back – to prior times when S had the opportunity to improve herself. If indirect responsibility is a thing, the buck no longer stops at the agent’s direct capacity for control; we need to see if she was ‘indirectly,’ at some prior time, capable of acquiring direct control.

So for this reason, I don’t really understand why we need to be concerned with consciousness or direct responsibility at all. Correct me if I’m wrong here.

Indirect responsibility is slippery. It’s epistemically slippery. Here’s why: it requires that we ‘trace responsibility’ back to a ‘suitable prior time’ at which the agent could have foreseen the long-range consequences of her choices. This is what Vargas calls ‘the trouble with tracing’ (2005). Fischer subsequently responds, basically, that tracing isn’t really a problem, because we can generally foresee in ‘coarse-grained’ terms what the consequences of our choices will be (2012). But I doubt this. I think we’re pretty terrible at predicting the future, and we’re terrible because we’re susceptible to all kinds of cognitive biases, including but not limited to implicit bias. (See also: self-serving bias, confirmation bias, confabulation, rationalisation, and so on.) So, we’re generally bad at forecasting outcomes.

This is one dimension of the epistemological problem, but I’ve just adverted to another one: we not only need to determine whether someone satisfies the forecasting condition directly, but we also need to determine whether the person could have acquired a better forecasting capacity at some ‘suitable prior time.’ So we need to trace back beyond the person’s immediate capacity for forecasting (and beyond the person’s immediate capacity for control more generally), to see if the person had a prior opportunity to refine that capacity (or those capacities), in light of her circumstances. (Our circumstances are the resources that we use to acquire and hone our capacities, so they’re relevant here.) Suppose that Jeff, the middle-aged middle manager, is a jerk, and that Vargas is right that Jeff could not foresee in his halcyon youth that he would one day become a jerk. This doesn’t settle the question of whether Jeff is all-things-considered responsible, because we need to trace back further still to see if Jeff could have acquired the capacity to foresee that he would become a jerk. Because if he could have, then he may be indirectly responsible for failing to acquire or hone that capacity, and thus indirectly-indirectly responsible for becoming a jerk. He’s responsible at n degrees of remoteness for his jerkiness.

If this is right, then control entails even more tracing than Vargas’ 2005 paper suggests. What I mean is, we can’t just home in on the agent’s brain at time t-n (some time in the agent’s personal history), and assess whether the agent had a particular capacity at that time. We need to abstract away from the agent’s capacity, to see if the agent could have enhanced, modified, or refined that capacity, using the resources of his environment. The capacity for control is not just a matter of an agent’s internal (physical, psychological, cognitive) capacities, but the relationship between those capacities and the world. So we can’t just trace back and look at the agent’s endogenous capacities at t-n. We have to look at those capacities, and the local environment, and the relationship between those two things.

Maybe Vargas intends for tracing to mean exactly this. In his later book (2013), he places a lot of emphasis on ‘the moral ecology,’ and the moral ecology consists of the conditions that support moral agency. This view is a corollary of his agency-cultivation model. But I think that it’s informative to frame this perspective as a response to control theory, since it implies that, while control is relevant to responsibility, control is not an endogenous capacity in the agent’s mind-brain, i.e., something that we can evaluate, in principle, with a brain scanner. Control is a relational property, between an agent and the environment.

Similarly with character: if character is a reliably-manifested property of an agent, then it’s a feature of the interaction between the agent as an individual and the environment. Sher’s view captures this insight: he sees character as a set of physical and psychological states that interact to produce an agent’s “characteristic patterns of thought and action” (2010: 124), and he describes this account as ‘suitably interpersonal’ (2010: 125). But insofar as a person is responsible for only ‘characteristic patterns,’ we need to identify the cause of a person’s action, to see if it is part of his character or not. If Jeff is curt with an employee, is it because he’s a jerk or just because he’s sick? If the latter, this isn’t part of his character. So character theory also requires tracing, though to a much more limited degree.

I’m going to continue this train of thought in the next post.


3 thoughts on “Implicit bias: The limits of control/character”

  1. Dropping by mainly to say how much I’ve been enjoying some of your posts. (It would be great to get some of your posts over at the Flickers of Freedom blog—I hope that gets worked out soon.)

    Your great run of posts deserves more thoughtful responses than I can provide at the moment, but here are a couple of quick reactions (basically, agreements with you). I’m inclined to think that you are right that the tracing story is going to have to be more complicated than I recognized in my 2005 piece. There are some lurking issues here, I think, having to do with how much upstream epistemic failures pollute downstream decision-making. I’ve been meaning to figure out whether this is at all related to an issue that crops up in the “bootstrapping” problem in the philosophy of action, in connection with some puzzles that Bratman and others have worked on about the rationality of intention stability given prior irrational intentions. Anyway, you are definitely right that the view in BBB is a step or two away from the simpler story in my 2005 piece—although, honestly, I suspect I got there not so much by worrying about tracing but by thinking about social science-y approaches to agency.

    Anyway, terrific job bringing together so many different interesting threads in the literature and it has been a lot of fun to see how you are thinking about these things. Looking forward to your next post!


    1. Thanks, Manuel. I sincerely appreciate your comments because they enhance my capacity to develop my view!
      I’m obviously very influenced by your work, as well as Michael McKenna’s. I actually think that tracing is indispensable. I mention this in my subsequent post (just posted): even on a consequentialist/teleological model, we need to know things about a person’s character, capacities, environment, and how those variables interact, to judge a person’s responsibility status, so we need to ‘trace’ the source(s) of a person’s behaviour.
      This is my preferred view (which I don’t defend in this blog, yet). I take blame (following McKenna) to be a part of a conversational exchange that serves two functions: (1) providing information about the target’s motives and circumstances, and (2) thereby enhancing the agency of the conversational partners – confirming or disconfirming the blamer’s accusation, and providing useful information to the blamed. This process helps to correct for blind-spots inherent to first-person and third-person perspectives, facilitating a third-person perspective that’s more accurate. On this view, blame serves two related functions, one epistemic and the other moral. It’s a conversational-functionalist account.
      Anyways, your comments are always very welcome. Cheers.
