Monday, November 12, 2018

Appiah's Misplaced Critique of Meritocracy

Alternative working title: Appiah is a bad writer and he should feel bad.

Warning: There be many words ahead.

It took several false starts before I came up with a good way to write about Kwame Anthony Appiah's discussion of meritocracy in The Lies That Bind. After mulling it over for an extended period of time I've come to the conclusion that it's simply a bad piece of writing. Appiah buries the lede and, when you finally get to his primary concern, it turns out that it's not really related to meritocracy at all. Along the way he commits another fairly major sin, sliding between different senses of "meritocracy" as necessary to make his argument without acknowledging that he's doing so. Most importantly, he never proposes a superior alternative to meritocracy, possibly because to do so would make many of his criticisms moot.

"I.Q. + effort = merit" vs. "meritocracy of talent"

Appiah kicks off the discussion by citing Michael Young's characterization of meritocracy as "I.Q. + effort = merit" (p. 171), followed immediately by another Young quote:

Today we frankly recognize that democracy can be no more than an aspiration, and have rule not so much by the people as by the cleverest people; not an aristocracy of birth, not a plutocracy of wealth, but a true meritocracy of talent. (p. 171)

The implication is that the latter quote is an elaboration of the former, which to me didn't feel quite right, since "I.Q. + effort = merit" isn't at all the same thing as a "meritocracy of talent". "Merit" in the first formulation seems to refer to something intrinsic to the individual, some form of moral desert. The references to "cleverest" and "talent" in the second, on the other hand, suggest a functional evaluation, i.e., rule by the most capable.

The mismatch was notable enough that I checked the associated end notes (41 and 42, p. 242). "I.Q. + effort" comes from a paper Young published in 1998, while the longer quote comes from The Rise of Meritocracy, written by Young 40 years earlier, in 1958. I won't go so far as to accuse him of academic malpractice, but Appiah's juxtaposition of two quotes written 40 years apart is certainly sketchy. In any case, there's no reason to think that Young was referring to the same concept in both quotes.

And now let's consider the formulation "I.Q. + effort = merit" as a definition of "meritocracy". I can't recall ever running across a similar definition in any other context; it certainly doesn't come anywhere close to contemporary definitions. As far as I can tell the "I.Q." formulation is idiosyncratic to Young, a point to which I will return later.

Careers Open to Talents

Appiah next references the concept of "careers open to talents" (p. 172), the idea that an individual's pursuit of education or career should not be limited by accidents of "birth or fortune" (p. 172). This definition, at least, closely tracks the various dictionary definitions that turn up via Google and, I will hazard, is probably what most people have in mind when the term "meritocracy" is mentioned. So far, so good.

However, he immediately muddies the water by bringing in the concept of "unfair advantage":

As Michael Young recognized, however, this ideal was bound to conflict with a force in human life as inevitable and as compelling as the idea that some individuals are more deserving than others, namely, the desire of families to pass on advantages to their children. As he said in The Rise of Meritocracy, "Nearly all parents are going to try to gain unfair advantages for their offspring." (p. 172)

So... this is one example where it would really do Appiah's argument some good to suggest an alternative to meritocracy. Even if you buy the concept of "unfair advantage" (more on that in a minute), there's no reason to think that it's a pathology unique to meritocracy. Surely the hereditary aristocracy, which he discusses in the preceding section of the book, conferred a great deal of "unfair advantage" on their offspring. The same could be said of nepotism, another time-honored practice for doling out careers. In fact, the tendency of parents to seek advantage for their children seems to be totally independent of the means by which jobs are allocated. If anything, meritocracy represents an improvement over both hereditary transmission and nepotism in this regard, as it reduces the direct impact a parent can have on their offspring's success.

And what of the notion of "unfair advantage"? Here, Appiah turns to Richard V. Reeves' Dream Hoarders, apparently unaware that this work is rife with arbitrary distinctions. Here's a snippet from a friendly venue, Crooked Timber, in their review of the book:

One disappointment is that [Reeves] doesn't give more guidance about exactly what to do individually, given the unease that they ought to feel. Should they not contribute to 529s? Not help their children with their homework? Not give their children piano lessons? Reeves quotes me and Swift to the effect that parents should not aim to give their children a competitive advantage relative to others and thinks we go too far. But he also quotes Charles Murray, with whose Coming Apart Reeves's book has some parallels, as saying that "I am not suggesting that [upper-middle class families] should sacrifice their self interest" to which he responds "I am suggesting that we should, just a little". But how, exactly, he doesn't say.

And a decidedly more cutting take from The New Republic:

At first glance, it's awfully hard to see a distinction between Reeves's approved "human capital formation" and his disallowed "opportunity hoarding." After all, in both cases, wealthy parents are leveraging their position to give their children a head start over their peers. Reeves has an answer for this, sort of. He concludes that "opportunity hoarding" only takes place when the opportunity in question is valuable and scarce, and the hoarding itself is "anticompetitive." He discerns a difference between "parental behavior that merely helps your own children and the kind that is 'detrimental' to others."

Unfortunately, this carefully-parsed dividing line is delicate to the point of collapse. What is, for instance, the most likely result of a cello lesson: artistic enrichment, or a bullet point on a resume? Unless those lessons turn into a lifelong passion or a performance career, their main effect is surely to grant children an edge over rival applicants in the race for academic recognition. The line blurs the other way too: Presumably most parents angling for a legacy admission to an Ivy believe their children stand to grow personally from the experience.

Charitably, I think we can say that there might be a kernel of truth to the concept of "unfair advantage", but a lot of work is needed before it can bear the weight of an argument.

Economic Rewards

Here I'd like to stop and point out just how much wrong Appiah has managed to cram into 3 short pages (pp. 171 - 173). And he's not done yet! On p. 172 he slips in yet another sense of "meritocracy" without missing a beat:

There is nothing wrong with cherishing our children. But a decent society governed by the ideal of merit would have to limit the extent to which this natural impulse permitted people to undermine that ideal. If the economic rewards of social life depended not just on your individual talent and effort but also on the financial and social inputs of your parents, you would no longer be living by the formula that "I.Q. + effort = merit". (pp. 172 - 173)

Gods, where to even start? Here, I'll just make a list:

  • New definition of "meritocracy": Allocation of economic rewards according to talents. This definition, at least, has the benefit of aligning most closely with dictionary definitions of the term. I still think it's a toss-up whether most people are referring to this sense of the term or "careers open to talents" when they say "meritocracy".
  • It's trivially true that the "economic rewards of social life" depend on the "financial and social inputs of your parents". The mere act of being raised in a decent, middle-class household gives you a competitive advantage over someone who was regularly beaten as a child. Even orphans, people who have never known their parents, are affected by their parents' (lack of) financial and social inputs.
  • There's that weird formulation, "I.Q. + effort = merit", again.

Now, lest I be accused of myopia, I do see the bigger point that Appiah appears to be arguing: Allowing parents to spend their resources giving their children a leg up undermines the notion of meritocracy. The truth of that statement really depends on how you define "meritocracy". If you treat "merit" as some sort of innate attribute, as Appiah does via the "I.Q." formulation, then yes, it could be the case. However, I've already argued that "I.Q. + effort = merit" doesn't track current usage of the term "meritocracy", which makes it something of a red herring.

Definitions of "meritocracy" more in line with contemporary usage, which tend to focus on ability, are not materially undermined in most cases. A child that is exposed to high-quality education and lots of extracurriculars has the opportunity to develop their talents more fully. Provided that those talents actually are developed, meritocracy is not offended when the child ends up with a better job than their peers. About the only current, parent-related practice that I can think of which is genuinely offensive to meritocracy is "legacy admissions" to elite universities. By all means, kill them with fire.

Meritocracy as Intrinsic Worth

Appiah's main beef is not with meritocracy per se, but with the fact that, as practiced in America, it leads to social stratification:

"American meritocracy," the Yale law professor Daniel Markovits, drawing on similar research, argues, "has thus become precisely what it was invented to combat: a mechanism for the dynastic transmission of wealth and privilege across generations." (p. 173)

This, ladies and gentlemen and members of the jury, is why I characterize Appiah's criticisms as "misplaced". He has no objection to meritocracy as it is commonly practiced, going so far as to say:

If we want people to do difficult jobs that require talent, education, effort, training, and practice, we are going to need to be able to identify candidates with the right combination of talent and willingness to exert themselves, and provide them incentives to train and practice. (p. 178)

Rather, what he seems to have in mind is meritocracy as a system of bestowing dignity or intrinsic worth. Discussing Michael Young once again, he says:

A system of class filtered by meritocracy would, in his view, still be a system of class: it would involve a hierarchy of social respect, granting dignity to those at the top, but denying respect and self-respect to those who did not inherit the talents and the capacity for effort that, combined with proper education, would give them access to the most highly remunerated occupations. (p. 176)

This isn't really even a criticism of meritocracy but rather of class-based society which, in the present day, happens to be indexed by merit. And this is why I accuse him of "burying the lede"; why spend so much time railing against meritocracy when your actual complaint is against class?

So What are We to do About It?

So, if class is the problem, is Appiah going to call us to revolution? That would be kinda awesome but, sadly, no. Instead, he offers a vague and somewhat confused defense of... affirmative action?

Social origin is not, in itself, a permissible basis for excluding people from places in colleges. Nor is race, gender, or religion. In a world poisoned by prejudices directed at certain identities, it may nevertheless be a good idea to take these identities into account in designing the selection process, if it contributes to ending those forms of prejudice. And, as long as we do so in a rational, morally permissible way, it may turn out that some working-class and black people, some women, some Muslims, will deserve, institutionally speaking, places that some otherwise equally qualified upper-class or white or Christian or male people will not.

That's it, you're just going to restate Regents v. Bakke? WTF, man?

Where the hell is Appiah's editor? This entire section of the book is an absolute horrorshow. He spends all sorts of time talking about meritocracy when he really cares about class, and then his call to action has diddly-squat to do with either of them.

Wanna know something? I'm starting to think Appiah deliberately played this whole book safe. Consider:

  • The logical conclusion to grievances against class would be a call for the dismantling of class. Instead he reiterates a common, left-ish talking point about taking identity into account when making hiring and admissions decisions.
  • The book as a whole builds a good case that common concepts about identity are wrong, but then fails to actually do anything with that observation.
  • He acknowledges gender as a highly-salient component of identity multiple times throughout the book and yet somehow fails to write a chapter on it. But he did find room to write about 'class' and 'country', despite the fact that neither of these topics is cited nearly as much in discussions of identity as gender. Maybe he couldn't think of a word for "gender" that starts with a 'c'?

These items are all glaringly obvious, especially to a seasoned writer such as Appiah. I have to assume that they represent deliberate decisions on his part to stay away from anything really controversial.

Friday, October 19, 2018

Two Arguments Against Equalization of Group Outcomes

In my previous post I offered an argument for why attempting to equalize group outcomes requires a showing that the outcomes are the result of unjustified, differential treatment of the groups. In this post I'd like to offer two arguments against the attempt to equalize group outcomes at all.

Argument 1: Groupings Are Arbitrary

Recent articles about differences in group outcomes, especially those dealing with ML/AI algorithms, typically look at disparities on only one axis: Amazon's screening algorithm was biased against women, or the COMPAS tool produced differential outcomes with respect to race. In the case of COMPAS, Alexandra Chouldechova discusses various trade-offs (p. 12) that can be made in the model in an attempt to achieve a more fair outcome with respect to race.

But why focus on just one axis of classification? Surely gender is as important as race, so don't we need to equalize along that axis as well? Rather than just worrying about disparities between blacks and whites we need to tweak the model so that outcomes are equal: black women = white women = black men = white men. Note that if you do so the nature of your tweaks will be different than if you're just equalizing against one axis; the solutions are mutually irreconcilable.
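A toy numeric example makes this concrete (all counts are invented purely for illustration): selection counts can be engineered so that rates are exactly equal along the race axis while remaining unequal along gender and at the intersections.

```python
# Hypothetical selection counts, (selected, total) per (race, gender) group.
# All numbers are invented for illustration.
counts = {
    ("black", "woman"): (30, 100),
    ("black", "man"):   (70, 100),
    ("white", "woman"): (50, 100),
    ("white", "man"):   (50, 100),
}

def rate(groups):
    """Selection rate aggregated over the given (race, gender) keys."""
    selected = sum(counts[g][0] for g in groups)
    total = sum(counts[g][1] for g in groups)
    return selected / total

# Equal along the race axis alone...
black = rate([("black", "woman"), ("black", "man")])    # 0.5
white = rate([("white", "woman"), ("white", "man")])    # 0.5

# ...but unequal along gender...
women = rate([("black", "woman"), ("white", "woman")])  # 0.4
men   = rate([("black", "man"), ("white", "man")])      # 0.6

# ...and unequal at the intersections (0.3 vs 0.7 vs 0.5 vs 0.5).
print(black, white, women, men)
```

Any tweak that equalizes the gender rates here will unbalance something else; which disparity you "see" depends entirely on which axes you chose to inspect.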

And why stop at two protected characteristics? Why not include sex, or religion, or sexual orientation? Who's prepared to argue that any of those are less (or more) important than gender or race?

In fact, the only even remotely principled approach that I can come up with is to make sure that you, at a minimum, include all Federally protected characteristics as independent axes of analysis:

  1. Race
  2. Religion
  3. National origin
  4. Age
  5. Sex
  6. Pregnancy
  7. Citizenship
  8. Familial status
  9. Disability status
  10. Veteran status

That means there are more than 2^10 = 1024 separate groupings that you have to equalize, at which point you should just pitch your model entirely because it'll be totally useless.

I'm not seriously suggesting that this needs to be done, but rather pointing out that the group outcomes which need to be equalized depend heavily on which protected characteristics you select for analysis; an algorithm (or human-run decision process) which equalizes outcomes against one set of characteristics isn't guaranteed to do so against another. Since the selection of protected characteristics (Race? Gender? Both?) is arbitrary, there's no way to choose one process/algorithm over the other; you're stuck, like Buridan's ass, between mutually irreconcilable solutions.

Argument 2: Groups Have No Independent Moral Standing

A "group of people" is an abstraction, a convenient fiction useful for talking about the aggregate properties of the individuals of which it is composed. When we talk of "black men" or "white women" we're not talking about some entity, "out there" somewhere, but rather making a generalization about the set of people fulfilling the predicates "black" (or "white") and "man" (or "woman"). It's simply a category error to assert that there is some entity "black men" that has moral standing, and that can be made whole by equalizing outcomes with other group abstractions.

Here, have a parable, with cake:

I'll be the first to admit that the above is a rather loose analogy, but it still concisely illustrates my point. Everyone ends up with the same amount of cake, on average, but relying on averages conceals the fact that Alice is still stuck doing communications for a direct mail marketing firm when she would have been much happier as a materials engineer. You could give Beatrice an arbitrarily large amount of cake and it still wouldn't redress the initial injustice perpetrated on Alice.

Which is, I think, a fundamental truth: Injustices happen to individual people. The insistence on statistical parity at the group level is an exercise in moral bookkeeping that mistakes the measure for the goal; the Alices of the world will not be made whole no matter how much cake we give to the world's Beatrices.

Which is not to say that there aren't cases, potentially many of them, where ensuring statistically equal outcomes does address the underlying injustice. However, given the counter-example above, we cannot simply assume that to be true. Rather, there has to be a showing that ensuring equality of outcome actually addresses the underlying injustice.

In Closing: High Hurdles

So what have we learned?

  • Group selection is frequently, if not always, arbitrary.
  • Ensuring equal outcomes at the group level is not inherently just.

Anyone who claims that justice requires equality of outcome has the burden of demonstrating that:

  • The groups which are being equalized are non-arbitrary.
  • Ensuring equality of outcome actually addresses the underlying injustice.

To do otherwise results in a violation of individuals' right to equal treatment in a way which cannot be publicly justified.

Saturday, October 13, 2018

A Coda About Equal Treatment vs. Equal Outcomes

Separate from my general observations about "algorithmic bias", I wanted to dwell for a bit on something that Narayanan says around 32:06 in his presentation. Paraphrasing slightly:

If we want to harmonize, to balance outcomes between different groups, we have to treat different people from different groups differently, even if they are similar in all the ways we think that matter to the decision making task. That seems like a very uncomfortable notion to deal with.

He's absolutely right; you can't have both equal treatment and equal outcomes if there's any sort of difference in prevalence between groups. The tension between the two should make us uncomfortable, because no matter how you slice it, it seems that someone is being treated unfairly.

Honestly, I don't find this problem to be nearly as vexing as he does. Here's my recapitulation of the underlying reasoning:

  1. Treating different groups differently, when all the relevant facts about them are the same, is presumptively bad; "failure to treat like groups alike" is actually a pretty good definition of "unjustified discrimination".
  2. This presumption can be overcome if such treatment serves to rectify injustices at the group level.
  3. Inequality in outcome X at the group level is indicative of just such injustice.
  4. From 1, 2, 3: Differential treatment of like groups is justified.

The problem here, though, is that it's easy to demonstrate that 3 doesn't hold for all X.

Consider Ibram X. Kendi's recent statement regarding racial disparities:

As an anti-racist, when I see racial disparities, I see racism.

This is a concrete example of the reasoning in step 3. If we take Kendi's statement at face value we should, for example, treat the overwhelming prevalence of African American employees in institutions catering to African Americans as a sign of anti-white bias. After all, racial disparities in workforce composition are, per Kendi, a clear sign of racism. But no reasonable person (including, presumably, Kendi himself) actually believes this to be the case, which demonstrates two things:
  • Kendi's statement has unvoiced caveats.
  • Group disparities in outcome can, in some cases, be explained by innocuous causes.

Or, put more plainly, it doesn't take an assumption of invidious motives to explain why Ebony's staff is mostly African-American.

Having demonstrated that unequal outcomes can occur for morally blameless reasons, it follows that step 3 above needs to be rewritten:

3. Inequality in outcome X at the group level is indicative of just such injustice, provided a showing can be made that the disparity results from unjustified differential treatment between those groups.

"But", you may say, "you've set your standard of proof too high. It's quite difficult, in practice, to prove that differences in outcome are due to unequal treatment". My rebuttal is that's a feature, not a bug; it should be difficult.

I'm going to go all Rawlsian for a bit, because that seems to be a good framework for talking about this issue. It's plausible that an arbitrary individual, looking at this issue from behind the veil of ignorance, might agree to "take one for the team" and cede eir right to equal treatment in order to further a more just society, provided that it's clear that there is actually an injustice at the group level. However, it's a much harder sell if the injustice is merely speculative; why should anyone give up eir claim to equal treatment to correct an injustice that is strictly conjectural? Assuming that disparities in group outcomes must be rectified is bad policy because it fails the test of public justification.

Before I sign off I should also point out that I've said nothing about step 2 so far. I don't think it holds either, but it wasn't necessary to go that far in this post. I'll have more to say about that next.

Notes On "Algorithmic Bias"

In which I complain about people conflating broken algorithms with a broken world.

What is (Presently) Meant By "Algorithmic Bias"

Let's make sure I'm not attacking a straw man. Google tells me that:

Alright then, it's pretty clear that the phrase "algorithmic bias" as used in contemporary dialogue applies to any situation where algorithmically-generated results promote inequity.

"Bias" is Already Defined for Algorithms

Inciting complaint: "algorithmic bias", as currently used, tramples all over the concept of "measurement bias"/"systematic error" as applied to algorithms.

The results of an algorithmic computation can be systematically "off": too high, too low, or reliably wrong in some other way. "Bias" has historically been used as a label to describe this kind of wrongness.

An algorithm which displays systematic error is objectively wrong; it's producing the wrong output for any given set of inputs. Provided that the root cause of the error is understood it's often the case that the algorithm can be modified/improved to reduce or eliminate the error. The key point here is that the code itself is broken.

In some sense this is an argument about semantics; why should we prefer one definition to the other? We shouldn't, necessarily, but it's important to be able to separate the concept of bias due to broken code (which I'll call the "technical" sense) from bias due to other causes (which I'll call the "lay" sense). At the very least we need some sort of a colloquialism which captures this distinction for the lay public, but such a colloquialism doesn't seem to be forthcoming.

Why is the Distinction Important?

There are certainly cases where code itself can be biased in the broad, discrimination-against-classes-of-people, sense of the word. The previous generation of expert systems were often rule-based; they made decisions on the basis of heuristics which were hand-coded by humans. By explicitly introducing rules about different classes of people the programmers could produce an algorithm which was "biased" in the present, lay sense of the word. The fix for this sort of biased algorithm is to eliminate the rules which encode the biased behavior.

However: Expert systems have had their day. Building rule-based systems turned out, in the end, to be impractical: it relied on humans to pick out salient behaviors, maintaining and enlarging rulesets is laborious and error-prone, and so on. In contrast, the current generation of AI/ML systems is basically all stats under the hood; there generally aren't explicit rules about classes of people. Rather, they work by taking solved instances of the problem in question (aka the "training set") combined with some sort of error minimization technique (gradient descent is popular) to build a predictive model (often a linear equation) that produces outputs for arbitrary future inputs.
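The whole pipeline fits in a few lines. Here's a minimal sketch (toy data and an arbitrarily chosen learning rate, not any production system): fit a one-variable linear model to a "training set" of solved instances by gradient descent on squared error.

```python
# Toy training set: solved instances of the problem (x -> y), generated
# from y = 2x + 1 so we know what the model ought to recover.
train = [(x, 2 * x + 1) for x in range(10)]

# Linear model y_hat = w*x + b, fit by gradient descent on mean squared error.
w, b = 0.0, 0.0
lr = 0.01  # learning rate, chosen arbitrarily for this toy example
for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in train) / len(train)
    grad_b = sum(2 * (w * x + b - y) for x, y in train) / len(train)
    w -= lr * grad_w
    b -= lr * grad_b

# The fitted model now produces outputs for arbitrary future inputs;
# w and b end up close to 2 and 1.
print(round(w, 2), round(b, 2))
```

Note there's nothing in there about classes of people, or rules of any kind: the model is just whatever minimizes error against the training set.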

Some things to note about this new generation of algorithms:

  • They don't have heuristics and generally have no explicit knowledge of classes of people. Even when class membership is included as an input value, the algorithm has no a priori reason to treat one class any differently from any other.
  • There are standard methods, like cross-validation, to test how "good" the model is.
  • There is generally a well-defined metric for "goodness", be it precision and recall in the case of classifiers or root mean square error (RMSE) for models producing continuous output.

The above is important because it means that you can objectively measure whether one of these stats-based algorithms is doing its job, i.e., accurately mapping inputs to outputs.
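For concreteness, both kinds of "goodness" metric are only a few lines each (the labels and values below are made up for the sake of the example):

```python
# Classifier metrics: precision and recall from predicted vs. actual labels.
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

precision = tp / (tp + fp)  # of everything flagged positive, how much was right
recall    = tp / (tp + fn)  # of everything actually positive, how much was found

# Continuous-output metric: root mean square error (RMSE).
truth = [1.0, 2.0, 3.0]
preds = [1.1, 1.9, 3.2]
rmse = (sum((t - p) ** 2 for t, p in zip(truth, preds)) / len(truth)) ** 0.5

print(precision, recall, round(rmse, 3))
```

Either way, "goodness" reduces to an objective number you can compute, compare, and track over time.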

And herein lies the crux of the matter: An algorithm can be unbiased (i.e. accurately maps inputs to outputs) in the technical sense of the word but produce results which are biased in the lay sense of the word.

Responding to a "Biased" Algorithm

The Nature article that I linked to above provides a good example of a model which is unbiased in the technical sense but biased in the lay sense:

The developer of the algorithm, a Michigan-based company called Northpointe (now Equivant, of Canton, Ohio), argued that the tool was not biased. It said that COMPAS was equally good at predicting whether a white or black defendant classified as high risk would reoffend (an example of a concept called 'predictive parity'). Chouldechova soon showed that there was tension between Northpointe's and ProPublica's measures of fairness. Predictive parity, equal false-positive error rates, and equal false-negative error rates are all ways of being 'fair', but are statistically impossible to reconcile if there are differences across two groups - such as the rates at which white and black people are being rearrested (see 'How to define 'fair''). "You can't have it all. If you want to be fair in one way, you might necessarily be unfair in another definition that also sounds reasonable," says Michael Veale, a researcher in responsible machine learning at University College London.
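The impossibility described in that quote isn't mysterious; it falls out of arithmetic. If you hold predictive parity (equal PPV) and the true-positive rate fixed, the false-positive rate is forced by each group's base rate. A sketch with made-up numbers (the base rates, PPV, and TPR below are invented for illustration):

```python
def forced_fpr(prevalence, ppv, tpr):
    """False-positive rate implied by a base rate, PPV, and TPR.

    Derivation: PPV = TP/(TP+FP) and TPR = TP/positives together give
    FPR = (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * tpr.
    """
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * tpr

# Same PPV (predictive parity) and same TPR for both groups, but
# different base rates of reoffense (numbers invented for illustration):
fpr_a = forced_fpr(prevalence=0.5, ppv=0.7, tpr=0.6)  # ~0.257
fpr_b = forced_fpr(prevalence=0.3, ppv=0.7, tpr=0.6)  # ~0.110

print(round(fpr_a, 3), round(fpr_b, 3))
```

With different base rates, equalizing PPV necessarily produces unequal false-positive rates; pick any two of the fairness criteria and the third is out of your hands.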

So, how are we to respond? Is a response merited at all?

We've already stipulated that the algorithm is technically correct in that it maps inputs to outputs appropriately. So, to start with, we have to ask how do we know that it's biased in the lay sense? There seem to be a couple of distinct cases here:

  • The algorithm's output is, in aggregate, in conflict with other observations.
  • The algorithm results in differential treatment of some populations, which is definitionally taken to be indicative of bias.

I think it's uncontroversial to hold that, if we use AI-/ML-based decision tools, those tools need to produce judgements that are congruent with reality. Several lifetimes ago I worked for an ML company, and one of the primary challenges we had was simply getting the data needed to adequately represent reality. ML models are very much a GIGO technology; if your inputs are crap then your outputs are going to be crap as well. When a model conflicts with reality we should trust reality and either fix the model or discard it entirely.

But what about the other case, where the algorithm is technically sound and whose output is consistent with independent observations?

Arvind Narayanan gave a lecture on ML and definitions of fairness which I believe provides a fair survey of current thinking in this area. There's a lot of discussion (with algebra and proofs even) about how it's literally impossible to fulfill various equality-of-outcome criteria if different groups have different prevalences of whatever it is you're trying to predict. Also, too, the persistent problem of real-world brokenness; reality itself is biased against various and sundry groups of people. At which point I step back and say "Hold on, why are you using a predictive model in the first place?".

Being as charitable as possible, I think this is a case of missing the forest for the trees. People become very invested in predictive models, only to have concerns regarding equality of outcome arise at a later date. The natural inclination, especially if the continuing use of predictive models is crucial to your livelihood, is to try to square the circle and hack some sort of compromise solution into the algorithm.

But let's take a step back and make a couple of observations:

  • The whole reason why you use a predictive model in the first place is because you don't know what your outcome distributions should be. If you know a priori what your distributions should be that significantly undercuts the case for using a model.
  • Predictive models are designed to replicate facts-on-the-ground for novel inputs. If your critique is that the facts on the ground are fundamentally broken then this too argues against the use of predictive models.

Taking the concerns in Narayanan's video at face value, it seems like predictive models are simply the wrong technology to apply if your overriding concern is equality of outcome.

In the case where people are dead set on using a predictive model I'd tender the following arguments against modifying the model itself:

  • Predictive models are good at taking existing patterns and extending them; stop trying to square the circle and let the technology do what the technology is good at.
  • Unmodified, such models have the useful property (discussed above) that they are free from bias in the "unjustified, differential treatment of groups" sense. Setting aside the problem of broken training sets, this preserves the important and useful ability to generate outputs according to an unbiased decision-making process. If you start putting in adjustments to ensure certain outcome distributions you've introduced (benevolent) bias and thus lose this ability.
  • De-coupling the model from equity adjustments increases transparency. It forces equity adjustments into their own process, distinct from model learning, rather than wrapping everything together in one opaque process, which makes it much easier to understand the nature and magnitude of the adjustments being applied.
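One way to picture that de-coupling (entirely schematic; the function names, group labels, and adjustment values below are invented to illustrate the two-stage design, not taken from any real system): keep the model's raw score untouched, and make any equity adjustment a separate, inspectable step.

```python
# Schematic only: names and numbers are invented to illustrate the design.

def model_score(applicant):
    """Stage 1: the unmodified predictive model. In a real system this
    would be the output of a trained model; here it's a stand-in."""
    return applicant["raw_score"]

# Stage 2 lives outside the model: explicit, auditable, easy to change.
EQUITY_ADJUSTMENT = {"group_x": 0.05, "group_y": 0.0}

def adjusted_score(applicant):
    """Equity adjustment applied openly, after (not inside) the model."""
    return model_score(applicant) + EQUITY_ADJUSTMENT[applicant["group"]]

a = {"raw_score": 0.62, "group": "group_x"}
print(model_score(a), round(adjusted_score(a), 2))  # raw and adjusted both visible
```

Because both stages are visible, anyone can see exactly what the model predicted and exactly what adjustment was layered on top, rather than having the two entangled in one opaque process.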

In Conclusion

Current discussion of "algorithmic bias" could benefit from some nuance:

  • ML/AI models which are unacceptably error-prone or obviously broken in some other way should be discarded. There doesn't appear to be anyone arguing otherwise.
  • It is very, very rarely (never?) the case that ML/AI algorithms themselves engage in any sort of biased reasoning. This property compares favorably with the previous generation of expert systems, as well as human decision makers, both of which have been shown to engage in biased reasoning.
  • In the case where ML/AI models result in biased outcomes this is usually due to unequal prevalences in the training data. Determining whether the training data adequately captures reality is a related, but ultimately separate, problem.
  • Predictive models aren't an appropriate tool for situations where equality of outcome is a driving concern.

Sunday, September 30, 2018

Generally Disappointed in "The Lies That Bind"

I finished reading Kwame Anthony Appiah's The Lies That Bind and was totally underwhelmed; the book feels like a missed opportunity. He spends 6 chapters demonstrating, quite eloquently in places, that Sturgeon was right, only to finish up with the world's most milquetoast coda.

If it turns out, as Appiah has so ably demonstrated, that our conceptions of religion ("creed") or race ("color") are badly mistaken, then surely that has some implications for current discourse? I mean, he writes "The modes of identity we've considered can all become forms of confinement, conceptual mistakes underwriting moral ones." (p. 218), which would be a great segue to the second half of the book where he talks about common moral mistakes in the present. And yet the book ends at p. 219 with a bland call for us to recognize our shared humanity.

What gives? I mean, it's a glaring and obvious omission which cries out for explanation. As far as I can tell Appiah has a lot of credibility with the public at large, which puts him in a good position to say something that people will actually listen to. So what explains his decision not to venture into an explication of present-day moral mistakes?

I can't really even hazard a guess... maybe he just didn't want to? In any case, The Lies That Bind ends up feeling like a joke without a punchline.

Thursday, September 27, 2018

Since it Seems to be De Rigueur Today

I dunno... a pox on both their houses?

Judicial confirmations are a political process; I'll concede that without issue. But for the love of Dog Almighty is it too much to ask for something which at least approximates a truth-tracking process?

And lordy, we thought that confirmations were vicious before... I expect what comes after this will be a blood sport.

Monday, September 24, 2018

Wherein Ophelia Benson Demonstrates My Point and Agrees With My Conclusion

Quoth Ophelia:

The calendar pages from June, July and August 1982, which were examined by The New York Times, show that Judge Kavanaugh was out of town much of the summer at the beach or away with his parents. When he was at home, the calendars list his basketball games, movie outings, football workouts and college interviews. A few parties are mentioned but include names of friends other than those identified by Dr. Blasey.
*gasp* What a bombshell! I’m sure he recorded every single name of every single person who was at every single gathering he attended that summer. I’m sure there’s no chance that he could have recorded some people but not all.
Setting aside the wholly-justified question "Who the fuck has calendars from 1982 lying around?", this is exactly what I'm going on about. The calendar, assuming it's legit, is consistent with both scenarios: he did commit sexual assault, and he didn't. What evidence could he produce at this point which would demonstrate, to Ophelia's satisfaction, that he's not guilty?

I would hope that she'd give that question serious consideration, given that she quotes this tweet approvingly:

Juror: The testimony I hear at trial won’t change my vote.
Judge: You may be excused.

Graham: Ford’s testimony won’t change my vote via @politico

— Richard W. Painter (@RWPUSA) September 23, 2018

If it's true that Graham should be willing to be convinced by Ford's testimony then it's equally true that Ophelia should be willing to be swayed by Kavanaugh's.

Friday, September 21, 2018

There Are Good Reasons to be Cautious About Any Kavanaugh Inquiry

There are good reasons that we should be cautious about endorsing any Kavanaugh inquiry, formal or informal, reasons which are a direct result of taking the charges levelled against him seriously.

I take "believe the victim" to mean that we should start from a presumption of guilt. However, in order to meet basic standards of fairness/due process such a presumption has to be defeasible i.e. there must exist the possibility of effectively rebutting the charge. In my previous post I characterized this as there being one or more pieces of evidence which are practically accessible, the production of which would serve to meet whatever burden of proof is needed. This is true regardless of whether we're talking about formal legal proceedings or extra-legal, fact-finding processes.

In the case at hand we could ask "Is there anything Kavanaugh could say, any evidence that he could produce, which would convince people that he didn't commit sexual assault?". I'm having a hard time coming up with anything that would meet this burden under a "preponderance of evidence" standard, much less a more-stringent standard. Charges of sexual assault are notoriously hard to (dis)prove even when the event takes place in the present; that the charges against Kavanaugh involve an event which is 36 years removed makes the prospect significantly more difficult. Due to the passage of time there's unlikely to be any corroborating/exculpatory evidence of a physical nature, so we're going to be left with the verbal testimony of people trying to remember the details of something that happened 36 years ago. I'm simply at a loss to identify anything that Kavanaugh, or people speaking in his defense, could say that would make it more probable than not that he didn't commit sexual assault.

And that's where the problem arises; the accusation of sexual assault against Kavanaugh appears to be unanswerable. The outcome is predetermined, and predetermined outcomes are generally taken to be a failure of due process. Put more succinctly: If there's nothing that Kavanaugh could say to establish his innocence then the inquiry process is pointless at best and a sham at worst.

Let's stop for a moment and look at the features of the argument above. It is not an argument

  • That "boys will be boys".
  • To let "bygones be bygones".
  • That the evidence which exists points to either Kavanaugh's guilt or innocence.
Rather, the argument is based purely upon structural features associated with the presumption of guilt; it turns on no specifics about Kavanaugh's situation (apart from the passage of time) and could apply equally well to many, many other situations.

What we have here is a conflict between analysis at the aggregate level and analysis at the individual level. We presume guilt because, taken in aggregate, reports of sexual assault are usually true. However, when trying to assess any particular individual's culpability, we're analyzing things at an individual level. Regardless of what the stats say at an aggregate level, individuals must have the ability to rebut the charges against them.

And this is why people of all political leanings should proceed with caution; given the considerations above it looks like the charge leveled against Kavanaugh is indefeasible. By all accounts the man is a rat bastard, but he's still entitled to basic considerations of fairness/due process. And yes, it's true that sexual assault victims are also treated unfairly in an unfortunately large number of ways, but I simply reject the idea that introducing yet more unfairness is an appropriate response.
