Friday, October 19, 2018

Two Arguments Against Equalization of Group Outcomes

In my previous post I offered an argument for why attempting to equalize group outcomes requires a showing that the outcomes are the result of unjustified, differential treatment of the groups. In this post I'd like to offer two arguments against the attempt to equalize group outcomes at all.

Argument 1: Groupings Are Arbitrary

Recent articles about differences in group outcomes, especially those dealing with ML/AI algorithms, typically look at disparities along only one axis: Amazon's screening algorithm was biased against women, or the COMPAS tool produced differential outcomes with respect to race. In the case of COMPAS, Alexandra Chouldechova discusses various trade-offs (p. 12) that can be made in the model in an attempt to achieve a fairer outcome with respect to race.
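
To make the kind of trade-off she describes concrete, here's a minimal sketch (not the COMPAS model; the group labels, base rates, and score distributions are all invented) of how differing base rates force a choice between equal error rates and equal predictive values:

```python
# Minimal sketch of one trade-off of the kind Chouldechova describes: with
# different base rates, equal error rates force unequal predictive values.
# Group names, base rates, and score distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, base_rate):
    """Draw labels at the given base rate and noisy risk scores that are
    higher, on average, for true positives than for true negatives."""
    y = rng.random(n) < base_rate
    scores = np.where(y, rng.normal(0.65, 0.15, n), rng.normal(0.35, 0.15, n))
    return y, scores

threshold = 0.5
for name, base_rate in [("group A", 0.3), ("group B", 0.5)]:
    y, s = simulate_group(100_000, base_rate)
    pred = s >= threshold
    ppv = y[pred].mean()    # fraction of flagged people who are true positives
    fpr = pred[~y].mean()   # fraction of true negatives who get flagged
    print(f"{name}: PPV ~ {ppv:.2f}, FPR ~ {fpr:.2f}")

# Both groups see the same FPR (~0.16), but PPV differs (~0.69 vs ~0.84)
# because the base rates differ; you can equalize one metric or the other,
# not both.
```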

But why focus on just one axis of classification? Surely gender is as important as race, so don't we need to equalize along that axis as well? Rather than just worrying about disparities between blacks and whites, we need to tweak the model so that outcomes are equal across every combination: black women = white women = black men = white men. Note that if you do so, the nature of your tweaks will be different than if you were equalizing along only one axis; the solutions are mutually irreconcilable.
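
Here's a toy illustration, with entirely made-up selection counts, of how the two goals come apart: a process that already shows parity along the race axis can still be far from parity across the race × gender cells, so whatever tweak fixes one target will generally disturb the other.

```python
# Hypothetical selection counts, keyed by (race, gender):
# (number selected, number of applicants). The numbers are invented purely
# to show that parity on one axis doesn't imply parity on intersections.
selected = {
    ("black", "woman"): (10, 40), ("black", "man"): (30, 60),
    ("white", "woman"): (25, 50), ("white", "man"): (15, 50),
}

def rate(cells):
    picked = sum(s for s, _ in cells)
    total = sum(n for _, n in cells)
    return picked / total

# Parity along the race axis alone...
for race in ("black", "white"):
    print(race, rate([v for (r, _), v in selected.items() if r == race]))  # both 0.40

# ...but not across the intersectional cells.
for cell, (s, n) in selected.items():
    print(cell, s / n)  # ranges from 0.25 to 0.50
```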

And why stop at two protected characteristics? Why not include sex, or religion, or sexual orientation? Who's prepared to argue that any of those are less (or more) important than gender or race?

In fact, the only even remotely principled approach that I can come up with is to make sure that you, at a minimum, include all Federally protected characteristics as independent axes of analysis:

  1. Race
  2. Religion
  3. National origin
  4. Age
  5. Sex
  6. Pregnancy
  7. Citizenship
  8. Familial status
  9. Disability status
  10. Veteran status

That means there are more than 2^10 = 1024 separate groupings that you have to equalize, at which point you should just pitch your model entirely because it'll be totally useless.
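
For a rough sense of scale (the dataset size below is invented, and each characteristic is charitably treated as binary):

```python
# Even treating each Federally protected characteristic as a simple binary,
# the number of intersectional cells to equalize is 2^10, and the data
# available per cell shrinks accordingly. The dataset size is hypothetical.
from itertools import product

characteristics = [
    "race", "religion", "national_origin", "age", "sex", "pregnancy",
    "citizenship", "familial_status", "disability_status", "veteran_status",
]

cells = list(product((0, 1), repeat=len(characteristics)))
print(len(cells))                  # 1024 -- more in reality, since most axes aren't binary

dataset_size = 50_000              # hypothetical training-set size
print(dataset_size / len(cells))   # ~49 examples per cell, on average
```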

I'm not seriously suggesting that this needs to be done, but rather pointing out that the group outcomes which need to be equalized depend heavily on which protected characteristics you select for analysis; an algorithm (or human-run decision process) which equalizes outcomes against one set of characteristics isn't guaranteed to do so against another. Since the selection of protected characteristics (Race? Gender? Both?) is arbitrary, there's no way to choose one process/algorithm over the other; you're stuck, like Buridan's ass, between mutually irreconcilable solutions.

Argument 2: Groups Have No Independent Moral Standing

A "group of people" is an abstraction, a convenient fiction useful for talking about the aggregate properties of the individuals of which it is composed. When we talk of "black men" or "white women" we're not talking about some entity, "out there" somewhere, but rather making a generalization about the set of people fulfulling the predicates "black" (or "white") and "man" (or "woman"). It's simply a category error to assert that there is some entity "black men" that has moral standing, and that can be made whole by equalizing outcomes with other group abstractions.

Here, have a parable, with cake:

[Image: the cake parable, featuring Alice and Beatrice]

I'll be the first to admit that the above is a rather loose analogy, but it still concisely illustrates my point. Everyone ends up with the same amount of cake, on average, but relying on averages conceals the fact that Alice is still stuck doing communications for a direct mail marketing firm when she would have been much happier as a materials engineer. You could give Beatrice an arbitrarily large amount of cake and it still wouldn't redress the initial injustice perpetrated on Alice.
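
In miniature, with invented cake counts, the bookkeeping problem looks like this:

```python
# Invented numbers: cake received by members of two groups.
group_a = {"Alice": 0, "Anna": 4}       # Alice got nothing
group_b = {"Beatrice": 2, "Bella": 2}

def average(group):
    return sum(group.values()) / len(group)

print(average(group_a), average(group_b))   # 2.0 and 2.0: equal, on average
# The group-level statistic is satisfied while the individual harm to Alice
# is untouched; giving Beatrice more cake never changes Alice's entry.
```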

Which is, I think, a fundamental truth: Injustices happen to individual people. The insistence on statistical parity at the group level is an exercise in moral bookkeeping that mistakes the measure for the goal; the Alices of the world will not be made whole no matter how much cake we give to the world's Beatrices.

Which is not to say that there aren't cases, potentially many of them, where ensuring statistically equal outcomes does address the underlying injustice. However, given the counter-example above, we cannot simply assume that to be true. Rather, there has to be a showing that ensuring equality of outcome actually addresses the underlying injustice.

In Closing: High Hurdles

So what have we learned?

  • Group selection is frequently, if not always, arbitrary.
  • Ensuring equal outcomes at the group level is not inherently just.

Anyone who claims that justice requires equality of outcome has the burden of demonstrating that:

  • The groups which are being equalized are non-arbitrary.
  • Ensuring equality of outcome actually addresses the underlying injustice.

To do otherwise results in a violation of individuals' right to equal treatment in a way which cannot be publicly justified.
