Notes On "Algorithmic Bias"
In which I complain about people conflating broken algorithms with a broken world.
What is (Presently) Meant By "Algorithmic Bias"
Let's make sure I'm not attacking a straw man. Google tells me that:
- Algorithmic bias occurs when a computer system reflects the implicit values of the humans who are involved in coding, collecting, selecting, or using data to train the algorithm.
- Microsoft's Tay, a hilarious demonstration of why the Internet can't have nice things, is an example of a biased algorithm.
- An algorithm can be considered "biased" if it "entrench[es] existing inequality".
Alright then, it's pretty clear that the phrase "algorithmic bias" as used in contemporary dialogue applies to any situation where algorithmically-generated results promote inequity.
"Bias" is Already Defined for Algorithms
Inciting complaint: "algorithmic bias", as currently used, tramples all over the concept of "measurement bias"/"systematic error" as applied to algorithms.
The results of an algorithmic computation can be systematically "off": too high, too low, or reliably wrong in some other way. "Bias" has historically been used as a label to describe this kind of wrongness.
An algorithm which displays systematic error is objectively wrong; it's producing the wrong output for any given set of inputs. Provided that the root cause of the error is understood, it's often the case that the algorithm can be modified/improved to reduce or eliminate the error. The key point here is that the code itself is broken.
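As a minimal sketch of what that looks like in practice (the sensor and its offset are invented for illustration): the output is consistently off by the same amount, averaging doesn't help, and once the root cause is identified the fix lives in the code.

```python
import random

def measure_temperature(true_temp):
    """Hypothetical sensor model: random noise plus a constant
    miscalibration of +2.0 degrees -- a systematic error ("bias")."""
    noise = random.gauss(0, 0.1)
    return true_temp + noise + 2.0

# Averaging many readings doesn't help; the mean stays ~2 degrees high.
true_temp = 20.0
readings = [measure_temperature(true_temp) for _ in range(10_000)]
print(f"mean reading: {sum(readings) / len(readings):.2f}, true value: {true_temp:.2f}")

# Once the root cause is understood, the algorithm itself can be fixed:
def measure_temperature_fixed(true_temp):
    return measure_temperature(true_temp) - 2.0  # remove the known offset
```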
In some sense this is an argument about semantics; why should we prefer one definition to the other? We shouldn't, necessarily, but it's important to be able to separate the concept of bias due to broken code (which I'll call the "technical" sense) from bias due to other causes (which I'll call the "lay" sense). At the very least we need some sort of colloquialism which captures this distinction for the lay public, but no such colloquialism seems to be forthcoming.
Why is the Distinction Important?
There are certainly cases where code itself can be biased in the broad, discrimination-against-classes-of-people, sense of the word. The previous generation of expert systems were often rule-based; they made decisions on the basis of heuristics which were hand-coded by humans. By explicitly introducing rules about different classes of people the programmers could produce an algorithm which was "biased" in the present, lay sense of the word. The fix for this sort of biased algorithm is to eliminate the rules which encode the biased behavior.
However: Expert systems have had their day. Building rule-based systems turned out, in the end, to be impractical: it relied on humans to pick out salient behaviors, maintaining and enlarging rulesets is laborious and error-prone, and so on. In contrast, the current generation of AI/ML systems is basically all stats under the hood; there generally aren't explicit rules about classes of people. Rather, they work by taking solved instances of the problem in question (a.k.a. the "training set") and applying some sort of error-minimization technique (gradient descent is popular) to build a predictive model (often a linear equation) which produces outputs for arbitrary future inputs.
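To make that concrete, here's a minimal sketch of the recipe just described, with made-up training data and a one-variable linear model standing in for whatever is actually being predicted: solved instances in, error minimization via gradient descent, predictive model out. Note that no hand-coded rules about anyone appear anywhere.

```python
import numpy as np

# "Solved instances of the problem": made-up data where the true
# relationship happens to be y = 3x + 1, plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=200)
y = 3.0 * X + 1.0 + rng.normal(0, 0.5, size=200)

# Fit y ~ w*x + b by gradient descent on mean squared error.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    err = (w * X + b) - y
    w -= lr * 2 * np.mean(err * X)   # d(MSE)/dw
    b -= lr * 2 * np.mean(err)       # d(MSE)/db

# The "predictive model" is just the learned linear equation.
print(f"learned model: y = {w:.2f}*x + {b:.2f}")
```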
Some things to note about this new generation of algorithms:
- They don't have heuristics and generally have no explicit knowledge of classes of people. Even when class membership is included as an input value, the algorithm has no a priori reason to treat one class any differently from any other.
- There are standard methods, like cross-validation, to test how "good" the model is.
- There is generally a well-defined metric for "goodness", be it precision and recall in the case of classifiers or root mean square error (RMSE) for models producing continuous output (see the sketch after this list).
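Here's a minimal sketch of both points, again with made-up data; an ordinary-least-squares fit stands in for the model, and 5-fold cross-validation reports RMSE on held-out data.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=200)
y = 3.0 * X + 1.0 + rng.normal(0, 0.5, size=200)

def fit_linear(X_train, y_train):
    """Ordinary least squares fit of y ~ w*x + b."""
    A = np.column_stack([X_train, np.ones_like(X_train)])
    (w, b), *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return w, b

# 5-fold cross-validation: hold out each fold in turn, score it with RMSE.
k = 5
folds = np.array_split(rng.permutation(len(X)), k)
rmses = []
for i in range(k):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    w, b = fit_linear(X[train_idx], y[train_idx])
    pred = w * X[test_idx] + b
    rmses.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))

print(f"per-fold RMSE: {np.round(rmses, 2)}, mean: {np.mean(rmses):.2f}")
```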
And herein lies the crux of the matter: An algorithm can be unbiased (i.e. accurately maps inputs to outputs) in the technical sense of the word but produce results which are biased in the lay sense of the word.
Responding to a "Biased" Algorithm
The Nature article that I linked to above provides a good example of a model which is unbiased in the technical sense but biased in the lay sense:
The developer of the algorithm, a Michigan-based company called Northpointe (now Equivant, of Canton, Ohio), argued that the tool was not biased. It said that COMPAS was equally good at predicting whether a white or black defendant classified as high risk would reoffend (an example of a concept called 'predictive parity'). Chouldechova soon showed that there was tension between Northpointe's and ProPublica's measures of fairness. Predictive parity, equal false-positive error rates, and equal false-negative error rates are all ways of being 'fair', but are statistically impossible to reconcile if there are differences across two groups - such as the rates at which white and black people are being rearrested (see 'How to define 'fair''). "You can't have it all. If you want to be fair in one way, you might necessarily be unfair in another definition that also sounds reasonable," says Michael Veale, a researcher in responsible machine learning at University College London.
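To see the arithmetic behind that tension, here's a small sketch with invented numbers (the group labels, prevalences, and rates are hypothetical, not COMPAS figures): hold the quantities behind "predictive parity" fixed across two groups with different base rates, and the false-positive rates are forced apart.

```python
def implied_fpr(prevalence, tpr, ppv):
    """False-positive rate forced by a given prevalence, true-positive
    rate, and positive predictive value (basic confusion-matrix algebra)."""
    return tpr * (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv)

# Hypothetical groups that differ only in base rate of the predicted outcome.
groups = {"group A": 0.30, "group B": 0.50}
tpr, ppv = 0.70, 0.60  # identical across groups: "predictive parity" holds

for name, prevalence in groups.items():
    print(f"{name}: prevalence={prevalence:.2f}, "
          f"implied FPR={implied_fpr(prevalence, tpr, ppv):.2f}")
# With these made-up numbers the FPRs come out around 0.20 vs 0.47 --
# equal PPV and TPR across groups, yet unequal false-positive rates.
```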
So, how are we to respond? Is a response merited at all?
We've already stipulated that the algorithm is technically correct in that it maps inputs to outputs appropriately. So, to start with, we have to ask: how do we know that it's biased in the lay sense? There seem to be a couple of distinct cases here:
- The algorithm's output is, in aggregate, in conflict with other observations.
- The algorithm results in differential treatment of some populations, which is definitionally taken to be indicative of bias.
I think it's uncontroversial to hold that, if we use AI-/ML-based decision tools, those tools need to produce judgements that are congruent with reality. Several lifetimes ago I worked for an ML company, and one of the primary challenges we had was simply getting the data needed to adequately represent reality. ML models are very much a GIGO technology; if your inputs are crap then your outputs are going to be crap as well. When a model conflicts with reality we should trust reality and either fix the model or discard it entirely.
But what about the other case, where the algorithm is technically sound and its output is consistent with independent observations?
Arvind Narayanan gave a lecture on ML and definitions of fairness which I believe provides a fair survey of current thinking in this area. There's a lot of discussion (with algebra and proofs, even) about how it's literally impossible to fulfill various equality-of-outcome criteria if different groups have different prevalences of whatever it is you're trying to predict. Also, too, there's the persistent problem of real-world brokenness: reality itself is biased against various and sundry groups of people. At which point I step back and say "Hold on, why are you using a predictive model in the first place?".
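For reference, the algebra being invoked boils down to a single confusion-matrix identity (my paraphrase of the standard result from this literature, not anything specific to Narayanan's lecture), tying together the false-positive rate, the false-negative rate, the positive predictive value, and the prevalence p of the thing being predicted:

```latex
\mathrm{FPR} \;=\; \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \bigl(1-\mathrm{FNR}\bigr)
```

If two groups have different prevalences p then, short of a perfect predictor, you can equalize any two of FPR, FNR, and PPV across the groups, but the identity forces the third to differ.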
Being as charitable as possible, I think this is a case of missing the forest for the trees. People become very invested in predictive models, only to have concerns regarding equality of outcome arise at a later date. The natural inclination, especially if the continuing use of predictive models is crucial to your livelihood, is to try to square the circle and hack some sort of compromise solution into the algorithm.
But let's take a step back and make a couple of observations:
- The whole reason why you use a predictive model in the first place is because you don't know what your outcome distributions should be. If you know a priori what your distributions should be, that significantly undercuts the case for using a model.
- Predictive models are designed to replicate facts-on-the-ground for novel inputs. If your critique is that the facts on the ground are fundamentally broken then this too argues against the use of predictive models.
In the case where people are dead set on using a predictive model I'd tender the following arguments against modifying the model itself:
- Predictive models are good at taking existing patterns and extending them; stop trying to square the circle and let the technology do what the technology is good at.
- Unmodified, such models have the useful property (discussed above) that they are free from bias in the "unjustified, differential treatment of groups" sense. Setting aside the problem of broken training sets, this preserves the important and useful ability to generate outputs according to an unbiased decision-making process. If you start putting in adjustments to ensure certain outcome distributions you've introduced (benevolent) bias and thus lose this ability.
- De-coupling the model from equity adjustments increases transparency: it forces those adjustments into their own process, distinct from model learning, rather than wrapping everything together in one opaque process. That makes it much easier to understand the nature and magnitude of the adjustments being applied (see the sketch after this list).
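A minimal sketch of what that de-coupling might look like (the feature names, weights, and adjustment table are all invented for illustration; this is a shape, not a recommendation): the model stays a pure error-minimizing predictor, and the equity adjustment is a separate, visible step.

```python
def model_score(features):
    """Stand-in for the learned model: maps inputs to a raw score.
    Trained purely to minimize prediction error; no equity logic inside."""
    return 0.4 * features["prior_incidents"] + 0.1 * features["age_factor"]

def adjusted_score(raw_score, group, policy):
    """Equity adjustment as its own explicit, documented step. Because it
    lives outside the model, its nature and magnitude are easy to inspect."""
    return raw_score + policy.get(group, 0.0)

# The adjustment is a visible table, not a tweak buried inside training.
policy = {"group A": 0.0, "group B": -0.05}

raw = model_score({"prior_incidents": 2, "age_factor": 1.5})
print(f"raw: {raw:.2f}, adjusted for group B: {adjusted_score(raw, 'group B', policy):.2f}")
```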
In Conclusion
Current discussion of "algorithmic bias" could benefit from some nuance:
- ML/AI models which are unacceptably error-prone or obviously broken in some other way should be discarded. There doesn't appear to be anyone arguing otherwise.
- It is very, very rarely (never?) the case that ML/AI algorithms themselves engage in any sort of biased reasoning. This property compares favorably with the previous generation of expert systems, as well as human decision makers, both of which have been shown to engage in biased reasoning.
- In the case where ML/AI models result in biased outcomes this is usually due to unequal prevalences in the training data. Determining whether the training data adequately captures reality is a related, but ultimately separate, problem.
- Predictive models aren't an appropriate tool for situations where equality of outcome is a driving concern.