At this point, it’s abundantly clear that if we aren’t extremely careful, AI will identify and exploit harmful biases in training data. And AI-based discrimination — even if it’s unintentional — can have dire regulatory, reputational, and revenue impacts. Knowing this, many companies, governments, and NGOs have adopted “fairness” as a core principle for ethical AI. Espousing this principle is one thing, but turning it into a set of consistently practiced, enterprisewide policies and processes is quite another. As with most things AI, the devil is in the data.

The first issue plaguing companies is that there are over 20 different mathematical representations of fairness. Broadly speaking, AI fairness criteria fall into two camps:

  • Accuracy-based criteria that optimize equality of treatment. These criteria compare measures of a model’s accuracy, such as the true positive or false positive rate, across groups. For example, a hiring algorithm might be tuned so that its precision (the share of its positive predictions that are actually correct) is equal for men and women. The problem with optimizing for equal accuracy across groups is that it assumes the training data is an accurate and fair representation of reality.
  • Representation-based criteria that optimize equity of outcome. These metrics theoretically correct for historical inequities by ensuring equitable outcomes across groups. In the hiring example, an algorithm optimized for demographic parity would hire men and women in proportion to their share of the overall population, regardless of potential differences in qualifications. (Both families of criteria are illustrated in the sketch that follows this list.)
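
To make the distinction concrete, here is a minimal Python sketch that computes one metric from each family on the hiring example: precision per group (an accuracy-based criterion) and selection rate per group (the quantity that demographic parity equalizes). The data, function names, and group labels are purely illustrative assumptions, not drawn from any particular fairness toolkit.

```python
# Minimal sketch: one metric from each family of fairness criteria,
# computed on hypothetical binary hiring data (1 = hire / qualified).

def precision(y_true, y_pred):
    """Accuracy-based: share of positive predictions that are correct, TP / (TP + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    predicted_pos = sum(y_pred)
    return tp / predicted_pos if predicted_pos else 0.0

def selection_rate(y_pred):
    """Representation-based: share of candidates selected, regardless of correctness."""
    return sum(y_pred) / len(y_pred)

# Hypothetical, illustrative data only.
groups = {
    "men":   {"y_true": [1, 0, 1, 1, 0, 1], "y_pred": [1, 0, 1, 1, 1, 0]},
    "women": {"y_true": [1, 1, 0, 1, 0, 0], "y_pred": [1, 0, 0, 1, 0, 0]},
}

for name, d in groups.items():
    print(f"{name}: precision={precision(d['y_true'], d['y_pred']):.2f}, "
          f"selection rate={selection_rate(d['y_pred']):.2f}")
```

Comparing the per-group numbers shows why the choice of criterion matters: when underlying qualification rates differ between groups, equalizing an accuracy measure like precision and equalizing selection rates generally cannot both be satisfied at once.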

The burning question, of course, is: Which criteria should a company employ for a given use case? My research has found that while diligently applying fairness criteria is essential, there are also best practices organizations can adopt across the AI lifecycle — from project conception to deployment and continuous monitoring — that significantly reduce the likelihood of harmful bias and discrimination. To learn more, listen to the episode of the What It Means podcast where I discuss bias in AI.

This report is the latest addition to a substantial body of research on responsible and ethical AI at Forrester. In case you missed them, here are some of the other key reports on the subject, grouped by the common ethical AI principles:

This report on fairness in AI was an absolute pleasure to research and write. Thank you to the wonderful clients, colleagues, and thought leaders who contributed their time and wisdom to this critical research! Want to talk more about this? Schedule an inquiry with me.

Note: Forrester client access is required for research featured in this post.