
AI Fairness – An honest introduction

Blog: Capgemini CTO Blog

Ethical AI focuses on the societal impact of AI systems and their perceived fairness. There is a multitude of overlapping definitions, but the common denominator is always there: will this AI system affect our society in a positive or negative way? As with all other transformative technical leaps, the implications of integrating AI systems into our society are far-reaching and often not obvious. Almost all major international organisations have published working papers or reports on Ethical AI or AI for good: the IEEE, the ITU, the OECD, the European Commission, the UK government, Google, Microsoft, IBM; the list is long enough to be a blog post by itself! This shows that people care about the impact of AI systems on their daily lives. An earlier post from the AI Guild has already touched upon this need to build trust in AI systems. To achieve this, current ML research has (re-)introduced terms around Fairness, Accountability, Transparency & Explainability, and Causality. Succinctly, here is what each of them refers to:

- Fairness: the system does not systematically disadvantage particular individuals or groups.
- Accountability: it is clear who is responsible for the system's decisions and their consequences.
- Transparency & Explainability: how the system works, and why it made a given decision, can be inspected and explained.
- Causality: the system's conclusions rest on genuine cause-and-effect relationships rather than spurious correlations.

We note that, while interlinked, these terms are not equivalent.

A fair system might be opaque; we do not necessarily understand how an intelligent defibrillator works, but we all agree that it has a life-saving impact on all people. A perfectly explainable system might be completely arbitrary; one might use a simple risk-estimation method (e.g. logistic regression) for a classification task with no apparent reason as to why a given threshold was chosen. A causal link might be completely unfair; an A/B-validated marketing campaign might be successful because it exploits people's vulnerabilities (e.g. shopping addictions). When employing an AI system, all these points should be addressed in an informed way. For the rest of this blog post, we will focus on a simple exposition of fairness and the metrics associated with it.

Research activity in fairness in AI has exploded in recent years; Google Scholar suggests 6 results for the term "AI Fairness" between 2010 and 2015, but 444 results from 2016 to mid-October 2020. This has, correctly, led to several different definitions of fairness. We emphasise that having multiple definitions of AI fairness is correct and expected: in different settings, the definition of fairness is not straightforward. While we all agree that unfair discrimination and biases are wrong, they are not clearly defined in all cases, and even when they are, their remedy is not obvious either. We will explore three simple fairness criteria: Demographic Parity, Equal Opportunity, and Equal Accuracy, focusing on their application to classification tasks. Each of them can be interpreted as follows:

- Demographic Parity: every group receives positive decisions (e.g. "hireable") at the same rate.
- Equal Opportunity: among the individuals who truly deserve a positive outcome, every group has the same chance of receiving one (equal true positive rates).
- Equal Accuracy: the classifier is equally accurate for every group.
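In probabilistic notation (a sketch using the standard symbols \hat{Y} for the prediction, Y for the true label, and A for the protected attribute, rather than any notation from the original example), the three criteria can be written for two groups a and b as:

P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b)  (Demographic Parity)
P(\hat{Y}=1 \mid Y=1, A=a) = P(\hat{Y}=1 \mid Y=1, A=b)  (Equal Opportunity)
P(\hat{Y}=Y \mid A=a) = P(\hat{Y}=Y \mid A=b)  (Equal Accuracy)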

We will use a simplified example of an AI system that classifies job applicants as "hireable" or not. To that end, we assume we do not want to use gender information when making a hiring decision, i.e. we do not want an applicant's sex to be a factor in our decision making. Let's start:
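As a minimal sketch of how these three criteria can be measured in practice, here is a short Python snippet; the true labels, predictions, and gender values below are entirely hypothetical assumptions for illustration, not data from a real hiring system.

import numpy as np

# Hypothetical data: 1 = "hireable", 0 = not; gender is the protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])   # truly qualified or not (made up)
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 0])   # the model's "hireable" decision (made up)
gender = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])

def group_metrics(y_true, y_pred, group, value):
    """Selection rate, true positive rate, and accuracy for one group."""
    mask = group == value
    t, p = y_true[mask], y_pred[mask]
    selection_rate = p.mean()          # compared across groups for Demographic Parity
    tpr = p[t == 1].mean()             # compared across groups for Equal Opportunity
    accuracy = (t == p).mean()         # compared across groups for Equal Accuracy
    return selection_rate, tpr, accuracy

for g in ["F", "M"]:
    sr, tpr, acc = group_metrics(y_true, y_pred, gender, g)
    print(f"group {g}: selection rate={sr:.2f}, TPR={tpr:.2f}, accuracy={acc:.2f}")

With these made-up numbers the two groups end up with the same true positive rate (Equal Opportunity holds) but different selection rates and accuracies, which already hints at the trade-offs discussed next.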

So, is there any universal AI fairness solution? No. The verdict on this question is already in (see Kleinberg et al. (2016) for the important result showing that "key notions of fairness are incompatible with each other"), but that does not mean that fair AI systems are unattainable. It means that AI, like its makers, is an imperfect framework that must be tuned and trained for a particular task. Fairness in university admissions and fairness in face identification do not refer to the same concept. We should accept that we need to make informed trade-offs between different fairness metrics and stand accountable for them.
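One purely illustrative instance of this tension, using assumed numbers rather than results from any study: suppose 60% of applicants from group A are truly qualified but only 30% from group B, and a classifier achieves a true positive rate of 0.8 and a false positive rate of 0.1 in both groups, so Equal Opportunity holds. Group A's selection rate is then 0.8 × 0.6 + 0.1 × 0.4 = 0.52, while group B's is 0.8 × 0.3 + 0.1 × 0.7 = 0.31, so Demographic Parity fails. Whenever the underlying base rates differ, satisfying one criterion generally comes at the expense of another.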

To conclude, it is equally easy to be lulled into a false sense of security or to panic about an AI system's societal impact. It should not be that way, and it does not have to be. For an informed, experienced, and current approach to your AI solution and its fairness implications, please contact me here.

Through Capgemini UK's Ethical AI Guild, we provide guidance on ethical issues and practices. Made up of experienced AI practitioners, the guild looks to accelerate our clients' journeys towards ethical AI applications that benefit all.
