
AI Ethics

Note
I recently had this article published in Synapse Magazine and wanted a broader audience to access it. So here it is.

As Data Science and AI practitioners, and organisations looking to adopt AI, we need to be aware of how systemic oppression and discrimination are represented within our data. It is important for us to identify problematic biases and tackle them head-on to ensure the products we put in people’s hands make their lives better.

Statistically, a biased model is one that has learnt its training data so well that it cannot make accurate predictions on new, unseen data. This is the type of bias that Data Scientists deal with most often, and it is a function of the model they are building. The biases dealt with far less often, the ones with ethical implications, are the biases that exist within the data the model consumes and the biases of the person building it. We acknowledge that systemic oppression exists; by definition, it is encoded within the institutions we interact with daily and, consequently, in the data we are exposed to.
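
To make the distinction concrete, here is a minimal sketch using entirely synthetic data and made-up group labels. The aggregate score is the kind of performance Data Scientists routinely check; the per-group breakdown is where the ethical kind of bias shows up.

```python
# A minimal sketch (synthetic data, illustrative group labels only) of how a model
# can look accurate in aggregate while failing an under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# 95% of examples come from group A, 5% from group B, and the relationship
# between features and label differs between the two groups.
n_a, n_b = 9500, 500
X_a = rng.normal(0, 1, size=(n_a, 5))
y_a = (X_a[:, 0] + X_a[:, 1] > 0).astype(int)
X_b = rng.normal(0, 1, size=(n_b, 5))
y_b = (X_b[:, 0] - X_b[:, 1] > 0).astype(int)   # a different pattern for group B

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * n_a + ["B"] * n_b)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The aggregate view: overall accuracy looks healthy.
print("overall test accuracy:", accuracy_score(y_te, model.predict(X_te)))

# The group-level view: the under-represented group fares far worse.
for g in ["A", "B"]:
    mask = g_te == g
    print(f"group {g} accuracy:", accuracy_score(y_te[mask], model.predict(X_te[mask])))
```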

Recently, an AI tool designed to take pixelated photographs of people and reconstruct them into a more accurate picture was posted on Twitter.

/posts/ai-ethics/designyourtrust.png
source: designyourtrust.com

While some results were humorous, one user highlighted that the AI was problematic when it converted an easily recognisable photo of Barack Obama into a white man. Other users had similar experiences when using images of people from non-White groups.

/posts/ai-ethics/BarackObama.png
source: Twitter / @Chicken3gg

The use case for this face depixelizer was to reconstruct facial features from pixelated images; however, what has been created is a model that places more emphasis on white features and works better on white people. This happened because the model was trained on a research dataset named Flickr-Faces-HQ (FFHQ), which consists mainly of white people. The skew towards white representation in datasets extends further than FFHQ and can be seen simply by googling “beautiful woman”, which returns pictures of largely young, white women. This can lead to models that work better for white people; facial recognition is a prominent example.
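
As a rough illustration of how such a skew can be surfaced before any training happens, here is a minimal sketch of a representation audit. The metadata fields and labels are placeholders for whatever annotation scheme your project uses, not anything shipped with FFHQ.

```python
# A minimal sketch of a representation audit, assuming each image can be given
# (or estimated) a demographic annotation. Field names and labels are illustrative.
from collections import Counter

records = [
    {"path": "img_00001.png", "skin_tone": "light"},
    {"path": "img_00002.png", "skin_tone": "light"},
    {"path": "img_00003.png", "skin_tone": "dark"},
    # ... the rest of the dataset's metadata
]

counts = Counter(r["skin_tone"] for r in records)
total = sum(counts.values())

for label, n in counts.most_common():
    print(f"{label}: {n} images ({n / total:.1%})")

# A heavily skewed distribution here is an early warning that the trained model
# is likely to work better for the over-represented group.
```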

While the above are examples of how hegemonic whiteness is perpetuated through imbalanced data, more insidious is how minority groups are represented by models trained on biased data. In April this year, another experiment on Twitter went viral when Google’s cloud image recognition platform labelled an image of a black man holding a thermometer as a “gun”, and labelled the same image, with the hand overlaid with white skin, as a “monocular”.

/posts/ai-ethics/Google.png
source: Twitter / @bjnagel

In a statement, Google said they found that some objects were mislabelled as firearms and that there was no evidence of systemic bias related to skin tone. However, without rigorous testing we can never know. Even though Google resolved the issue, the point remains: if we don’t critique and understand our data collection strategy, our models will learn the biases hidden in our data, and that can result in problematic classifications based on race.
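
The rigorous testing meant here is, at minimum, a disparity check: label an evaluation set, record the model’s predictions, and compare error rates across groups. The sketch below is a hypothetical outline of such a check with made-up labels and records, not a description of Google’s actual evaluation process.

```python
# A minimal sketch of a disparity test: compare false positive rates for a
# sensitive label ("gun") across skin tones. All records here are hypothetical.
import numpy as np

def false_positive_rate(y_true, y_pred, positive_label):
    """Share of truly-negative examples that the model flags as positive."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    negatives = y_true != positive_label
    if negatives.sum() == 0:
        return float("nan")
    return float(((y_pred == positive_label) & negatives).sum() / negatives.sum())

# Hypothetical evaluation records: (true object, predicted label, skin tone in image).
records = [
    ("thermometer", "gun", "dark"),
    ("thermometer", "monocular", "light"),
    ("thermometer", "thermometer", "dark"),
    ("thermometer", "thermometer", "light"),
    # ... many more evaluation images
]

for tone in ("dark", "light"):
    subset = [(t, p) for t, p, g in records if g == tone]
    y_true = [t for t, _ in subset]
    y_pred = [p for _, p in subset]
    fpr = false_positive_rate(y_true, y_pred, positive_label="gun")
    print(f"skin tone {tone}: 'gun' false positive rate = {fpr:.2f}")

# A persistent gap between these rates on a large, balanced evaluation set is
# exactly the evidence a claim of "no systemic bias" would need to rule out.
```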

This has far-reaching implications for non-White groups. Data collection needs to be understood and scrutinized. In America, data shows that people of colour are discriminated against by police more than white people: they are stopped more frequently at traffic stops, and police are more likely to use force against them. With the advent of predictive policing built on this data, recorded crime attributed to black communities keeps rising, because police are now using biased, historical data to do their jobs: patrols are sent where past records say crime is, more crime is recorded there, and the records become even more skewed. This feedback loop, coupled with the collider bias of racist practices, is not just perpetuating but compounding racial inequality.
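
The compounding effect can be shown with a toy simulation. The numbers below are invented, but the mechanism is the one described above: patrols follow the records, and offences only enter the records where patrols are looking.

```python
# A minimal, made-up simulation of the predictive-policing feedback loop. Two
# neighbourhoods have identical underlying offence rates, but the historical
# records start out skewed; the records then diverge even though behaviour never does.
import random

random.seed(0)

true_rate = {"A": 0.5, "B": 0.5}   # same chance of witnessing an offence per visit
recorded = {"A": 6, "B": 4}        # skewed historical records

for day in range(365):
    target = max(recorded, key=recorded.get)   # the patrol follows the data
    if random.random() < true_rate[target]:    # offences only enter the data
        recorded[target] += 1                  # where someone is looking

share_a = recorded["A"] / sum(recorded.values())
print(f"after one year, {share_a:.0%} of recorded crime sits in neighbourhood A")
```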

Some may argue that simply removing race as a variable will remove bias. However, if we remove race, we cannot ensure racial parity within our data, nor can we tell directly whether our models themselves are racially biased. Even with racial parity you cannot completely remove bias from your data, so one of the objectives of our models should be to optimize for fairness.
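
In practice this means keeping the protected attribute available for auditing even when it is excluded from the model’s inputs, because without it group-level fairness metrics cannot be computed at all. The sketch below uses illustrative placeholder data and two common metrics, the demographic parity gap and the equal opportunity gap.

```python
# A minimal sketch of a fairness audit using a protected attribute that was
# deliberately left out of the model's features. All data here is illustrative.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true positive rates between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return float(max(tprs) - min(tprs))

# Hypothetical audit data: model decisions, true outcomes, and the protected attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
group  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print("demographic parity gap:", demographic_parity_difference(y_pred, group))
print("equal opportunity gap:", equal_opportunity_difference(y_true, y_pred, group))
```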

Inequality exists in any data containing protected attributes, be it age, gender, race or religion, and optimizing for fairness should be a requirement of your solution, not a nice-to-have. The solutions we are building have more of an effect on people’s lives than people’s lives have on our models. If we do not correctly address the biases within our data, we are only perpetuating these systemic injustices and embedding them further within our world.