Is bias a problem in machine learning? 

Written by Lexi Birch

In all decision-making processes, whether human or machine, bias can result in unfair outcomes. We discuss why this can be a problem for machine learning, and what we can do about it.  

Machine Learning: Machine learning (ML) is very good at capturing signals and correlations in data. Deep learning, coupled with large datasets and plenty of computation, allows us to build systems that exploit these correlations. This behaviour lets us model complex problems, but it also means that these models are vulnerable to bias.

Bias: Bias is defined as results that are systematically prejudiced. If we look at why ML models might be biased, we see that the datasets used to train and test them contain biases. Bias is introduced when the data reflects the biases that exist in humans. Datasets can be further biased by what data is available, or simply by the dataset creator's frame of reference. For example, it is very common for a dataset to specialise in a particular demographic: someone developing a speech recognition system might collect predominantly recordings of male voices. Our job as machine learning practitioners is to separate the genuine signals in the data from the bias that is discriminatory or unfair.

Fairness: ML models will contain biases, but the real question is: does the product treat people fairly? Models are trained to maximise accuracy over a training set, and the model with the highest overall accuracy might not have the best performance for particular subsets of the data. Improving accuracy for a subpopulation, such as facial recognition for Black women, can mean slightly lower accuracy for a more frequent group, such as white men. Fairness means gauging whether models achieve the desired tradeoff between overall accuracy and performance for all subpopulations.
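As a minimal sketch of what this check looks like in practice (the column names and toy data below are invented for illustration), per-subgroup accuracy can be reported alongside the overall figure so that any gap is visible:

```python
# Minimal sketch: compare overall accuracy with accuracy per subgroup.
# The column names ("group", "label", "prediction") are illustrative only.
import pandas as pd

def accuracy_by_group(df: pd.DataFrame) -> pd.Series:
    """Return accuracy for each subgroup alongside the overall figure."""
    overall = (df["label"] == df["prediction"]).mean()
    per_group = (
        df.assign(correct=df["label"] == df["prediction"])
          .groupby("group")["correct"]
          .mean()
    )
    return pd.concat([per_group, pd.Series({"overall": overall})])

# Toy data: a model that is right more often for group A than for group B.
data = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   0,   1],
    "prediction": [1,   0,   1,   0,   0,   0],
})
print(accuracy_by_group(data))  # A: 1.00, B: 0.33, overall: 0.67
```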

Discrimination: With machine learning systems becoming more ubiquitous in automated decision making, it is crucial that we make these systems sensitive to the type of bias that results in discrimination, especially discrimination on illegal grounds. Machine learning is already being used to make or assist decisions in domains such as recruiting (screening job applicants), banking (credit ratings and loan approvals), the judiciary (recidivism risk assessments), welfare (benefit eligibility) and journalism (news recommender systems). Given the scale and impact of these industries, it is essential that we take measures to prevent unfair discrimination.

 

Example 

These experiments were run in 2017 by Rachael Tatman, a researcher at the University of Washington. She took a set of common words, collected recordings from people who reported their gender and place of origin, and compared the performance of Google's speech recognition software across these groups. The results show robust differences in accuracy across both gender and dialect, with lower accuracy for 1) women and 2) speakers from Scotland. This finding shows the need for sociolinguistically stratified validation of ML systems: before we can fix the problem, we need to be able to quantify it.

 

[Figure: Automatic Speech Recognition (ASR) word error rates by gender and dialect. Source: Rachael Tatman, Gender and Dialect Bias in YouTube's Automatic Captions (2017). Note that a lower word error rate indicates better performance.]
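The metric behind these results, word error rate (WER), is straightforward to compute, and stratifying it by speaker group is one way to quantify this kind of bias. The sketch below is illustrative only: the transcripts and group labels are invented, not Tatman's data.

```python
# Illustrative sketch: word error rate (WER) stratified by speaker group.
# The transcripts and group labels below are made up for demonstration.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)

# Hypothetical (group, reference transcript, ASR output) samples.
samples = [
    ("female, Scotland",  "turn the lights off", "turn the light soft"),
    ("male, California",  "turn the lights off", "turn the lights off"),
]
for group, ref, hyp in samples:
    print(f"{group}: WER = {wer(ref, hyp):.2f}")
```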

Dealing with Bias in ML 

Because of fears that algorithms will further entrench and propagate human biases, there have been significant efforts by the Artificial Intelligence (AI) community to avoid and correct discriminatory bias in algorithms, while also making them more transparent.  

There has been an explosion of academic interest in methods for developing fair algorithms; however, few of these methods have been implemented in production machine learning systems used by governments or private companies, and there is little transparency about how ML decisions are made and how fair they really are. I describe three general approaches which have been deployed:

 

Data transformation:

The fairness-enhancing approaches that have achieved the most practical success are efforts to improve performance by adding training data, especially for underrepresented groups. IBM's facial gender classification system was performing poorly for dark-skinned people, and dark-skinned women in particular. IBM responded with a system trained on more representative data, which reportedly reduced the error rate on dark-skinned women tenfold.
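A very simple form of data transformation is to rebalance the training set so that underrepresented groups are not swamped by the majority. The sketch below oversamples existing examples to parity; collecting genuinely new, more representative data, as IBM did, is usually preferable, and the column names and counts here are hypothetical.

```python
# Illustrative sketch: rebalance a training set by oversampling
# underrepresented groups. Column names and data are hypothetical.
import pandas as pd

def oversample_to_parity(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Resample each group (with replacement) up to the size of the largest group."""
    target = df[group_col].value_counts().max()
    parts = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(parts).reset_index(drop=True)

train = pd.DataFrame({
    "group": ["underrepresented"] * 10 + ["majority"] * 90,
    "label": [1] * 10 + [0] * 90,
})
balanced = oversample_to_parity(train, "group")
print(balanced["group"].value_counts())  # 90 examples in each group
```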

 

Algorithm manipulation:

Other methods, of intense interest in academia, manipulate the learning algorithm itself, introducing a penalty during training that encourages the model to reduce discrimination. These can be hard to implement in practice, especially with overlapping characteristics such as gender, age and race.
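As a rough sketch of the idea (not any specific published method), a fairness term can be added to the training loss so that the model is penalised when its average predicted score differs between two groups. The model, data and penalty weight below are all placeholders.

```python
# Illustrative sketch: a demographic-parity-style penalty added to the loss.
# Model, data and the weight `lam` are hypothetical placeholders.
import torch

def fairness_penalty(scores: torch.Tensor, groups: torch.Tensor) -> torch.Tensor:
    """Squared gap between the mean predicted score for group 0 and group 1."""
    return (scores[groups == 0].mean() - scores[groups == 1].mean()) ** 2

model = torch.nn.Linear(5, 1)            # stand-in for a real model
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
bce = torch.nn.BCEWithLogitsLoss()
lam = 1.0                                # strength of the fairness term

x = torch.randn(64, 5)                   # toy features
y = torch.randint(0, 2, (64,)).float()   # toy labels
g = torch.randint(0, 2, (64,))           # toy group membership (0 or 1)

for _ in range(100):
    optimiser.zero_grad()
    logits = model(x).squeeze(-1)
    scores = torch.sigmoid(logits)
    loss = bce(logits, y) + lam * fairness_penalty(scores, g)
    loss.backward()
    optimiser.step()
```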

 

Outcome manipulation:

Here, known biases are adjusted for directly on the outputs of the models, for example by setting decision thresholds separately for different groups.
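In the sketch below the per-group thresholds are made up; in practice they would be calibrated on held-out data and reviewed carefully before deployment.

```python
# Illustrative sketch: apply a group-specific decision threshold to model scores.
# The thresholds here are hypothetical and would be calibrated on held-out data.
import numpy as np

def predict_with_group_thresholds(scores, groups, thresholds, default=0.5):
    """Return 0/1 decisions using a per-group cutoff on the model scores."""
    scores = np.asarray(scores)
    cutoffs = np.array([thresholds.get(g, default) for g in groups])
    return (scores >= cutoffs).astype(int)

scores = [0.62, 0.48, 0.55, 0.40]
groups = ["A", "A", "B", "B"]
thresholds = {"A": 0.60, "B": 0.45}   # made-up, group-specific cutoffs
print(predict_with_group_thresholds(scores, groups, thresholds))  # [1 0 1 0]
```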

 

Dealing with Bias at Aveni

At Aveni we are putting significant effort into understanding, tracking and mitigating bias.

 

One factor which reduces our exposure to bias is that many of our models are geared towards assisting humans, augmenting their ability to make decisions quickly and with maximum information. This Human+ approach means that the decisions our models make are less likely to affect the fairness of the overall outcome. Where models make predictions, we link each decision directly to the evidence used to make it. This makes the model as transparent as possible and further helps to mitigate problems with bias.

 

The other key piece in making sure that bias in our systems does not lead to unfair outcomes is rigorous testing. We regularly review our results to verify accuracy, precision and recall on different subgroups, and use these reviews to continually improve our models. We also produce annual bias reports, which can be used for auditing and reporting purposes.

 

In circumstances where we have access to large enough datasets, we deploy fairness toolkits to investigate and mitigate bias in our datasets and in our models. We are continuously evaluating the latest tools and research to deliver the most accurate and fair models possible.
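The post does not name which toolkits these are; as one illustration of the kind of check such toolkits support, the open-source Fairlearn library can break standard metrics down by a sensitive attribute (the data below is invented):

```python
# Illustrative example using the open-source Fairlearn library to report
# metrics per subgroup. The labels, predictions and groups are made up.
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
gender = ["f", "f", "f", "f", "m", "m", "m", "m"]

frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "recall": recall_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(frame.overall)       # metrics on the whole test set
print(frame.by_group)      # the same metrics broken down by gender
print(frame.difference())  # largest gap between groups for each metric
```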

 

To learn more about how our cutting-edge techniques can deliver the outcomes you want, visit: work with us

 

You can also find us on  LinkedIn and Twitter  
