Eliminating bias in biometric authentication: How to make your algorithms work for you

December 15, 2021

While bias can enter at any stage of the identity lifecycle, biometric technology itself is not inherently biased; it is the design of biometric technology that can introduce discrimination.


Biometric systems analyze an individual's physiological or behavioral traits for the purposes of identity verification and authentication. This is often done through fingerprint and facial recognition technology built on machine learning and AI, all powered by algorithms. Bias occurs when an algorithm operates in a discriminatory fashion, which often stems from how it is built, designed or tested.

There are real-world implications of biased algorithms, especially for biometric identity authentication. African-American and Asian faces are 10 to 100 times more likely to be misidentified by facial recognition than Caucasian faces. According to a study of 189 algorithms, face recognition technologies are the least accurate on women of color. There is also the issue of over-representation in data sets. According to the Brookings Institution, researchers at Georgetown Law School found over 115 million American adults are in facial recognition networks used by law enforcement, and that African-Americans were more likely to be singled out because of their over-representation in databases of mug shots. Consequently, African-American faces had more opportunities to be falsely matched, which produced a biased effect.

While we know biometric bias is wrong, preventing it is not so simple. The first step in combating bias in a biometric verification system is understanding how it happens.

Biometric bias is the result of two components: inputting biased data into the biometric authentication system, and biased analysis of that data. Algorithms are trained using datasets. When a dataset skews towards certain characteristics, the machine learning model focuses more on those characteristics. This is known as over-fitting, and it leaves the system less able to identify patterns that fall outside those characteristics. The model is therefore not deliberately biased against a certain race or age; it is simply less able to accurately identify the demographics that were underrepresented in the original dataset.
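To make the effect concrete, here is a minimal simulated sketch in Python. The group names, score distributions, and acceptance threshold are illustrative assumptions, not measurements from any real system; the point is only how a model over-fitted to a majority group produces uneven rejection rates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical similarity scores from a face-matching model whose training
# data over-represented Group A. Genuine (same-person) comparisons for the
# under-represented Group B score lower on average, a typical symptom of
# over-fitting to the majority group.
genuine_scores = {
    "group_A": rng.normal(loc=0.80, scale=0.08, size=10_000),
    "group_B": rng.normal(loc=0.68, scale=0.12, size=10_000),
}

THRESHOLD = 0.60  # accept a match when similarity >= this value

for group, scores in genuine_scores.items():
    # False non-match rate (FNMR): genuine users the system wrongly rejects.
    fnmr = np.mean(scores < THRESHOLD)
    print(f"{group}: FNMR = {fnmr:.2%}")
```

Run this and Group B is rejected many times more often than Group A at the same threshold, even though race and age appear nowhere in the pipeline; the disparity comes entirely from the skew in the training data.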

The second component of biometric bias, the assessment of the data, refers to how the data itself is interpreted. According to Towards Data Science, there are multiple types of human cognitive bias that can negatively impact the interpretation of data, such as confirmation bias, where we interpret the data only in a way that confirms our preconceived ideas.

There are real-world implications of both using data sets that don't include diverse faces and analyzing the data with bias. In 2020, a Michigan man was arrested for a crime he didn't commit. Why? Because the biometric facial recognition system returned an inconclusive match, and the police officer interpreted it as a definite match and cause for arrest. In this case, the algorithm's lack of accuracy and the officer's own bias combined to produce the wrongful arrest.

Click here for more insight on Biometric Bias from Steve Ritter

We are living in a digital society, and our digital world should be at least as equitable as our physical world. According to Gartner, by 2022, AI-based face comparison will be used by 80% of organizations for document-centric identity proofing when onboarding new customers. It's crucial that we care about reducing biometric bias. It's about freedom. Mitek software plays a key role in deciding who is free to access essential services. All individuals have an intrinsic right to access digital services in an unbiased way.

What can we do about biometric authentication bias? We have a long way to go, but there are multiple solutions we can work towards.

First solution: Testing standards
First, we need a way to evaluate biometric bias. There is currently no standardized, third-party measurement for evaluating demographic bias in biometric authentication technologies.

The industry needs a way to evaluate the equity and inclusion of biometric technologies. This would give service providers a way to ensure that their solution is equitable, regardless of whether it was built in-house or based on third-party technology from a vendor. This benchmark would provide the public with the information they need to select a service provider that’s more equitable.
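As a rough illustration of what such a benchmark could report, here is a minimal sketch in Python. The function, its inputs, and the disparity ratio it prints are illustrative assumptions, not an existing standard; a real standard would specify curated datasets, collection protocols, and statistical significance tests.

```python
import numpy as np

def disparity_report(scores_by_group, labels_by_group, threshold):
    """Print per-group error rates and the worst-case FNMR disparity ratio."""
    fnmr = {}
    for group, scores in scores_by_group.items():
        genuine = labels_by_group[group] == 1    # same-person comparisons
        impostor = labels_by_group[group] == 0   # different-person comparisons
        fnmr[group] = float(np.mean(scores[genuine] < threshold))  # wrongly rejected
        fmr = float(np.mean(scores[impostor] >= threshold))        # wrongly accepted
        print(f"{group}: FNMR={fnmr[group]:.2%}, FMR={fmr:.2%}")
    # A ratio close to 1.0 means false rejections fall evenly across groups.
    ratio = max(fnmr.values()) / max(min(fnmr.values()), 1e-9)
    print(f"Worst-case FNMR disparity ratio: {ratio:.2f}")

# Synthetic demo data; a real benchmark would use consented, demographically
# balanced evaluation sets rather than random numbers.
rng = np.random.default_rng(0)
scores = {g: rng.uniform(0.0, 1.0, 1_000) for g in ("group_A", "group_B")}
labels = {g: rng.integers(0, 2, 1_000) for g in ("group_A", "group_B")}
disparity_report(scores, labels, threshold=0.6)
```

Publishing a single disparity ratio alongside overall accuracy would give service providers, and the public, a simple, comparable signal of how evenly a system's errors are distributed across demographic groups.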

Second solution: Global AI guidelines
Determining 'what is right' goes beyond creating accuracy benchmarks; we also need ethical guidelines. Until there are ethical guidelines for the use of this technology, there is no way to know what is 'right.'

AI ethical guidelines would serve to solidify the rights and freedoms of individuals using, or subject to, data-driven biometric technologies. Until we define what is and is not an ethical use of biometric technology, no metric or benchmark can truly gauge the quality of the technology.

Fortunately, many developed countries are already discussing what this may look like. In the U.S., the Biden Administration is in talks to create an AI Bill of Rights. The U.K. has recently released a ten-year National AI Strategy. The EU is currently working through the proposal of the EU AI Act. However, we need to do more than talk about AI and its implications in theory; we need to act on a global scale.

Our technology should work for us, not against us. Right now, we know that bias exists in the technology we all use every day. But as we look ahead to standardized testing for demographic bias and agreed-upon ethical guidelines, the possibilities open up. Once we reach a world where we can eliminate bias from our tech and algorithms, what's next? How can we use biometrics to build a more equitable world and improve customer experience?


Other Biometric Bias content:

eBook: Biometrics and bias: the science of inclusivity

Blog: Biometric bias in the transgender and non-binary community

Innovator video: Biometrics, bias, and age verification | Filip Verley

Blog: What is Demographic Bias in Biometrics?