By Stephen Ritter | CTO at Mitek
Biometric technology itself is not inherently biased — it is the design of biometric technology that can introduce discrimination.
Biometric systems analyze the physiological or behavioral traits of an individual for the purposes of identity verification and authentication. This is often conducted through fingerprint and facial recognition technology built on machine learning and AI — all powered by algorithms. Bias occurs when the algorithm operates in a discriminatory fashion, which often stems from how the algorithm is built, designed or tested.
There are real-world implications of biased algorithms. African-American and Asian faces are 10 to 100 times more likely to be misidentified by facial recognition than Caucasian faces, and according to a study of 189 algorithms, face recognition technologies are least accurate on women of color. There is also the issue of over-representation in data sets. According to the Brookings Institution, researchers at Georgetown Law School found over 115 million American adults are in facial recognition networks used by law enforcement, and that African-Americans were more likely to be singled out because of their over-representation in databases of mug shots. Consequently, African-American faces had more opportunities to be falsely matched, which produced a biased effect.
While we know biometric bias is wrong, preventing it is not so simple. The first step in combating bias is understanding how it happens.
Biometric bias is the result of two components: biased data fed into the system and biased analysis of that data. Algorithms are trained on datasets. When a dataset skews toward certain characteristics, the machine learning model fixates on those characteristics, a phenomenon known as overfitting, and becomes less able to recognize patterns that fall outside them. The resulting model is not deliberately biased against a certain race or age group; it is simply less able to accurately identify the demographics that were underrepresented in its original training dataset.
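As a toy illustration of this overfitting effect, the sketch below trains a one-threshold face verifier on a pool that is 95% group A and 5% group B, then measures each group's error rate. All distributions, numbers, and group labels here are invented for illustration, not drawn from any real system; the point is only that a threshold fit to a skewed pool lands where the majority group's scores separate best, leaving the underrepresented group with a much higher error rate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical verification system: a pair of face images is accepted as the
# same person if their embedding distance falls below a learned threshold.
# Group B's score distributions are shifted relative to group A's (an assumed
# proxy for capture conditions the model generalizes to poorly).
def simulate(n, genuine_mu, impostor_mu, sigma=0.08):
    return (rng.normal(genuine_mu, sigma, n),   # distances for genuine pairs
            rng.normal(impostor_mu, sigma, n))  # distances for impostor pairs

gen_a, imp_a = simulate(950, genuine_mu=0.30, impostor_mu=0.70)  # 95% of training
gen_b, imp_b = simulate(50,  genuine_mu=0.45, impostor_mu=0.60)  # 5% of training

# "Training": pick the single threshold that minimizes total error on the
# skewed pool. It ends up tuned almost entirely to group A's distributions.
train_gen = np.concatenate([gen_a, gen_b])
train_imp = np.concatenate([imp_a, imp_b])
thresholds = np.linspace(0, 1, 1001)
errors = [np.mean(train_gen >= t) + np.mean(train_imp < t) for t in thresholds]
t_star = thresholds[int(np.argmin(errors))]

def error_rate(gen, imp, t):
    fnmr = np.mean(gen >= t)  # genuine pairs wrongly rejected
    fmr = np.mean(imp < t)    # impostor pairs wrongly accepted
    return fnmr + fmr

# Evaluate each group separately on fresh samples.
err_a = error_rate(*simulate(2000, 0.30, 0.70), t_star)
err_b = error_rate(*simulate(2000, 0.45, 0.60), t_star)
print(f"threshold={t_star:.2f}  group A error={err_a:.3f}  group B error={err_b:.3f}")
```

Under these assumed distributions, group B's error rate comes out many times higher than group A's even though the training procedure never references demographics at all, which is exactly the mechanism described above.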
The second component of biometric bias, the assessment of the data, refers to how the data itself is interpreted. According to Towards Data Science, multiple types of human cognitive bias can distort the interpretation of data, such as confirmation bias, where we interpret data only in ways that confirm our preconceived ideas.
There are real-world implications both of using data sets that don't include diverse faces and of biased analysis of the data. In 2020, a Michigan man was arrested for a crime he didn't commit. Why? Because the biometric facial recognition system returned an inconclusive match, and the police officer interpreted it as a definite match and cause for arrest. In this case, both the algorithm's inaccuracy and human bias contributed to the wrongful arrest.
We are living in a digital society, and our digital world should be at least as equitable as our physical one. According to Gartner, by 2022, AI-based face comparison will be used by 80% of organizations for document-centric identity proofing in the onboarding of new customers. As 2022 rapidly approaches, it's crucial that we care about reducing biometric bias. In my view, it's about freedom: our software plays a key role in deciding who is free to access essential services. So yes, I believe all individuals have an intrinsic right to access digital services in an unbiased way.
What can we do about biometric bias? We have a long way to go, but there are multiple solutions for decreasing biometric bias that we can work towards.
First solution: Testing standards
First, we need a way to evaluate biometric bias. There is currently no standardized, third-party measurement for evaluating demographic bias in biometric technologies.
Such a standard would give the industry a way to evaluate the equity and inclusion of biometric technologies. Service providers could verify that their solutions are equitable, whether built in-house or based on third-party technology from a vendor, and the benchmark would give the public the information they need to choose a more equitable provider.
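One candidate metric such a benchmark could report is a demographic disparity ratio: the worst per-group false non-match rate divided by the best, so that a perfectly equitable system scores 1.0. The function name, group labels, and rates below are all hypothetical, a minimal sketch of what a standardized report might compute rather than any existing standard.

```python
from typing import Dict

def demographic_disparity(fnmr_by_group: Dict[str, float]) -> float:
    """Ratio of the worst to the best false non-match rate across groups.

    1.0 means every group is rejected at the same rate; larger values
    indicate a larger demographic differential.
    """
    rates = list(fnmr_by_group.values())
    worst, best = max(rates), min(rates)
    return worst / best if best > 0 else float("inf")

# Hypothetical per-group false non-match rates measured on a balanced test set.
report = {"group_a": 0.010, "group_b": 0.034, "group_c": 0.015}
print(f"disparity ratio: {demographic_disparity(report):.1f}x")  # prints "disparity ratio: 3.4x"
```

A ratio like this is easy for non-specialists to compare across vendors, which is the point of a public benchmark: a single number a service provider can publish and a customer can check.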
Second solution: Global AI guidelines
Determining 'what is right' goes beyond creating accuracy benchmarks; we also need ethical guidelines. AI ethical guidelines would serve to solidify the rights and freedoms of individuals using, or subject to, data-driven biometric technologies. Until we define what is and is not an ethical use of biometric technology, no metric or benchmark can gauge the quality of that technology.
Fortunately, many developed countries are discussing what this may look like. In the U.S., the Biden Administration is in talks to create an AI Bill of Rights. The U.K. has recently released a 10-year National AI Strategy. The EU is currently working through its proposed AI Act. However, we need to do more than talk about AI and its implications in theory; we need to act on a global scale.
Our technology should work for us, not against us. Right now, we know that bias exists in the technology we all use every day. But as we move toward standardized testing for demographic bias and agreed-upon ethical guidelines, the possibilities open up. Once we reach a world where we can eliminate bias from our technology and algorithms, what comes next? How can we use biometrics to make a more equitable world?
Eliminating Biometric Bias: How to Make Your Algorithms Work for You by Stephen Ritter - CTO at Mitek