Biometric technology, as a unique identifier for identity verification, has worked its way into many aspects of everyday life, making activities ranging from unlocking your phone to accessing your bank account or completing new customer onboarding safer and easier. It’s a tool that customers and institutions have come to rely on, and one that customers often rate as more convenient than other methods. As more transactions are completed online, biometric technology has become the foundation of trust between businesses and the people they serve in a digital-first world.
For institutions, delivering convenience and trust means ensuring that these systems work equally well for everyone. Biometric systems must perform consistently for customers across all demographics, both to ensure a consistent user experience and to prevent customer exclusion, reduce mistrust, and avoid compliance risk. When a customer is unable to authenticate because an algorithm fails to interpret age, gender, or regional differences, that customer is more likely to abandon an onboarding process, move to another institution, and tell others that the organization’s system is discriminatory or flawed. This is why one of the most important metrics in biometric verification, the Bona Fide Presentation Classification Error Rate (BPCER), has become central to conversations about inclusivity and fairness in verification algorithms.
BPCER measures how often a biometric system incorrectly rejects a genuine user during verification. Understanding and minimizing BPCER is therefore central to conversations about biometric bias in identity verification. In general, a lower BPCER indicates a more inclusive, user-friendly system that recognizes legitimate users across a wide range of conditions, while a high BPCER can signal potential bias and should prompt a review of rejection rates across demographic groups. In practice, BPCER is a starting point for evaluating both system performance and demographic fairness: a truly fair system exhibits both a low absolute BPCER and statistical parity in how it treats all demographic groups, delivering a smooth, equitable user experience for everyone. Low error rates and equitable performance across demographics are the gold standard for biometric inclusivity.
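To make the metric concrete, here is a minimal sketch of how BPCER could be computed overall and broken out by demographic group from verification logs. The record fields ("accepted", "group") and the sample data are illustrative assumptions for this example, not any vendor’s actual schema.

```python
# Minimal sketch: BPCER from genuine (bona fide) verification attempts.
# BPCER = bona fide presentations wrongly rejected / total bona fide presentations.
from collections import defaultdict

def bpcer(records):
    """Fraction of genuine attempts that were incorrectly rejected."""
    if not records:
        return 0.0
    rejected = sum(1 for r in records if not r["accepted"])
    return rejected / len(records)

def bpcer_by_group(records):
    """Break BPCER out by demographic group to check for disparities."""
    groups = defaultdict(list)
    for r in records:
        groups[r["group"]].append(r)
    return {g: bpcer(rs) for g, rs in groups.items()}

# Illustrative log of genuine verification attempts only.
logs = [
    {"group": "A", "accepted": True},
    {"group": "A", "accepted": True},
    {"group": "B", "accepted": False},  # a false rejection
    {"group": "B", "accepted": True},
]
print(bpcer(logs))            # 0.25 overall
print(bpcer_by_group(logs))   # {'A': 0.0, 'B': 0.5} -- a disparity worth investigating
```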
In this blog, we’ll go further into why biometric bias matters, its common causes, how it can be addressed, and best practices for reducing bias and minimizing BPCER.
Biometrics to combat human biases
Biometric systems are often scrutinized for replicating human biases, but a well-designed system can actually reduce bias when compared to human decision-making. Bias in algorithms typically stems not from the biometric technology itself, but from imbalanced or non-representative training data that reflects societal inequities. By intentionally curating large and diverse datasets, applying rigorous demographic performance testing, and continuously tuning fairness metrics throughout development, biometric authentication systems can be engineered to perform more consistently and objectively across populations. While no system is ever completely free of bias, responsible data practices and ongoing monitoring make it possible for biometrics to mitigate bias rather than reinforce it.
Why biometric bias matters in identity verification
The rise of tools like generative AI has empowered fraudsters to take ever-more-sophisticated approaches to digital fraud. Banks and financial institutions know they must improve their security posture against these and other emergent threats, but fairness and inclusivity must evolve in parallel, with a commitment to training on all populations so that no legitimate users are disadvantaged. Biometric bias can occur when a system is trained to authenticate one demographic well but is never exposed to diverse data sets, leaving it to struggle with other populations. For example, a person from a region that was underrepresented in the system’s training data may experience higher false rejection rates.
Recent independent research highlights this challenge. In a 2024 large-scale study of remote identity verification (RIdV), researchers tested five commercial systems against nearly 4,000 diverse participants. Three out of the five systems studied struggled to correctly authenticate users with darker skin tones, resulting in higher rejection rates. The study illustrates that working with a biometrics vendor that proactively tests and trains its systems to avoid unintentional bias is essential.
For businesses, it’s important to understand that customer trust is fragile. Nearly 80% of financial institutions now rely on document-centric identity proofing for new customers, so verifying identities equitably across populations is essential to ensure everyone has access to financial products. Bias undermines confidence not just in the biometric matching process but in the institution itself, among the demographic group affected as well as other customers who hear about the issue and come to believe the process or institution is systematically biased. Regulators are increasingly focused on fairness, and biased outcomes in digital identity systems can create compliance exposure. Bias also adds operational cost when customers who are unable to use biometric authentication require manual support or review, or abandon the process altogether.
What are the common causes of biometric bias?
Biometric systems aren’t intentionally trained to be biased. Rather, one of the most common causes of bias is an unbalanced training data set. When the data used to train a biometric system covers only certain age groups, genders, or regions, the resulting model will perform better for those groups than for people who are underrepresented in the training data.
Facial presentation attack detection systems face similar challenges when appearance varies with cultural expression. Studies of liveness detection models have shown higher false rejections for users wearing headscarves, hoods, or hijabs, suggesting that some algorithms haven’t been sufficiently trained on these presentations; some providers’ biometric authentication systems can return up to 30 percent false rejects for users wearing hijabs. Incorporating this diversity during the training and validation phases has been shown to lower BPCER.
Environmental and technical factors can exacerbate some of these biases. Poor lighting, inconsistent camera quality, and degraded image resolution are common causes of elevated BPCER and can further disadvantage some groups. If a system hasn’t been properly trained to account for these conditions, and for how they affect the capture of facial features in darker skin tones, those users will see disproportionately high false rejection rates.
How biometric technology addresses bias
Many solution providers have made significant investments in addressing these challenges. As Mitek notes in its most recent buyers’ guide, inclusivity must be built into every stage of design and testing. Biometric systems should be validated across skin tones, age groups, and document types to ensure fair performance for all users. Organizations should choose an identity proofing vendor that tracks and reports on biometric performance by demographic group, and one that demonstrates the use of a top-performing global facial recognition algorithm in one-to-one matching.
One of the most effective ways to drive improvements is the use of broader, more diverse datasets. Providers that deliberately train their algorithms on datasets spanning a wide range of ages, ethnicities, genders, and cultural attributes will build biometric systems that are more capable of serving all demographics within a population fairly.
Machine learning also allows for continuous improvement of biometric solutions. Modern systems don’t maintain a static algorithm; they’re continuously updated and retrained with new data, so they can adapt over time to demographic and cultural shifts. Algorithms can also now account for gradual changes in a person’s appearance, reducing the likelihood of rejections caused by aging or by weight gain or loss, for example.
Bias monitoring and mitigation, including independent testing against diverse cohorts, also helps uncover disparities that need to be addressed. When disparities are discovered, corrective measures like model retraining or threshold adjustments can ensure that the system’s efficacy remains equitable.
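As a simplified illustration of that kind of monitoring, the sketch below compares each group’s false rejection rate on genuine match scores against the best-performing group and flags any group exceeding a parity tolerance for retraining or threshold review. The scores, threshold, and tolerance values are hypothetical, not taken from any specific system.

```python
# Sketch of a disparity check on genuine-user match scores (hypothetical values).

def false_rejection_rate(genuine_scores, threshold):
    """Fraction of genuine comparisons falling below the accept threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def flag_disparities(scores_by_group, threshold, tolerance=0.02):
    """Flag groups whose false rejection rate exceeds the best group by more than the tolerance."""
    rates = {g: false_rejection_rate(s, threshold) for g, s in scores_by_group.items()}
    best = min(rates.values())
    flagged = {g: r for g, r in rates.items() if r - best > tolerance}
    return flagged, rates

scores_by_group = {
    "group_1": [0.91, 0.88, 0.95, 0.90],
    "group_2": [0.61, 0.72, 0.93, 0.89],  # weaker scores -> more false rejections
}
flagged, rates = flag_disparities(scores_by_group, threshold=0.80)
print(rates)    # per-group false rejection rates
print(flagged)  # groups needing retraining or threshold review
```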
The use of multimodal options can also assist by creating fallback mechanisms. If one modality struggles with accuracy for a particular cohort, combining face, voice, and behavioral biometrics means consistent results can still be delivered.
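A minimal sketch of what such a fallback could look like appears below, assuming hypothetical per-modality checkers that each return a match confidence; a real deployment would call the respective face, voice, and behavioral engines and apply its own policies.

```python
# Sketch: try each modality in turn and accept on the first confident match.
from typing import Callable, Dict, Tuple

def verify_with_fallback(user_id: str,
                         modalities: Dict[str, Callable[[str], float]],
                         threshold: float = 0.8) -> Tuple[str, float]:
    """Return the first modality that clears the threshold, else the best attempt."""
    best = ("none", 0.0)
    for name, check in modalities.items():
        confidence = check(user_id)
        if confidence >= threshold:
            return name, confidence          # confident match on this modality
        best = max(best, (name, confidence), key=lambda x: x[1])
    return best                              # nothing was confident; escalate to review

# Hypothetical modality checkers (stand-ins for real face/voice/behavioral engines).
modalities = {
    "face": lambda uid: 0.65,        # e.g. poor lighting lowers face confidence
    "voice": lambda uid: 0.91,       # voice still verifies the genuine user
    "behavioral": lambda uid: 0.70,
}
print(verify_with_fallback("user-123", modalities))  # ('voice', 0.91)
```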
Overall, the trend is encouraging, with biometric performance continuing to improve. A 2025 analysis of facial recognition technology showed that leading algorithms achieved 98-99% accuracy across demographic groups. Still, strong average accuracy doesn’t eliminate the need for continuous improvement and monitoring, paired with transparency that ensures results are fair and equitable both statistically and experientially.
Examples of bias reduction in practice
Institutions are putting these practices, and others, into place to reduce bias. Beyond expanded training and multimodal solutions, novel practices are emerging to improve accuracy. For example, organizations that refresh their biometric enrollment data more frequently reduce the gap between a user’s enrollment and any changes in their appearance over time, resulting in significant reductions in false rejection rates for older users.
At a policy level, transparency and accountability are improving, with some sources like the UK’s 2025 Inclusion Monitoring Report showing that 41% of digital identity services collect or publish accuracy rates broken out by demographic group, up from 30% in the previous year.
Industry best practices for reducing biometric bias
A commitment to the technical improvements discussed earlier, including expanded training data, continuous improvement through machine learning, algorithms capable of handling age progression, and bias monitoring and mitigation, should be a foundational industry best practice for reducing bias and lowering BPCER.
Vendors in the industry should also commit to collaboration with regulators and advocacy groups. This collaboration ensures that all perspectives are represented in the development and implementation of biometric systems.
Transparency with the public should also be considered best practice. Users want to know how their data is being used, the safeguards that have been put in place, and how equitably an institution treats its customers. Open communication helps to build trust in the use of biometrics as a whole, as well as trust in biometrics vendors and the institutions that use biometrics to protect customer accounts.
Finally, all vendors should commit to responsible AI design that extends beyond the algorithm. Machine learning teams should use techniques that allow them to rebalance underrepresented categories—including dynamic sampling, weighted loss functions, and synthetic data generation. These methods can help systems stay equitable and accurate.
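As one illustration of rebalancing, the framework-agnostic sketch below weights each training sample inversely to its demographic group’s frequency, so underrepresented groups contribute proportionally more during training; the resulting weights could feed a weighted sampler or a weighted loss term in whichever ML framework is in use. The group labels are purely illustrative.

```python
# Sketch: inverse-frequency sample weights to rebalance underrepresented groups.
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Return one weight per sample so each group contributes equally in expectation."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    return [total / (n_groups * counts[g]) for g in group_labels]

labels = ["A", "A", "A", "A", "A", "A", "B", "B"]  # imbalanced: 6 vs 2
weights = inverse_frequency_weights(labels)
print(weights)  # samples in group B receive 3x the weight of group A samples
```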
The future of inclusive biometrics
Innovation continues to drive improvements in biometric inclusivity. Multimodal systems capable of using combinations of facial, voice, and behavioral biometrics are gaining momentum; with less reliance on any single data point, these systems are less likely to create false rejections. Artificial intelligence and machine learning capabilities also continue to advance, moving beyond recognition tasks to be able to actively detect and even correct bias—which creates a real-time safeguard against biased outcomes.
Over the long term, equitable, user-friendly systems that reflect real-world diversity can only be assured by designing with inclusivity at the core of every system, a commitment the industry should make to ensure that biometric systems remain accurate, accessible, and fair.
Why addressing biometric bias is critical
A commitment to inclusivity is core to building trust in digital identity as a whole. By tackling bias head-on, biometric technology providers can ensure that users receive secure, seamless, and universal access to services.
For businesses, fewer false rejections translate into smoother onboarding, higher customer trust, and better regulatory compliance. For users, inclusive systems ensure fairness, accessibility, and confidence that their identity will be recognized without unnecessary obstacles, and that everyone in society has the same level of access.
Mitek is proud to lead in developing inclusive biometric solutions that reduce bias, improve accuracy, and create fairer and more equitable digital identity experiences for everyone. Explore how Mitek is building the future of inclusive biometrics.
About Anastasia Molotkova - Product Manager at Mitek
Anastasia Molotkova is a certified product leader, specializing in AI-driven cybersecurity solutions that address emerging threats like deepfakes and generative AI fraud. She leads the development of innovative biometric technologies, including injection attack detection, deepfake detection and facial liveness detection, helping to set new security standards in digital onboarding and authentication.