Steve Ritter is Chief Technology Officer at Mitek Systems, a provider of digital identity verification and mobile deposit solutions. In his earlier professional life, he headed up engineering for a number of tech firms, including leaders in web and mobile security as well as innovators in cypher genomics and AI-based facial recognition of emotions. Steve recently wrote Digital Identity in a New World—the Future Came Faster, a sequel to his popular Future of Identity. He talked with Mike Sasaki, leader of the Mitek Systems global customer success team, about how much has changed over the past year in the digital transformation of society and the importance of identity verification. He also shared his view of what the future looks like.
Mike: What would you say are the most important changes happening over the past year or so?
Steve: There have been lots of advancements during the pandemic, and I think for the most part they’re going to stay with us. Take digital grocery services—that’s something I never used until last year, but now as a mostly working-from-home father, I use it all the time, and it certainly has made my life and my family’s life so much more convenient. Many people are having a similar experience. From necessity, they’ve moved past whatever it was that previously prevented them from trying various types of digital services, and they’re going to continue to use them.
Along with that, people are becoming more aware of and comfortable with the idea of digital identity. It’s really identities—plural—because most of us have more than one. Say I’m commenting on an Amazon product: I might use a pseudonym, a version of my name, or some kind of handle as my identity for that interaction. And that’s fine. It still serves a valuable purpose, because if a person or an analytics system is looking at all the comments made by that identity, they see something consistent and informative. But the Steve it represents doesn’t have to be the same as the Steve I present professionally on LinkedIn or the true, formal, certified identity I need to use and have verified when I apply online to open a bank account. I might want to keep those identities separate, and there’s nothing wrong with that. Actually, I think it’s very natural and appropriate. We just need to provide people with ways to manage their multiple identities.
We’ve also seen a lot of advancements and innovations in technologies that are enabling these social and attitudinal changes. The real game-changer, in my opinion, is the ability to prove identities over the internet. Thanks to companies like Apple, it’s now widely accepted that you can just take out your mobile device and securely pay for things digitally. They’ve started this wave of adoption that is making biometrics, including thumbprint, face, and voice recognition, part of our daily lives. The best facial biometrics now have very good liveness detection, which makes them a highly accurate and secure means of authentication. Voice technology has been around for a long time, but recent advances are making it much more usable as an authentication mechanism for digital channels, and so we’re seeing a major upswing in usage now.
Is this wave of consumer adoption driving regulations around biometrics and other identity verification technology? Or is regulation helping to speed up consumer adoption?
I’d say consumer adoption of digital technology is causing governments around the world to look at the need for new regulation. Recently, a lot of this scrutiny has focused on ensuring fairness and inclusion in the digital world by addressing the issue of bias in biometrics. Another focus for regulators is the way personally identifiable information (PII) is stored and used. That’s really important because a lot of entities in the digital world are still in this outdated mode of collecting and storing large databases, which are becoming ever larger as the quantity and variety of digital interactions continue to rise. I worry about the next big data breach and what that could really mean—especially if it’s biometric data. If your Social Security number is stolen, you can replace it (a lot of people don’t know this, but you can). Not so easy with your fingerprint.
Now some people in the industry look at regulation and say, hey, this is a hindrance; it’s creating a roadblock to adoption. And there’s some truth to that, especially in the US, where regulation is currently a patchwork, making it difficult for companies to comply. For example, there are biometric privacy laws in Illinois and a couple of other states, and this legislation is influencing other states, but there’s no consistency yet.
Mitek’s attitude is that we definitely want governments to provide clear guidance and frameworks for how organizations are expected to handle data and interact with consumers. And our general approach is to take the most stringent requirements we find around the world and make those our corporate standard everywhere we do business.
Can you unpack for us the issue of demographic bias in biometrics?
Sure. Let’s take facial biometrics since most of the concern is currently around it. Say an organization is using a facial biometric for identity proofing when onboarding new accounts. They’re asking new applicants to take a picture of their driver’s license or other official ID as well as a selfie, then comparing the selfie to the portrait photo printed on the ID.
If the biometric algorithm has a lower match rate for faces of a particular class of people—could be any demographic of age, race, gender or a mix—then those people may, at best, go through a longer, higher friction onboarding process with additional identity verification steps. At worst, they may be denied access to the digital service.
That access might be to credit and other financial services, to educational services, healthcare services, government services, transportation services or even grocery shopping services. With so much of daily life going digital we have to do everything we can to prevent bias and ensure fairness, inclusion and equal access.
So how do you prevent biometric bias?
Well, a key point I want to make is that the technology itself generally does not have bias built in. I’m sure you could build an algorithm that did, but in most cases, it’s not the technology and it’s not the architecture. As with all AI, the potential for bias tends to have a lot more to do with the data used to train the algorithm. If your training set has many more images of White men age 30 to 50 than of Black women age 50 to 70, then in production it’s going to be better at matching faces that look similar to the ones it was heavily trained on.
There are environmental factors too that can affect the matching of different types of faces. For example, because the biometric requires a certain amount of contrast to accurately identify facial features, a dark face against a dark background or in a dim room can be a problem—as can a light face against a light background in a very bright room. We address these challenges by making sure we’re training our algorithms with a wide range of environmental conditions in the lab that approximate as closely as possible what’s actually going to happen in the real world. The other part of it is to provide end-users with really good guidance for how to capture good images.
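The kind of demographic skew Steve describes can be checked empirically by comparing false non-match rates (how often genuine same-person pairs are wrongly rejected) across groups. Here is a minimal sketch of that check; the group labels, similarity scores, and threshold are all invented for illustration, not output from any real biometric engine:

```python
# Sketch: detecting demographic skew in a face-matching system by
# comparing false non-match rates (FNMR) per group.
# All scores below are synthetic stand-ins, not real biometric output.

from collections import defaultdict

MATCH_THRESHOLD = 0.80  # hypothetical similarity cutoff

# (demographic_group, similarity_score) for genuine (same-person) pairs
genuine_scores = [
    ("group_a", 0.95), ("group_a", 0.91), ("group_a", 0.88), ("group_a", 0.76),
    ("group_b", 0.84), ("group_b", 0.71), ("group_b", 0.69), ("group_b", 0.90),
]

def fnmr_by_group(scores, threshold):
    """False non-match rate per group: genuine pairs rejected / genuine pairs."""
    totals, misses = defaultdict(int), defaultdict(int)
    for group, score in scores:
        totals[group] += 1
        if score < threshold:  # a genuine pair wrongly rejected
            misses[group] += 1
    return {g: misses[g] / totals[g] for g in totals}

rates = fnmr_by_group(genuine_scores, MATCH_THRESHOLD)
print(rates)  # {'group_a': 0.25, 'group_b': 0.5}
```

If one group’s rate is markedly higher, as with group_b here, those users are the ones who face the extra friction or denials Steve mentions.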
And, apart from the biometric itself, there are other things that can cause bias in identity proofing. We need to make sure we’re not introducing bias through the OCR that extracts information from the ID for use in various identity verification subprocesses running in the background. To improve accuracy, the extraction process may incorporate a dictionary of known names, which helps the software correct various types of single-character errors. If the dictionary being used doesn’t adequately reflect the culture or ethnicity of the production population, the data sent to the background analytics will be less accurate, which could result in a higher recognition error rate, slow down the verification process or even cause a false positive—which is when a legitimate applicant scores high for potential identity fraud.
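To make the dictionary point concrete, here is a minimal sketch of single-character OCR correction against a name dictionary. The names and misreads are invented for illustration, and a production system would be far more sophisticated; the point is simply that a name absent from the dictionary never gets corrected:

```python
# Sketch: correcting single-character OCR errors in names against a
# dictionary of known names. Dictionary contents and the misreads
# below are invented examples.

def one_char_apart(a: str, b: str) -> bool:
    """True if two equal-length strings differ in exactly one position."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

def correct_name(ocr_name: str, dictionary: set) -> str:
    """Return the unique dictionary name one substitution away, else the raw OCR text."""
    if ocr_name in dictionary:
        return ocr_name
    candidates = [w for w in dictionary if one_char_apart(ocr_name, w)]
    return candidates[0] if len(candidates) == 1 else ocr_name

names = {"GARCIA", "JOHNSON", "NGUYEN"}

print(correct_name("GARC1A", names))  # GARCIA: single misread fixed
print(correct_name("0KAFOR", names))  # 0KAFOR: "OKAFOR" is missing from the
                                      # dictionary, so the error survives and
                                      # downstream analytics get bad data
```

A dictionary that underrepresents some populations means their names stay garbled more often, which is exactly how the higher error rates and false positives Steve describes can creep in.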
Steve, I know you’ve written an executive briefing each of the last couple of years on the future of identity, and covered quite a bit of ground in them. But let me ask you to pick out one thing most on your mind, or maybe something that’s emerged more strongly since you wrote the latest brief. What would that be?
I think the future of identity is taking us to a place where our ability to use the correct, appropriate digital identity for a specific purpose is much easier than it is today. And that makes it much easier for individuals to control how and when their various identities are applied.
Think about a scenario where the internet of things (IoT) and identity verification are working together in our digital homes. A biometric—maybe facial, fingerprint, voice or combo—embedded in my smart refrigerator recognizes I’m Steve and enables me to automatically make purchases through my Amazon account to keep that fridge filled with healthy food for my family. Throughout my day, it continually authenticates me for access to this digital service, but it’s a passive, invisible process that adds no friction to my life. It’s also recognizing my children and enforcing appropriate access—which doesn’t include ordering sweets and sugary drinks. A similar thing happens with my smart TV. I don’t have to spend time setting up multiple family user profiles and access levels for Netflix and other streaming services. When we want to watch something, no one has to waste time switching the user profile and inputting PINs.
So you can see, Mike, how this type of continuous authentication could be a real convenience for everybody. But also how it could be a significant security and privacy risk if we don’t get the technology and regulations around implementation right.
What we’re talking about here is decentralized identity verification and access control. The concept is often confused with federated identity, which means that multiple identity systems are interacting with each other, and with distributed identity, where you’re pushing data and computation to the network edge. But with decentralized identity, we’re pushing decisioning out to the edge as well.
I believe that decentralized identity decisioning is the right direction for the future. As long as decision making remains centralized, we’re still going to have the ever-growing piles of PII building up on a large number of servers and vulnerable to breaches. Decentralizing decisioning will dramatically reduce that risk, and at the same time I think it will be an important part of giving individuals more control over their own digital identities.
You sound optimistic about the future. Am I reading you right?
Yes, I am optimistic. What gets me really excited is that we have all the technology we need to solve the bias, security and data privacy problems I’ve talked about and achieve a decentralized digital identity ecosystem where owners of identities are in control. Actually, there are some very interesting business opportunities out there for someone to come up with really nice ways for consumers to manage their identities or create marketplaces where people can get their identity attributes verified.
So the good news is this is not a technology problem. It’s more a social and governmental issue. What we need from governments is to provide frameworks that will encourage innovation and guide adoption. I don’t think we want to see governments providing the identity systems themselves, but they should be creating the environment where these decentralized systems can thrive.