Do you know how much personal information service providers like Google have about you, and how they're using your sensitive data to identify you?
Remember the TV series Person of Interest, where people were being tracked by an artificial intelligence program that assigned each of them a number? The program displayed live video of street scenes with people going about their business, completely unaware that, from the program's point of view, each person was merely a data subject with a number emblazoned across them like a label. Although the artificial intelligence in the series was originally created with good intent, it still seemed creepy. And, of course, eventually bad guys got unauthorized access and wreaked havoc on the world.
Images from that show came to mind when I read a recent piece on data privacy by New York Times columnist Farhad Manjoo. The author was recruited by The Times’s Privacy Project to be a guinea pig. For several days, as Manjoo engaged in normal, everyday web research and browsing, the Project logged his activity as well as all the web servers that tracked him and the personal data they obtained. The amount of personal data collected “in obscene detail,” even from small amounts of web activity, was, in Manjoo’s assessment, “staggering.”
Here's what really knocked the guinea pig off his wheel: one of the tracking servers had issued him a 19-digit identifier, which Manjoo thinks of "as a prisoner tag." That identifier was shared with nearly a dozen other trackers and advertisers and used by eight different sites.
With the ever-increasing use of the internet across all demographics, everything we do online is being tracked. Even IoT products we purchase, like smart home devices or fitness trackers, provide companies with consumer data that is sold, shared, and sometimes hacked. Consumer protection is thin, in part because no single, comprehensive federal data privacy law or data protection act governs how most companies collect, store, or share consumer data in the United States. Consumers are aware of this gap: according to Mitek's Fraud trends and tectonics white paper, 79% are concerned about data security when using connected devices to make a payment.
Privacy rules in the US are a cluttered patchwork that differs by state and sector, showing up as acronyms like HIPAA, FCRA, FERPA, GLBA, ECPA, COPPA, and VPPA, each designed to target only specific types of data under specific circumstances. Currently, only three states have comprehensive consumer privacy laws: California (the California Consumer Privacy Act and its amendment, the California Privacy Rights Act), Virginia (the Virginia Consumer Data Protection Act), and Colorado (the Colorado Privacy Act).
Over on the other side of the ocean, Europe's comprehensive privacy law, the General Data Protection Regulation (GDPR), requires companies to ask users for permission before sharing certain kinds of data. GDPR compliance also requires companies to give individuals the right to access, delete, or control the use of their personal data.
The United States, in contrast, has no single privacy law, and no single enforcement body, covering consumer privacy across all types of data. In fact, the EU–US Privacy Shield, a framework for regulating exchanges of personal data for commercial purposes between the European Union and the United States, was struck down by the European Court of Justice in July 2020 on the grounds that it did not adequately protect EU citizens against government surveillance.
Most of the selling and sharing of sensitive data that underpins common products and services is invisible to consumers. As your personal information gets passed around among countless third parties, these companies profit from your data while the risk of that data being leaked or hacked multiplies.
Now, most of us are aware that our data is constantly being harvested by service providers, analytics tools such as Google Analytics, and social media platforms for the purpose of serving up targeted advertising. But few of us realize how detailed the personal data collected about us is, or that it is being used to identify us.
The many ways this is happening are sometimes lumped together under the rubric "behavioral biometrics." And it's not just about what we do on the web and in mobile apps. Increasingly, sensors and code layered into web servers, digital devices, and apps are also capturing personal data about how we move (press, swipe, scroll, type, and so on) and when we do it.
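To make that concrete, here is a minimal, hypothetical sketch (TypeScript, running in a browser) of the kind of low-level event capture behavioral-biometric trackers rely on. The data shape and the `/collect` endpoint are my own illustrative assumptions, not any vendor's actual code; the point is simply how little script it takes to record how you move and type.

```typescript
// Hypothetical illustration only: the sort of event capture a behavioral
// tracker might embed in a page. Names and the endpoint are assumptions.
interface BehaviorEvent {
  kind: "pointer" | "scroll" | "key";
  timestamp: number; // milliseconds since page load
  x?: number;        // pointer position, when applicable
  y?: number;
  key?: string;      // key category only, not the typed character
}

const buffer: BehaviorEvent[] = [];

// How the cursor moves: speed, hesitation, curvature.
document.addEventListener("pointermove", (e) => {
  buffer.push({ kind: "pointer", timestamp: performance.now(), x: e.clientX, y: e.clientY });
});

// How the page is scrolled: bursts, pauses, direction changes.
document.addEventListener("scroll", () => {
  buffer.push({ kind: "scroll", timestamp: performance.now(), y: window.scrollY });
});

// Typing rhythm can help identify a user even without recording what was typed.
document.addEventListener("keydown", (e) => {
  buffer.push({ kind: "key", timestamp: performance.now(), key: e.key.length === 1 ? "char" : e.key });
});

// Periodically ship the batch to a (hypothetical) collection endpoint.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon("/collect", JSON.stringify(buffer.splice(0)));
}, 5000);
```

A few dozen lines like these, loaded invisibly by a third-party script, are enough to build the timing and motion profile that behavioral biometrics are based on.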
I'm not saying behavioral biometrics don't have a place in digital identity verification (IDV). But the problem with how they're currently being used, in many cases, is that I, as a consumer, am completely unaware of how my personal data is being assembled to represent my identity. There's no transparency, and no way to protect myself, because I don't even know I have a 19-digit identifier. Nor do I have any reason to trust the issuing organization. (In fact, I don't even know who they are.) And there's no control, because I certainly am not being given the choice of whether or not to provide this number, and the personal data behind it, to websites and vendors.
Compare that to Mitek's Mobile Verify® solution, where consumers submit a snapshot of a government-issued ID along with a selfie for facial biometric comparison and other AI checks. This takes something consumers know they have and gives them the choice of whether or not to submit it to a particular requestor.
My expectation is that behavioral biometrics may well prove to be a helpful part of the best IDV solutions (including iGaming solutions). I think we’re going to see a technology mix, including additional AI. I’m also optimistic about the prospect of using blockchain distributed ledger technology to improve transparency and control for consumers—so that nobody needs to feel they’re walking around with “a prisoner tag.”
I’ve elaborated a bit more on where I think IDV is headed, the challenges of getting there, and how sophisticated fraudsters are leveraging a person’s identity to skirt past defenses in the white paper Digital identity in a new world - the future came faster. We also did a webinar on Blockchain as the Next Step to Self-Sovereign Identity, which is available to view at your convenience.
About Steve Ritter
Mitek Chief Technology Officer Stephen Ritter drives the technical development of Mitek's award-winning mobile deposit, mobile capture, and identity verification solutions. With more than 25 years of experience in machine learning, security, cloud, and biometric technologies, Stephen provides an innovative source of technical leadership and expertise. He holds a Bachelor of Science degree in Cognitive Science from the University of California, San Diego, and has co-authored eight patents.