How evolving AI enables a new era of identity verification

June 16, 2023

In 2019, a British executive was on the phone with his German boss. The German asked him to transfer about $243,000 to a supplier, stating that the transfer was urgent. The British executive dutifully executed the transfer, only to find out later he had made a grave mistake. The caller on the other end of the line was not his German boss. It was AI-enabled software mimicking his supervisor's voice. At the time, the AI attack was deemed a first. Today, criminals leverage AI all too often to thwart fraud-prevention and cybersecurity measures.

As the current AI boom carries on, the models continue to be double-edged swords. Criminals adeptly use popular tools like generative AI. Simultaneously, many firms are turning to AI-enabled tools to bolster fraud-prevention and identity-verification practices, hoping to get ahead of the next wave of attacks.

Generative AI is becoming increasingly popular

Generative AI is effectively synonymous with the burgeoning chatbot industry. ChatGPT, Google Bard and their ilk leverage generative AI. Their foundation is large language models (LLMs), algorithms trained on vast corpora of data, such as written internet content. When prompted by humans, the models synthesize patterns from their training data to produce responses. The output of LLMs and generative AI is realistic images, videos or human-like text. This can be extremely dangerous in the realm of identity verification and authentication. However, different applications of AI can also deliver huge benefits for identity-theft and fraud prevention.

Synthetic identities can lead to fraud

The story of the unwitting British executive demonstrates one realm AI development has opened wide to hackers. Deepfakes, whether of voice, video, imagery or even documents, are gaining ground as preferred tools of fraudsters. Algorithms can create facsimiles of people that pass for the real thing.

Bad actors also use AI to develop social engineering attacks, such as phishing. The biggest impact of LLMs and generative AI may be productivity enhancement. Humans can use the tools to automate email writing and calendar management, and hackers have caught on to those AI capabilities. It takes only a matter of seconds to prompt generative AI for dozens of email drafts or social media DMs. And, as is the goal of these models, the generated content reads as if a human wrote it.

Another major fraud arena in which generative AI thrives is the creation of synthetic identities. To create a synthetic identity, attackers string various pieces of real personally identifiable information (PII) together into one cohesive, although fraudulent, identity. Because the identifiable markers, like street addresses or cell phone numbers, actually exist, synthetic identities do a better job of fooling anti-fraud measures. AI also speeds the process of collecting and concatenating the information.

To turn the tables on AI-enabled fraud, organizations can erect their own AI-based anti-fraud fortifications. 

At their heart, AI models are pattern-recognition machines. They learn patterns from training data and match new inputs against those patterns. Embedding AI into fraud-detection safeguards can help those solutions better identify patterns of fraudulent activity.

AI can also quickly create sets of synthetic training data that mimic attack patterns. Companies can train AI models on these synthetic data sets to identify attacks before they happen. Creating synthetic data sets also serves another purpose. Previously, firms had to use anonymized datasets of actual PII to train anti-fraud software. Now, they can develop synthetic sets that look and act like real-life data.  
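To make this concrete, here is a minimal sketch of the idea in Python: generate labeled synthetic transactions that mimic normal and fraud-like behavior, then train a classifier on them. The feature set, distributions and scikit-learn model are illustrative assumptions, not any vendor's actual pipeline.

```python
# Minimal sketch: train a fraud classifier on synthetic transactions.
# Feature choices, distributions and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def synth_transactions(n: int, fraud: bool = False) -> np.ndarray:
    """Generate synthetic [amount, hour-of-day, retry-count] feature rows."""
    if fraud:
        amount = rng.lognormal(6.5, 1.2, n)   # unusually large amounts
        hour = rng.integers(0, 6, n)          # odd hours
        attempts = rng.integers(3, 10, n)     # many retries
    else:
        amount = rng.lognormal(3.5, 0.8, n)
        hour = rng.integers(8, 22, n)
        attempts = rng.integers(1, 3, n)
    return np.column_stack([amount, hour, attempts])

# Build an imbalanced synthetic data set: mostly legitimate, some fraud-like.
X = np.vstack([synth_transactions(5000), synth_transactions(500, fraud=True)])
y = np.array([0] * 5000 + [1] * 500)  # 0 = legitimate, 1 = fraud-like

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```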

This meta-application of AI models ostensibly enables a more proactive anti-fraud posture. Just as criminal actors can quickly generate realistic attacks with AI, AI-enabled solutions can simulate those attempts before attackers have a chance to unleash them.

AI can also enhance the current cutting-edge of fraud-fighting measures: identity verification.

Artificial intelligence algorithms are instrumental to modern identity verification

Passwordless identity verification is quickly becoming the most popular countermeasure against fraud. By authenticating who a customer is rather than what they know — such as a password, two-factor authentication code or piece of PII — identity verification makes it harder for criminals to gain access to accounts and information they shouldn't have.

Biometric authentication, for example, whether through facial or voice recognition or liveness detection, offers more robust, reliable and secure methods of verifying a customer’s identity. Now, thanks to the application of advanced AI technologies, biometric authentication methods have enjoyed tremendous advancements.  

Facial recognition systems can now learn from a vast library of facial features and patterns, significantly improving their recognition capabilities. Modern AI can help other biometric authentication models, from voice to liveness, analyze biometric patterns to more accurately identify individuals based on unique biometric signatures.
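Under the hood, many such systems reduce recognition to comparing embedding vectors: a trained network maps each face image to a fixed-length vector, and verification checks how close the vectors are. The sketch below shows only that matching step; the stand-in embeddings, the cosine-similarity metric and the 0.6 threshold are illustrative assumptions.

```python
# Minimal sketch: verify a probe face by comparing its embedding vector
# against enrolled templates. Real embeddings would come from a trained
# deep-learning model; here they are stand-in vectors, and the 0.6
# acceptance threshold is an illustrative assumption.
from __future__ import annotations

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_identity(probe: np.ndarray,
                   enrolled: dict[str, np.ndarray],
                   threshold: float = 0.6) -> str | None:
    """Return the best-matching enrolled identity, or None if no
    similarity clears the acceptance threshold."""
    best_id, best_score = None, threshold
    for identity, template in enrolled.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy example with made-up 3-dimensional embeddings.
enrolled = {"alice": np.array([0.1, 0.9, 0.2]),
            "bob": np.array([0.8, 0.1, 0.3])}
probe = np.array([0.12, 0.88, 0.25])
print(match_identity(probe, enrolled))  # -> alice
```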

AI development is also instrumental in document verification and forgery detection. Automated ID verification tools can now detect forgeries in documents such as passports and driver's licenses. AI algorithms can scrutinize minute details, identifying anomalies and inconsistencies that may indicate a forged document. This significantly boosts the accuracy and reliability of automated identity verification.
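One concrete, deterministic check such tools can run alongside their AI models is validating the machine-readable zone (MRZ) of a passport. ICAO Doc 9303 defines check digits computed with repeating weights 7, 3, 1 modulo 10; a field that fails the checksum is a strong signal of a forged or mis-scanned document. The sketch below is a minimal illustration of that one check, not any vendor's actual pipeline.

```python
# Validate an ICAO 9303 MRZ check digit, one deterministic signal a
# document-verification pipeline can combine with AI-based forgery checks.
def mrz_check_digit(field: str) -> int:
    """Check digit: weights 7, 3, 1 repeating over the field, mod 10.
    Digits count as face value, A-Z as 10-35, and the filler '<' as 0."""
    values = {**{str(d): d for d in range(10)},
              **{chr(ord('A') + i): 10 + i for i in range(26)},
              '<': 0}
    weights = (7, 3, 1)
    return sum(values[ch] * weights[i % 3] for i, ch in enumerate(field)) % 10

# ICAO 9303 specimen passport number "L898902C3" carries check digit 6.
assert mrz_check_digit("L898902C3") == 6
```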

The AI-enabled ID verification process is also becoming more adept at detecting deepfakes and synthetic identities, further bolstering the reliability of automated identity verification. By making these authentication experiences more intelligent and accurate, AI makes identity verification less intrusive and more efficient, improving the overall customer experience.

Machine learning offers real-time risk analysis for fraud prevention

OpenAI, for example, built its ChatGPT product on a deep learning model. Deep learning is a form of machine learning designed to process data in much the same way the human brain does. According to AWS, deep learning algorithms “recognize complex patterns in pictures, text, sounds, and other data to produce accurate insights and predictions.”

Fraud prevention solutions leverage risk-based authentication (RBA) methods, also known as adaptive authentication, to determine whether a given interaction represents fraud. Solutions assign a risk score to a transaction based on the likelihood that the system is compromised or the person has a risky profile. Risk-based authentication calculates a risk score for account access attempts or transactions in real time, then provides an authentication option commensurate with the score.

RBA can be either user-dependent or transaction-dependent, meaning authentication is applied to the user or to the given transaction. Common criteria for risk assessment include the location and IP address of the user, the login device, the number of login attempts and behavioral factors, such as how fast the user is typing and whether they are acting out of the ordinary. For example, if a user accesses their account from another country, they might be asked to complete additional security steps to log in.
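Here is a minimal sketch of that scoring logic. The signals, weights and thresholds are illustrative assumptions; production systems typically learn these from data rather than hard-coding them.

```python
# Minimal sketch of risk-based authentication (RBA) scoring. The signals,
# weights and thresholds below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    country_matches_history: bool  # logging in from a usual country?
    known_device: bool             # device previously seen on this account?
    failed_attempts: int           # recent failed login attempts
    typing_speed_zscore: float     # deviation from the user's typing baseline

def risk_score(attempt: LoginAttempt) -> float:
    """Combine weighted risk signals into a score between 0 and 1."""
    score = 0.0
    if not attempt.country_matches_history:
        score += 0.35
    if not attempt.known_device:
        score += 0.25
    score += min(attempt.failed_attempts, 5) * 0.05
    score += min(abs(attempt.typing_speed_zscore), 3.0) * 0.05
    return min(score, 1.0)

def required_step(score: float) -> str:
    """Map the score to an authentication action commensurate with risk."""
    if score < 0.3:
        return "allow"                   # low risk: no extra friction
    if score < 0.6:
        return "step-up: one-time code"  # medium risk: add a factor
    return "step-up: biometric check"    # high risk: strongest verification

attempt = LoginAttempt(country_matches_history=False, known_device=True,
                       failed_attempts=1, typing_speed_zscore=2.2)
print(required_step(risk_score(attempt)))  # 0.51 -> "step-up: one-time code"
```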

Machine learning algorithms play a pivotal role in RBA platforms and, as a result, in fraud prevention. By learning from vast data sets of real or synthetic human-behavior data, machine learning algorithms can identify patterns and anomalies that occur during account access or transactions. In this capacity, machine learning aids a robust, real-time ID verification and fraud-prevention process.
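As a minimal sketch of that idea, the snippet below fits an unsupervised anomaly detector (scikit-learn's IsolationForest) to synthetic login-behavior data and flags an out-of-pattern session. The features and contamination rate are illustrative assumptions.

```python
# Minimal sketch: unsupervised anomaly detection over login-behavior
# features. The features and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Per-login features: [session length (s), typing speed (chars/s), login hour]
normal_sessions = np.column_stack([
    rng.normal(300, 60, 2000),   # ~5-minute sessions
    rng.normal(4.0, 0.5, 2000),  # steady typing cadence
    rng.normal(14, 3, 2000),     # daytime logins
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# A short, very fast session at 3 a.m. -- predict() returns -1 for anomalies,
# which an RBA platform could answer with a step-up challenge.
suspicious = np.array([[15.0, 12.0, 3.0]])
print(detector.predict(suspicious))  # expected: [-1]
```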

AI capabilities require careful consideration for ethical use

As with any powerful technology, ever-evolving AI comes with ethical considerations and challenges. Concerns around privacy, data protection and inherent biases are primary considerations for ethical AI. AI-enabled user identity verification systems must adhere to stringent data protection laws to ensure privacy while compiling and analyzing the libraries of data that power fraud-prevention, anti-money laundering (AML) and know-your-customer (KYC) processes.

Algorithm developers must also ensure fairness and do their utmost to avoid bias in the AI systems they build. Given the significant role these systems play in identity verification, biases baked into training data sets or introduced during human oversight of model training could lead to discrimination in real-world applications. Moreover, balancing security and user experience is essential to maintaining trust in AI-enabled identity verification.

Overcoming adversarial attacks on AI-based and generative AI systems is another challenge. Adversaries may attempt to exploit the system's learning capabilities to manipulate AI and identity verification results. Therefore, constant vigilance and continuous improvement of security measures are crucial.

If AI-based fraud prevention is to proliferate, organizations must balance these safety and privacy features with the user experience. Customers will quickly abandon services or products when they perceive the experience to be frustrating, so organizations need to take care to implement authentication processes that mitigate friction in the user experience.  

The future of fraud prevention and identity verification is in artificial intelligence

The poor British executive who was fooled by AI might have avoided the mistake had his organization employed its own AI-enabled anti-fraud tools. A platform designed to recognize unusual behavior might have alerted him that the call from his supervisor was unplanned, or that the transaction request was anomalous relative to typical asks.

This example focuses on a hypothetical AI use case, but there are real success stories of firms using AI to prevent fraud. One ecommerce company used to manually review transactions for potential fraud. The team wanted to automate fraud detection but worried about being too cautious and canceling orders from customers using legitimate forms of payment. By adopting AI-based fraud prevention, the company was able to automate and improve its prevention tactics.

Many ecommerce companies and financial services firms likely also leveraged AI during the pandemic as more people ordered higher-ticket or unusual items relative to their purchase history. One company reduced its chargeback rate by 30% after implementing AI-based fraud detection. 

Real-world use cases make it clear that the burgeoning AI industry holds immense potential for the future of identity verification and fraud prevention. Through AI’s ability to generate, learn, and adapt, identity verification and fraud prevention have ascended to new heights of accuracy and reliability. As this technology continues to evolve, we can expect it to continue revolutionizing identity verification and management, making the process more secure, efficient and user-friendly, while staying a step ahead of emerging threats.

Check out this on-demand webinar to learn:

How Mitek leverages machine learning to improve fraud detection

Sources:

https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

https://www.wsj.com/articles/china-cracks-down-on-surge-in-ai-driven-fraud-c6c4dca0?page=1

https://openai.com/research/gpt-4

https://aws.amazon.com/what-is/deep-learning/