Biometric injection attacks: the hidden threat to face authentication

May 28, 2025 by Anastasia Molotkova - Product Manager at Mitek

Biometric authentication – especially facial recognition – has become a mainstream method for user authentication in recent years. From unlocking smartphones to validating online banking transactions, faces have effectively become passwords for many everyday functions. This popularity stems from the convenience and robustness of face biometrics: users don’t need to remember secrets or carry tokens, and modern systems can verify a live face with high accuracy. Advanced liveness detection techniques and industry standards (like ISO/IEC 30107 PAD Levels 1 and 2) have further improved security by catching obvious spoofs (for example, someone holding up a photo). In short, facial biometrics offer a frictionless yet secure way to prove “you are who you say you are.” 

However, no security measure is foolproof. Just as passwords face phishing and hacking, facial recognition is encountering its own sophisticated attacks. Fraudsters have become creative at finding ways around facial recognition and liveness checks. One emerging tactic is the biometric injection attack – a method of digitally inserting fake biometric data into the verification pipeline, rather than physically presenting a fake face to the camera. This article explores what biometric injection attacks are, how they target face authentication systems, the types of injection attacks seen in the wild, the kinds of fake face data attackers use (from stolen images to deepfakes), and how a multi-layered defense can protect against these threats. The goal is to demystify this technical attack for business stakeholders and underscore why secure design and layered protections are critical for any face biometric solution. 

From presentation attacks to injection attacks 

In traditional presentation attacks (also known as spoofing attacks), an impostor attempts to fool the system at the sensor – for example, by holding up a printed photo, wearing a mask, or playing a video in front of the camera. Presentation attacks are direct and don’t require deep technical knowledge of the system. To counter them, Presentation Attack Detection (PAD) looks for the artifacts that fake content leaves behind when it is presented in front of the camera – the tell-tale signs of screen replays, masks, printed cutouts, and so on.

Biometric injection attacks, by contrast, target the system between the capture device and the verification engine. Instead of fooling the camera with an external fake, the attacker feeds fake data directly into the system’s input, bypassing the camera entirely. In other words, the attacker “plugs” a falsified face image or video into the software as if it came from the camera. The biometric system believes it is receiving genuine live camera data when, in reality, it’s been “injected” with a forgery. This could be done by software tricks, hardware taps, or network interference (we’ll break down the methods shortly). Because the attack happens past the point of capture, standard liveness tests can be sidestepped – the system might unwittingly analyze a recorded or synthetic face that looks lifelike. 

Injection attacks generally require more technical sophistication than presentation attacks. The fraudster may need to manipulate device drivers, use virtual camera software, modify an app, or intercept network calls – techniques that demand software skills or tools. Yet, these attacks are rising as cybercriminals share tools and knowledge. For instance, advances in AI have made it easy to generate realistic face-swap videos (deepfakes) that can be injected into verification systems. According to Gartner, injection attacks increased by 200 percent in 2023, highlighting the growing threat of deepfake-related fraud. There are now plenty of easily accessible face-swapping and deepfake tools (many offering free versions), which makes injection attacks increasingly feasible for attackers.

In short, as organizations shore up defenses against basic spoofing, attackers are turning to injection attacks as a new “invisible” attack vector. 

How biometric injection attacks work (focus on face authentication) 

So, how does one “inject” a fake face into a biometric system? In practice, attackers have devised multiple techniques to slip fraudulent biometric data into the stream between capture and verification. Below are the main types of biometric injection attacks related to face recognition, categorized by where and how the attacker interferes: 

  1. Virtual camera injection: The attacker uses software that creates a virtual webcam device on the system, feeding it with pre-recorded or synthetic video instead of a real camera feed. The operating system and application treat this virtual camera as if it were a legitimate hardware camera. For example, many live streaming tools can act as a webcam source and play a video or static image of the target face. This way, the attacker can stream a fake face video (even a high-quality deepfake) into the biometric app, bypassing the actual camera hardware. 

  2. Hardware-based injection (video capture devices): In this scenario, the attacker leverages physical hardware to fool the system. A common technique is using an external USB video capture stick or HDMI “screen grabber.” The attacker connects one device (the “source”) playing a face video to another device (the “sink”) running the authentication app, via a capture card that pretends to be a camera. To the sink device’s OS, the USB capture dongle registers as a normal camera – but the frames it delivers are simply a video of the source device’s screen. This low-cost hardware trick effectively injects a video feed into the target system under the guise of a camera, and neither the OS nor the app can tell that the feed didn’t come from a real camera lens. 

  3. Operating system driver manipulation: This method involves tampering with the OS-level camera driver or libraries. An attacker with deep access (or malware on the device) can hook into the camera driver software and insert their fake frames at a low level. For instance, by hacking the camera’s device driver, an attacker could replace the video stream from the genuine camera with their own video content. 

  4. Code injection and API hooking: Rather than modifying the OS, an attacker might target the biometric application’s code at runtime. Using instrumentation tools, they attach to the app’s process and intercept function calls – for example, the function that grabs a camera frame. The attacker’s code can then supply a chosen image or video in place of the real camera output. By modifying the app’s behavior on the fly, the attacker essentially tricks the app into authenticating a fake face. This technique is akin to how game cheaters modify memory – it’s direct tampering with the app’s logic in real-time. 

  5. Browser API hooking: When face verification runs in a web browser (for example, a web-based onboarding flow that would normally access your laptop’s webcam), attackers can target the browser’s camera interfaces. An attacker might use a malicious browser extension or debugging tools to override the API calls. For instance, a custom script or plugin could feed a preset video to the browser whenever the page asks for camera input. In effect, the web page thinks it’s getting live video from your webcam, but it’s receiving a pre-recorded feed provided by the attacker’s script. This “man-in-the-browser” style injection can be done via browser developer consoles or add-ons and doesn’t require modifying the underlying OS – only the browser environment. 

  6. JavaScript injection (web context): As with the previous method, if the biometric check is done through a web page, an attacker can inject malicious JavaScript into that page to alter its behavior. The injected JS might, for example, replace the video stream object that the page is using with one controlled by the attacker, or skip client-side liveness checks. In practice, some fraudsters have used this approach to run spoofing scripts in the browser that feed static images in place of video. By hijacking the web app’s own code, the attacker makes the verification page effectively lie about what the camera sees. (A simplified sketch of this kind of stream substitution appears just after this list.) 

  7. Emulated device environments: Rather than using the real client device at all, an attacker can run the authentication app in an emulator or virtual machine. Emulators allow feeding arbitrary image files as “camera” input, setting fake device attributes, and even faking motion sensors. Attackers use emulator software to mimic a real phone but with the ability to manipulate data and feeds freely. By making a PC appear as, for example, an iPhone (through emulator configuration), the attacker can bypass checks for genuine camera hardware and inject their video into a controlled sandbox. 

  8. Client application modifications: This attack targets the biometric client app itself (mobile or desktop) by altering its code or configuration permanently. Instead of just hooking at runtime, the attacker decompiles or reverse-engineers the app, and then modifies the binary or script to disable security features or feed in alternate data. Such modifications often require significant expertise and are usually done on rooted devices or emulators, but they exemplify the lengths to which attackers will go to inject false biometrics. 

  9. Network-level attacks (manual payload injection): In some cases, attackers skip touching the camera or app altogether and attack the communication between client and server. Many biometric systems send face data (images or biometric templates) over an API to a backend for verification. If this network channel is not properly secured (or if the attacker has the protocol details), the attacker can attempt to forge network requests carrying fake biometric data. 

  10. Man-in-the-middle (MITM) attacks: A more dynamic network attack is to intercept and modify data in transit between the biometric client and server. In an MITM scenario, the attacker positions themselves (or malware) between the user’s device and the backend – for instance, on a rogue Wi-Fi network or via a proxy – and injects the fake biometric payload on the fly. If the connection is not encrypted, or the attacker manages to break or bypass the encryption (perhaps by undermining certificate checks on a rooted device), they could swap out the legitimate camera feed data with their chosen image data before it reaches the server. 
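
To make the browser-focused techniques above concrete, here is a minimal sketch, assuming a standard browser environment and an attacker-controlled video URL (hypothetical), of how a script injected into the page or browser can substitute the camera stream that a web verification flow receives. It is deliberately simplified and omits the evasion tricks real attacks use, but it illustrates why the page cannot be trusted to vouch for its own camera input.

```typescript
// Simplified illustration of browser-side stream substitution.
// Runs in the page context (e.g. via a malicious extension or devtools console).
// The video URL below is a hypothetical attacker-controlled resource.

const realGetUserMedia = navigator.mediaDevices.getUserMedia.bind(
  navigator.mediaDevices
);

async function buildFakeCameraStream(): Promise<MediaStream> {
  // Play a pre-recorded clip in an off-screen <video> element.
  const video = document.createElement("video");
  video.src = "https://attacker.example/fake-selfie.mp4"; // hypothetical
  video.muted = true;
  video.loop = true;
  await video.play(); // autoplay policies may require prior user interaction

  // Draw its frames onto a canvas; captureStream() then yields a MediaStream
  // that looks to the page exactly like a webcam feed.
  const canvas = document.createElement("canvas");
  canvas.width = 1280;
  canvas.height = 720;
  const ctx = canvas.getContext("2d")!;
  setInterval(() => ctx.drawImage(video, 0, 0, canvas.width, canvas.height), 33);
  return canvas.captureStream(30);
}

// Override the camera API: any getUserMedia({ video: ... }) call made by the
// verification page now receives the injected stream instead of the webcam.
navigator.mediaDevices.getUserMedia = async (constraints) =>
  constraints?.video ? buildFakeCameraStream() : realGetUserMedia(constraints);
```

A naive page-side countermeasure, such as checking that getUserMedia still reports itself as native code, can catch casual tampering, but an attacker who controls the page environment can spoof that too – which is why the defense layers described later also analyze the stream itself and verify consistency on the server side.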

Attackers can target multiple points in a face authentication flow – from the camera hardware and device drivers to the application’s code, to the browser environment, all the way to the network transmission. Ultimately, all these methods share the goal of feeding a falsified face to the verification system without raising any red flags. The injected content might be a single static photo replayed as video frames, a previously recorded video of the genuine user, or even an AI-generated live deepfake that can mimic movements. Successful injection means the system thinks it’s performing a normal, live check, but it’s really “watching a recording.” If done well, the attack can be nearly impossible to spot with traditional defenses – the log files and process might show everything as normal (“camera opened, face captured, face matched”), leaving no obvious alarm for fraud. This stealth factor is why injection attacks are considered extremely dangerous – if an attacker defeats the system, the breach may go unnoticed until suspicious activity occurs later.  

What do attackers inject? Fake faces galore 

It’s clear that injection attacks are a means to an end – the end being that a false face passes as the real user. But where do attackers get these fake faces, and how convincing are they? The answer: today’s attackers have an arsenal of deceptive media at their disposal, ranging from stolen photos to AI-crafted personas. The injected content in face biometric attacks typically falls into a few categories: 

  • Stolen or leaked images/video of the real person: The simplest case is when a fraudster obtains legitimate photos or videos of the target (the person whose identity they want to assume). These could come from social media profiles, data breaches, or even previous video calls. A clear selfie or a short video of the victim looking at a camera can be enough to bypass a basic face match if injected. While a single photo might not beat a liveness check (due to lack of motion), a trove of images or a stolen video could be replayed. Attackers have even purchased collections of face images on black markets to build such datasets. If the genuine user’s media is used, the biometric system may have trouble distinguishing it from a live capture of that user. 

  • Replay videos and “cheapfakes”: Not all injected media are high-quality deepfakes; some are just cleverly prepared video clips. For example, an attacker might compile a short video of the target blinking and moving (either sourced from a public video or assembled from photos via simple animation). This can be a “cheapfake” – essentially a low-tech doctored video that looks real enough. 

  • Deepfakes (AI face-swapped videos): Deepfakes are the poster child of modern digital deception. Using AI algorithms, attackers can create highly realistic videos where one face is swapped for another or a synthetic face is animated to speak or move. For biometric fraud, a deepfake might involve the attacker’s face (or an actor’s face) being manipulated in real-time to look like the victim’s face. This can produce a realistic live-style video that responds to prompts (for example, “turn your head,” “smile”) because a real person is doing those actions – just with an AI mask of the victim applied. With extremely inexpensive or even free open-source deepfake tools readily available, criminals have started leveraging them for injection attacks. The result is a video that can mimic a live person nearly perfectly, fooling human observers and, if unchecked, the biometric system. As of 2024, deepfake technology has advanced to the point that even subtle expressions and eye movements can be rendered convincingly. Experts predict that by 2026 up to 90% of online content may be synthetically generated, meaning the prevalence and realism of deepfakes will only grow, and distinguishing an AI-generated face from a real one will continue to be a constantly evolving challenge. 

  • AI-generated synthetic faces: Separate from deepfakes (which impersonate a specific real person), attackers can also use completely synthetic faces – ones that don’t belong to any real individual. Generative adversarial networks (GANs) can produce photorealistic faces of people who do not exist, as seen on sites like “This Person Does Not Exist”. An attacker might inject such a face to create a synthetic identity – passing the biometric check with a face that matches a fake ID document but doesn’t correspond to a living person. This is a way to fool a system during enrollment/onboarding, potentially creating verified accounts under fictitious identities. Synthetic faces can look perfectly lifelike; however, one challenge for the attacker is ensuring the same synthetic face can be reproduced consistently for future logins. Still, as a one-shot attack (for example, to open a fraudulent account), a GAN-generated face video can be used with an injection to beat a selfie check, essentially outsmarting systems that rely purely on face matching against ID photos. 

  • Face morphs: A morphing attack involves blending the facial features of two (or more) people into a single image or video. This is often mentioned in the context of passport/ID fraud – for example, two accomplices create a morphed photo that resembles both of them, get an ID issued with that photo, and then both can pass as the ID holder. In biometric verification, an attacker might use a morph to ensure the injected face satisfies multiple identity checks at once. For instance, if the system requires matching the selfie to a stored ID photo, a morph of the attacker’s face with the victim’s face might be crafted to fool the face-matching algorithm by looking sufficiently like the victim (to match the ID) while still containing elements that the attacker can reproduce (for liveness). Morphs can be seen as a sub-case of synthetic images – they are partially real. If the attacker already has a stolen ID document image of the victim, they could morph their own face into it and then turn that morph into a deepfake video. While complex, tools for face morphing are accessible, and research shows morphs have been used to bypass automated face match systems undetected. (A minimal sketch of the pixel-blending idea at the core of a morph follows this list.) 
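
As referenced in the morphing bullet above, the core of a morph is a weighted blend of two aligned face images. The sketch below shows only that naive pixel-level blending step in browser TypeScript; real morphing tools first detect facial landmarks and warp both faces onto a shared geometry before blending, which is what makes the result convincing. The 50/50 default weighting is an illustrative assumption, and both images are assumed to be loaded, same-origin, and the same size.

```typescript
// Naive core of a face morph: a per-pixel weighted average of two
// pre-aligned face images. Real tools add landmark detection and warping;
// this sketch only shows the blending step.

function blendFaces(
  imgA: HTMLImageElement,
  imgB: HTMLImageElement,
  alpha = 0.5 // 0 = all A, 1 = all B
): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = imgA.width;
  canvas.height = imgA.height;
  const ctx = canvas.getContext("2d")!;

  // Read both images' pixel data at the same resolution.
  ctx.drawImage(imgA, 0, 0, canvas.width, canvas.height);
  const a = ctx.getImageData(0, 0, canvas.width, canvas.height);
  ctx.drawImage(imgB, 0, 0, canvas.width, canvas.height);
  const b = ctx.getImageData(0, 0, canvas.width, canvas.height);

  // Weighted average of every RGBA channel.
  const out = ctx.createImageData(canvas.width, canvas.height);
  for (let i = 0; i < out.data.length; i++) {
    out.data[i] = Math.round((1 - alpha) * a.data[i] + alpha * b.data[i]);
  }
  ctx.putImageData(out, 0, 0);
  return canvas;
}
```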

Crucially, the availability of such fake content has exploded. Only a few years ago, creating a convincing deepfake or morph required specialized skills and computing power. Today, user-friendly apps and open-source libraries put these capabilities in many hands. A Gartner analyst noted that by late 2023, the surge of AI content generation meant “attackers and organizations alike have massive potential in how they generate content”. 

Would-be fraudsters can download software to generate a deepfake video of a target with just a handful of the target’s photos. There are even online marketplaces and communities where fake videos, synthetic data, or custom AI models can be bought or traded. In summary, attackers have no shortage of fake faces to inject – whether they steal the real one or synthesize a new one – and the bar to create convincing fakes is lower than ever. 

A multi-layered defense: Stopping injection attacks on face biometrics 

Biometric injection attacks are daunting, but they are not unstoppable. Just as the industry responded to presentation attacks with liveness detection, it’s now developing multiple layers of defense to counter injection and the advanced fakes that come with it. However, no single technique is a silver bullet, so the key is to combine software hardening, data analysis, and AI-driven checks to verify not just who the biometric data represents, but also how that data was captured. Below are the critical layers in a robust defense against face injection attacks: 

  • Presentation attack detection (PAD): This is the first layer and refers to the classic liveness and anti-spoofing tests performed at capture time. PAD remains important even as the threat of injected content increases – many attackers might still try simple methods first. PAD methods include active liveness tests (prompting the user to blink, turn their head, or respond to random actions) and passive liveness analysis (detecting signs of life from the video itself, like facial micro-movements, texture analysis, or light reflection checks). In the recent DHS Evaluation of active and passive PAD engines, passive liveness from ID R&D outperformed all active liveness engines. 

  • Injection attack detection (IAD): This layer focuses on detecting the tell-tale signs of an injection in the data stream or environment. Since injection attacks hijack the normal pipeline, they often leave subtle artifacts or anomalies. One approach is software-based checks for tampering – for instance, the system can detect if a known virtual camera driver is in use or if the camera feed is coming from an emulator. Some verification providers have started implementing virtual camera detection as a first step to block fakes. On the device, the app can also check for the presence of hooking frameworks or whether it’s running on a rooted/jailbroken device – indicators that an injection toolkit might be in play. Another approach is analyzing the video feed for artifacts introduced by injection. As Mitek researchers have observed, an injected video (especially via hardware capture or virtual cam) might leave artifacts that a real camera image wouldn’t have. Specialized algorithms (often AI-based) can detect these anomalies. Essentially, the system uses a form of digital forensics on the incoming video to ask, “Does this look like it genuinely came from a camera, or does it have qualities of a screen recording or synthetic video?” If something is off – say, the frame resolution doesn’t match any real device camera, or the metadata claims “iPhone X” but the image dimensions don’t match that phone’s normal camera specifications – the system can raise a red flag. Injection attack detection thus adds a stream integrity check, monitoring the source and consistency of the biometric data. (Two short sketches later in this section illustrate the kind of client-side signals and server-side consistency checks this layer can draw on.) 

  • Deepfake and synthetic media detection: Because many injection attacks will employ deepfakes or other AI-generated content, having dedicated deepfake detection is crucial. This layer uses AI to fight AI by leveraging neural networks trained to recognize subtle cues of deepfake or manipulated media. It’s an arms race – as deepfakes improve, detectors must as well – but combining multiple detection techniques can improve confidence.  

  • Secure architecture and stream integrity measures: Beyond analyzing the content of the video, organizations should strengthen the integrity of the biometric capture process itself. This includes hardening the client application and the data channel against manipulation. For mobile apps, leveraging the device’s secure hardware (trusted execution environment) can ensure the camera feed is handed off in a trusted manner that third-party apps can’t easily intercept.  
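
As a concrete illustration of the injection attack detection layer above (the first of the two sketches referenced in that bullet), here is a minimal example of the cheap client-side signals a web capture component might gather before heavier forensic analysis runs: the label of the video track, whether it resembles a known virtual-camera product, and whether the delivered resolution is plausible. The product-name list and thresholds are illustrative assumptions, and signals like these are easily spoofed on their own – they only add value as one layer among several.

```typescript
// Illustrative client-side signals for injection attack detection (IAD).
// These are cheap heuristics only; a determined attacker can spoof labels,
// so they complement (not replace) server-side and AI-based analysis.

interface CaptureSignals {
  trackLabel: string;
  looksLikeVirtualCamera: boolean;
  width: number;
  height: number;
  frameRate: number | undefined;
}

// Names of common virtual-camera products (illustrative, not exhaustive).
const VIRTUAL_CAMERA_HINTS = ["obs", "virtual", "manycam", "xsplit", "snap camera"];

async function collectCaptureSignals(): Promise<CaptureSignals> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = stream.getVideoTracks()[0];
  const settings = track.getSettings();
  const label = track.label.toLowerCase();

  return {
    trackLabel: track.label,
    looksLikeVirtualCamera: VIRTUAL_CAMERA_HINTS.some((hint) =>
      label.includes(hint)
    ),
    width: settings.width ?? 0,
    height: settings.height ?? 0,
    frameRate: settings.frameRate,
  };
}

// Example policy: flag sessions whose capture looks synthetic so the backend
// can apply stricter checks or require step-up verification.
async function shouldEscalate(): Promise<boolean> {
  const s = await collectCaptureSignals();
  const implausibleResolution = s.width < 320 || s.height < 240;
  return s.looksLikeVirtualCamera || implausibleResolution;
}
```

In practice, such signals would be bundled with the capture and re-validated on the server rather than trusted as a client-side verdict.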

All these measures collectively ensure that even if one layer is bypassed, another can catch anomalies. The aim is to make the pipeline secure end to end and resilient to tampering – verifying not just the face, but the context (device, environment, delivery) to sniff out anything that doesn’t add up. 
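
The second sketch referenced in the injection attack detection bullet shows the server side of that idea: checking whether the metadata the client reports about the capture device is consistent with the frames that actually arrive. The device names and resolution table here are purely illustrative assumptions; a real deployment would rely on a maintained device-capability database and many more signals (frame rate, codec, timing).

```typescript
// Server-side consistency check (Node/TypeScript sketch): does the claimed
// device plausibly produce frames with the submitted dimensions?
// The resolution table is illustrative only.

interface CaptureMetadata {
  claimedDeviceModel: string; // e.g. reported by the client SDK
  frameWidth: number;         // measured from the received video/image
  frameHeight: number;
}

const EXPECTED_FRONT_CAMERA_RESOLUTIONS: Record<string, Array<[number, number]>> = {
  // Hypothetical entries for illustration only.
  "ExamplePhone 12": [[1280, 720], [1920, 1080]],
  "ExamplePhone 15": [[1920, 1080], [3840, 2160]],
};

function metadataIsConsistent(meta: CaptureMetadata): boolean {
  const expected = EXPECTED_FRONT_CAMERA_RESOLUTIONS[meta.claimedDeviceModel];
  if (!expected) {
    // Unknown device: neither accept nor reject on this signal alone.
    return true;
  }
  // Accept either orientation of any expected resolution.
  return expected.some(
    ([w, h]) =>
      (w === meta.frameWidth && h === meta.frameHeight) ||
      (w === meta.frameHeight && h === meta.frameWidth)
  );
}
```

A mismatch does not prove fraud by itself, but it can raise the session’s risk score and prompt the other layers to apply stricter thresholds or request a fresh capture.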

When these layers work together, they provide what’s known as “defense-in-depth”. For example, if an attacker injects a stolen or highly realistic synthetic image or video via a virtual camera on an emulator, the PAD layer might not catch it, but the injection detection might flag the use of an emulator or detect video artifacts, and the deepfake detector might also sense AI-generated inconsistencies. A multi-layered approach acknowledges that no single test can stop 100% of attacks, especially as attackers innovate. But each added layer compounds the difficulty for the attacker – they now need to evade liveness checks, avoid any injection fingerprints, defeat deepfake scrutiny, and avoid triggering any anomalous system behavior. This dramatically lowers the odds of a successful attack slipping through unnoticed. 

Conclusion: Securing the future of facial biometrics 

Biometric injection attacks represent a new chapter in the cat-and-mouse game of cybersecurity. They exploit the very strengths of biometric systems – user convenience and fidelity – by turning the system’s trust against itself. For businesses deploying face authentication, the message is clear: robust biometric security is about more than just matching faces. It requires thinking like an attacker at every stage from capture to verification. The rise of injection attacks (albeit from previously low levels) is a warning that fraudsters will leap at any gap in the process. The good news is that the industry is responding with equal creativity. By combining user-friendly features with behind-the-scenes safeguards, companies can ensure their facial biometric systems remain trustworthy. This means continuing to improve liveness checks, investing in AI to detect forgeries, locking down software against tampering, and monitoring for unusual patterns that hint at an attack. 

In summary, facial biometrics remain a robust authentication factor – but only when implemented with layered defenses that account for both physical and digital threats. Business stakeholders should understand both the promise and the risks; while biometrics can greatly enhance security and user experience, they must be protected like any critical asset. That means staying informed about evolving attack vectors like injections and working closely with security experts to update defenses. With a proactive, multi-layered approach, organizations can outpace the fraudsters and keep their biometric systems safe from even the stealthiest injection tricks. In the ongoing battle between attackers and defenders, preparation and innovation are our best allies – ensuring that the “face behind the camera” is indeed the genuine article, every time. 

Request a demo to see Mitek’s biometric authentication in action