A few years ago, two Chinese fraudsters faked about $75 million worth of tax invoices. They purchased images of faces and parts of faces on the black market to create synthetic identities, then used those identities to set up a shell corporation that issued the fake tax receipts. To fool China's facial recognition systems, the fraudsters used a technique called an "injection attack." Unfortunately, this type of fraud is likely to become even more pervasive as AI-generated content proliferates online: researchers estimate that, by 2026, as much as 90% of online content may be synthetically generated.
Injection attacks that use fake or stolen biometrics or identity documents to fool security systems are just the latest chapter in the push and pull between bad actors and those attempting to guard against them. As content-generation technology evolves, it hands powerful new capabilities to attackers and defenders alike. As part of this ongoing evolution of digital content, injection attacks pose a serious cybersecurity threat to organizations and individuals.
Injection attacks occur when malicious actors feed fake, pre-created, or altered content into security systems
Injection attacks, particularly in the context of biometric identity verification systems, occur when a malicious actor inserts, or “injects,” unauthentic biometric evidence into a security system to gain unauthorized access to an account or information. An injection attack on a biometric system can be executed in various forms, but the essence is always the input of forged or fraudulent data.
Injection attacks differ from more traditional presentation attacks, in which perpetrators use fraudulent content in front of a sensor like a microphone or camera. With presentation attacks, bad actors can play voice recordings or present non-live reproductions of a face or identity documents to get around security systems.
Injection attacks require more technical sophistication. Attackers might substitute a virtual camera for a physical one, or use browser plugins to feed a fraudulent camera stream into the verification pipeline. Some criminals go to great lengths to create virtual facsimiles that pass for the real thing. Common examples of injection attacks include:
Fingerprint spoofing: One of the more common biometric injection attacks involves spoofing fingerprints. Attackers might lift latent fingerprints from surfaces and recreate them using materials like wood glue, silicone, or gelatin. These artificial fingerprints can then be presented to a fingerprint scanner. Advanced methods involve 3D printing a fingerprint using high-resolution images, which can be obtained from photos or videos of a person's finger.
Deepfake video injection: In facial recognition systems, attackers can inject deepfake videos into the authentication process. Deepfakes utilize AI and machine learning to create lifelike videos of real people saying or doing things they never actually did. By injecting such videos into a system's feed, fraudsters can mimic the appearance of a legitimate user, thus bypassing facial recognition security measures.
Voice authentication bypass: Voice recognition systems can be tricked by injecting synthesized voice prints. Attackers use voice conversion technology or deep learning algorithms to mimic a user's voice patterns. With advancements in text-to-speech technologies, it's becoming easier to create voice deepfakes convincing enough to pass as the original voice, allowing unauthorized access through voice-activated systems.
Vein pattern forgery: Some biometric systems authenticate individuals based on the unique vein pattern in their hands. Researchers have shown that it's possible to create a fake hand using materials like wax that can replicate these vein patterns. High-resolution images can be processed to map out the vein structure, which is then physically recreated to deceive vein recognition systems.
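The virtual-camera vector mentioned earlier can be partially screened at the application layer by inspecting the device label the client reports. A minimal, hedged sketch in Python — the device names and the idea of client-supplied metadata are illustrative assumptions, not a real vendor API, and labels can themselves be spoofed:

```python
# Hypothetical screen: flag capture sessions whose reported camera device
# label matches known virtual-camera software. This is a weak signal only,
# since a sophisticated attacker can rename the device.

SUSPICIOUS_LABELS = ("obs virtual camera", "manycam", "splitcam", "v4l2loopback")

def looks_like_virtual_camera(device_label: str) -> bool:
    """Return True if the client-reported camera label matches a
    known virtual-camera product."""
    label = device_label.lower()
    return any(marker in label for marker in SUSPICIOUS_LABELS)
```

In practice a check like this would be one of many signals feeding a risk score, for example `looks_like_virtual_camera("OBS Virtual Camera")` returning True would raise the session's risk rather than block it outright.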
Fraudsters are able to spoof these biometric traits by intercepting legitimate biometric data and replaying it, using artificial or altered biometric samples, or by creating synthetic biometric data that algorithms cannot distinguish from authentic biometric templates.
Ensuring the verification process hasn't been tampered with is paramount for stopping injection attacks
One of the key challenges in preventing injection attacks is the need to ensure that the biometric data is genuine at the time of capture. Biometric systems are designed to be highly sensitive to their specific inputs, like a fingerprint or a retina scan, which makes them both powerful and vulnerable. For instance, high-resolution images can be manipulated to create fingerprints that are recognized by scanners, or voice modulation software can be used to mimic a user's voice commands.
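One way to make "genuine at the time of capture" verifiable is to bind each capture to a server-issued, single-use nonce plus an integrity tag, so a replayed or injected payload fails verification. A simplified Python sketch of the idea — not a production protocol; key distribution and a trusted capture component are assumed:

```python
# Illustrative sketch: the trusted capture component signs (nonce + sample)
# with a shared key at capture time; the server rejects unknown or reused
# nonces (replays) and payloads whose tag does not verify (tampering).
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)   # assumed shared with the capture SDK
issued_nonces: set[bytes] = set()

def issue_nonce() -> bytes:
    """Server issues a fresh nonce before each capture session."""
    nonce = secrets.token_bytes(16)
    issued_nonces.add(nonce)
    return nonce

def sign_capture(sample: bytes, nonce: bytes) -> bytes:
    """Capture-side: tag the raw sample together with the session nonce."""
    return hmac.new(SECRET_KEY, nonce + sample, hashlib.sha256).digest()

def verify_capture(sample: bytes, nonce: bytes, tag: bytes) -> bool:
    """Server-side: enforce nonce freshness, then check integrity."""
    if nonce not in issued_nonces:
        return False                   # unknown or already-used nonce: replay
    issued_nonces.discard(nonce)       # single use
    expected = hmac.new(SECRET_KEY, nonce + sample, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Because the nonce is consumed on first use, resubmitting even a perfectly valid captured payload fails the second time.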
Technology advancements, especially in AI, have allowed attackers to greatly enhance the sophistication of these attacks. Deepfakes, which use artificial intelligence to create realistic images and sounds, exemplify the new frontier of injection attacks, making it increasingly difficult for single-factor biometric systems to detect fakes. Additionally, once a biometric is compromised, it cannot be changed the way a password can, which makes a successful attack particularly damaging.
To thwart these types of attacks, companies are investing in multimodal biometric authentication, layering multiple types of biometrics to strengthen this form of authentication and make life much harder for fraudsters. Biometric systems are also being developed with liveness detection capabilities, which can differentiate between a live person and a non-present biometric artifact.
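The multimodal layering described above is often implemented as score-level fusion: each modality produces a match score, and acceptance requires both a fused score above a threshold and each modality clearing its own floor. A minimal illustrative Python sketch, with hypothetical weights and thresholds:

```python
# Hedged sketch of score-level fusion for two modalities. Weights,
# thresholds, and score scales (0.0-1.0) are illustrative assumptions.

def multimodal_accept(face_score: float, voice_score: float,
                      w_face: float = 0.6, w_voice: float = 0.4,
                      fused_threshold: float = 0.75,
                      per_modality_floor: float = 0.5) -> bool:
    """A spoof that fools one modality but scores poorly on the other
    fails either the per-modality floor or the fused threshold."""
    if face_score < per_modality_floor or voice_score < per_modality_floor:
        return False
    fused = w_face * face_score + w_voice * voice_score
    return fused >= fused_threshold
```

The per-modality floor is the key design choice here: without it, a near-perfect deepfake on one channel could drag the weighted average over the line on its own.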
Another challenge facing organizations is that fraudsters have become adept at fooling active liveness detection technologies. These tools require users to perform an action, such as blinking, turning their heads, or moving their device, to demonstrate that a live person, rather than just a photo, is present. However, fraudsters have found ways around this type of liveness detection, for example by using photos with eye holes cut out, rubber masks, or an injected video stream.
What companies can do right now
Advanced machine learning algorithms are being deployed to analyze biometric data for signs of tampering or replication. These algorithms can be trained to detect anomalies that may indicate an injection attack. Companies can also implement end-to-end encryption and secure channels for the transfer of biometric data to prevent interception and injection.
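As a simplified stand-in for the trained models described above, the core idea can be sketched as flagging capture feature vectors that deviate sharply from a baseline of known-good captures. Real deployments use far richer models; the per-feature z-score heuristic and the features themselves (e.g. frame-noise level, timing jitter) are illustrative assumptions:

```python
# Illustrative anomaly check: fit per-feature mean/stdev on known-good
# capture features, then flag samples with any feature far outside the
# baseline distribution.
from statistics import mean, stdev

def fit_baseline(good_samples: list[list[float]]) -> list[tuple[float, float]]:
    """Compute (mean, stdev) per feature column from known-good captures."""
    columns = list(zip(*good_samples))
    return [(mean(col), stdev(col)) for col in columns]

def is_anomalous(sample: list[float],
                 baseline: list[tuple[float, float]],
                 z_threshold: float = 3.0) -> bool:
    """Flag the sample if any feature is more than z_threshold standard
    deviations from the baseline mean."""
    for value, (mu, sigma) in zip(sample, baseline):
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            return True
    return False
```

A production system would replace this with a trained classifier or autoencoder, but the workflow is the same: model the legitimate capture pipeline, then treat large deviations as possible injection.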
Continuous system updates and security patches are also critical in keeping up with the evolving threat landscape.
Companies must also be vigilant and proactive in monitoring their systems for any unusual activities that could indicate an injection attack is being attempted or has occurred.
Furthermore, there is a growing recognition of the need for legal and regulatory measures to protect biometric data and to define standards for biometric systems. This includes ensuring compliance with data protection laws, such as the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) in the U.S., which mandate stringent requirements for the processing of biometric data.
Stopping injection attacks requires a multi-pronged approach
Injection attacks represent a significant threat to biometric identity verification systems. The increasing sophistication of these attacks requires a multi-faceted approach to security, combining technological advancements, robust authentication protocols, continuous monitoring, and compliance with data protection regulations. However, organizations must also balance anti-fraud measures against the customer experience. Challenge-response techniques put attackers on alert that they are being checked, making it easier for them to tailor their attacks to circumvent those specific anti-fraud measures. Active authentication requirements also slow down the process, increasing abandonment rates and adding friction to the overall user experience.
As biometric systems become more prevalent, the need for enhanced security measures that don’t also compromise the customer experience becomes ever more critical to protect against these invasive and potentially damaging attacks.