Deepfake

Deepfake refers to AI-generated or digitally manipulated media, including video, images, or audio, designed to realistically mimic the appearance, behavior, or voice of a real person. While creating a digital twin of yourself can serve legitimate purposes, deepfake media poses serious risks of identity fraud, impersonation, misinformation, and social engineering attacks.

Use cases and examples of deepfakes

BEC attacks: Impersonating executives with deepfake video or voice in Business Email Compromise (BEC) attacks to authorize fraudulent transactions or request company data. The attacker might follow up an email with a request for a meeting or phone call, using the deepfake to persuade a junior employee to transfer funds or send confidential files.

Biometric verification fraud: Bypassing biometric verification during digital onboarding by submitting deepfake-generated selfies or videos. The generated content might depict a real person or form part of a synthetic identity.

Call center fraud: Cloning customers' voices with deepfake audio during calls to call centers in order to reset passwords, authorize transfers, or request a SIM swap.

Document fraud: Generating synthetic ID documents with AI-enhanced imagery to pass Know Your Customer (KYC) checks.

Learn more about Generative AI fraud detection