Verifying identities with advanced biometrics was already a challenge. Then came AI-generated deepfakes: digital impersonations of real people. Deepfake fraud is a predictable byproduct of the rapid development and release of ever more advanced AI technology, and as deepfakes become easier to create and harder to spot, they grow stealthier and more dangerous. According to Forbes, in 2024 a deepfake attack occurred every five minutes, accounting for 40% of all biometric fraud. Here at Shufti, we have observed a 244% increase in account takeover (ATO) and identity fraud incidents driven by generative AI and deepfakes.
Shufti as a company grew up in the most complex and unforgiving global threat environments where criminals hone their skills. Over the years we’ve put in the hard work of overcoming a wide range of challenges and bad actors around the world. It has made us who we are: a proven global team ready to serve our clients with the most ADAPTABLE, COMPLETE, and PRECISE identity services available.
“This new generation of AI-generated deepfakes is just the latest in a long line of global fraud threats. Most AI detection tools focus on techniques that fraudsters use most heavily, leaving their customers vulnerable to more sophisticated deepfakes. At Shufti, we are constantly evolving our own AI defense approach to help shield our customers and consumers from loss.”
— Frayyam Asif, Chief Technology Officer, Shufti
While this new generation of deepfake threats is greater than ever, cooler heads and strategic approaches are how we all evolve countermeasures to meet the moment. Here is advice from the Shufti Technology Team.
Why You Can’t Trust Your Eyes
Most deepfakes today can’t be detected with the naked eye. A trained observer may spot certain telltale signs, but many deepfake flaws exist only at the pixel level. Still, AI-generated images have tells, just like poker players. Two common methods used to create deepfakes are generative adversarial networks (GANs) and diffusion models:
Generative Adversarial Networks (GANs)
GANs use a pair of competing models. One, the generator, creates new images. The other, the discriminator, evaluates the realism of the images. Based on the outcome of each test, the generator continually refines the realism of the image until the discriminator can no longer see a difference.
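The adversarial loop can be sketched in a few lines. This is a hypothetical toy, not a real neural network and not Shufti’s detection code: the “generator” is a single number, and the “discriminator” simply scores distance from the real data’s mean. It only illustrates how the generator keeps refining until the discriminator can no longer tell the difference.

```python
import random

# Toy "real" data: values clustered around 5.0.
real = [random.gauss(5.0, 0.1) for _ in range(1000)]
real_mean = sum(real) / len(real)

def discriminator(x):
    """Scores how 'fake' a sample looks: its distance from the real mean."""
    return abs(x - real_mean)

g = 0.0      # generator's current output, starting far from the real data
step = 0.5   # how much the generator adjusts per round
for _ in range(200):
    # The generator proposes two candidates and keeps whichever one
    # fools the discriminator better (i.e., has the lower fake-score).
    up, down = g + step, g - step
    g = up if discriminator(up) < discriminator(down) else down
    step *= 0.98  # refine in ever-finer increments

# g has converged close to the real data's mean (~5.0)
```

A real GAN replaces both toy functions with neural networks and the hill-climbing with gradient descent, but the feedback loop is the same.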
Diffusion Models
Diffusion models create deepfakes by gradually transforming random noise into an image. The result may closely match an existing image, or it may be a new image generated from a text prompt.
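The noise-to-image trajectory can be illustrated with a hypothetical toy (again, not a real model and not Shufti’s code): a 1-D “image” is recovered from pure noise by repeatedly removing a little of the remaining error. A real diffusion model learns the denoising step from data; here we cheat and use the target directly, purely to show the reverse process.

```python
import random

# Toy "image": a 1-D ramp of pixel intensities we want to reproduce.
target = [i / 9 for i in range(10)]

# Start from pure noise, as a diffusion sampler does.
x = [random.random() for _ in range(10)]

# Reverse process: each step removes 10% of the remaining "noise"
# (the gap between the current state and the clean image).
for t in range(50):
    x = [xi + 0.1 * (ti - xi) for xi, ti in zip(x, target)]

error = max(abs(xi - ti) for xi, ti in zip(x, target))
# error is now tiny: the noise has been transformed into the image
```

Fifty steps shrink the initial error by a factor of 0.9⁵⁰, which is why the final samples look so clean.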
Fighting Deepfakes with Pixel-Level Vision
GANs and diffusion models both leave behind glitches, repetitions, and strange details that Shufti’s AI methods can spot. This includes:
- Digital forensics that catch artifacts generated by diffusion modeling, including blurred and distorted edges, mismatched skin tones, and visible pixelation resulting from compression artifacts.
- Errors in forged documents, such as misplaced stamps or distorted holograms.

- Liveness anomalies such as inconsistent gaze, blinks, irregular movement, and evidence of replay loops.
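One simple pixel-level signal behind checks like these is local texture. The sketch below is a hypothetical illustration (not Shufti’s production forensics): natural skin and paper carry sensor noise, so a neighborhood with near-zero variance can indicate blurring or heavy compression.

```python
def local_variance(img, r, c, k=1):
    """Variance of the (2k+1)x(2k+1) neighborhood around pixel (r, c)."""
    vals = [img[i][j]
            for i in range(max(r - k, 0), min(r + k + 1, len(img)))
            for j in range(max(c - k, 0), min(c + k + 1, len(img[0])))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def flag_flat_regions(img, threshold=1.0):
    """Coordinates whose neighborhood is suspiciously flat (textureless)."""
    return [(r, c)
            for r in range(len(img))
            for c in range(len(img[0]))
            if local_variance(img, r, c) < threshold]

# A tiny grayscale patch: natural texture on the left,
# an unnaturally flat block on the right.
patch = [[10, 14,  9, 12, 100, 100, 100, 100],
         [13,  8, 15, 11, 100, 100, 100, 100],
         [ 9, 12, 10, 14, 100, 100, 100, 100],
         [14, 10, 13,  9, 100, 100, 100, 100]]
flags = flag_flat_regions(patch)
# flags contains pixels from the flat right-hand block only
```

Production systems combine many such statistics across scales and color channels, but the principle is the same: measure what natural images do, and flag what generated ones fail to reproduce.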
Why You Need to Combat Deepfakes at Scale
Fraudsters often reuse their deepfake-creation techniques, repeating attacks with few changes or none at all. This creates an opportunity to spot similar templates, which can surface as repeated details such as identical facial features across submissions that are difficult to catch without specialized technology. Detecting these patterns typically requires a combination of supervised and unsupervised models, and we use both to find repetitive attacks.
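One standard way to catch template reuse, shown here as a hypothetical sketch rather than Shufti’s actual pipeline, is perceptual hashing: near-identical images produce hashes that differ in only a few bits, so a small Hamming distance flags a likely resubmission.

```python
def ahash(img):
    """Average hash: one bit per pixel, set when the pixel exceeds the mean.
    A reused deepfake template with tiny edits hashes almost identically."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

template = [[200, 200, 30, 30],
            [200, 200, 30, 30],
            [30, 30, 200, 200],
            [30, 30, 200, 200]]

# A "new" fraudulent submission: the same template with one pixel tweaked.
resub = [row[:] for row in template]
resub[0][0] = 190

unrelated = [[30, 200, 30, 200],
             [200, 30, 200, 30],
             [30, 200, 30, 200],
             [200, 30, 200, 30]]

d_reuse = hamming(ahash(template), ahash(resub))      # small: likely reuse
d_other = hamming(ahash(template), ahash(unrelated))  # large: different image
```

In practice the images are first downscaled to a fixed grid and the hashes indexed for fast lookup, so every new submission can be compared against millions of prior attacks.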
At Shufti, we have layers of defense to ensure that we aren’t fooled by such attacks. We look for repeated attack techniques and learn from what we discover.
Why People Are Still a Key Defense Layer
AI and human vision are distinct. Attempts to bypass AI detection models can sometimes end up creating scenarios that look odd and can be detected by the human eye. We’ve seen some ridiculous attacks that involved wigs and creative dressing. A simple human check caught this deepfake attempt that was designed to fool AI detection.
Shufti takes a robust, managed approach to populating its shared central intelligence across its global customer base, ensuring access to high-quality, cost-effective fraud-fighting resources for every customer fraud and security team. We harmonize the best of AI and human intelligence, with disciplined processes to harden our approach.
Why Agility Matters
Frayyam Asif, Shufti’s CTO, shares:
Shufti’s globally-trained AI models detect deepfakes by analyzing attack patterns across 240 countries, 150 languages, and 10,000 ID documents. Fraud tactics vary by region, requiring adaptable detection methods. Lessons learned from one region help us refine fraud prevention strategies worldwide.
The success of deepfake fraud means that cyber-criminals will continue to seek new ways to beat both existing and future detection technologies. Shufti is committed to combating this threat, working in partnership with over 1,200 customers around the world to create a common layered defense. Schedule a demo today.
We’re in this fight. And we’re here to stay.