Can you spot a deepfake? 5 telltale signs treasury and payments teams must know
by Pushpendra Mehta, Executive Writer, CTMfile
The global deepfake crisis escalated sharply in the second quarter (Q2) of 2025, with incident counts surpassing all previously recorded levels.
“From April through June 2025, we documented 487 verified deepfake incidents, representing a 41% increase from Q1 2025's 345 cases and a staggering 312% increase from Q2 2024,” highlighted Resemble AI’s Q2 2025 Deepfake Incident Report.
With 487 incidents in Q2 2025, up 41% from Q1 and 312% year-over-year, deepfake attacks now appear to be doubling every six months. According to the report, this surge resulted in US $347.2 million in direct financial losses in Q2 alone, surpassing the $200 million Resemble AI reported for the first quarter of 2025.
What’s driving this explosion in deepfakes?
At the heart of the surge is the democratization of AI. Free, open-source tools have made it shockingly easy for malicious actors to forge highly realistic videos, voices, images, and even text—with little technical skill, minimal effort, and unprecedented speed. The result? Deepfakes are becoming cheaper to produce, more accessible to the masses, increasingly sophisticated, and disturbingly believable.
In fact, Resemble AI’s report notes that “Creating a convincing deepfake now takes just 3.2 hours and costs under $50 in 91% of cases.” This alarming combination of affordability, ease, and speed in generating digitally altered content is a major driver behind the exponential rise in deepfake attacks—and the massive financial fallout that follows.
What’s more unsettling is that only 0.1% of people can accurately and consistently detect AI-powered deepfakes. Research by iProov, released in February 2025, reveals that 99.9% of people cannot “consistently” identify AI-generated fakes. In today’s hyperconnected world, that makes nearly every treasury, finance, and payments executive a potential victim.
If only a minuscule 0.1% of individuals can correctly and repeatedly spot AI-crafted deepfakes, then 99.9% of us are essentially navigating digital deception in the dark. This stark reality means that even the sharpest and most experienced finance, treasury, and payments security professionals are highly susceptible to synthetic content that looks, sounds, and feels authentic.
Such a situation raises two urgent questions for treasury and payments executives:
- Can I consistently unmask a fake CEO, CFO, or treasurer who, on a video call, urgently instructs me to transfer funds or push through payments?
- Can my treasury and payments colleagues repeatedly detect a perfectly cloned voice on the phone?
If the answer is anything other than an unequivocal “yes,” then in this unsettling new era of payments security, every single payment transaction is at risk.
As Craig Jeffery, Managing Partner at Strategic Treasurer, cautions in his firm’s recent white paper Deepfakes & Payments Fraud: Is Treasury Prepared?*
"When fakes started with the roughly worded language and misspelled words, we all laughed at these attempts. As they became progressively advanced, we stopped laughing and paid attention to the losses that were piling up. Now, every treasurer is highly concerned about the hyper-sophistication and the level of automation being used for deep fakes. Increasing our defensive posture through AI-infused automation, staff training, and improved workflows is of existential importance for asset protection.”
With individuals struggling to accurately and consistently identify AI-driven fakes, companies are increasingly turning to AI-powered deepfake detection tools to verify the legitimacy of digital content in an uncertain business environment. But technology alone isn’t enough.
There is a pressing need for employee awareness, particularly among treasury and payments professionals, of how to spot the indicators of falsified content. This calls for a thorough understanding of the telltale signs of deepfakes: a critical countermeasure against this growing menace, and a vital step toward helping treasury and payments practitioners serve as the first line of defence against synthetic deceit.
Recognizing the telltale signs of AI-fuelled deepfake content is key to halting financial fraud, thwarting executive impersonation, and shielding treasury from digitally forged sabotage.
Here are five key signs that can help you spot deepfakes:
1. Unnatural eye activity
In deepfake videos, unnatural or restricted eye movement is a major red flag, especially slow, infrequent, or unsynchronized blinking. Deepfakes do blink, but the blinking often lacks a natural rhythm. In essence, if the eyes don’t move naturally, or the subject maintains prolonged eye contact without blinking, it’s a strong indication that the video is fake.
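The blink-rhythm cue above can even be approximated in code. A widely used heuristic from computer-vision research is the eye aspect ratio (EAR): the ratio of vertical to horizontal eye-landmark distances drops sharply during a blink, so counting sustained EAR dips gives a crude blinks-per-minute estimate. The sketch below is illustrative only, not a production detector: it assumes six eye landmarks per frame are already available from a face-landmark library (landmark extraction is out of scope), and the threshold and "typical human range" of roughly 8–30 blinks per minute are indicative values, not calibrated ones.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks p1..p6 around one eye, ordered as in the
    common EAR formulation (p1/p4 are the horizontal corners)."""
    p1, p2, p3, p4, p5, p6 = eye
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Vertical distances collapse during a blink; the horizontal span stays stable.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blinks: runs of >= min_frames consecutive frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at end of clip
        blinks += 1
    return blinks

def blink_rate_suspicious(ear_series, fps, low=8, high=30):
    """Flag footage whose blink rate falls outside a typical human range."""
    minutes = len(ear_series) / fps / 60.0
    rate = count_blinks(ear_series) / minutes if minutes > 0 else 0.0
    return rate < low or rate > high
```

A clip in which the EAR never dips (no blinks at all) or dips far too often would be flagged for human review; real detection products combine many such signals rather than relying on one.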
2. Mismatched facial expressions and overlay errors
Deepfakes often display unnatural stiffness or involuntary twitches in facial expressions, according to HyperVerge. These inconsistencies arise because the underlying algorithms struggle to replicate the natural fluidity and nuances of genuine human facial movements. HyperVerge also notes that, in some instances, deepfake technology may overcompensate—resulting in exaggerated facial expressions that appear overly intense, unusually symmetrical, or prolonged, deviating from typical human behaviour.
Separately, Josh Thorud, Multimedia Teaching and Learning Librarian at the University of Virginia (UVA) and a contributor to its Generative AI guide, recommends watching for anomalies in facial expressions or mismatches in lighting between the face and its surroundings. Since most deepfakes are created by superimposing a digitally modelled face onto another person, Thorud advises looking for “Blurring, resolution, color or texture differences between the face and the body, hair or neck.” He further cautions observers to check for sharp lines or blurring where the face has been overlaid.
Particular attention should be paid to lip and facial movements—especially when the head turns or when objects move across the face—as any unnatural movement or behaviour that feels subtly “off” may be a telltale sign of digital manipulation.
Suffice it to say, if the skin tone looks abnormal, the emotions don’t match the words being spoken, the shadows on the face don’t align with the surroundings, the glasses a person is wearing show inconsistent glare, the face or smile appears overly smooth—as if digitally edited—or your instinct tells you something about the visual quality or feel of the content is strange, it’s wise to pause and scrutinize, because the truth isn't always what it seems.
3. Awkward positioning of body or posture
Another telltale sign? Most deepfakes focus on the face, overlooking the body or posture, which can often tell a different story. Watch for awkward positioning between the head and body, or any disjointed coordination that seems inconsistent or doesn’t look quite right. If the head appears oddly placed, the posture seems rigid or stiff, or the body moves in a jerky, misaligned way from one frame to the next, it’s a strong signal that the person you’re watching may not be real, or that the video has been digitally manipulated.
4. Unreal-looking teeth and hair
If you compare deepfake teeth and hair with those of a real person, you’ll notice a key difference: deepfake teeth typically appear perfect—flawless and unnaturally uniform—yet notably lack the subtle outlines that separate individual teeth. Similarly, the hair often looks exceptionally neat or perfectly styled, with no stray strands or flyaways in sight. These seemingly minor artificial details can be clear indicators that the image or video has been fabricated or synthetically produced.
5. Audio or noise anomalies
Deepfake perpetrators often prioritize visuals over audio, which means the audio can give them away. Listen for robotic-sounding voices, flat or emotionless speech, awkward pauses, or mispronounced words and phrases. If the lips don’t sync up, the background noise doesn’t match the visual setting, or there’s no ambient sound at all, it could be a sign that the content has been tampered with.
The capacity to perceive nuanced irregularities or the smallest details—whether in eye activity, facial expression, body movements, audio glitches, or digitally pristine teeth or meticulously styled hair—can mean the difference between financial compromise and fraud prevention.
While technology remains an essential ally in countering deepfakes, human intuition and the consistent ability to recognize realism hacks will continue to be primary competencies in a treasury and payments professional’s security repertoire.
In conclusion, as deepfakes grow in reach and realism, treasury and payments security professionals must evolve into human firewalls—equipping themselves not only with technology but also with acute observational skills to accurately and repeatedly identify them.
Educating employees about the rise of digital impersonation and likely threat scenarios is vital. Comprehensive training that brings staff up to speed on deepfake tactics—and teaches them how to spot red flags in manipulated media—will be an indispensable frontline safeguard against intelligent forgeries.
In an era where deepfakes are becoming more disruptive and harder to detect, readiness is no longer optional—it’s crucial. Technology, training, and human vigilance must work together to keep treasury and payments safe from deepfake-enabled fraud. To deepen your preparedness, download and review Strategic Treasurer’s white paper Deepfakes & Payments Fraud: Is Treasury Prepared?. It offers valuable insights into the evolving deepfake landscape and pragmatic countermeasures to help your organization stay ahead of digital fabrications.
* Disclosure: Strategic Treasurer owns CTMfile.