Imagine you are a treasury or finance executive with a multinational corporation, unwittingly duped into transferring a staggering US$25 million to criminals who employ deepfake technology to masquerade as the organization’s chief financial officer (CFO) during a video conference call.
Without suspicion and acting upon the CFO’s instructions, you initiate the money transfer, oblivious to the fact that you, your department, and your corporation have become victims of a deepfake cybercrime, resulting in the diversion of funds to illicit accounts scattered worldwide.
Welcome to the evolving and disturbing reality of “deepfake” fraud, where bad actors use sophisticated and increasingly accessible artificial intelligence (AI)-based deepfake techniques to digitally clone multiple company executives, including high-ranking officials, within a simulated or fabricated video conference setting. These meticulously constructed deepfake replicas convincingly mimic the appearance and voice of the actual executives, creating a false sense of authenticity.
The South China Morning Post reported on February 4 that the Hong Kong branch of a multinational company lost $25.6 million to a deepfake video conference call. According to the report, a finance department employee at the Hong Kong branch was invited to a staged video call populated with deepfaked digital re-creations of company executives, including the organization’s London-based CFO. Apart from the finance employee, every colleague on the video call was fake.
Responding to the escalating urgency conveyed by the fake CFO, the finance employee (the victim) followed the instructions given during the call, ultimately transferring about $25.6 million (HK$200 million) to five different bank accounts across 15 transactions.
The finance employee complied with the instructions given during the video meeting because the CFO and other colleagues in attendance looked and sounded like people he recognized. The digitally manipulated representation of the victim’s colleagues craftily imitated the behaviour and gestural mannerisms of the real CFO and the rest of the employees, which made it difficult to distinguish the real from the fake.
This scam marks “perhaps the biggest known corporate fraud using deepfake technology to date”, Bloomberg reported in an article published on February 5. And while this $25 million AI-generated heist may be one of the most significant corporate frauds involving deepfakes so far, the technology behind it is likely to emerge as the most substantial payment security threat facing CFOs and treasurers in 2024. Here’s why:
Deepfake fraud attempts have risen at an unprecedented rate
Deepfake fraud attempts surged in 2023: according to Onfido’s Identity Fraud Report 2024, there were 31 times as many deepfake attempts as in 2022, an astounding 3,000% year-over-year increase.
Furthermore, with the increasing availability of AI, face-swapping technology (which seamlessly replaces one person’s face in an image or video with another’s), and lip-synced video, creating deepfakes is becoming cheaper, easier, more accessible, and more scalable.
This trend is expected to be exploited over the next 12 months and beyond to facilitate payment fraud and coerce company employees into activities such as divulging confidential corporate information and customer data.
Harder-to-detect deepfakes pose a real threat to corporations
Picture yourself receiving a phishing email containing a deepfake video of your CEO or CFO instructing you to click on a malicious URL. Ask yourself: would you be able to detect that the video was fake or digitally altered?
Research suggests that many people mistake deepfake videos for authentic ones. In a 2022 iProov survey of 16,000 respondents across eight countries (the US, Canada, Mexico, Germany, Italy, Spain, the UK and Australia), 43% of respondents admitted they would not be able to tell the difference between a real video and a deepfake. Equally concerning, the survey revealed that only 29% of respondents knew what a deepfake was to begin with.
Moreover, findings from a study published in June 2023 in the Journal of Cybersecurity highlighted that participants, when asked to differentiate between AI-generated and real human faces, attained an overall accuracy rate of 62%.
As deepfakes become more advanced and pervasive, it will become increasingly difficult to spot fake images, text, audio and video, and to prevent their misuse for malicious purposes.
Previous scams have demonstrated how effectively deepfakes can defraud organizations of substantial sums, and criminals have used fabricated audio and video to humiliate or harass business leaders, including spreading false information to damage a company’s reputation. Deepfakes therefore pose a growing danger that could become the next big security threat to global corporations, particularly their treasury departments.
“Treasurers have been warned to be on their guard against the increasing use of deep fake technology that allows fraudsters to manipulate payment systems. Lower costs, increasing sophistication and greater ease of use are combining to create a real threat to treasury functions.”
This warning was issued at the recent Association of Corporate Treasurers (ACT) Treasury Forum, where experts from HSBC outlined how generative AI technology is being utilized to defraud corporate treasurers.
At the forum, Mark McDonald, Head of Data Science and Analytics at HSBC Global Research, shared his recent experience of generating a short video of himself speaking fluent Mandarin, with lip movements precisely synchronized to his voice. “In answer to the question of how we can know what is true and what is not, there is a real risk that what we call deep fakes are becoming a big problem.”
In August 2022, I authored an article for CTMfile highlighting the security and reputational risks that deepfake technology presents to CEOs, CFOs and treasurers, including safeguards against malicious deepfake threats.
This article serves as a cautionary reminder that the payment security and reputational risks associated with deepfakes, used to impersonate real business leaders and finance chiefs, are proliferating and can usher in a new wave of financial fraud, while also damaging the credibility of corporations.
For now, it is essential that treasurers, as the superintendents of payment security, develop a comprehensive understanding of the financial and reputational harm stemming from deepfake attacks and adopt preemptive and preventive payment security measures to shield their organizations and employees.
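One widely recommended preventive measure is out-of-band verification: payment instructions received over spoofable channels (video calls, email, phone) are held until confirmed through an independent channel, such as a call-back to a number already on file. The sketch below is a hypothetical illustration of such a policy check, not any specific company's system; the threshold, field names, and channel list are all illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative limit, in the company's base currency (an assumption).
CALLBACK_THRESHOLD = 10_000

@dataclass
class PaymentRequest:
    amount: float
    beneficiary_account: str
    requested_via: str            # channel the instruction arrived on, e.g. "video_call"
    known_beneficiary: bool       # is the account already on the approved vendor list?
    out_of_band_confirmed: bool   # confirmed via an independent channel (e.g. call-back)?

def release_allowed(req: PaymentRequest) -> bool:
    """Return True only if the request passes the verification policy."""
    needs_callback = (
        req.amount >= CALLBACK_THRESHOLD
        or not req.known_beneficiary
        # Channels where a deepfake or spoofed sender could issue the instruction:
        or req.requested_via in {"video_call", "email", "phone"}
    )
    # High-risk requests are released only after independent confirmation.
    return req.out_of_band_confirmed or not needs_callback

# A request resembling the Hong Kong case: large amount, unknown
# beneficiary accounts, instruction delivered over a video call.
suspicious = PaymentRequest(25_600_000, "HK-ACCT-1", "video_call", False, False)
print(release_allowed(suspicious))  # → False: hold until independently confirmed
```

The point of the design is that no single channel, however convincing the person on it looks and sounds, can release a high-risk payment on its own; confirmation must come from a second channel the fraudster does not control.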