
Deepfake technology poses security and reputational risks for CEOs, CFOs and treasurers

Imagine a voice over the phone that sounds just like your CEO asking for a money transfer. You know the CEO, and you’ve spoken to him before.

You take your CEO at his word and initiate the transfer, which is soon routed to criminal accounts all over the world. Only later do you realize that you (the CFO or treasurer), the CEO and your organization have fallen victim to cybercrime: criminals cloned your CEO’s voice and perpetrated a heist. Welcome to the scary world of “deepfake” technology.

In 2019, the CEO of a UK-based energy firm received a phone call from someone who sounded like his boss at the firm’s German parent company, asking him to transfer funds to a Hungarian supplier within the hour. Believing the request to be genuine, he wired an estimated $243,000 to a bank account in Hungary. As the Wall Street Journal reported, the call was allegedly a voice-spoofing attack.

In early 2020, a bank manager in Hong Kong received a call from the director of a company whose voice he knew, someone he had spoken to on several occasions. The director had good news. His company was about to acquire another company, so he needed the bank to authorize transfers worth US$35 million to enable the acquisition.

The director mentioned he had hired a lawyer named Martin Zelner to coordinate the acquisition. The bank manager could see in his inbox emails from both the director and Zelner confirming what money needed to move where.

Unaware that deepfake voice-cloning technology was being deployed, and with written confirmation in front of him, the bank manager concluded that everything was legitimate and began making the transfers, which were routed to several accounts across the world. In the blink of an eye, $35 million was stolen and vanished, in a scheme that involved at least 17 people.

Deepfake technology: definition, proliferation and risks

The word “deepfake” is composed of the terms “deep learning” and “fake” and refers to the use of artificial intelligence (AI) and machine learning (ML) to create, replicate, impersonate or doctor someone’s audio (voice cloning), video, image or text. It does so by taking a person’s existing text, pictures, video, audio, facial expressions or body movements and forging them into entirely new content that seems real and is difficult to identify as false.

Deepfake technology first appeared in 2017 and has since proliferated. According to VMware’s 2022 Global Incident Response Threat Report that surveyed 125 cybersecurity and incident response professionals, “Two out of three respondents in the report saw malicious deepfakes used as part of an attack, a 13% increase from last year, with email as the top delivery method.”

“Cybercriminals are now incorporating deepfakes into their attack methods to evade security controls,” said Rick McElroy, principal cybersecurity strategist at VMware.

The technology has become so capable and so accessible that almost anyone with a computer can fabricate videos in which you appear to look, speak and act. This deepfake or falsified content (also known as synthetic content or synthetic media) is increasingly being used for payment fraud and for manipulating company employees into actions such as sharing confidential corporate information and customer data.

Deepfakes also pose other risks. They have been used to make CEOs, CFOs, treasurers and other business executives appear to say or do things they never said or did, in order to humiliate or harass them, and they can fuel corporate disinformation campaigns that mislead the public. The impacts of such misuse can be financial, psychological and reputational, potentially moving an organization’s stock price or shifting consumer behaviour.

Safeguarding against deepfake threats

CEOs, CFOs and treasurers have a direct line into an organization’s finances, so they are likely to be a target of deepfake attacks. Here’s how they can prepare and guard their colleagues and organization against malicious deepfake threats:

1. Employee education and training: deepfake exploits, effects and detection (tell-tale signs)

Combatting deepfakes will require a combination of technology and education, and your employees will be your first line of defence. It is important to raise employees’ and key team leaders’ awareness of the most likely threat scenarios posed by deepfakes. Understanding the danger is half the battle; educating staff about the proliferation of digital impersonation and how to deal with it is equally important.

Offer training that brings your employees up to speed on deepfake exploits and their effects, including how to spot one. Employees should be trained to expect that cyber thieves will produce deepfakes of high-level company executives, and to verify or confirm any suspicious or urgent money transfer request that is made without prior notice or deviates from the norm; a simple verification workflow is sketched below. Criminals will always try to create a high sense of urgency to get their targets to act without thinking.
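One way to operationalize that verification step is to require an out-of-band callback, on a number already on file, before any large transfer is released. The Python sketch below is a hypothetical illustration only: the threshold, the `KNOWN_NUMBERS` directory and the `request_callback_confirmation` helper are assumptions for the example, not any real treasury system’s API.

```python
from dataclasses import dataclass

# Hypothetical directory of phone numbers on file, maintained independently
# of any incoming request (never taken from the request itself).
KNOWN_NUMBERS = {"ceo@example.com": "+1-555-0100"}

CALLBACK_THRESHOLD = 10_000  # assumed policy threshold, in base currency


@dataclass
class TransferRequest:
    requester: str    # e.g. the address the request arrived from
    amount: float
    beneficiary: str


def request_callback_confirmation(phone: str, request: TransferRequest) -> bool:
    """Placeholder: a human calls the number on file and confirms verbally.

    In practice this would plug into an approval workflow; here it just
    records the outcome of the call.
    """
    answer = input(f"Call {phone}: did the requester confirm "
                   f"{request.amount} to {request.beneficiary}? [y/N] ")
    return answer.strip().lower() == "y"


def approve_transfer(request: TransferRequest) -> bool:
    """Release a transfer only after out-of-band verification."""
    phone = KNOWN_NUMBERS.get(request.requester)
    if phone is None:
        return False  # unknown requester: always escalate, never pay
    if request.amount >= CALLBACK_THRESHOLD:
        # Call back on the number on file, not any number in the request.
        return request_callback_confirmation(phone, request)
    return True
```

The design point is that the callback number comes from an independently maintained directory, never from the incoming request itself, so an attacker who controls the inbound channel cannot also control the verification channel.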

Training should also cover other notable, tell-tale characteristics that can help employees spot deepfakes.

Vladislav Tushkanov, lead data scientist at Kaspersky, a global cybersecurity and digital privacy company, says, “Among signs of a deepfake, there are unnatural lip movements, poorly rendered hair, mismatched face shapes, little to no blinking, skin colour mismatches, errors in the rendering of clothes, or a hand passing over the face. However, an adversary may intentionally lower the video quality to hide these artifacts. To minimize the chance of hiring a fake employee, break job interviews into several stages involving not only HR managers but also people who will be working with a new employee. This will increase the chances of spotting anything unusual.” 

Panda Security, a cybersecurity company, also gives some tips to identify a potential audio or video deepfake. These include unnatural speech cadence, a robotic tone or poor audio or video quality, and lip movements that are out of sync with the speech or the voice.
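Some of these cues lend themselves to automated screening. The following is a toy sketch of the “little to no blinking” signal mentioned above; it assumes a per-frame eye-openness score has already been produced by some upstream face-analysis tool (an assumption, not a particular product’s output) and simply flags clips whose blink rate is implausibly low.

```python
from typing import Sequence

BLINK_THRESHOLD = 0.2     # eye-openness below this counts as closed (assumed)
MIN_BLINKS_PER_MIN = 8.0  # humans typically blink ~15-20 times/min; be lenient


def count_blinks(eye_openness: Sequence[float]) -> int:
    """Count transitions from open to closed eyes across video frames."""
    blinks, was_closed = 0, False
    for score in eye_openness:
        is_closed = score < BLINK_THRESHOLD
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks


def looks_suspicious(eye_openness: Sequence[float], fps: float) -> bool:
    """Flag a clip whose blink rate is implausibly low for a live human."""
    minutes = len(eye_openness) / fps / 60.0
    if minutes == 0:
        return False
    return count_blinks(eye_openness) / minutes < MIN_BLINKS_PER_MIN
```

Heuristics like this are weak on their own, and attackers can deliberately defeat them, so they belong alongside human verification, not in place of it.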

Companies need a proactive approach to locking down their cybersecurity vulnerabilities, and education and training are important tools for helping employees think critically about suspicious synthetic or manipulated content and minimise their chances of falling for a deepfake scam. Multi-factor authentication remains a must; a minimal example follows.
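As a concrete illustration of that multi-factor step, here is a minimal sketch of a time-based one-time password (TOTP) check using the open-source pyotp library. The enrolment and approval flow around it is an assumption for illustration, not a prescribed treasury control; the point is that a cloned voice cannot read back a code it does not possess.

```python
import pyotp

# Enrolment (done once, out of band): generate and store a shared secret.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for an authenticator app:",
      totp.provisioning_uri(name="treasurer@example.com",
                            issuer_name="ExampleCorp Treasury"))

# Verification (at payment-approval time): the approver reads the current
# code from their authenticator app before the transfer is released.
code = input("Enter the 6-digit code from your authenticator app: ")
print("verified" if totp.verify(code) else "REJECTED: code invalid or expired")
```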

2. Response plan

Organizations should have a deepfake incident response plan in place that can be activated the moment a deepfake is detected. The plan should detail the security and communications remediation steps to take once an incident occurs, and define who is responsible for which actions when a deepfake arises.

Developing such a plan ensures that your organization is prepared to respond quickly and appropriately to a deepfake attack. Remember, it is important to act swiftly and prudently, particularly when your organization is publicly targeted with a deepfake. Triggering a coordinated public relations response to control the narrative is the best way to protect your organization’s brand and reputation.

3. Technology and detection model: AI pitted against AI

Ironically, the same ML and AI techniques used to create deepfakes are the technologies companies are now harnessing to detect audio and video deepfakes. As detection models learn the critical cues associated with forged content, those lessons are quickly absorbed into the creation of new deepfake content. In the battle against deepfakes, AI is being pitted against AI.

Early identification of audio and video content with a high probability of alteration and manipulation will become a crucial battleground for the truth. As the algorithms and approaches improve, companies will continue to develop even more robust models and systems to meet the need for more trustworthy detection solutions. Detecting deepfakes will only evolve from here.
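To make “AI pitted against AI” concrete: at its core, a detector is a binary classifier trained to separate genuine media from synthetic media. The PyTorch sketch below is a toy model over 64x64 feature maps (for example, audio spectrograms or face crops); the architecture, input shape and random stand-in data are illustrative assumptions, not a production detection system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy binary detector: real (0) vs deepfake (1), over 64x64 single-channel
# feature maps such as audio spectrograms or face crops. Illustrative only.
detector = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # 64x64 input halved twice -> 16x16
)
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

# One training step on random stand-in data, to show the shape of the loop;
# a real detector would train on labelled genuine and synthetic media.
features = torch.randn(8, 1, 64, 64)  # batch of 8 feature maps
labels = torch.randint(0, 2, (8,))    # 0 = real, 1 = deepfake
optimizer.zero_grad()
loss = F.cross_entropy(detector(features), labels)
loss.backward()
optimizer.step()
```

Real systems train on large labelled corpora of genuine and generated media and must be retrained as generation techniques evolve, which is exactly the arms race described above.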

Conclusion

Deepfakes can be created using your conference presentations, media interviews or earnings calls, or from any audio or video footage that is already in the public domain. Remember, you don’t have to be a high-profile CEO or a public figure to become a victim of deepfake crime. Even a lesser-known senior leader or professional can be the subject of deepfake attacks, particularly if they manage large financial transactions.

For now, it is important to become acutely aware of deepfake security and reputational risks and to take proactive action to protect our organizations and employees. The onus is on us to remain vigilant and cognizant of deepfakes as a serious and damaging cyberattack vector.
