Part 1 (Payments Fraud): Top areas of focus for corporate treasury in 2026

This is the first article in a three-part series.

As 2026 unfolds amidst a backdrop of varied uncertainties—an AI-fuelled bubble, U.S. interest rates, geopolitical tensions, and tariffs and trade—CEOs, CFOs, treasurers, and payments executives find themselves at a decisive moment. Reflecting on the focus areas and challenges that shaped 2025 is essential, as many of these themes will continue into the new year, accompanied by new complexities and opportunities.

Throughout 2025, CTMfile remained a trusted source for payments and treasury management insights, offering newsworthy roundups, engaging articles, thought-provoking interviews, and industry-specific videos and podcasts that resonated with our audience.

In this first part of our three-part series, we examine the escalating threat of payments fraud—an area that will demand heightened attention from corporate finance, treasury, and payments professionals in 2026. As the threat becomes increasingly technology-driven, organizations must prepare for a landscape where prevention and detection require far more than traditional controls.

Payments fraud is no longer a simple transactional risk. It has become sophisticated, pervasive, intricate, deceptive, expensive, and relentless—a hidden force that operates in the shadows and often goes unnoticed until the fraud has been perpetrated. What once appeared as isolated incidents has evolved into a coordinated, cross-channel threat that exploits human behaviour, digital vulnerabilities, and operational blind spots across the treasury and finance function.

With cybercriminals becoming increasingly inventive—and unconstrained by geography, cost, or technology—2026 is expected to bring a significant rise in both established and emerging forms of payments fraud. From AI-generated social-engineering attacks to deepfake-enabled impersonation, threat actors are accelerating their capabilities. This year will require corporates to rethink their defences, strengthen their controls, and build more resilient payment environments to stay ahead of the growing threat.

FraudGPT and other malicious chatbots to accelerate payments fraud

Malicious generative AI tools such as FraudGPT, WormGPT, and jailbroken chatbots are rapidly reshaping the payments fraud landscape. These models circumvent safety guardrails and give criminals the ability to generate polished phishing emails, convincing vendor change requests, and highly targeted social engineering scripts with unprecedented accuracy.

FraudGPT, sold on the dark web, provides cybercriminals with an accessible toolkit for producing undetectable malware, phishing pages, exploit guides, and scam content. This marks a significant evolution from earlier business email compromise (BEC) attempts, enabling bad actors to scale attacks and tailor them far more effectively to treasury, accounts payable (AP), procurement, and vendor management teams responsible for payment processing.

Beyond phishing, attackers are using malicious tools to jailbreak legitimate chatbots, automate malware development, and create deepfake visuals designed to bypass onboarding and identity-verification checks. Criminal adoption may be gradual, but the trajectory is unmistakable: these tools are lowering the barrier to entry while increasing the realism and volume of attacks. With AI-powered chatbots becoming harder to detect, treasury, AP, and payments teams must assume that future fraud attempts will be hyper-personalized and increasingly advanced. Strengthening controls around vendor onboarding, payment instruction changes, and real-time anomaly detection is now essential to stay ahead as harmful chatbots accelerate payments fraud.
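To make that last point concrete, the sketch below shows one way a treasury or AP team might encode simple rule-based holds on outgoing payments, flagging recently changed beneficiary details, first-time payees, unusually large amounts, and email-delivered instructions for out-of-band verification. It is a minimal illustration in Python, not a reference implementation: the PaymentInstruction record, the flag_for_review helper, and the thresholds are assumptions, and production screening would run inside a TMS or ERP workflow on far richer data.

```python
# Illustrative sketch only: a minimal rule-based screen for outgoing payments.
# The record layout, helper name, and thresholds below are assumptions for
# illustration; production screening would live inside a TMS/ERP workflow.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class PaymentInstruction:
    vendor_id: str
    amount: float
    beneficiary_account: str
    bank_details_changed_on: date | None = None  # last change to the vendor's bank details
    requested_via_email: bool = False            # instruction arrived by email rather than a portal


def flag_for_review(p: PaymentInstruction, history: dict[str, list[float]]) -> list[str]:
    """Return the reasons (if any) a payment should be held for out-of-band verification."""
    reasons = []
    past = history.get(p.vendor_id, [])

    # Rule 1: bank details changed recently -- a classic BEC / vendor-impersonation pattern.
    if p.bank_details_changed_on and (date.today() - p.bank_details_changed_on) < timedelta(days=30):
        reasons.append("beneficiary bank details changed in the last 30 days")

    # Rule 2: amount well above the vendor's historical average.
    if past and p.amount > 3 * (sum(past) / len(past)):
        reasons.append("amount exceeds 3x this vendor's historical average")

    # Rule 3: instruction changes received by email need callback verification.
    if p.requested_via_email:
        reasons.append("instruction received by email; verify via a known phone number")

    # Rule 4: no payment history at all -- treat as a first-time payee.
    if not past:
        reasons.append("first payment to this vendor")

    return reasons


# Example: a large first-time payment whose bank details were 'updated' by email days ago.
payment = PaymentInstruction(
    vendor_id="VEND-0042",
    amount=250_000.00,
    beneficiary_account="GB00TEST00000000000000",
    bank_details_changed_on=date.today() - timedelta(days=3),
    requested_via_email=True,
)
for reason in flag_for_review(payment, history={}):
    print("HOLD:", reason)
```

Rule-based holds like these are deliberately simple; the goal is to force an out-of-band verification step, such as a callback to a known contact, before funds move.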

Cybercriminals will intensify efforts to breach the personal digital lives of corporate executives

As noted in the Ponemon Institute’s Digital Executive Protection Research Report 2025 (commissioned by BlackCloak), corporate executives “[o]ften transition from their homes to the office environment using personal devices that are not covered by corporate security.”

Because this vulnerability presents an attractive entry point, cybercriminals are ramping up attacks on executives, board members, and key employees. Increasingly, they are shifting their attention to these individuals’ personal digital spaces, infiltrating home networks and unsecured personal or family devices with malware and ransomware to extract sensitive, high-value information and turn it into a financial advantage.

The Ponemon Institute’s research illustrates the extent of the issue. Between 2023 and 2025, reported cyberattacks on executives’ personal digital lives rose from 43% to 51%. Among those affected in 2025, 22% faced seven to ten such attacks—a 32% rise compared with 2023. Concern remains elevated, with 62% of respondents anticipating continued or increased targeting of executives’ personal digital assets.

BEC, account takeover (ATO), and deepfakes poised to be the greatest fraud risks

BEC, account takeover (ATO), and deepfakes are emerging as the most serious fraud threats facing corporates and banks heading into 2026. According to SecureTreasury™ (securetreasury.com), the cloud-based payments security training programme for corporate treasury teams, BEC involves bad actors exploiting legitimate business email accounts to initiate unauthorized fund transfers.

The Association for Financial Professionals® (AFP) 2025 Payments Fraud and Control Survey Report (sponsored by Truist) reinforces this risk, with 63% of respondents reporting their organization had been targeted by BEC fraud.

Strategic Treasurer’s newly released 2025 Treasury Fraud & Controls (TF&C) Survey Report (underwritten by Bottomline) mirrors this view from a banking perspective—67% of banks cite BEC as one of their top three fraud risks over the next two years.*

ATO is escalating just as rapidly. According to the 2025 TF&C Survey, check fraud (54%), ATO (51%), and deepfakes (31%) follow BEC as the top fraud concerns among banks. On the corporate side, a staggering 83% of organizations surveyed by Abnormal Security experienced at least one ATO incident in the past year. Additionally, Sift’s Q3 2025 Digital Trust Index revealed that ATO attacks targeting the fintech and finance industries have surged 122% year-over-year. With generative AI lowering the barriers to social-engineering scams and credential theft, ATO is expected to rise significantly in 2026.

Deepfakes represent the newest—and potentially most destabilizing—dimension of payments fraud. They now impersonate executives, colleagues, and business partners with alarming realism. Losses linked to deepfake-enabled fraud have exceeded $1.56 billion, with more than $1 billion occurring in 2025 alone, according to fresh data released by Surfshark and compiled from the AI Incident Database and Resemble.AI. With voice cloning, face-swapping, and image-to-video tools freely available, treasury teams operating under time pressure and within digital workflows face an increasingly deceptive threat landscape in the year ahead.

Fraud attempts involving AI-generated content will increase

The Strategic Treasurer 2025 Treasury Fraud & Controls Survey shows a clear and emerging trend: fraud attempts leveraging AI-generated content—such as fabricated documents, cloned voices, and manipulated videos—are no longer theoretical. They are already appearing across the financial ecosystem, and more importantly, the frequency of these attempts is beginning to rise. The report notes that “a total of 30% of respondents said they have seen fraud attempts using AI-generated content,” with 14% confirming that such attempts are occurring and increasing within their organizations. Another 16% have encountered AI-driven fraud, though they have not yet seen a rise in frequency.

Equally concerning is the uncertainty that surrounds this threat. Thirty-three percent of respondents were unsure whether they had encountered AI-generated fraud attempts, highlighting the stealth and realism of these attacks. Many organizations may already be exposed without realizing it. As the survey notes, “Of those that ‘know,’ the percentage that saw attempts leveraging AI stands at 45%.” This suggests that once organizations gain visibility into AI-based fraud, the numbers are significantly higher than general observations indicate.

Meanwhile, 37% of respondents reported not yet seeing AI-powered fraud attempts—but this likely reflects the current maturity and adoption curve of attackers, not long-term security. As AI tools become more accessible and more adept at producing hyper-realistic impersonations, corporates and financial institutions should expect these figures to shift rapidly. The report’s own cautionary note issues a pointed warning: “This is an area to watch in the next few years as AI access increases and AI capabilities grow in sophistication.”

Overall, the findings indicate an approaching inflection point. As generative AI evolves, the barriers to creating highly convincing fraudulent artifacts will continue to fall—making AI-enabled fraud a rising threat to treasury teams, banks, fintechs, and corporate payment operations. The combination of increasing incidents, high levels of uncertainty, and rapid technological advancement supports one overarching conclusion: AI-generated fraud attempts will escalate in 2026 and open the door to new vulnerabilities that threat actors can exploit to perpetrate payments fraud.

Corporate end users sending ACH payments must comply with Nacha’s 2026 Fraud Monitoring Rule

Nacha’s upcoming 2026 Fraud Monitoring Rule represents one of the most consequential compliance shifts for corporate treasury teams in recent years. Although financial institutions remain the primary compliance gatekeepers, all corporate end users that send ACH payments must adopt risk-based processes and procedures to identify potential fraudulent transactions—including business email compromise, vendor impersonation, and unauthorized payment attempts.

The rule will be rolled out in two phases. Phase 1, effective March 20, 2026, applies to corporate end users that send 6 million or more ACH transactions annually. Phase 2, beginning June 22, 2026, extends the requirement to all remaining corporate end users that send ACH transactions, regardless of annual volume.
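To illustrate what a “risk-based process” might look like in practice, the sketch below scores originated ACH entries against a handful of common fraud indicators and routes high-scoring entries to manual review before release. It is a minimal, hypothetical example: the field names, weights, and thresholds are assumptions, not Nacha requirements, and the rule itself leaves the specific methodology to each originator.

```python
# Illustrative sketch only: a simple risk-scoring screen for originated ACH credits.
# Field names, weights, and thresholds are assumptions for illustration; they are
# not drawn from the Nacha rule, which leaves the methodology to each originator.
from dataclasses import dataclass


@dataclass
class AchEntry:
    receiver_id: str
    amount: float
    account_changed_recently: bool   # receiver's account details updated within the review window
    first_time_receiver: bool        # no prior entries originated to this receiver
    outside_business_hours: bool     # submitted outside the originator's normal processing window


# Placeholder weights; each originator would calibrate these to its own risk appetite.
WEIGHTS = {
    "account_changed_recently": 40,
    "first_time_receiver": 30,
    "outside_business_hours": 15,
}
LARGE_AMOUNT_THRESHOLD = 100_000.00
REVIEW_THRESHOLD = 50  # entries scoring at or above this are held for manual review


def risk_score(entry: AchEntry) -> int:
    """Accumulate a simple additive score from the entry's risk indicators."""
    score = 0
    if entry.account_changed_recently:
        score += WEIGHTS["account_changed_recently"]
    if entry.first_time_receiver:
        score += WEIGHTS["first_time_receiver"]
    if entry.outside_business_hours:
        score += WEIGHTS["outside_business_hours"]
    if entry.amount >= LARGE_AMOUNT_THRESHOLD:
        score += 25
    return score


def route(entry: AchEntry) -> str:
    """Hold high-scoring entries for review; release the rest straight through."""
    return "manual review" if risk_score(entry) >= REVIEW_THRESHOLD else "release"


# Example: a large, first-time credit to a recently changed account is held.
suspect = AchEntry(
    receiver_id="RCV-881",
    amount=180_000.00,
    account_changed_recently=True,
    first_time_receiver=True,
    outside_business_hours=False,
)
print(route(suspect), "| score:", risk_score(suspect))
```

In practice, many organizations will rely on their bank’s or treasury system’s screening tools rather than bespoke code; the point of the sketch is the underlying logic of layering simple indicators and escalating exceptions for review.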

For treasury teams, these obligations reshape key components of payment operations, from vendor onboarding and payment instruction changes to AP/AR workflows and payments fraud monitoring. Treasury leaders sit at the centre of every corporation’s banking relationship, and these sweeping requirements for ACH “push payments” place corporate treasury squarely at the forefront of the fight against fraud. Preparing for compliance will require close coordination across treasury, accounting, IT, compliance, and legal teams.

Nacha emphasizes that failure to comply could expose organizations to increased fraud, financial losses, regulatory penalties, reputational damage, or even disruptions in banking services. For corporations of all sizes that rely on the ACH network for payroll, vendor payments, collections, or disbursements, the 2026 Fraud Monitoring Rule is not merely a regulatory update—it is a critical step toward strengthening payment security and reducing ACH-based fraud risk.

The payments fraud landscape has entered a period of profound transformation—defined by faster attacks, AI-engineered deception, and a widening range of vulnerabilities across both corporate and personal digital environments. As cybercriminals adopt generative AI and increasingly advanced social engineering tactics, traditional fraud defences are no longer sufficient. The escalation of BEC, ATO, deepfake impersonation, and harmful chatbots—combined with new compliance mandates such as Nacha’s 2026 Fraud Monitoring Rule—signals a pivotal moment for corporate treasury.

In this environment, payments fraud prevention must shift from reactive controls to proactive intelligence. Treasury teams will need to enhance fraud-risk governance, modernize vendor and payment instruction verification, and deploy technologies capable of detecting anomalies in real time.

Equally critical is strengthening the human layer of defence. Focused payments security training can equip treasury, AP, finance, and executive leadership to identify AI-driven deception, validate digital communications, and respond decisively to unusual or high-risk requests.

Payments security has become a defining measure of a treasury’s foresight and resilience. Organizations that modernize their payments fraud prevention techniques—blending advanced technologies with disciplined processes and informed teams—will be best positioned to outpace increasingly intelligent fraud attempts. As 2026 unfolds, corporate treasuries that adapt with speed and vigilance will not only reduce losses but reinforce trust across the payments landscape and safeguard the financial integrity of their organizations.

 

* Disclosure: Strategic Treasurer owns CTMfile.
