Treasury News Network

CFOs must get ahead of four enterprise ‘AI stalls’

Investments in AI are at an all-time high, and the market is set to double to US$300bn by 2027, but according to Gartner Inc., CFOs adopting the technology are unlikely to realise the anticipated enterprise benefits unless they mitigate four common stalls that hinder AI adoption.

“Gartner has been working with 80,000 executives around the globe to figure out the right use cases, to improve data, and to get teams ready for the new era of AI,” said Clement Christensen, Senior Director Analyst, Research, in the Gartner Finance practice. “However, as enterprises continue to pursue AI we see cracks emerging: four enterprise-level organisational challenges in particular that we call the ‘AI stalls’.”

AI stalls are common problems with the ways organisations use AI rather than problems with the technology itself. They can significantly delay both the adoption of AI and its return on investment. The four AI stalls are cost overruns, misuse in decision-making, loss of external trust, and a rigid mindset.

“These stalls will be pervasive across most organisations of all sizes and industries from now through 2030,” said Nisha Bhandare, Vice President Analyst, Research, in the Gartner Finance practice. “The time to course correct these stalls is now, and CFOs have a vital role in the enterprise in identifying and counteracting these stalls before they become a reality.”

1. Cost overruns

“There’s a uniqueness to AI costs,” continued Bhandare. “Because AI is so new, CFOs don’t really know how much it costs: they are learning as they go, and cost estimates end up off by 500-1,000%. Initial rollout costs, such as infrastructure, user licenses, hiring new talent and implementation, are something CFOs are aware of, and these are no different from other technologies.”

However, Bhandare explained that AI initiatives introduce two new buckets of costs, and CFOs must uncover these with each new investment.

First, there is the ongoing cost of maintaining the AI models: keeping them running, keeping them compliant, and cleansing the data they rely on. There are also surprise costs, such as the environmental cost of running large language models. Gen AI brings its own issue: a usage cost per query, per employee. This is where most of the volatility in cost projections arises, especially as organisations mature from basic to more advanced AI use cases.

The second bucket of costs new to AI initiatives is the “cost of experimentation”, or sunk cost. Unlike other technologies, AI follows an experimentation process: start small and keep training the model. Some experiments will fail, whether through low adoption or from choosing the wrong use case.
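To illustrate why the usage-based bucket drives so much of the volatility, a back-of-envelope cost model can be sketched in Python. All figures here are hypothetical assumptions for illustration, not Gartner data:

```python
# Hypothetical first-year cost model for a Gen AI rollout.
# Every number below is an illustrative assumption, not Gartner data.

def annual_ai_cost(initial_rollout, maintenance, queries_per_employee_per_day,
                   cost_per_query, employees, workdays=250):
    """Total first-year cost: the visible rollout bucket plus the ongoing
    maintenance and per-query usage buckets that are easy to underestimate."""
    usage = queries_per_employee_per_day * cost_per_query * employees * workdays
    return initial_rollout + maintenance + usage

# A budget that counts only the visible initial rollout...
visible = annual_ai_cost(500_000, 0, 0, 0.0, 1_000)

# ...versus the full picture once maintenance and per-query usage land
# (20 queries/day at a hypothetical $0.10 each, for 1,000 employees).
full = annual_ai_cost(500_000, 200_000, 20, 0.10, 1_000)

print(f"budgeted: ${visible:,.0f}")
print(f"actual:   ${full:,.0f}")
```

Even with these modest assumed figures, the ongoing buckets dwarf the delta a CFO might expect, and a shift from basic to advanced use cases moves only the per-query inputs, which is exactly where projections swing.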

2. Misuse in decision-making

“Most of the CFO’s enterprise colleagues, such as business decision-makers in marketing, sales, and supply chain, are excited about the benefits of automation, and they will likely overestimate AI’s intelligence,” commented Bhandare. “They’ll want to go straight to an automation solution, instead of through a trial period using more of a decision-support or augmentation approach.”

Good CFOs must pace their organisation’s adoption of AI to avoid the disillusionment that can arise from inflated expectations. There is a natural maturity progression from decision support, to augmentation, to automation with nearly any AI use case. Automating decisions too fast is likely to lead to bad results. It is also important to establish a process to periodically review the performance of automated decisions, because AI systems need to be retrained and adjusted regularly.
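The pacing described above can be pictured as a simple stage gate: a use case is promoted along the maturity progression only after a successful trial, and demoted when a periodic review shows degraded performance. A minimal sketch, in which the thresholds and the review metric are purely illustrative assumptions:

```python
# Hypothetical stage gate for pacing an AI use case through the
# decision support -> augmentation -> automation progression.
# Thresholds and the accuracy metric are illustrative assumptions.

STAGES = ["decision support", "augmentation", "automation"]

def next_stage(current, review_accuracy, promote_at=0.95, demote_at=0.80):
    """Promote only after a trial period clears the bar; demote when a
    periodic review shows automated decisions have degraded."""
    i = STAGES.index(current)
    if review_accuracy >= promote_at and i < len(STAGES) - 1:
        return STAGES[i + 1]
    if review_accuracy < demote_at and i > 0:
        return STAGES[i - 1]  # step back and retrain before re-promoting
    return current

print(next_stage("decision support", 0.97))  # promoted to "augmentation"
print(next_stage("automation", 0.70))        # demoted to "augmentation"
```

The point of the sketch is the periodic review loop: automation is never a one-way door, and a use case can move back down the ladder when the model drifts.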

3. Loss of external trust

As a significant point of contact for investors, regulators, and customers, CFOs have an important role in managing how an organisation’s use of AI is perceived externally. CFOs must ensure that the investments their company is making do not break the trust built with external parties. 

“When the data that AI uses to interact with external parties is biased or insecure, when the model is not updated to reflect current regulations, or when employees lack the skills to explain AI results to their customers, these failure points can lead to AI providing information that is incorrect, biased, or simply contrary to the company’s culture. This will erode the trust organisations have built with their stakeholders,” noted Christensen.

4. Rigid mindset

When properly implemented, AI will perform some tasks better than humans. Framing this simply as a set of lower-value tasks that human employees will no longer carry out frightens employees, who tend to perceive it as replacing humans with machines and resist the change.

“The mistake CFOs often make is that while they tell employees what they want them to stop doing, they don’t properly identify what they want them to start doing, or provide any support for new ways of working,” concluded Christensen. “Rather than just asking ‘Is the tool easy to use?’, ask, ‘How will staff react to the use of AI, and how are we planning for their response?’”

