Treasury News Network


AI must be questioned to avoid next financial crisis

New technologies such as artificial intelligence (AI) are set to bring automation and greater analytic capabilities to global financial systems. Many jobs will be lost – approximately 20 per cent of jobs in the UK and 26 per cent of jobs in China, according to PwC research. But PwC also predicts that new technologies will create as many jobs as they replace; in China, it estimates that AI could create as many as 90 million more jobs than it displaces over the next two decades.

Nonetheless, businesses are right to take a cautious approach to their adoption and use of AI and new machine learning (ML) technologies. Research shows that seven out of 10 firms say they are putting safeguards in place to ensure they use these technologies safely and responsibly. According to a study by Accenture, most AI adopters – which now account for 72 per cent of organisations globally – conduct ethics training for their technologists (70 per cent) and have ethics committees in place to review the use of AI (63 per cent).

Big data could mean big problems

It's just as well organisations are taking a cautious approach to AI – blind faith in the ability of algorithms and big data sets to give us an accurate picture of reality could be misguided. In his article, Big Data and Machine Learning Won’t Save Us from Another Financial Crisis, Stephen Blyth, Professor of the Practice of Statistics at Harvard University, says “it is imperative we question the confidence placed in the new generation of quantitative models, innovations which could, as William Dudley warned, 'lead to excess and put the [financial] system at risk'.”

Blyth also draws a parallel between the recent advances in ML and algorithmic trading and “the explosive growth of financial engineering prior to the crisis”. In fact, he argues that we might be at risk of repeating some of the same mistakes and gives an example of how even a 0.1 per cent discrepancy rate in a very large data set can produce significant errors. In short, he says that “big data does not preclude big problems”.
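The scale of the problem is easy to underestimate. As a minimal sketch – not taken from Blyth's article, and using illustrative record counts and a hypothetical per-record cost that we have chosen purely for demonstration – a seemingly tiny 0.1 per cent discrepancy rate grows into a large absolute number of flawed records as the data set grows:

```python
# Illustrative sketch: how a small discrepancy rate scales with data-set size.
# The record counts and the per-record misstatement below are hypothetical,
# chosen only to show the arithmetic, not figures from the article.

def expected_errors(total_records: int, discrepancy_rate: float = 0.001) -> int:
    """Expected number of flawed records at a given discrepancy rate (0.001 = 0.1%)."""
    return int(total_records * discrepancy_rate)

for n in (1_000_000, 100_000_000, 10_000_000_000):
    errors = expected_errors(n)
    print(f"{n:>14,} records -> {errors:>10,} expected flawed records")

# If each flawed record misstates a position by, say, $10,000 (hypothetical),
# a billion-record data set carries roughly:
misstatement = expected_errors(1_000_000_000) * 10_000
print(f"aggregate misstatement: ${misstatement:,}")
```

The point of the sketch is simply that a per-record error rate that looks negligible in isolation can still translate into millions of bad inputs – and a material aggregate distortion – once models consume billion-record data sets.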

He concludes that those using AI technologies will also have to use common sense, humility and judgement in assessing the reliability of large data sets: “More than ever, judgment – necessarily subjective and based on experience – will play a significant role in moderating over-reliance on and misuse of quantitative models. The judgment to question even the most successful of algorithms, and to retain humility in the face of irreducible uncertainty, may prove the difference between financial stability and the 'horrific damage' of another crisis.”

Strong ethical framework

The report from Accenture found that AI leaders, such as chief information officers, chief technology officers, and chief analytics officers, recognise that “oversight is not optional for these technologies”. The report notes that “AI now has a real impact on people’s lives which highlights the importance of having a strong ethical framework surrounding its use.”

Rumman Chowdhury, of Accenture Applied Intelligence, underlines the need for those using AI technologies to be accountable and be able to understand and explain the underlying system. She says: “Organizations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people. These are positive steps; however, organizations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm’. They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.”
