It’s no secret that the implementation of Artificial Intelligence (AI) in the Financial Services sector is seeing rapid growth as competition builds to develop innovative products, optimise efficiency, and ultimately gain a competitive advantage.
A 2020 study conducted jointly by the World Economic Forum (WEF) and the Cambridge Centre for Alternative Finance (CCAF) reveals that most financial companies expect AI to play a significant role in their business in the near future. Of the 151 global companies participating in the survey, over three-quarters of respondents said they anticipate AI will be of “high or very high overall importance” to their business within two years, with 85% of respondents already implementing AI in some form.
In the Financial Services sector, AI primarily appears in the form of Machine Learning (ML) and Natural Language Processing (NLP). By harnessing massive amounts of data, AI can generate accurate predictions, inform better, data-driven decisions, and even interact with humans through various media. AI appears in robo-advice, chatbots, trade matching in investment banking, algorithmic trading, and data anomaly detection, to name a few. While applications of AI in Financial Services are wide-ranging, the WEF’s survey data reveals Risk Management as the “usage domain with the highest current AI implementation rates in the Financial Service sector,” with 56% of firms already implementing Artificial Intelligence solutions.
Implementation of AI in Risk and Compliance
Within Risk and Compliance functions, the most common use cases of AI are fraud detection and surveillance, preventative pattern analysis to find potential exploits in new datasets, and AI-enabled conduct risk management. AI can not only speed up these processes but do so with fewer errors and at a lower cost. For instance, London-based Fintech ComplyAdvantage uses AI to identify adverse media reports used in AML compliance. Livia Benisty, head of ComplyAdvantage’s financial crime unit, estimates their AI is capable of processing 150 million articles per month, or 6.5 million articles per day. For context, 50 traditional bank researchers working a full day (without breaks) can cover just 24,000 articles per day, under 0.4% of what the AI tool is capable of.
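At their core, fraud-detection and pattern-analysis use cases like these amount to flagging data points that deviate from an expected pattern. As a minimal, hypothetical sketch (the data, threshold, and function name are illustrative only, and production systems use far richer ML models than this), a simple statistical outlier check might look like:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, z_threshold=2.0):
    """Toy anomaly detector: flag transaction amounts more than
    z_threshold standard deviations from the mean. Illustrative
    only; real fraud-detection models learn far subtler patterns."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if abs(a - mu) > z_threshold * sigma]

# Illustrative card payments with one obvious outlier.
transactions = [12.5, 9.9, 11.2, 10.4, 13.1, 950.0, 10.8, 12.0]
print(flag_anomalies(transactions))  # → [950.0]
```

The appeal for compliance teams is the same as in the ComplyAdvantage example: once the rule (or model) is defined, it applies to millions of records at a speed no human team can match, leaving humans to investigate only the flagged cases.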
Statistics like these raise questions about how AI will impact employment in the Financial Services sector. It is logical to expect the automation of human tasks will lead to job reductions, but it turns out this is an oversimplification. In the short term, both incumbents and FinTechs anticipate job creation as a direct result of AI adoption (+3% among incumbents and +8% among FinTechs by 2022). However, by 2030, incumbents expect AI to shrink their workforce by 9%, while FinTechs expect AI to grow their organisations by almost 20%.
Investments in AI are generally profitable
Despite growing attention to AI in the Financial Services industry, only 40% of respondents in the WEF & CCAF survey currently allocate over 10% of their R&D expenditure to AI. This number is likely to grow: firms that invested in AI were more likely to report a significant AI-induced increase in profitability, and these returns show no signs of diminishing; if anything, they grow with further investment. The study identifies a critical point of R&D expenditure in AI of 10%, after which firms saw “constant perceived increase[s] in associated pay-offs.” Among firms that allocated over 40% of R&D spending to AI, over half experienced significant AI-induced increases in profitability. This apparent link between AI investment and profitability is likely due to scale effects of AI investment (technical infrastructure, applications, and data), and will likely encourage firms that are already high spenders on AI to invest even further.
Hurdles to AI Implementation
While AI implementation has the potential to transform Risk and Compliance functions, it does not come without risks and challenges. The CCAF and WEF’s joint study identified the three most commonly perceived hurdles to AI implementation:
Quality of Data and Systematic Bias
91% of respondents perceived data quality to be a hurdle to AI implementation. Data that is poorly structured, high dimensional, noisy, unbalanced, or simply incomplete takes longer to make use of, and may ultimately lead to inaccurate or biased results.
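In practice, the defects named above are usually caught by a screening pass before any model sees the data. As a hypothetical sketch (the field names and checks are invented for illustration, not drawn from the survey), such an audit might count basic defects per record:

```python
def audit_records(records, required_fields=("id", "amount", "date")):
    """Count basic data-quality defects in a batch of records:
    incomplete rows, duplicate ids, and implausible (negative)
    amounts. Illustrative checks only."""
    issues = {"incomplete": 0, "duplicate_id": 0, "negative_amount": 0}
    seen_ids = set()
    for rec in records:
        if any(rec.get(f) is None for f in required_fields):
            issues["incomplete"] += 1
        if rec.get("id") in seen_ids:
            issues["duplicate_id"] += 1
        seen_ids.add(rec.get("id"))
        if (rec.get("amount") or 0) < 0:
            issues["negative_amount"] += 1
    return issues

sample = [
    {"id": 1, "amount": 100.0, "date": "2020-01-01"},
    {"id": 1, "amount": 100.0, "date": "2020-01-01"},  # duplicate id
    {"id": 2, "amount": None, "date": "2020-01-02"},   # incomplete
    {"id": 3, "amount": -50.0, "date": "2020-01-03"},  # implausible
]
print(audit_records(sample))
```

A model trained on the raw `sample` above would silently absorb the duplicate and the impossible negative amount; surfacing these defects first is exactly the kind of work that makes data quality the most-cited hurdle.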
Access to Data
82% of respondents considered access to data to be an obstacle to AI implementation. Some firms cite the cost of data as a limiting factor in their ability to implement AI, while others lack the necessary infrastructure to collect data that is otherwise unavailable in the public domain.
Access to Talent
As applications of AI multiply across a wide range of industries, AI experts are becoming scarce and competition for them is growing. 84% of the surveyed firms consider access to talent to be a hurdle to AI implementation.
New Considerations in Risk and Compliance
While these are concerns Financial Services firms have as a whole, the implementation of AI introduces new challenges specific to Risk and Compliance. One is the potentially growing scope of responsibilities firms may face due to better predictive analytics. If a company is harnessing AI to make more accurate predictions, this may widen the range of outcomes that are “reasonably foreseeable” and, with it, the scope of what such companies may be liable for.
Another issue that arises is the explainability of models. AI models often run into the “black box” problem, where a model’s outputs are very difficult or even impossible to explain. This has the potential to leave AI implementation in a grey area, where satisfying legal and regulatory requirements becomes a huge challenge. At Aveni, this issue is tackled with a ‘Human+’ approach to AI adoption. By wrapping human oversight around AI outcomes, one can achieve material efficiency gains whilst keeping the ultimate decision-making within the human domain. This approach solves many of the ‘black box’ challenges within AI solutions and enables human and AI solutions to work in concert. Firms looking to implement complex models will need to avoid those that are too difficult to explain to most stakeholders. Overall, the lack of legal precedent and regulation around the use of Artificial Intelligence will be a challenge for Risk and Compliance teams to manage as their organisations look to implement newer technologies without overstepping any boundaries.
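The human-in-the-loop pattern described above can be made concrete as a simple routing rule: the model acts automatically only on cases where it is confident, and escalates everything else to a human reviewer. This is a hypothetical sketch of the general pattern, with an invented threshold and labels; it is not Aveni’s actual implementation:

```python
def route_decision(model_confidence, model_label, threshold=0.9):
    """Route a model output: handle high-confidence cases
    automatically, escalate ambiguous ones to a human reviewer,
    keeping final decision-making in the human domain.
    Threshold of 0.9 is illustrative only."""
    if model_confidence >= threshold:
        return ("auto", model_label)
    return ("human_review", model_label)

# A clear-cut case is handled automatically...
print(route_decision(0.97, "compliant"))      # → ('auto', 'compliant')
# ...while an ambiguous one is escalated to a person.
print(route_decision(0.62, "non-compliant"))  # → ('human_review', 'non-compliant')
```

The efficiency gain comes from the volume of cases cleared automatically, while every consequential or uncertain judgement still carries a human signature, which is what makes the arrangement easier to defend to a regulator than a fully automated pipeline.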
While there is clearly much uncertainty around just how AI will end up inserting itself into the Financial Services landscape, one thing is for sure: it’s coming, and it’s coming fast. As companies continue to explore AI, incumbents will see competition not only from up-and-coming FinTechs but also from technology giants like Google that already have a foothold in big data and employ some of the leading experts in the field of AI. The winners and losers of this race will largely come down to who can capture the best talent and who is best positioned in the ever-evolving regulatory environment.