The BIS report notes that AI use is becoming prevalent in financial institutions and now touches on many of their business activities. Across the financial ecosystem it is increasingly being used not only to enhance internal productivity but also to support critical business areas.
This widespread adoption of AI is driving regulatory concern about the explainability of the underlying AI models, particularly for activities that are client-facing or that, such as underwriting or capital requirements, represent core functions of financial institutions.
The report includes an illustrative sample of potential AI use cases, together with the complexity of the underlying models and the level of explainability each is likely to require.
| Function | Complexity of model | Level of explainability needed |
| --- | --- | --- |
| Decision-tree-based email classification | Low | Low |
| Customer support chatbots | Low-Medium | Low-Medium |
| AML / CFT / Fraud detection | Medium-High | Medium-High |
| Document summarization / classification | High | Low |
| Credit / insurance underwriting | High | High |
| Financial forecasting | Low | High |
The lack of explainability for most of these applications would trigger regulatory concern, and for at least some of them could have either systemic or prudential consequences. In addition, the firms’ responsibilities towards their customers, including issues touching on fairness and financial inclusion, could be affected by models and model outcomes.
According to the BIS, supervisory authorities “generally expect firms to be able to explain AI models” and are “unlikely to trust the results of an AI model if its results cannot be understood.”
Existing model risk management (MRM) standards and requirements already cover explainability issues, whether explicitly or implicitly, but they do so at a very high level, and a clearer articulation of explainability in connection with AI would be helpful.
The paper points to the following examples of MRM guidelines from around the globe.
| Jurisdiction | Authority | Document | Issued |
| --- | --- | --- | --- |
| Canada | Office of the Superintendent of Financial Institutions (OSFI) | Draft guideline E-23 – Model risk management | November 2023 |
| Japan | Financial Services Agency of Japan (FSA) | Principles for model risk management | November 2021 |
| UAE | Central Bank of the United Arab Emirates (CBUAE) | Model management standards | November 2022 |
| UK | Prudential Regulation Authority (PRA) | Model risk management principles for banks | May 2023 |
| US | Federal Reserve Board / Office of the Comptroller of the Currency (FRB/OCC) | Supervisory guidance on model risk management | April 2011 |
| US | Office of the Comptroller of the Currency (OCC) | Model risk management (Comptroller's Handbook) | August 2021 |
The paper draws attention to areas of shared focus across these standards, including:
- governance;
- transparency (in model development / documentation);
- validation;
- deployment and monitoring;
- independent review / internal audit.
Explainability challenges are often amplified when third-party AI tools are used. The BIS cites the 2024 BoE and FCA firm survey, which found that half of respondents had “only a partial understanding of the AI technologies they use due to the use of third-party models.” In this context the BIS points out that regulatory expectations around explainability remain unchanged when third-party tools are used.
The paper outlines some of the limitations of existing explainability techniques and suggests that, as a result, a requirement for an AI model to be fully explainable could prove challenging for complex models involving large numbers of parameters interacting in a non-linear way, because conclusive attribution of the model output to specific combinations of input data is simply impractical.
| Limitation | Description |
| --- | --- |
| Inaccuracy | Explanations may not faithfully represent an AI model’s actual decision |
| Instability and sensitivity | Small changes to data input can lead to drastically different explanations |
| Inability to generalize | Explanations may not hold true when generalized to a broader population/data set |
| Non-existence of ground truth | There are no universally accepted metrics to assess the correctness or completeness of explanations |
| Misleading interpretations | Misleading explanations can appear plausible |
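To make the “instability and sensitivity” limitation concrete, the following sketch is a hypothetical illustration (not drawn from the BIS report) using scikit-learn’s permutation importance: the same model is “explained” twice, once on the original data and once on a slightly perturbed copy, and the resulting feature rankings can differ.

```python
# Hypothetical illustration (not from the BIS report): post-hoc explanations
# such as permutation importance can shift when the input data is perturbed
# only slightly, echoing the "instability and sensitivity" limitation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic credit-style data: 10 features, a handful genuinely informative.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Explain the model on the original data and on a slightly perturbed copy.
rng = np.random.default_rng(1)
X_perturbed = X + rng.normal(scale=0.05, size=X.shape)  # small input noise

imp_original = permutation_importance(model, X, y, n_repeats=10,
                                      random_state=0).importances_mean
imp_perturbed = permutation_importance(model, X_perturbed, y, n_repeats=10,
                                       random_state=0).importances_mean

# Compare the feature rankings produced by the two "explanations".
print("ranking on original data :", np.argsort(imp_original)[::-1])
print("ranking on perturbed data:", np.argsort(imp_perturbed)[::-1])
```

How much the rankings shift depends on the model and the data, but the exercise illustrates why supervisors may hesitate to treat such post-hoc attributions as conclusive evidence of how a complex model reaches its decisions.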
The authors of the report suggest that a good regulatory approach could involve tailoring specific explainability requirements to the different levels of risk present in specific AI use cases.
Part of this approach might be an “explicit recognition of possible trade-offs between explainability and model performance,” in which the enhanced performance of complex models is “weighed against the consequences of insufficient explainability.” This should be coupled with the introduction of adequate safeguards, which may include:
- frequent testing;
- ongoing monitoring;
- alternative enhanced risk management measures;
- data governance;
- circuit breakers; and
- human oversight.
The report concludes by suggesting that action by financial authorities is “imperative” in order to manage and mitigate the risks associated with AI adoption. Such action could include new requirements, safeguards and standards. But, so long as the intrinsic risks and their consequences are recognized and effectively managed, there may ultimately be a need to accept “trade-offs between explainability and model performance.”
Finally, of course, there is the human resource challenge, one also faced by AI adopters across all industries and areas: securing specialists or upskilling existing employees is “no trivial task.”

