Advances in artificial intelligence (AI) and machine learning (ML) have led to increased adoption in the financial services sector. A prominent use of this technology is to assist in key compliance and risk functions, including the detection of fraud, money laundering, and other financial crimes, as well as trade manipulation, collectively referred to as “Risk AI/ML.”

As the use of these models grows, so do questions about managing risks associated with the models. In particular, regulators, financial institutions, and technology service providers have been looking at whether existing model risk management guidance (“MRM Guidance”)—which has traditionally been the regulatory regime applicable to managing model risk in the financial services industry—continues to be relevant for AI/ML models and, if so, how the guidance should be interpreted and applied to this new technology.

This research seeks to address that question, with the aim of fostering thought and dialogue among agencies, financial institutions, risk model vendors, and other entities interested in the performance, outputs, and compliance of models used to identify, mitigate, and combat risks in the financial services industry. However, it does not purport to address specific issues that may arise with other applications of AI/ML, such as consumer credit underwriting, or models incorporating the recent advances in generative AI technology.

Taking into account the unique aspects of AI/ML models, this research offers specific observations and recommendations regarding the application of MRM Guidance to Risk AI/ML models, including:

  • Risk assessment

    In assessing the risk presented by a model, it is important to recognize that AI/ML models are not inherently riskier than conventional models. A risk-tiering assessment must consider the targeted business application or process for which a model is used, as well as the model’s complexity and materiality (an illustrative tiering sketch follows this list). To assist in these assessments, regulators could clarify that the use of AI/ML alone does not place a model into a high-risk tier and publish further guidance to help set expectations regarding the materiality/risk ratings of AI/ML models as applied to common use cases.

  • Safety and soundness

    Due to the dynamic nature of Risk AI/ML models, extensive and ongoing testing focused on outcomes throughout the development and implementation stages of such models should be the primary means of satisfying regulatory expectations of safety and soundness. To that end, the development of technical metrics and related testing benchmarks should be encouraged (see the outcome-testing sketch after this list). Model “explainability,” while useful for understanding specific outputs of AI/ML models, may be less effective or insufficient for establishing whether the model as a whole is sound and fit for purpose.

  • Model documentation

    The touchstone for the sufficiency of model documentation should be what is needed for the bank to use and validate the model, and to understand its design, theory, and logic. Disclosure of proprietary details, such as model code, is unnecessary and unhelpful in verifying the sufficiency of a model and would deter model builders from sharing best-in-class technology with financial institutions.

  • Industry standards and best practices

    Regulators should support the development of global standards and their use across the financial services and regulatory landscape by explicitly recognizing such standards as presumptive evidence of compliance with the MRM Guidance and sound AI/ML risk mitigation practices. In addition, regulators should foster industry collaboration and training based on such standards.

  • Governance controls

    Regulators should use guidance to advance the use of governance controls, including incremental rollouts and circuit breakers, as essential tools in mitigating risks associated with Risk AI/ML models (both are sketched below).
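
To make the risk-tiering point concrete, the following minimal Python sketch shows one way an assessor might combine a model’s business application, complexity, and materiality into a coarse tier without treating the use of AI/ML as a risk factor in itself. The names, weights, and thresholds (ModelProfile, USE_CASE_WEIGHT, risk_tier) are hypothetical illustrations, not values drawn from the MRM Guidance.

```python
"""Illustrative sketch only: a hypothetical risk-tiering helper. Factor names,
weights, and thresholds are invented for illustration and are not part of any
regulatory guidance."""

from dataclasses import dataclass


@dataclass
class ModelProfile:
    use_case: str      # e.g. "aml_alert_triage", "fraud_scoring"
    complexity: int    # 1 (rules-like) .. 5 (deep ensemble), assessor-assigned
    materiality: int   # 1 (advisory only) .. 5 (drives automated decisions)


# Hypothetical adjustment by business application: the same algorithm can sit
# in different tiers depending on the process it supports.
USE_CASE_WEIGHT = {
    "aml_alert_triage": 1,   # augments human review
    "fraud_scoring": 2,      # influences real-time decisions
    "trade_surveillance": 1,
}


def risk_tier(profile: ModelProfile) -> str:
    """Combine complexity, materiality, and use case into a coarse tier.

    Note: being AI/ML-based is deliberately not a scoring factor on its own.
    """
    score = (profile.complexity + profile.materiality
             + USE_CASE_WEIGHT.get(profile.use_case, 2))
    if score >= 9:
        return "high"
    if score >= 6:
        return "medium"
    return "low"


if __name__ == "__main__":
    triage_model = ModelProfile("aml_alert_triage", complexity=4, materiality=2)
    print(risk_tier(triage_model))  # -> "medium": complex model, but advisory use
```

The point of the sketch is structural: the same algorithm can land in different tiers depending on the business process it supports and how material its outputs are.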
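
The emphasis on outcome-focused testing can be illustrated in the same spirit. The sketch below computes basic detection metrics for a binary alerting model on a labeled holdout set and compares them against benchmark thresholds; the BENCHMARKS values are placeholders invented for illustration, not industry standards.

```python
"""Illustrative sketch only: an outcome-focused test harness comparing a Risk
AI/ML model's detection metrics against hypothetical benchmark thresholds."""

from typing import Sequence


def detection_metrics(y_true: Sequence[int], y_pred: Sequence[int]) -> dict:
    """Compute basic outcome metrics for a binary suspicious/benign classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "detection_rate": tp / (tp + fn) if (tp + fn) else 0.0,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
    }


# Hypothetical benchmarks a validation team might set for ongoing testing.
BENCHMARKS = {"detection_rate": 0.85, "false_positive_rate": 0.10}


def passes_benchmarks(metrics: dict) -> bool:
    return (metrics["detection_rate"] >= BENCHMARKS["detection_rate"]
            and metrics["false_positive_rate"] <= BENCHMARKS["false_positive_rate"])


if __name__ == "__main__":
    labels      = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]   # ground-truth suspicious cases
    predictions = [1, 0, 1, 0, 0, 0, 0, 1, 1, 0]   # model alerts
    m = detection_metrics(labels, predictions)
    print(m, "pass" if passes_benchmarks(m) else "fail")
```

Run against each new model version and retraining cycle, a harness of this kind gives validators an outcome-based view of fitness that does not depend on explaining individual predictions.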
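
Finally, the governance controls named above can be sketched as a thin deployment wrapper: an incremental rollout that routes only a fraction of traffic to the new model, and a circuit breaker that halts the rollout if the model’s alert rate drifts beyond a tolerance. The class name, thresholds, and drift test are assumptions made for illustration only.

```python
"""Illustrative sketch only: a hypothetical deployment wrapper combining an
incremental rollout with a circuit breaker. Names and thresholds are invented."""

import random
from collections import deque


class GuardedRollout:
    def __init__(self, incumbent, candidate, traffic_share=0.10,
                 expected_alert_rate=0.05, tolerance=0.03, window=1000):
        self.incumbent = incumbent          # existing, validated model (callable)
        self.candidate = candidate          # new Risk AI/ML model under rollout
        self.traffic_share = traffic_share  # fraction of cases scored by candidate
        self.expected_alert_rate = expected_alert_rate
        self.tolerance = tolerance
        self.recent_alerts = deque(maxlen=window)  # rolling window of candidate outputs
        self.tripped = False                # circuit-breaker state

    def _circuit_open(self) -> bool:
        if len(self.recent_alerts) < self.recent_alerts.maxlen:
            return False                    # not enough observations yet
        rate = sum(self.recent_alerts) / len(self.recent_alerts)
        return abs(rate - self.expected_alert_rate) > self.tolerance

    def score(self, transaction) -> int:
        """Return 1 (alert) or 0 (no alert), routing a slice of traffic to the candidate."""
        use_candidate = (not self.tripped) and random.random() < self.traffic_share
        if use_candidate:
            alert = self.candidate(transaction)
            self.recent_alerts.append(alert)
            if self._circuit_open():
                self.tripped = True         # halt the rollout pending human review
            return alert
        return self.incumbent(transaction)
```

In practice the incumbent and candidate would wrap validated scoring pipelines, and a tripped breaker would trigger human review rather than an automatic reset.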

We invite a discussion of additional considerations, including the importance of examiner and industry training and collaboration, as well as openness by regulators to continue to refine the MRM Guidance as AI/ML technologies develop and standards emerge.

Implementing our recommendations would advance several goals. It would help regulators, financial institutions, and technology providers work together to better serve their shared purpose of protecting the safety and soundness of the financial system. At the same time, implementing the recommendations and continuing work in this space would promote the adoption of cutting-edge technologies in the industry, including those that combat such scourges as money laundering, illicit finance, and fraud.
