Responsible and Explainable AI: Exploring the Future of Trading

Published January 6, 2023 | 5 min read

AI has enormous potential to add business value, but it can also carry ethical and safety risks

Key learnings

  • While AI has enormous potential to add business value, it can also carry ethical and safety risks.
  • Factors such as bias or low-quality, unrepresentative, sparse data can lead to sub-optimal outcomes, e.g., inadvertent discrimination by AI against individuals and groups.
  • Responsible AI is being encouraged through principles-based regulation and government guidelines. New regulations are coming down the line.
  • Explainability is a crucial component of trust in machine learning. But it is also context-dependent and may only be required for some algorithms.
  • It takes a variety of strategies to develop and deploy more usable AI models. A thoughtful approach to explainability, setting up proper governance, model validation, data collection processes, and ensuring model robustness and reliability all go a long way to helping humans trust the output of an AI.

When deployed at scale, AI has the potential to deliver significant business value. It can improve efficiency and performance, reduce business costs, and accelerate research and development. But the enormous potential of AI systems, which are often complex and opaque, can also translate into ethical and safety risks.

As AI is increasingly used for financial decision-making, it is vital to ensure that intelligent systems are built and deployed responsibly. Trustworthy or responsible AI promotes positive outcomes for everyone, from clients to the broader community, employees, and the business.

 

Developing AI systems responsibly: Practical considerations

The path to implementing responsible AI principles is not always obvious. Considerations range from identifying relevant definitions of key concepts (such as fairness), to dealing with bias that could engender inequalities in decision-making, to privacy and competition concerns. View our infographic for more details on the top ethical risks surrounding AI.

The challenge is that, as humans, we inevitably make biased decisions, so there is a risk of that bias being reflected in the data sets leveraged by machine learning. Racial bias, for example, has been detected in deployed AI systems1. To guard against this, data scientists carefully curate the training data used to build AI, and machine learning engineers design algorithms to help mitigate inadvertent discrimination against individuals and groups, or against certain types of investments and transactions in the financial world. Without these safeguards, bias can erode trust in AI outputs.
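One common way to quantify the kind of bias described above is a group-fairness metric such as the demographic parity gap: the difference in positive-outcome rates between groups. The sketch below is illustrative only, using hypothetical decisions and group labels, and is not any institution's actual fairness pipeline.

```python
# Illustrative sketch of one simple bias check: the demographic parity gap,
# i.e. the largest difference in positive-outcome rate between groups.
# The outcomes and group labels below are hypothetical.

def demographic_parity_difference(outcomes, groups):
    """Max difference in positive-outcome rate between any two groups.

    outcomes: list of 0/1 model decisions
    groups:   list of group labels, same length as outcomes
    """
    counts = {}
    for y, g in zip(outcomes, groups):
        n_pos, n = counts.get(g, (0, 0))
        counts[g] = (n_pos + y, n + 1)
    rates = {g: n_pos / n for g, (n_pos, n) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions for two groups:
# group A is approved 3/4 of the time, group B only 1/4.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 -> 0.50
```

A gap near zero suggests similar treatment across groups; a large gap is a signal to investigate the training data and model before deployment.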

 

Spotlight on model risk

Another critical area of AI risk, especially in finance, is model risk. Financial institutions that are committed to responsible AI tend to take a two-pronged approach to model risk management:

  • The “three lines of defense” strategy. To increase oversight and accountability across an organization, the “three lines of defense” approach involves having three sets of eyes on each model and empowering independent teams across the organization to scrutinize its inner workings. When applied to models, the “three lines” are model development, model validation, and internal audit. The model development team is responsible for building and testing models. The validation team is responsible for scrutinizing the work of the development team. Meanwhile, the audit team checks the work of the other teams to ensure they fulfill their responsibilities.
  • Following principles-based regulations. The external validation piece of model risk management in financial institutions relies on rules prescribed by government bodies. In Canada, for example, the Office of the Superintendent of Financial Institutions (OSFI) has issued formal guidance describing model risk management principles by which banks must abide2. In June 2022, the Canadian government proposed Bill C-27, known as the Digital Charter Implementation Act3, to strengthen Canada's private sector privacy law and create new rules for the responsible development and deployment of AI.

Other governments and independent bodies are also developing their own rules for the safe development and deployment of AI. The OECD, for instance, has outlined five principles for the development of AI. Meanwhile, the European Commission has issued ethics guidelines for trustworthy artificial intelligence, which have since informed a draft AI Act, detailing seven essential requirements:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination, and fairness
  6. Societal and environmental well-being
  7. Accountability
 

Building trust in AI systems

Meaningful accountability is critical to building trust. Rigorous testing and monitoring, a thoughtful approach to explainability, and proper governance, model validation, and data collection processes all help ensure model robustness and reliability.

According to IBM4, “Explainable AI is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI describes an AI model, its expected impact, and potential biases.”

The need for explainability may vary depending on the business context and use case. When AI is applied to make trading decisions, explainability enables end users to feel confident engaging with its predictions. Technical teams may also need to understand why certain decisions have been made, so they can troubleshoot, satisfy regulatory and oversight requirements, and better understand the model’s capabilities.
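To make this concrete, one widely used model-agnostic explainability technique is permutation importance: shuffle one input feature and measure how much model accuracy drops. A large drop means the model leans heavily on that feature. The toy model and data below are hypothetical, not the internals of any production trading system.

```python
import random

# Minimal sketch of permutation importance, a model-agnostic explainability
# technique: shuffle one feature and measure the resulting accuracy drop.
# The toy model and data are hypothetical.

def model(x):
    # Toy rule: the prediction depends only on feature 0
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)  # break the feature's relationship to the labels
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return accuracy(X, y) - accuracy(X_perm, y)

X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
y = [1, 1, 0, 0]

for f in (0, 1):
    print(f"feature {f}: importance {permutation_importance(X, y, f):.2f}")
```

Because the toy model ignores feature 1, shuffling it changes nothing and its importance is zero, while feature 0 carries all the predictive signal. The same idea scales to real models, where such scores help technical teams explain which inputs drove a decision.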

Explainability challenges

According to IBM’s AI Adoption Index 20225, organizations face numerous barriers when it comes to developing explainable and trustworthy AI:

  • 63% cite a lack of skills and training to develop and manage trustworthy AI
  • 60% struggle with AI governance and management tools that don’t work across all data environments
  • 56% are building models on data that has inherent bias

 

Cutting through the jargon

While there are challenges to building responsible AI (see the barriers above), having top-tier in-house talent to develop AI systems is critical. Given their size, scale, and history of innovation, banks are often well-positioned to recruit and retain this talent, and many financial institutions are advancing the adoption of ethical AI.

At RBC Capital Markets, the Borealis AI research team leads the charge in responsible AI.

It took a diverse set of skills to successfully develop and launch AidenTM, our AI-powered electronic trading platform that uses deep reinforcement learning. This experience and knowledge came from – and continues to be deployed in new ways by – a team of scientists, product and business experts, and engineers, all of whom operate within an extensive risk management framework. A targeted explainability layer is currently in the pipeline; it will further enhance client insights and help traders provide feedback on the accuracy and usefulness of the system’s explanations.

This combination of talent and strong practices around responsible AI and model governance enables RBC to leverage cutting-edge innovation to deliver improved trading results and insights for clients.

Our journey to develop AidenTM remains ongoing (AI systems have learned to keep learning), but critical elements of responsible AI have been identified. In 2020, Borealis AI released its RESPECT AI framework to advance the adoption of responsible AI.

According to that framework, the essential ingredients for responsible AI include:

  • Robustness testing
  • Fairness analysis
  • Proper model governance, including model monitoring, risk mitigation, and validation practices
  • Sophisticated data privacy safeguards
  • A thoughtful approach to explainability

Working in tandem, these elements help ensure accountability and therefore trust in AI outputs and systems.
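As a flavor of what the robustness-testing ingredient can look like in practice, the sketch below checks that a small perturbation of a model's input does not produce a large swing in its output. The model, perturbation size, and tolerance are all hypothetical stand-ins, not Borealis AI's actual validation suite.

```python
# Illustrative robustness check: a small change to the input should not
# produce a large change in the output. The model and thresholds below
# are hypothetical stand-ins.

def price_impact_model(order_size):
    # Toy stand-in for a trading model: predicted market impact grows with
    # the square root of order size (a stylized assumption for illustration)
    return 0.01 * order_size ** 0.5

def is_robust(model, x, eps=1e-3, tol=1e-2):
    """Check that perturbing the input by +/- eps moves the output less than tol."""
    return abs(model(x + eps) - model(x - eps)) < tol

print(is_robust(price_impact_model, 100.0))  # True: output is locally stable
```

Real robustness suites extend this idea to adversarial perturbations, distribution shift, and stress scenarios, but the underlying contract is the same: small, plausible input changes should yield stable, explainable outputs.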

To learn more about how AidenTM delivers powerful trading intelligence at RBC Capital Markets, please visit rbccm.com/aiden. And to discover more about the research behind AidenTM from the Borealis AI team, head to https://www.borealisai.com/research-blogs/aiden-reinforcement-learning-for-order-execution/.


1 https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software
2 https://www.crisil.com/en/home/our-analysis/reports/2017/11/canada-aligns-osfi-e-23-guidelines-on-model-risk.html
3 https://www.nortonrosefulbright.com/en/knowledge/publications/55b9a0bd/bill-c-27-canadas-first-artificial-intelligence-legislation-has-arrived
4 https://www.ibm.com/watson/explainable-ai
5 https://www.ibm.com/downloads/cas/GVAGA3JP


Tags: AI, Aiden, Ethics, Explainability, Responsibility