Ensuring AI Remains a Force for Good

By Foteini Agrafioti & Alex LaPlante
Published January 8, 2021 | 3 min read

A new survey found that 93% of Canadian businesses experience barriers to implementing AI in an ethical and responsible way. Canada’s banking sector and financial regulators are well positioned to guide the safe and ethical development of AI in ways that can benefit all industries.

This has been a landmark year for artificial intelligence. In 2020, AI became a deeply ingrained part of society, providing useful everyday tools in our personal and professional lives. We are comfortable letting it recommend a new TV show, assist with our in-home fitness routines or unlock our smartphones through facial recognition.

On a broader scale, AI is helping us manage a national public-health crisis. From early-warning detection signals to contact tracing and Canada’s COVID-19 chatbot, AI provides a vital service, ensuring our communities are more livable and resilient through challenging times.

For businesses that create and deploy this technology, AI has become a driver of growth and productivity. According to PricewaterhouseCoopers, AI will add as much as US$16 trillion to the global economy by 2030 – almost as much as the United States produces today. It is also helping firms better understand and predict customer behavior in tumultuous economic environments.

Yet, many have rightfully questioned how data is being harnessed by multibillion-dollar enterprises, or whether AI will further entrench systemic bias, discrimination and misinformation, creating further divisions within our societies. Even more worrisome is the public perception that businesses are not upholding their core responsibilities of accountability and transparency.

This rings especially true for heavily regulated industries such as healthcare, transportation and banking, where the stakes are high, and trust, reliability, data protection and security are paramount.
Canada – and more specifically, its financial institutions – has an integral role to play in showing the world that businesses, governments and the scientific community can work together as one. Our nation recently joined an international partnership to promote the responsible advancement of AI.

With their extensive experience validating the safety of data models, Canada’s banking sector and financial regulators are in a prime position to guide the safe and ethical development of AI in ways that can benefit all industries.

This is because model validation – the process of ensuring that AI models are safe to use – has been a regulatory requirement in the finance industry for years. It helps the industry ensure that models, such as those detecting fraud, analyzing cyberthreats or calculating lending risk, perform as expected. It also identifies the limitations of these models and assesses their potential adverse effects.

For Canadian banks, which have the responsibility of upholding fair lending standards, ensuring that models treat underrepresented groups fairly is an important part of a model-testing process that has been in place for years.

A new survey found that 93% of Canadian businesses experience barriers to implementing AI in an ethical and responsible way, with many citing cost, time and lack of understanding as the fundamental issues.
To improve understanding about ethics in AI and remove long-standing barriers, Royal Bank of Canada and Borealis AI recently launched Respect AI, a program that includes open-source research code, tutorials, academic research and lectures that are available for individuals and businesses.

At the forefront of this program are guidance and research tools around issues such as bias and fairness, which, when embedded in algorithms and data-collection models, can be subtle yet more harmful than the most overt forms of discrimination.

The good news is that, unlike our own unconscious bias, we can uncover bias in models by auditing them in detail and at every step of the way. For Canada’s business sector to cultivate an AI ecosystem that’s free of bias, there must also be an ecosystem of responsibility, where accountability and trust are delivered in every decision. This includes:

  • Sharing industry knowledge and best practices, especially from regulated environments such as financial industries or health care, on how to approach responsible AI development.
  • Using technology to expose bias. It is imperative that companies audit their AI for bias not just on launch day, but every step of the way.
  • Diversifying the industry and our data. Bringing more diverse voices into the industry, and vigilance in ensuring that data is representative of our population, will help fuel the creation of technology that works for everyone.
  • Educating the public. People need to be aware of the positive implications – and risks – that AI will have on their lives. The more our future generations understand AI and its societal and ethical implications, the better prepared they will be to ask tough questions of our leaders.
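The auditing step above can be sketched as a simple fairness check. The following is a minimal illustration, not a prescribed industry method: it assumes a binary classifier’s predictions and a protected group label, and computes a demographic parity gap – the difference in approval rates across groups. All names, data and thresholds are hypothetical.

```python
# Minimal bias-audit sketch: compare positive-prediction ("approval")
# rates across groups. Illustrative only; real audits use richer
# metrics and run at training time and after every model update.

def approval_rate(predictions, groups, group):
    """Share of positive predictions for one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected) if selected else 0.0

def demographic_parity_gap(predictions, groups):
    """Largest difference in approval rates between any two groups."""
    rates = [approval_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: 1 = loan approved, 0 = declined
predictions = [1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(predictions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50 for this toy data
```

A gap near zero suggests groups are approved at similar rates; a large gap flags the model for closer review. Repeating such checks throughout the model lifecycle, not just on launch day, is the point of the bullet above.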

AI has the potential to help move our society forward, but only if it is developed ethically and responsibly. Solving bias in machines will take a human touch, and the help of social scientists, businesses and governments – and no country is better positioned than Canada to take the reins.

The article “Canada’s Opportunity to Ensure AI Remains a Force for Good” was originally published on December 23, 2020 in the Globe & Mail.
