AI is one of the most transformative technologies impacting the world, but organizations need to protect themselves and their clients from fraud and other security risks. That was the strong message from panelists at RBC’s Sustainable Business Conference, where Bobby Grubert, Head, Digital Solutions & Client Insights, chaired a panel on the social and governance implications of the technology.
Europe takes the lead on AI legal framework
The panel discussion took place a day after Canadian Prime Minister Justin Trudeau announced a $2.4 billion investment in the country’s artificial intelligence sector.
Ann Cavoukian, Executive Director at Global Privacy and Security by Design, welcomed the Canadian government’s investment, but expressed concern that it included no specific mention of measures to ensure privacy.
While she was Information & Privacy Commissioner of Ontario, Cavoukian created Privacy by Design, a framework to embed privacy into systems engineering, which subsequently became an international standard.
Cavoukian points out that the European Union recently introduced the AI Act, the world’s first comprehensive AI law, designed to address the risks the technology poses to users while positioning Europe to play a leading role in its development and deployment. The U.S. is now seeking to implement a similar framework.
“I’m hoping Canada will follow suit in bringing that here as well, so we can ensure that AI can grow, but in a non-privacy invasive manner,” she said.
“We want a world with AI. There’s no question it’s here to stay. The advances in the medical field, for example, are enormous. But you need to get privacy into the design of the operations before you get them up and running on a large scale.”
“We want a world with AI, but you need to get privacy into the design of the operations before you get them up and running on a large scale.”
Ann Cavoukian, Executive Director, Global Privacy and Security By Design
Safeguards by design, not afterthought
Foteini Agrafioti, Senior Vice President, Data and AI and Chief Science Officer at RBC, stressed that responsible data practices and consistent defence against security risks must be present at all times.
“The approach of many technology companies is to use any and all data that exists under the sun to train these models,” she says. “Any sensitive information embedded in that data could then be released through a future request to that model.”
RBC has recently established a set of Responsible AI pillars to help ensure the bank’s use of AI respects diversity and human integrity, and to guide how teams consider, explore and build tools and bring them to market safely when the time is right.
“It doesn’t come as an afterthought. You don’t wait until there is an issue to pull back on a service. You design that way from the get-go,” says Agrafioti.
Client privacy and data security are a top priority for RBC, which remains committed to responsible data practices – from how data is used to how it is protected. It is also committed to being transparent and explicit around customer consent.
The behaviour of machines, as well as how humans design and treat them, is subject to bias and other ethical concerns, including privacy, accountability, fairness, explainability and transparency.
“We always assume that any practice that has been in place for a long time that collects data has been biased in some way,” she adds. “You expect that bias will be propagated downstream, if you don’t take care of it upfront, so you build models that can pass very detailed validation tests.”
“You don’t wait until there is an issue to pull back on a service. You design that way from the get-go.”
Foteini Agrafioti, Senior Vice President, Data and AI and Chief Science Officer, RBC
Unlocking productivity gains from generative AI
Through the use of ethical and responsible AI, RBC is able to derive insights from data to help its 17 million clients improve their banking experiences. There is also an opportunity to use foundational AI models to make predictions across different downstream banking applications.
In the latter case, says Agrafioti, the bank is currently exploring internal use cases for generative AI with the potential to realize operational efficiencies, drive shareholder value and create new client offerings in the future. For example, RBC is testing generative AI applications in its equity research portfolio and its retail Advice Centre to speed up processes and enable employees to focus on the right types of tasks. The presence of “a human in the loop” makes these strong test cases for AI, she adds.
“You’ve got someone there with high intellectual ability, to look at the outputs from these systems and judge, do they pass our thresholds of quality and accuracy,” she explains.
Anticipating productivity gains across the bank, RBC Capital Markets has recently set up an Office of AI, a ‘front door’ providing governance and standardization for potential use cases in the Capital Markets business.
Leaders must stay ahead of the rules
Businesses must prepare for regulation, says Cavoukian. “Industries have to become aware of the fact that this is only the beginning, in terms of the regulatory acts that have been introduced very recently.” She urges organizations to embed privacy and security into their operations. “Leaders of the company should be thinking, what can we do here that reflects the intention of the regulation, but protects my customers and their data?”
Cavoukian also rejects the notion, sometimes presented as a trade-off, that security and privacy are mutually exclusive. “It’s nonsense that you can’t have both – you must have both,” she insists. “In fact, if you don’t have strong security, you’re not going to have any privacy.”