Managing Bias and Risk at Every Step of the AI-Building Process

Published January 10, 2020 | 1 min read

Business leaders are increasingly looking to machine learning to improve business outcomes or create new products and revenue streams. But because the field is still young, many developers lack experience building real-world enterprise solutions, and many business stakeholders know too little about machine learning to ask the right questions as they scope and manage projects. With this in mind, how should companies go about building an AI solution?

In a recent Harvard Business Review article, Kathryn Hume and Alex LaPlante of Borealis AI (an RBC Institute for AI Research) discuss the complexities of building a machine learning system inside an organization and explain why frequent cross-functional communication is critical at every step.

They argue that to build a machine learning system effectively, project owners should engage the business, end-user, technical, and governance teams in iterative dialogue throughout development. For example, before deployment the governance team must review the model to ensure that risk management requirements are met.

A lack of communication and coordination results in frustration and wasted effort. Making machine learning work in a business context often requires a series of decisions and trade-offs that affect model performance, and sometimes a model's inherent error rate means the best course is to cut losses early.

Project owners need to know which trade-offs and decisions they will face while building a machine learning system, and when in the process to weigh them.


Borealis AI breaks the process down into five phases:

  1. Design: define the business goal and translate it into a measurable objective. Assess gaps and factors that could affect the solution.
  2. Exploration: determine whether the available data are biased or imbalanced (a minimal check is sketched after this list) and discuss the business's need for explainability.
  3. Refinement: address fairness and privacy. Train and test the model or model variants.
  4. Build and ship: implement a production-grade version of the model.
  5. Measure: document and learn from the model's ongoing performance. Discuss how to manage errors and unexpected outcomes.
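
As a concrete illustration of the exploration phase, the short sketch below checks a training set for label imbalance and for uneven outcome rates across a sensitive attribute. This is a minimal sketch, not code from the article; the dataset and column names (`applications.csv`, `approved`, `gender`) are hypothetical.

```python
import pandas as pd

# Hypothetical loan-application dataset; file and column names are illustrative.
df = pd.read_csv("applications.csv")

# Label imbalance: a heavily skewed target can make naive accuracy misleading.
label_share = df["approved"].value_counts(normalize=True)
print("Label distribution:\n", label_share)

# Representation: compare outcome rates across a sensitive attribute.
# Large gaps here are worth flagging to the governance team early.
rate_by_group = df.groupby("gender")["approved"].mean()
print("Approval rate by group:\n", rate_by_group)

# Crude early-warning thresholds (arbitrary cut-offs, for illustration only).
if label_share.min() < 0.10:
    print("Warning: target classes are heavily imbalanced.")
if rate_by_group.max() - rate_by_group.min() > 0.10:
    print("Warning: outcome rates vary widely across groups.")
```

Findings like these are exactly what is worth raising with business and governance stakeholders before any modelling begins.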

Ultimately, it is often hard to decide whether to trade accuracy for privacy, fairness, or explainability until the impact on system performance has been assessed. Technical teams can therefore build a few candidate solutions and present the options to a business leader, who makes the call on which system advances to production.
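
To make that comparison concrete, here is a minimal sketch, assuming the same hypothetical tabular dataset as above (a binary `approved` target and numerically encoded features, including a sensitive `gender` column): it trains two candidate scikit-learn models and reports both accuracy and a simple demographic-parity gap, leaving the final trade-off decision to the business.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset; features are assumed to be pre-encoded as numbers.
df = pd.read_csv("applications.csv")
X = df.drop(columns=["approved"])
y = df["approved"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    preds = pd.Series(model.predict(X_test), index=X_test.index)
    acc = accuracy_score(y_test, preds)
    # Demographic-parity gap: spread in positive-prediction rates across groups.
    rates = preds.groupby(X_test["gender"]).mean()
    print(f"{name}: accuracy={acc:.3f}, parity_gap={rates.max() - rates.min():.3f}")
```

A summary like this gives the business leader named options with their costs made explicit, rather than a single system chosen on technical grounds alone.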

Read the full article here.

Tags: Alex LaPlante, Artificial Intelligence, Borealis AI, Kathryn Hume, Machine Learning, Technology