Idea in Brief

The Problem

Offerings that rely on machine learning are proliferating, raising new risks for the companies that develop them, use them, or supply the data to train them. That’s because such systems don’t always make ethical or accurate choices.

The Causes

First, such systems often base their decisions on probabilities, so some errors are inevitable. Second, their operating environments may evolve in ways their designers didn’t anticipate. Third, their complexity makes it difficult to determine whether they made a mistake and, if so, why.

The Solutions

Executives must decide whether to let a system evolve continuously or to introduce locked versions at intervals. In addition, they should test the offering appropriately before and after rollout and monitor it constantly once it’s on the market.

What happens when machine learning—computer programs that absorb new information and then change how they make decisions—leads to investment losses, biased hiring or lending, or car accidents? Should businesses allow their smart products and services to autonomously evolve, or should they “lock” their algorithms and periodically update them? If firms choose to do the latter, when and how often should those updates happen? And how should companies evaluate and mitigate the risks posed by those and other choices?

A version of this article appeared in the January–February 2021 issue of Harvard Business Review.