Risk, Uncertainty and the Governance Dilemma for Artificial Intelligence

Last week the Public Policy Forum released a short report that I wrote on AI governance in Canada: Governing AI: Navigating Risks, Rewards and Uncertainty. It was accompanied by a shorter piece in the Institute for Research on Public Policy’s Policy Options: Navigating the risks and rewards of governing AI.

In both, I note that the main challenge is to find an appropriate balance between supporting the development of AI technologies that promise social, economic and other benefits, and ensuring that the risks to the rights and well-being of Canadians are minimized. Complicating a solution, however, is the fact that AI is an emerging technology with a wide range of applications, and that the nature of its benefits and risks is highly uncertain. Decisions about how to balance innovation against risk must therefore be made with imperfect and incomplete information.

In light of this, I argue that Canada’s approach to AI governance should focus on specific AI applications, not AI research in general. AI risks will manifest in the context of concrete uses in specific activities and sectors, such as health diagnosis, loan assessments, predictive policing or benefits eligibility assessment. Risk assessment and management should focus on what is appropriate in those contexts, while leaving the theoretical development of AI largely unhindered.

Implicit in both pieces is my sense that, in trying to identify and manage the potential risks of AI, we are dealing with a particular case of the Collingridge dilemma. In The Social Control of Technology, David Collingridge observed that it is easier to regulate technologies when they are new, before sunk costs and vested interests have accumulated, but that uncertainty about their effects makes it hard to know exactly what to do. By contrast, once technologies are more developed and diffused throughout society, their consequences may be clearer, but efforts to regulate them become harder. In the early days of an emerging technology, we have power but insufficient clarity to act; later, we have more clarity but declining power. As Collingridge himself writes:

“When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.”

Faced with this dilemma, we can adopt a laissez-faire approach and simply hope for the best; a precautionary approach, which prevents risks from emerging but also robs us of the benefits new technologies might bring; or, as I argue in the case of AI, an incremental, risk-management approach. The last would have us focus on specific applications of AI in specific contexts, monitor and assess risks as they emerge, and address them before the technologies diffuse too widely and before the costs of doing so become prohibitive.

To be sure, an incremental risk-management approach focused on specific applications requires substantial resources to monitor developments, as well as the social and political will to address challenges as they emerge. But if we sincerely want to manage the risks and rewards of new and emerging technologies effectively and fairly under conditions of uncertainty, these are investments we should be prepared to make.