
First Do No Harm: What AI Governance Can Learn from Medical Research

  • Marian Hill
  • Aug 7, 2024
  • 3 min read

Updated: Aug 14

With each new advance in AI technology, concerns about the potential for widespread harm continue to grow. Much of this panic stems from the lack of a national AI governance structure, especially in high-risk industries like healthcare, where consequences can be devastating. 

Unfortunately, we’ve seen how innovation without ethical guardrails can harm vast populations. In the U.S., medical research has a long and painful history of unethical experimentation. As documented in Medical Apartheid, Black Americans were subjected to “exploitative, abusive involuntary experimentation at a rate far higher than other ethnic groups.” Public outrage over the Tuskegee Syphilis Study finally spurred government action, but by then the harm had already been done. The effects still reverberate today, contributing to the racial healthcare gap we see in both practice and outcomes.


We must learn from this history and do better, preventing harm before it happens. So how do we innovate while still protecting people?


One answer lies in the 1979 Belmont Report, whose enduring ethical framework has shaped federal regulations and policies on medical research for decades. Its development offers a model for how we might build thoughtful, effective AI governance, even as science and technology evolve.



A Framework for Flexible Governance: The Belmont Report


Rather than addressing specific situations, the Belmont Report established core principles and definitions to guide decision-making. Crucially, these principles aren’t abstract; each is tied to practical applications, with attention to nuance and evolving contexts.

Let's take a look at the framework itself: 


Definitions


The report begins by defining research and distinguishing it from standard clinical practice. This may seem academic, but that clarity has enabled consistent interpretation and enforcement across decades of scientific progress.

In AI, clear definitions will be just as essential. Much of today’s vocabulary is vague, inconsistent, and constantly changing. Words matter: shared language is the first step toward any future governance framework.


The Three Core Ethical Principles


1. Respect for Persons


Every individual deserves the autonomy to make their own decisions about their healthcare. To do so, people need information, context, and freedom from pressure.

As the report states: 

“The extent and nature of information should be such that persons, knowing that the procedure is neither necessary for their care nor perhaps fully understood, can decide whether they wish to participate in the furthering of knowledge.”

Informed Consent is how we operationalize this principle. It requires: 


  • Complete, accessible information

  • Time to process and ask questions

  • Protection from coercion or undue influence


2. Beneficence


This principle obligates researchers to maximize possible benefits and minimize possible harms to participants. The report draws a hard moral line:

“brutal or inhuman treatment of human subjects is never morally justifiable.”

The report explains how to put this principle into practice: a formal assessment of risks and benefits for each study. The assessment weighs both the chance (probability) and severity (magnitude) of possible harms against anticipated benefits, taking psychological, physical, legal, social, and economic impacts into account for individuals and for society at large.


3. Justice


Justice ensures fairness in how the burdens and benefits of research are distributed. The report calls for the selection of potential research subjects to be judged:

“based on the ability of members of that class to bear burdens and on the appropriateness of placing further burdens on already burdened persons.” 

Even when procedures are fair on paper, systemic bias can skew outcomes. That’s why both the process and the result must be monitored.


A Collaborative Model for Governance Development

The Belmont Report didn’t emerge overnight. It was created by a multidisciplinary commission with experts in medicine, ethics, law, psychology, and behavioral science, representing a wide range of backgrounds and perspectives.

They spent four days in deep discussion, then met monthly over four years to refine their thinking. The result was a clear, concise, and enduring framework, now embedded in U.S. medical research policy.



Applying the Belmont Report to AI Governance

I’m not suggesting we simply adopt these specific principles for AI governance (although you could argue all three are relevant). What I am suggesting is that AI governance deserves the same thoughtful process that was applied to medical research:


  • A panel of diverse, multidisciplinary experts

  • Time for deep dialogue, reflection, and iteration

  • A concise output of clear principles with tools for enforcement


AI governance isn’t just a technical issue; it’s a human one. If we want to move fast and protect people, we need a framework that encourages innovation but demands humanity.


We’ve done it before. We can do it again.