Naming and Diffusing the Understanding Objection in Healthcare Artificial Intelligence

Authors

Wadden JJ

DOI:

https://doi.org/10.7202/1114958ar

Keywords:

artificial intelligence, clinical ethics, consent, clinical decision-making, radiology, understanding, human-in-the-loop

Language(s):

English

Abstract

Informed consent is often argued to be one of the more significant potential problems for the implementation and widespread adoption of artificial intelligence (AI) and machine learning in healthcare decision-making. The concern centres on whether, and to what degree, patients can understand what contributes to the decision-making process when an algorithm is involved. In this paper, I address what I call the Understanding Objection: the idea that AI systems will cause problems for the informational criteria involved in proper informed consent. I demonstrate that collaboration with clinicians in a human-in-the-loop partnership can alleviate these concerns around understanding, regardless of how one conceptualizes the scope of understanding. Importantly, I argue that the human clinician must be the second reader in the partnership to avoid institutional deference to the machine and to best promote clinicians as the experts in the process.

Published

2024-12-02

How to Cite

Wadden JJ. Naming and Diffusing the Understanding Objection in Healthcare Artificial Intelligence. Can J Bioeth. 2024;7:57-63. https://doi.org/10.7202/1114958ar

Section

Articles