Addressing Health Disparities in the Real World: Lessons Learned From AI
Novel metrics can help clinicians better understand—and alleviate—bias in technology and clinical encounters.
Michael D. Abramoff, MD, PhD
Retina Today
AT A GLANCE
• When first encountering autonomous AI, clinicians raised many concerns, including job loss, potential bias, and the effect on health equity.
• With AI, creators can measure how well each bioethical principle is being met through the principle of metrics for ethics.
• The goal of any such analysis is to provide transparency about potential sources of bias and health inequity, and about the sustainability of mitigation efforts.

Improving health equity has become a driving force within the medical community, the US Congress, and the Department of Health and Human Services, and it is even starting to affect reimbursement.1 Although there are many reasons for avoidable health inequities, lack of equitable access to diagnosis and treatment is prominent in disease states ranging from breast cancer to depression and diabetic eye disease.2-7 Today, fostering health equity is a goal of all health care stakeholders: patients, providers, ethicists, payors, regulators, legislators, and even AI creators.
Autonomous AI, in which the medical decision is made by the AI without human oversight or clinician input, has received broad stakeholder support, including from retina specialists, given that the first such device cleared by the FDA provides a diabetic eye examination.8 Where rigorously validated and appropriately implemented in real-world clinic workflows, AI tools can improve clinician productivity, health equity, efficacy, and clinical outcomes, all while reducing cost.9-13

THE PROBLEM
When first encountering autonomous AI, clinicians raised many concerns, including job loss, potential bias, and the effect on health equity, even though such issues already affect non-AI health care processes and interactions.14 This is especially true when an autonomous AI (eg, LumineticsCore, Digital Diagnostics) claims to be intentionally designed to improve access, outcomes, and health equity for underserved populations, thereby paving the way, ethically, for other autonomous AI systems on the market. Such concerns have led to an explosion of studies on the risks and benefits of AI and how to address them. In response, we and others created an ethical framework for AI as the foundation upon which autonomous AI regulation and reimbursement are built.13,15,16 Provider concerns about bias, patient benefit, cost, liability, and effects on health equity prompted a reexamination, from an ethics perspective, of all health care interactions and processes, even those performed solely or mostly by specialists.
Using our ethical framework as a foundation, we, together with the FDA and other health care stakeholders, recently completed a careful analysis of how bias can be introduced, mitigated, and addressed during the conceptualization, design, engineering, training, deployment, regulation, and monitoring of AI in the real world; the analysis translates readily to any health care process.17

MEASURING ETHICS
The three central bioethical principles are beneficence/nonmaleficence (patient benefit, or "do no harm"), justice (ie, equity), and autonomy (Figure).18 No provider, medical process, or treatment can fully satisfy every bioethical principle; rather, each requires a balance among the principles. For example, maximizing outcomes for lung cancer (beneficence) might be achieved by banning smoking, which would negatively affect the bioethical principle of patient autonomy.15