Friday, Nov 3, 2023

4 Ways We Think About Health Equity and AI

Ivor Horn, MD, MPH
Director, Health Equity & Product Inclusion, Google


I became a physician because I knew healthcare should be better than what my family experienced as we fought for higher-quality care for my father when I was growing up. That experience drives my work to ensure that everyone can access care delivered with dignity and respect.


As Google’s Chief Health Equity Officer, I see firsthand how AI technologies have the potential to identify and address existing biases in healthcare and advance health equity. But if not developed responsibly, these innovations also have the potential to exacerbate inequities. To make sure that doesn’t happen, we’ve identified four ways we think about embedding health equity into our work and pushing AI forward in a bold and responsible way to help people live healthier lives.


Taking foundational approaches to equity research 


To reflect the experiences of historically marginalized people and communities, we first integrate foundational health equity approaches — like Community-Based Participatory Research (CBPR) — into our design and evaluation methods. It is equally important to understand the social context of our users, including their cultural, historical and economic circumstances, to help us build solutions that work better for everyone. One example of applying our years of experience building more equitable AI models across products is our work using AI to see a wider range of skin tones, which in turn has informed camera features that work for everyone. Getting it right takes intention, but getting it wrong can easily propagate unfair biases.


Prioritizing diverse representation in data 


Historically, clinical trials research has lacked diversity, excluding historically marginalized groups from an important step in medicine: finding new ways to prevent, detect or treat disease. That is why we strive to make our data collection and curation processes inclusive and equitable, and why we think deeply about model development and evaluation — considering what data goes into a large language model and how to evaluate its performance. Today there is no standard for diverse representation in data, which is why we are partnering with the broader AI research community to identify best practices.

One way we are working to understand and better treat disease is through genomic sequencing, but the reference map we have been using for decades is a single genome sequence that does not represent the diversity of humanity. Today we are working with the National Institutes of Health (NIH) and others on the Pangenome project to expand our view of the code that makes us all uniquely human and different. The first Pangenome release includes 47 people of diverse ancestries, and we’re working with NIH toward a goal of 100 people next year, with the highest-quality sequences possible.
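To make the idea of disaggregated evaluation concrete, here is a minimal Python sketch of auditing how groups are represented in a dataset and reporting model accuracy per group rather than as one global number. The field names (`ancestry`, `label`) and the toy data are hypothetical illustrations, not any actual Google pipeline or dataset.

```python
from collections import Counter, defaultdict

def representation_report(records, group_key="ancestry"):
    """Share of examples per group, to surface under-represented groups."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def per_group_accuracy(records, predictions, group_key="ancestry"):
    """Accuracy disaggregated by group instead of a single global metric."""
    correct = defaultdict(int)
    seen = defaultdict(int)
    for record, pred in zip(records, predictions):
        group = record[group_key]
        seen[group] += 1
        correct[group] += int(pred == record["label"])
    return {group: correct[group] / seen[group] for group in seen}

# Toy data: group B is under-represented relative to group A.
records = [
    {"ancestry": "A", "label": 1},
    {"ancestry": "A", "label": 0},
    {"ancestry": "A", "label": 1},
    {"ancestry": "B", "label": 1},
]
predictions = [1, 0, 1, 0]  # perfect on group A, wrong on group B

print(representation_report(records))            # {'A': 0.75, 'B': 0.25}
print(per_group_accuracy(records, predictions))  # {'A': 1.0, 'B': 0.0}
```

Note how the global accuracy here (75%) hides a complete failure on the under-represented group; disaggregating the metric is what surfaces the gap.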


Considering health equity in real-world use cases 


The historical use of incomplete and biased data can exacerbate the risk of harm and bias for historically marginalized populations. To correct this, we need to carefully consider how an AI system will be used in practice. Grounding the evaluation of large language models (LLMs) in specific real-world use cases that reflect the experiences of marginalized populations is an important element of reducing these risks and, we hope, increasing equity. Across Google, we’ve been working to improve fairness, reduce the risk of bias and drive toward equity as we continue to enhance model performance. Some of this work, highlighted in a Nature article, describes how we are applying these approaches to our Med-PaLM LLM in the medical domain.
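As an illustration of what use-case-grounded evaluation can look like, here is a minimal sketch of rubric-style human review applied per use case, loosely in the spirit of the physician rating rubrics described for Med-PaLM. The rubric items, use cases and scores below are invented for illustration, not the published protocol.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class UseCaseEval:
    use_case: str                                 # a concrete real-world scenario
    ratings: dict = field(default_factory=dict)   # rubric item -> score (1-5)

# Illustrative equity-focused rubric items (hypothetical).
RUBRIC = [
    "medically accurate",
    "free of stereotypes or biased framing",
    "appropriate across patient contexts",
]

def summarize(evals):
    """Average each rubric item across use cases to surface weak spots."""
    return {
        item: mean(e.ratings[item] for e in evals if item in e.ratings)
        for item in RUBRIC
    }

# Toy reviews of model answers for two hypothetical use cases.
evals = [
    UseCaseEval("patient question about infant fever",
                {RUBRIC[0]: 5, RUBRIC[1]: 4, RUBRIC[2]: 4}),
    UseCaseEval("triage advice for chest pain",
                {RUBRIC[0]: 4, RUBRIC[1]: 3, RUBRIC[2]: 3}),
]
print(summarize(evals))
# {'medically accurate': 4.5, 'free of stereotypes or biased framing': 3.5,
#  'appropriate across patient contexts': 3.5}
```

Averaging each rubric item separately, rather than collapsing everything into one score, shows which equity dimension needs the most attention for a given set of use cases.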

Fostering inclusive collaboration


Where a person lives, works or goes to school can affect their health. To create useful generative AI models, we need to be able to recognize and understand these social drivers, and doing so depends on collaboration with experts across different areas — like social and behavioral science, policy or education. Partnering with Google’s Responsible AI Team and their Equitable AI Research Roundtable (EARR) Program, we are able to take a multidisciplinary approach to understanding the impacts of AI on historically marginalized communities and apply those insights to our work.


Our work at the intersection of AI and health equity is an ongoing journey, one we recognize requires responsibility and accountability. We must intentionally center these efforts on marginalized populations to build solutions that make healthcare more equitable and address historical biases. This work takes time and intention. Our aim is not to move fast, but to get it right; the alternative is not an option.

