Improving health equity through artificial intelligence

Updated July 26, 2024

Artificial intelligence (AI) for clinical decision support (CDS) refers to systems and tools that use AI to help healthcare professionals make better-informed clinical decisions. These systems can alert physicians to potential drug interactions, suggest preventive measures, and recommend diagnostic tests based on patient data. Inequities in AI-based CDS pose a significant challenge to health systems and individuals, potentially exacerbating health disparities and perpetuating an already inequitable health care system. However, efforts to create equitable AI in healthcare are gaining momentum with the support of various government agencies and organizations. These efforts include significant investments, regulatory initiatives, and proposed changes to existing laws to ensure fairness, transparency, and inclusivity in the design and implementation of AI.

Policy makers have a critical opportunity to effect change through legislation, the implementation of AI governance standards, auditing, and regulation. We need regulatory frameworks, investments in AI accessibility, incentives for data collection and collaboration, and rules for auditing and governing the AI systems used in CDS. By addressing these challenges and taking proactive steps, policymakers can harness the potential of AI to improve care delivery and reduce disparities, ultimately promoting equitable access to quality care for all.

Challenges and opportunities

AI has the potential to revolutionize healthcare, but its misuse and unequal access can lead to unintended dire consequences. For example, algorithms may inadvertently favor certain demographic groups, disproportionately allocating resources and deepening inequalities. Efforts to create equitable AI in healthcare have gained significant momentum and support from various government agencies and organizations, especially for medical devices. The White House recently announced significant investments, including $140 million to the National Science Foundation (NSF) to establish institutes dedicated to evaluating existing generative AI (GenAI) systems. Although not specific to health care, the White House's "Blueprint for an AI Bill of Rights" outlines principles to guide the development, use, and deployment of AI to protect people from its potential harms. The Food and Drug Administration (FDA) has also taken steps by releasing a draft regulatory framework for AI-enabled medical devices used in healthcare. The Department of Health and Human Services (HHS) has proposed changes to Section 1557 of the Patient Protection and Affordable Care Act that would explicitly prohibit discrimination in the use of clinical algorithms for decision support by covered entities.

How inequities in CDS AI are harming health care

Exacerbating and perpetuating health inequalities

The inequitable use of AI can exacerbate health disparities. Research has shown that population health management algorithms that use cost as a proxy for health needs allocate more services to white patients than to Black patients, even when health needs are taken into account. The disparity arises because a prediction target correlated with access to and utilization of health services tends to identify frequent users of health services, who, because of existing disparities in access, are disproportionately less likely to be Black patients. AI trained on skewed or incomplete datasets inherits and reinforces the biases in that data through its algorithmic decisions, deepening existing inequalities and hindering efforts to achieve equity and equality in health care delivery.
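
The mechanism is easy to see in miniature. Below is a minimal synthetic sketch (all numbers and names are hypothetical, not drawn from the cited research) showing how ranking patients by predicted cost, rather than by underlying need, flags fewer high-need patients from a group whose access to care is suppressed:

```python
# Synthetic illustration of label-choice bias (all data hypothetical):
# predicting *cost* instead of *need* under-selects high-need patients
# from a group whose access to care, and therefore spending, is suppressed.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # 0 = full access, 1 = reduced access
need = rng.gamma(2.0, 1.0, n)             # true underlying health need
access = np.where(group == 1, 0.6, 1.0)   # reduced access suppresses spending
cost = need * access + rng.normal(0.0, 0.1, n)

# Flag the top 10% under each prediction target, as a risk cutoff might.
flag_by_cost = cost >= np.quantile(cost, 0.9)
flag_by_need = need >= np.quantile(need, 0.9)

for g in (0, 1):
    sel = group == g
    print(f"group {g}: flagged by cost {flag_by_cost[sel].mean():.1%}, "
          f"by need {flag_by_need[sel].mean():.1%}")
```

Both groups have the same distribution of need, yet the cost-based cutoff flags markedly fewer patients in the reduced-access group.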

Increase in expenditure

Algorithms trained on biased datasets can exacerbate inequalities by misdiagnosing or overlooking diseases prevalent in marginalized populations, leading to unnecessary tests, treatments, and hospitalizations that drive up costs. Health inequities, estimated to account for $320 billion in excess healthcare costs, are exacerbated by the uneven adoption of AI in healthcare. Unequal access to AI-driven services widens the health care spending gap: affluent communities and well-resourced health systems often pioneer AI technologies, while underserved areas are left behind. As a consequence, delayed diagnosis and suboptimal treatment increase health care costs through preventable complications and late-stage disease.

Decreased confidence

The unequal distribution of AI-assisted health care services has bred skepticism in marginalized communities. For example, in one study, an algorithm demonstrated statistical fairness in predicting healthcare costs for Black and white patients, but disparities emerged in the distribution of services: despite similar disease burdens, white patients received more referrals. Such disparities undermine confidence in AI-based decision-making and ultimately deepen distrust of health care systems and providers.

How bias is creeping into CDS artificial intelligence

Lack of data on diversity and inclusion

The datasets used to train AI models often reflect inequalities in society and health care, and models trained on them propagate the biases present in the data. For example, if a model is trained on data from a healthcare system in which certain demographic groups receive substandard care, it will internalize and perpetuate those biases. Compounding the problem, limited access to healthcare data forces AI researchers to rely on a small number of widely used public databases, which leads to homogeneous datasets and a lack of diversity. In addition, while many clinical factors have evidence-based definitions and standards for data collection, the attributes that often account for differences in health care outcomes are less well-defined and less consistently collected. Efforts to define and collect these attributes and to promote diversity in training datasets are therefore critical to ensuring the effectiveness and equity of AI-driven health care interventions.

Lack of transparency and accountability

While AI systems are designed to optimize processes and improve decision-making in healthcare, they also risk inadvertently inheriting discrimination from their human creators and from the environments that supply their data. Many AI-based decision support technologies also suffer from a lack of transparency, making it difficult to fully understand and appropriately apply their findings in complex clinical settings. Transparency makes it possible to identify and address inherited biases, while accountability encourages careful consideration of how these systems may negatively or disproportionately affect certain groups. Both are necessary to build public confidence that AI is being developed and used responsibly.

Algorithmic biases

The potential for algorithmic bias to infiltrate AI in healthcare is significant and multifaceted. Algorithms and heuristics used in AI models may inadvertently encode biases that further disadvantage marginalized groups. For example, an algorithm that places greater weight on variables such as income or education level may systematically disadvantage people from socioeconomically disadvantaged backgrounds.

Data scientists can reduce AI bias by tuning the decision thresholds that determine when a model flags a patient. Thresholds for highlighting high-risk patients may need to be adjusted for specific groups to balance accuracy and equity, and regular monitoring ensures that thresholds keep pace with biases that emerge over time. In addition, equity-aware algorithms can apply statistical parity when protected attributes such as race or gender should not drive predicted outcomes.
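
As a minimal sketch of what group-specific threshold tuning could look like in practice (the function names and the target-recall criterion are illustrative assumptions, not an established standard):

```python
# Sketch of group-specific decision thresholds (illustrative, not a standard):
# choose, per group, the score cutoff that reaches a target recall so that
# high-risk patients in every group are flagged at comparable rates.
import numpy as np

def threshold_for_recall(y_true, y_score, target_recall=0.8):
    """Smallest cutoff whose recall is at least target_recall.
    Assumes y_true/y_score are NumPy arrays and the group has positive cases."""
    pos_scores = np.sort(y_score[y_true == 1])[::-1]   # positives, high to low
    k = int(np.ceil(target_recall * len(pos_scores)))  # positives to capture
    return pos_scores[min(k, len(pos_scores)) - 1]

def group_thresholds(y_true, y_score, groups, target_recall=0.8):
    """One decision threshold per protected group."""
    return {g: threshold_for_recall(y_true[groups == g],
                                    y_score[groups == g], target_recall)
            for g in np.unique(groups)}

# Hypothetical usage with any model that outputs risk scores:
# cutoffs = group_thresholds(y_true, y_score, groups)
# flagged = y_score >= np.vectorize(cutoffs.get)(groups)
```

Re-running such a calibration on fresh data at a set cadence is one concrete form the "regular monitoring" described above could take.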

Unequal access

Unequal access to AI technologies exacerbates existing inequalities and exposes the entire health care system to increased bias. Even if the AI model itself is designed without inherent bias, unequal distribution of access to its findings and recommendations can perpetuate inequities. When only those healthcare organizations that can afford advanced AI for CDS use these tools, their patients benefit from improved care that remains unavailable to low-income populations. Federal policy initiatives should prioritize equitable access to AI through targeted investments, incentives, and partnerships for underserved populations. By ensuring access to AI technologies for all health care providers, regardless of financial resources, policymakers can help mitigate bias and promote equity in care delivery.

Misuse

The potential for bias in healthcare due to AI misuse extends beyond the composition of training datasets to the broader context in which AI is applied. Ensuring that AI predictions generalize across healthcare settings is as necessary as fairness in algorithm development. This requires a comprehensive understanding of how AI applications will be used and whether predictions derived from training data will transfer effectively to different healthcare settings. Failure to consider these factors can lead to the misapplication or misuse of AI-derived insights.

Opportunity

Urgent policy action is needed to eliminate bias, promote diversity, increase transparency and ensure accountability of AI CDS systems. With responsible oversight and governance, policymakers can harness the potential of AI to improve quality of care delivery and reduce costs, while ensuring equity and inclusion. Regulations mandating bias audits of AI systems and requiring explanation, audit, and review processes can hold organizations accountable for the ethical design and implementation of health care technologies. Additionally, policymakers can establish guidelines and allocate funding to maximize the benefits of AI technologies while protecting vulnerable populations. With human lives at stake, eliminating bias and ensuring equal access should be a top priority, and policymakers should seize the opportunity to make meaningful change. The time for action is now.

Action Plan

The federal government should develop and implement AI governance and auditing standards for algorithms that directly affect patients' diagnosis, treatment, and access to care. These standards should be flexible enough to accommodate advances in AI technology while ensuring that ethical considerations remain paramount.

Regulating the auditing and governance of artificial intelligence

The federal government should implement a detailed system for auditing AI in healthcare, beginning with rigorous pre-implementation assessments that require thorough testing and review to ensure compliance with established industry standards. These assessments should scrutinize data privacy protocols to ensure that patient information is handled and protected securely. Algorithm transparency should be prioritized, with developers required to provide clear documentation of AI decision-making processes to facilitate understanding and accountability. Bias-mitigation strategies must be scrutinized to ensure that AI systems do not perpetuate or exacerbate existing health care inequities. Performance reliability should be continuously monitored through real-time data analysis and periodic reviews to ensure that AI systems remain accurate and effective over time. Regular audits should be mandated to verify ongoing compliance, with a focus on adapting to changing standards and incorporating feedback from healthcare providers and patients. Because AI algorithms evolve with changes in underlying data, model degradation, and shifts in application protocols, a routine audit should be conducted at least once a year.
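
As an illustration only, one building block of such a recurring audit might look like the sketch below, which compares a model's false-negative rate across demographic subgroups; the metric choice, tolerance, and names are assumptions, not regulatory requirements.

```python
# Illustrative periodic audit check (metric, tolerance, and names are
# assumptions, not regulatory requirements): compare a model's false-negative
# rate across demographic subgroups and flag outliers for human review.
import numpy as np

def subgroup_fnr_audit(y_true, y_pred, groups, max_gap=0.05):
    """Return {group: gap} for subgroups whose false-negative rate deviates
    from the overall rate by more than max_gap."""
    def fnr(t, p):
        pos = t == 1
        return float(np.mean(p[pos] == 0)) if pos.any() else float("nan")

    overall = fnr(y_true, y_pred)
    findings = {}
    for g in np.unique(groups):
        sel = groups == g
        gap = fnr(y_true[sel], y_pred[sel]) - overall
        if abs(gap) > max_gap:
            findings[g] = round(gap, 3)    # escalate these groups to auditors
    return findings
```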

With nearly 40% of Americans receiving benefits under Medicare or Medicaid, and with the tremendous growth of and emphasis on value-based care, the Centers for Medicare and Medicaid Services (CMS) can be a catalyst for measuring and managing equitable AI. Because many health systems and payers apply the same models across multiple populations, this could positively affect a large share of patient care. Both the organizations that deploy critical decision-making systems and the technology developers behind them should be required to assess the impact of their decision-making processes and provide CMS with documentation of sample impact assessments.

For healthcare providers participating in CMS programs, this mandate should be included as a condition of participation. Through the same audit process, the federal government can gather information about the performance and accountability of AI systems. This data should be made available to healthcare organizations across the country to improve transparency and the quality of engagement between AI vendors and decision makers. This will help HHS fulfill a core focus of its AI strategy, "Promoting the Robust Use and Development of AI" (Figure 1).

Congress should enforce these accountability systems for advanced algorithms. This could be done by amending and passing the Algorithmic Accountability Act of 2023, which would require companies to assess the impact of automating critical decision-making processes, including processes that are already automated. As drafted, however, it does not make those assessment results visible to the organizations using the tools. An amendment should make the results available to governing bodies and member organizations such as the American Hospital Association (AHA).

Investing in AI accessibility and improvement

AI that integrates the social and clinical risk factors that influence prevention may be useful for managing health outcomes and allocating resources, especially for facilities that care predominantly for rural populations and patients. Even where organizations serving a large proportion of marginalized patients have access to new AI tools, those tools are likely to be inadequate because they were not trained on data that adequately represents these populations. The federal government should therefore allocate funds to support AI access for healthcare organizations serving a higher percentage of vulnerable populations. Initial support should come through subsidies to AI providers that serve safety-net and rural health care providers.

The Health Resources and Services Administration should direct strategic innovation funding to federally qualified health centers and rural health care providers to support the creation and use of equitable AI. This could include funding for academic institutions, research organizations, and private-sector partnerships aimed at developing AI algorithms that are fair, transparent, and unbiased specifically for these populations.

Large language models (LLMs) and GenAI solutions are rapidly being incorporated into the CDS toolkit, providing clinicians with instant second opinions in diagnosis and treatment scenarios. Despite their power, these tools are not infallible, and without the ability to catch their own mistakes they pose a risk. Research into AI self-correction should therefore be a focus of future policy. Self-correction is the ability of an LLM or GenAI system to detect and correct errors without external or human intervention (a minimal sketch of such a loop follows the list below). Mastering the ability to recognize life-threatening errors in these complex systems will be critical to their adoption and application. Health care agencies, such as the Agency for Healthcare Research and Quality (AHRQ) and the Office of the National Coordinator for Health Information Technology, should fund and oversee research on AI self-correction using clinical and administrative data. This should be an extension of one of the following:

  • 45 CFR Parts 170, 171, because it "promotes the responsible development and use of artificial intelligence through transparency and improves patient care through policies ... that are central to the Department of Health and Human Services' efforts to promote and protect the health and well-being of all Americans."
  • AHRQ funding opportunity in May 2024 (NOT-HS-24-014), Exploring the Impact of Artificial Intelligence on Healthcare Safety (R18)
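
The sketch below shows the self-correction loop described above in skeletal form; the generate and critique callables are hypothetical stand-ins for model calls, not any specific vendor's API.

```python
# Skeletal sketch of an LLM self-correction loop; `generate` and `critique`
# are hypothetical callables wrapping model calls, not a specific vendor API.
def self_correct(question: str, generate, critique, max_rounds: int = 3) -> str:
    """generate(prompt) -> str; critique(question, answer) -> list of error
    descriptions (empty when the model finds no remaining errors)."""
    answer = generate(question)
    for _ in range(max_rounds):
        errors = critique(question, answer)
        if not errors:
            return answer                  # model detects no remaining errors
        # Ask the model to revise its own draft against its own critique.
        answer = generate(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Identified errors: {errors}\nRevise the answer to fix these."
        )
    return answer  # retry budget exhausted; flag for human review upstream
```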

As with the Breakthrough Devices Program, AI that demonstrably reduces health disparities and/or increases accessibility could be fast-tracked through the audit process and labeled "best-in-class."

Incentivizing data collection and collaboration

The recently published Senate roadmap, "Driving U.S. Innovation in Artificial Intelligence," identifies healthcare as a high-impact area for AI and makes specific recommendations for future "legislation that supports the further adoption of AI in healthcare and implements appropriate guardrails and safety measures to protect patients ... and promoting the use of accurate and representative data." As the government validates AI and expands its availability in healthcare, it must ensure that the path to equitable AI solutions remains clear of obstacles. This means improving data collection and sharing so that AI algorithms are trained on diverse and representative datasets. As the roadmap states, there is a need to "support NIH in the development and refinement of AI technologies ... with a focus on making medical and biomedical data available for machine learning and data science research while carefully addressing the privacy issues raised by the use of AI in this area."

This data exists throughout the healthcare ecosystem, so decentralized collaboration could yield a more diverse dataset for AI training. Realizing it may require incentivizing healthcare organizations to share anonymized patient data for research purposes while ensuring data privacy and security. This incentive could take the form of increased CMS reimbursement for certain services or conditions in which collaborating parties participate.
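
One technical form such decentralized collaboration could take is federated learning, in which institutions train locally and share only model updates. A minimal sketch, assuming simple sample-weighted parameter averaging and hypothetical site and update functions:

```python
# Sketch of decentralized training via federated averaging (sites and the
# local_update function are hypothetical): institutions share model updates,
# never raw patient records, and a coordinating server merges them.
def federated_average(global_weights, site_datasets, local_update):
    """local_update(weights, data) -> (new_weights, n_samples) trains briefly
    on one site's private data; the server averages updates by sample count."""
    updates, counts = [], []
    for data in site_datasets:
        w, n = local_update(global_weights, data)
        updates.append(w)
        counts.append(float(n))
    total = sum(counts)
    # Sample-weighted average: larger sites contribute proportionally more,
    # but no site ever transmits patient-level data.
    return sum(w * (c / total) for w, c in zip(updates, counts))
```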

To ensure that diverse perspectives inform the design and implementation of AI systems, any regulations promulgated by the federal government should not only encourage but also reward diversity and inclusiveness on AI development teams. This will help mitigate bias and ensure that AI algorithms better represent the diverse patient populations they serve. Compliance should be evaluated by accrediting bodies such as The Joint Commission (a CMS-approved accrediting organization) through its health equity certification.

Conclusion

Achieving health equity through AI in CDS requires concerted effort from policymakers, healthcare organizations, researchers, and technology developers. The enormous potential of AI to transform health care and improve outcomes can only be realized if it is accompanied by measures to remove bias, ensure transparency, and promote inclusivity. As we navigate the changing landscape of health technology, we must remain committed to equity and equality so that AI serves as a tool for empowerment rather than a perpetuator of inequality. Through collective action and awareness, we can build a healthcare system that truly leaves no one behind.
