Navigating the Future: Medicare Advantage Plan Rules on the Use of Artificial Intelligence

Technology in healthcare delivery and payment is evolving at an unprecedented pace, reshaping how medical services are delivered, managed, and paid for. The use of artificial intelligence (AI) systems, particularly in the area of reimbursement for care, has garnered significant attention.

On February 6, 2024, the Centers for Medicare & Medicaid Services (CMS) released a FAQ Memo to all Medicare Advantage organizations (MAOs), providing guidance on the April 5, 2023 MAO Final Rule and clarifying how MAOs can use AI and algorithms in making coverage determinations. The MAO Final Rule applies to coverage beginning January 1, 2024.

As a threshold matter, CMS explains in the FAQ Memo that it is not prohibiting the use of AI or algorithms in coverage decisions. It explicitly provides that "[a]n algorithm or software tool can be used to assist MA plans in making coverage determinations, but it is the responsibility of the MA organization to ensure that the algorithm or artificial intelligence complies with all applicable rules for how coverage determinations by MA organizations are made."[1] However, CMS also makes clear that MAOs must comply with the medical necessity regulations at 42 CFR § 422.101(c), "including that the [MAO]… base[s] the decision on the individual patient's circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient's medical history, the physician's recommendations, or clinical notes would not be compliant with § 422.101(c)."[2] Furthermore, any use of AI or algorithms to make medical necessity determinations must be "based on the circumstances of the specific individual…as opposed to using an algorithm or software that doesn't account for an individual's circumstances."[3]

The MAO Final Rule emphasizes transparency and fairness in coverage decisions. It requires MAOs to disclose how AI algorithms inform clinical decisions, including data sources, methodology, and potential biases. MAOs must also ensure that AI-driven recommendations align with evidence-based practices and clinical guidelines. Additionally, the rule underscores the importance of protecting patient privacy and mitigating bias in AI algorithms. Compliance with CMS regulations governing the use of AI in healthcare, including billing, reimbursement, and quality reporting, is also mandated. Overall, the rule is intended to foster trust, equity, and quality in the integration of AI technology within MAOs. The MAO Final Rule covers five major issues in the use of AI:

Data Privacy and Security: AI algorithms rely heavily on patient data to generate insights and recommendations. Therefore, CMS expects MAOs to comply with the Health Insurance Portability and Accountability Act (HIPAA) requirements in the use of AI. This includes obtaining patient consent to use protected health information (PHI), implementing robust encryption measures, access controls, and data anonymization techniques to prevent unauthorized access to or disclosure of sensitive data. Additionally, MAOs need to assess on a case-by-case basis whether the use of PHI to train AI models is permissible.

Transparency and Explainability: AI-driven decision-making processes must be transparent and explainable to both healthcare providers and patients.

Equity and Bias Mitigation: CMS notes that AI algorithms have the potential to perpetuate or exacerbate existing biases within healthcare systems. To mitigate bias, CMS provides that MAOs should regularly audit and validate AI algorithms to ensure fairness and equity across diverse patient populations. This may involve incorporating demographic variables, such as race, ethnicity, and socioeconomic status, into algorithmic models to prevent discriminatory outcomes.
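As a purely illustrative sketch of what a routine audit might look like in practice, the snippet below computes approval rates by demographic group for a set of hypothetical coverage determinations and flags groups whose rate falls below a chosen disparity threshold. The group labels, sample decisions, and the four-fifths threshold are assumptions for illustration only; neither CMS nor the MAO Final Rule prescribes any particular statistical test.

```python
# Illustrative sketch only: a simple disparity audit over hypothetical
# coverage-determination outcomes. Group names, records, and the 80%
# (four-fifths) threshold are assumptions, not CMS requirements.

def approval_rates(decisions):
    """decisions: list of (group, approved: bool) tuples."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold`
    times the highest group's rate."""
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = approval_rates(decisions)
print(rates)                   # group_a: 0.75, group_b: 0.25
print(disparity_flags(rates))  # ['group_b']
```

A flagged group would not by itself establish discrimination, but it would give an MAO a concrete trigger for the kind of validation and review CMS describes.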

Clinical Decision Support: AI-powered clinical decision support systems can assist healthcare providers in diagnosing illnesses, predicting patient outcomes, and recommending personalized treatment plans. However, CMS expects that these systems should augment, rather than replace, human judgment and expertise. MAOs must ensure that AI-driven recommendations align with evidence-based practices and clinical guidelines to deliver high-quality care.

Regulatory Compliance: MAOs must comply with CMS regulations governing the use of AI in healthcare, including those related to billing, reimbursement, and quality reporting. This may require extensive documentation and validation of AI algorithms to demonstrate their clinical utility and effectiveness. Additionally, MA plans should stay abreast of evolving regulatory requirements and industry standards to adapt their AI strategies accordingly.

The FAQ Memo provides additional guidance on how the MAO Final Rule applies to MAO use of AI as it relates to coverage determinations. Important takeaways:

  • MAO coverage decisions must be based on the "individual patient's circumstances," including medical records, patient history, and physician recommendations and analysis, and not solely on larger data sets analyzed by AI and/or algorithms.
  • Coverage denials must be based on publicly available coverage criteria. MAOs may deny coverage for basic benefits only on the basis of publicly available coverage criteria or other permissible bases. CMS is concerned that AI could shift criteria as it is trained or used, altering coverage criteria over time. The FAQ Memo provides that any algorithm or software tool should "be used to ensure fidelity with the posted internal coverage criteria … [b]ecause publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time. And, predictive algorithms or software tools cannot apply other internal coverage criteria that have not been explicitly made public and adopted [by CMS]."
  • MAOs must address the potential for discrimination or bias. The FAQ Memo reminds MAOs to remain vigilant for discrimination and bias. It spells out the nondiscrimination requirements of Section 1557 of the Affordable Care Act, which "prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in certain health programs and activities." According to CMS, MAOs should, "prior to implementing an algorithm or software tool, ensure that the tool is not perpetuating or exacerbating existing bias, or introducing new biases."
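The "fidelity with the posted internal coverage criteria" point lends itself to a concrete regression check: run the tool over a set of cases and confirm its determinations match the static, published criteria exactly. The sketch below is hypothetical; the minimum-days criterion and the "drifted" tool are invented for illustration and do not reflect any actual MAO tool or CMS-specified test.

```python
# Illustrative sketch only: checking that a coverage tool's output stays
# faithful to static, publicly posted criteria. The criterion (a
# hypothetical minimum-days threshold) and the drifted model are invented
# for illustration.

POSTED_CRITERIA = {"min_qualifying_days": 3}  # static, published once

def posted_rule(case):
    """Ground-truth determination taken directly from the posted criteria."""
    return case["qualifying_days"] >= POSTED_CRITERIA["min_qualifying_days"]

def audit_fidelity(tool, cases):
    """Return the cases where the tool's decision diverges from the posted rule."""
    return [c for c in cases if tool(c) != posted_rule(c)]

# A hypothetical tool whose learned threshold has drifted to 5 days.
drifted_tool = lambda case: case["qualifying_days"] >= 5

cases = [{"qualifying_days": d} for d in (2, 3, 4, 5, 6)]
print(audit_fidelity(drifted_tool, cases))
# Cases with 3 or 4 qualifying days diverge: posted rule approves, tool denies.
```

Because the posted criteria are "static and unchanging," any nonempty divergence list signals exactly the kind of criteria drift CMS says is impermissible.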

The integration of AI into MAO coverage decision processes offers exciting ways for health plans to incorporate technology and efficiency into coverage decisions. However, CMS, the White House, and Congress have all acknowledged that there are inherent and unique risks and challenges related to data privacy, transparency, equity, and regulatory compliance with the introduction of AI systems. The MAO Final Rule and the FAQ Memo make clear that by adhering to established guidelines and best practices, MAOs can harness the power of AI to create efficiencies in coverage decisions and ultimately in how more personalized, equitable, and effective healthcare services are delivered to their beneficiaries.

If you have any questions about the legal, regulatory, or legislative landscape relating to the use of artificial intelligence (AI) in healthcare, need assistance interpreting federal or state healthcare regulations, or need policy guidance on healthcare-related matters, please feel free to contact the authors of this alert or your Venable relationship professional.

This alert is brought to you by Venable's Healthcare Industry, Technology, Media and Commercial, Privacy and Data Security, and Cybersecurity Services teams.

You can find information on our Healthcare Industry team here.

You can find information on our Technology, Media, and Commercial (TMC) team here.

You can find information on our Privacy and Data Security team here.

You can find information on our Cybersecurity Services team here.

[1] See FAQ 2.

[2] Id.

[3] Id.