The use of artificial intelligence (AI) in the administration of group health plans is nothing new: AI has been used for a number of years to analyze data, improve risk assessment, identify fraud, and streamline claims administration. AI can automatically review and approve or deny claims based on medical codes, reducing manual processing time with the goal of improving efficiency and accuracy in claims adjudication. In some cases, plan sponsors are making AI tools available at enrollment that provide plan participants with personalized healthcare recommendations, identifying, for example, which of the plan sponsor’s benefit options is the best choice for the participant and any dependents.
AI’s ability to process vast amounts of data quickly and accurately can be very beneficial for group health plan administration. However, while there are clear benefits of using AI, there are also risks, including litigation risk. Several lawsuits have recently been filed against large insurers that use AI for claims administration, arguing that its use leads to an increase in claim denials.
Plan fiduciaries will want to pay close attention to these lawsuits and others relating to the use of AI in group health plan administration, and take the steps necessary to ensure the plan is adequately protected.
Navigating the Fiduciary Landscape
The application of ERISA fiduciary duties to this new AI technology remains largely untested. Nevertheless, group health plan fiduciaries should evaluate AI tools through the lens of their fiduciary obligations under ERISA, including how the use of AI aligns with their duties of prudence, loyalty, and diversification.
Some fiduciary considerations and risk mitigation strategies include the following:
- Vendor Monitoring – Auditing the vendor and the AI tool to ensure that the system and the algorithms it uses are unbiased and produce reliable information or recommendations. The audit should verify, for example, that the tool is not consistently recommending the benefit option that is least costly to the plan sponsor. Because AI algorithms can perpetuate or even amplify existing biases in healthcare data, leading to discriminatory outcomes, fiduciaries must monitor AI systems for bias and take corrective action as needed. In addition, while AI can automate many tasks, fiduciaries cannot simply delegate their responsibilities to machines; maintaining human oversight, along with regular audits and performance reviews, is essential to ensuring that AI tools are used appropriately.
- Engaging Experts – Plan fiduciaries should engage experts to understand how the AI tool functions in order to evaluate the appropriateness of the tool and how it is being used on their benefit platforms.
- Data Privacy and Security – AI systems rely on vast amounts of sensitive member data, and protecting this data is paramount. Fiduciaries should understand how the AI tool stores, uses, and secures the data that participants input into the system, and should confirm that the tool is HIPAA-compliant if it uses data that includes protected health information.
- Participant Communication – Where AI tools are made available to participants, it is important to ensure that the participants are educated on the AI tool’s capabilities and limitations.
- Insurance Considerations – Plan fiduciaries may want to review their fiduciary insurance policies to confirm that the use of AI, and any potential liability resulting from it, is covered under those policies.
- Compliance Issues – AI tools must comply with all applicable regulations, including HIPAA, ERISA, and other relevant state and federal laws. Fiduciaries should carefully vet AI vendors and ensure their systems meet these requirements.
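To make the vendor-monitoring point above concrete, the kind of pattern an audit looks for, a tool that disproportionately recommends the plan sponsor's least costly benefit option, can be checked with a simple frequency test over a sample of recommendation logs. The sketch below is illustrative only: the option names, sample data, and 50% flagging threshold are assumptions, not part of any actual audit standard, and a real review would involve experts, larger samples, and appropriate statistical tests.

```python
from collections import Counter

def audit_recommendation_skew(recommendations, flagged_option, threshold=0.5):
    """Return (share, is_flagged): how often `flagged_option` was
    recommended across a sample of enrollment sessions, and whether
    that share exceeds the audit threshold."""
    counts = Counter(recommendations)
    total = sum(counts.values())
    share = counts.get(flagged_option, 0) / total if total else 0.0
    return share, share > threshold

# Hypothetical sample: "Bronze" stands in for the least costly option.
sample = ["Bronze", "Bronze", "Silver", "Bronze", "Bronze",
          "Gold", "Bronze", "Bronze", "Silver", "Bronze"]

share, flagged = audit_recommendation_skew(sample, "Bronze", threshold=0.5)
print(f"Bronze share: {share:.0%}, flagged for review: {flagged}")
# → Bronze share: 70%, flagged for review: True
```

A flag here would not by itself prove bias; it simply identifies a pattern warranting the human follow-up and corrective action the bullets above describe.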
The Future of AI in Health Plan Administration
AI is poised to revolutionize the way group health plans are administered. By embracing AI’s potential while carefully managing the associated risks, plan fiduciaries can unlock significant benefits for both plan participants and administrators. The key is to approach AI implementation strategically, with a focus on compliance, data privacy, and ongoing oversight. As AI technology continues to evolve, staying informed and adapting to the changing landscape will be critical to success in group health plan administration.