Artificial intelligence (AI) is one of the most promising breakthrough technologies of the modern healthcare era, yet it also has the potential to be one of the most dangerous.
AI algorithms trained on limited or poorly representative datasets can produce biased results, skewing decision-making and potentially leading to ethnic, gender, and social discrimination and other unintended consequences for the patients they serve.
Unfortunately, research shows that bias is already creeping into the nascent field of AI and machine learning. A 2019 study found that a widely used algorithm underestimated the illness burden of Black patients compared to white patients, meaning that Black individuals had to be much sicker to receive a recommendation for the same level of care as their white counterparts. It is also well documented that IBM's Watson medical AI was, in many cases, affected by bias, recommending therapies that were not accessible to the populations using the software.
Concerns over bias create distrust in AI and often keep healthcare leaders from fully embracing the technology. It is imperative that we address the rising risks of AI bias before the ecosystem becomes even more established. We must find better ways to reach more diverse and representative patient populations, so that algorithms are trained on large, diverse datasets that clinicians and patients can trust.
Remote patient monitoring (RPM) can be one key to achieving this goal. By reaching more patients in different geographies and reducing barriers to patient access, RPM can help build trust in AI and improve health equity by broadening the diversity of datasets used to train AI algorithms. Progress is already being made in remote cardiac monitoring.
Unbiased Prediction of Heart Failure
The first step to reducing bias in AI tools is to increase the diversity and representation of data. Given the growing use of cardiac remote monitoring, an increasing volume of patient data is being gathered from connected devices.
Moreover, in 2019, as part of its national strategy on AI, the French government created the Health Data Hub. The platform combines all nationwide sources of health data, including resource utilization such as hospitalizations and follow-up care, as well as medications and causes of death. Because France operates a centralized single-payer system, this data is gathered from across the entire country. The database was made available to selected organizations, and Implicity was the only cardiac remote monitoring platform to gain access. Implicity is now using the nationwide database to develop research and algorithms with better performance and less bias.
The Health Data Hub provides access to anonymized patient health information from more than 3.7 million people. Implicity has combined this data with data collected from remote cardiac monitoring devices, creating a unique dataset that is the foundation for developing an innovative algorithm that can reliably predict acute heart failure episodes in patients monitored through cardiac implants. Because of these robust datasets, the algorithm can potentially eliminate, or at least drastically reduce, bias and improve health equity.
Benefits Beyond the Algorithm
Aside from reducing bias in AI, RPM is also changing how clinical research is performed by broadening patient access to studies. For example, equipping cardiac patients with RPM devices in their homes can reduce the need for routine in-clinic checks of things like blood pressure, weight, cardiac rhythm, or blood sugar. This could make participation in research more viable and attractive for more diverse patient groups, including those with limited access to centralized trial sites.
Today, research is often conducted in urban areas at large academic medical centers (AMCs), which can be hard to reach for rural populations and those facing other transportation barriers. Trials demand regular attendance at frequent appointments, which can be problematic for
people who cannot afford time off work, the expenses of childcare, or the risks of leaving other
family members at home without a caregiver.
As a result, only patients with adequate time, money, and social support can participate in research or contribute their data to AI tools and similar projects. These patients tend to carry lighter burdens of chronic disease, have higher health literacy, and, due to the nature of systemic oppression in the United States, are more likely to be white than members of other racial and ethnic groups.
We know the same therapy can act differently in people of diverse genetic backgrounds. And we know that socioeconomic burdens can significantly affect a patient’s ability to access and adhere to recommended care. But we are not doing enough to extend the healthcare
system to places where underserved populations live, work, and play. By digitizing home health-related data from the source, RPM contributes to less selection bias in research.
Creating a More Equitable Future
RPM also offers the advantage of continuous data collection in many cases, giving researchers a much richer and more accurate picture of a person’s health unaffected by “white coat syndrome,” which can alter certain readings. Real-world data that is collected as part of everyday life is extremely valuable for identifying the efficacy and safety of new therapies and devices.
Developing a strong feedback loop between RPM and AI to support continuous improvement is especially important since many RPM devices rely on AI algorithms to perform their basic functions in the first place. Ensuring that developers learn from the experiences of actual patients using their devices outside of tightly controlled research settings can help identify hidden biases and course-correct before they cause harm.
As AI becomes more sophisticated, we must invest in patient recruitment strategies and data governance guardrails that prioritize equity and take advantage of RPM and other technologies to reduce barriers to accessing representative data.
Studies and algorithm development projects should include perspectives from diverse points of view in the design phase, including clinicians and patient participants with varying backgrounds. Institutions sponsoring research projects, or companies developing algorithms, should establish minimums for diversity and inclusion in their training data sets to ensure algorithms start off on the right foot.
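As a rough illustration of what such a minimum could look like in practice, the sketch below checks a training dataset's group counts against required representation thresholds before model development begins. The group names, counts, and thresholds are hypothetical, not drawn from any real policy or dataset:

```python
# Hypothetical pre-training check: does each group meet a minimum
# share of the training dataset?

def check_representation(counts, minimums):
    """counts: records per group; minimums: required share per group.
    Returns the groups that fall below their threshold, with their
    actual and required shares."""
    total = sum(counts.values())
    shortfalls = []
    for group, min_share in minimums.items():
        share = counts.get(group, 0) / total
        if share < min_share:
            shortfalls.append((group, round(share, 3), min_share))
    return shortfalls

# Illustrative numbers only
counts = {"group_a": 7200, "group_b": 1900, "group_c": 900}
minimums = {"group_a": 0.20, "group_b": 0.15, "group_c": 0.15}

print(check_representation(counts, minimums))
# → [('group_c', 0.09, 0.15)] — group_c is underrepresented
```

A check like this is deliberately simple; a real governance process would also weigh intersectional categories and clinical characteristics, not just headline demographic shares.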
Meanwhile, researchers should explore the potential role of RPM devices in these initiatives to make projects more accessible to traditionally underserved patients and provide detailed training and coaching for patients using these tools in the home setting. And algorithms available in the market should be continually evaluated for their accuracy, applicability, and equity among real-world groups, including gender, racial, ethnic, and age-related categories of users.
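One simple way to operationalize that kind of ongoing evaluation is to compare a model's error rates across demographic subgroups. The sketch below computes per-group sensitivity (true-positive rate) from hypothetical labels and predictions and flags any group trailing the best-performing one by more than a chosen gap; all names, data, and the gap threshold are illustrative assumptions:

```python
# Hypothetical subgroup audit: compare sensitivity across groups and
# flag any group that lags the best-performing one.

def sensitivity(labels, preds):
    """Fraction of actual positives (label == 1) the model flags."""
    flagged_positives = [p for y, p in zip(labels, preds) if y == 1]
    return sum(flagged_positives) / len(flagged_positives) if flagged_positives else 0.0

def audit(groups, max_gap=0.1):
    """groups: {name: (labels, preds)}. Returns per-group sensitivity
    and the groups trailing the best score by more than max_gap."""
    scores = {g: sensitivity(y, p) for g, (y, p) in groups.items()}
    best = max(scores.values())
    lagging = [g for g, s in scores.items() if best - s > max_gap]
    return scores, lagging

# Illustrative data: 1 = acute event occurred / predicted, per patient
groups = {
    "group_a": ([1, 1, 1, 0, 0], [1, 1, 1, 0, 0]),  # catches 3 of 3 events
    "group_b": ([1, 1, 1, 1, 0], [1, 1, 0, 0, 0]),  # catches 2 of 4 events
}
scores, lagging = audit(groups)
print(scores, lagging)
# → {'group_a': 1.0, 'group_b': 0.5} ['group_b']
```

In a real deployment this comparison would use properly adjudicated outcomes and statistically meaningful sample sizes per group, but even a minimal audit like this makes disparities visible rather than leaving them buried in aggregate accuracy figures.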
By integrating RPM devices into clinical trials and the AI research and development process, the healthcare industry can avoid unintentional bias, support greater health equity, and give more patients the chance to achieve better outcomes with the help of cutting-edge technologies.