AI-Powered Medical Assistants: A Recipe for Disaster for Low-Income Patients?
A private company, Akido Labs, is now operating clinics in Southern California, where patients with low incomes and those who are unhoused are being treated by medical assistants aided by artificial intelligence (AI). The AI system is designed to "pull the doctor out of the visit," generating diagnoses and suggesting treatment plans.
Critics argue that this trend is a recipe for disaster. AI has been shown to generate inaccurate diagnoses, particularly for patients from marginalized communities. Studies have found that AI algorithms can systematically under-diagnose Black and Latinx patients, as well as those recorded as female or with Medicaid insurance.
Moreover, the use of AI in healthcare raises serious concerns about informed consent. Medical assistants are using AI to make diagnostic recommendations without disclosing this to their patients. This echoes a dark period in medical history when Black people were experimented on without their consent.
The impact of AI on low-income patients goes beyond diagnosis accuracy. A recent report by TechTonic Justice estimated that 92 million Americans with low incomes have aspects of their lives decided by AI, including Medicaid eligibility and Social Security benefits.
In federal courts, cases are now emerging in which Medicare Advantage enrollees are suing insurance companies for coverage denials driven by AI. These cases highlight the need for caution when implementing AI in healthcare settings, particularly those serving vulnerable populations.
As experts Leah Goodridge and Oni Blackstock warn, relying on AI-powered medical assistants can disempower patients by removing their decision-making authority over the technologies used in their care. Instead of "pulling the doctor out of the visit", AI should augment human healthcare providers who listen to patients' needs and priorities.
The future of healthcare must prioritize patient-centered care with a human touch, rather than relying on AI-driven systems that can exacerbate existing health inequities.