The Battle Over Healthcare: How AI is Changing the Game
AI-powered bots have emerged as a new way for patients to fight insurance companies that deny care. At the same time, the healthcare industry itself has become increasingly reliant on AI tools to process claims and make decisions about patient care.
This shift in power dynamics has sparked concerns among lawmakers and medical professionals who argue that AI is being used to replace human decision-making with algorithms. "We don't want to get on an AI-enabled treadmill that just speeds up," said Carmel Shachar, assistant clinical professor of law at Harvard Law School.
One company, Sheer Health, has created an app that allows patients to connect their health insurance account, upload medical bills and claims, and ask questions about deductibles, copays and covered benefits. The program uses both AI and humans to provide the answers for free. However, patients who want extra support in challenging a denied claim or dealing with out-of-network reimbursements can pay Sheer Health to handle those for them.
In North Carolina, a nonprofit organization called Counterforce Health designed an AI assistant to help patients appeal their denied health insurance claims and fight large medical bills. The free service uses AI models to analyze a patient's denial letter, then look through the patient's policy and outside medical research to draft a customized appeal letter.
While some experts see AI as a game-changer for healthcare, others are warning about its limitations. "AI can be difficult for a layperson to understand when it is doing good work and when it is hallucinating or giving something that isn't quite accurate," said Shachar.
The trend of using AI-powered chatbots for health information has also raised concerns. According to a poll by the health care research nonprofit KFF, 25% of adults under age 30 use an AI chatbot at least once a month for health information or advice. However, most adults are not confident that the health information is accurate.
State legislators on both sides of the aisle have stepped up oversight, with several states banning insurance companies from using AI as the sole decision-maker in prior authorization or medical necessity denials.
Lawmakers like Dr. Arvind Venkat, a Democratic Pennsylvania state representative and an emergency room physician, are pushing for regulations that would force insurers and healthcare providers to be more transparent about how they use AI and require human oversight.
"The conclusions from artificial intelligence can reinforce discriminatory patterns and violate privacy in ways that we have already legislated against," said Venkat. "We need to make sure we're applying artificial intelligence in a way that looks at the individual patient."
The increasing reliance on AI in healthcare has fueled a debate about the role of technology in decision-making. While some experts see AI as a valuable tool, others argue that it should be used only in conjunction with human judgment.
"It's not an answer," said Mathew Evins, who worked with his physician to appeal a denied claim using an AI chatbot before turning to Sheer Health for assistance. "AI, when used by a professional that understands the issues and ramifications of a particular problem, that's a different story."
Bias and accuracy remain central concerns. Research has shown that AI can reinforce discriminatory patterns, and lawmakers are taking steps to address those risks.
Ultimately, the future of healthcare will depend on finding the right balance between technology and human decision-making. As one expert said, "There's a huge opportunity for AI to improve the patient experience and overall provider experience. But we need to be careful not to replace human interaction with machines."