Opinion: Healthcare AI Agents Can Help Patients—If We Remember Health Is Human


Artificial Intelligence (AI) continues to make strides in healthcare, proving that the algorithms we aren’t quite used to yet, and may despise on some level, are here to stay. Most recently, OpenAI launched ChatGPT Health, and Anthropic followed with Claude for Healthcare. Each touts itself as a dedicated, HIPAA-compliant space for discussing your health with an AI chatbot, with features for integrating electronic medical records (EMR) and wellness apps.

The end goal: patients who are better informed about their own health, and clinicians who can deliver more efficient care.

Pros and Cons of a Healthcare AI Agent

There are a few upsides and downsides to putting your health information in the hands of an AI agent.

Cons

The biggest con is the concern around data privacy and security. Recent healthcare cybersecurity attacks (e.g., Change Healthcare) taught us a hard lesson about keeping information safe and secure. Currently, AI agents are not formally regulated; even with “HIPAA compliant” safeguards, it’s worth considering what could happen if you got hacked…or if the AI agent company got hacked.

Another downside to consider is that these agents cannot provide medical advice. That fact needs to be crystal clear for patients who opt to use the tools. Acting on advice that has not been vetted by a professional can carry serious medicolegal consequences.

Finally, if these agents can be used by clinicians and patients, it’s safe to assume that insurance companies will use them too. On that end, consider that they may use the tools to “quickly review” medical records to determine prior authorization, reimbursements, and coverage. We’ve all seen how well this approach has worked for UnitedHealthcare (think: rapid coverage denials that have cost patients crucial time in getting care).

Pros

If you can get past the downsides, consider the upsides. As a patient, these tools can help you make sense of a complicated medical history by summarizing the main points and providing insights into lab results and other tests. Plus, these agents can help patients prepare for doctor’s visits by generating personalized lists of questions to ask.

For physicians, having patients use these tools could potentially help streamline appointments and bridge the communication gap when explaining medical care and treatment plans.

Both ChatGPT Health and Claude for Healthcare offer integrations with wellness apps to provide feedback on health trends and related insights. Reviewing those recorded patterns is another avenue by which clinicians could assist in patients’ medical care and make decisions in a timely fashion.

Since insurance companies could use these tools as mentioned above, patients should absolutely leverage the agents themselves to gain clarity about their personal coverage and, during open enrollment, to compare different plans and see which works best for their needs.

Healthcare and Outsiders

Healthcare is a multi-trillion dollar problem with multiple players (insurance, hospitals, clinics, industry product developers) and stakeholders (patients, payors, hospitals), and the product is human.

History has shown us that anyone on the outside of healthcare who tries to get in usually doesn’t do as well as they hope. CVS tried with MinuteClinic and bought Oak Street Health (and is now scaling back); Walmart made an attempt at providing access to primary care (that was short-lived); private equity has gotten involved (for the worse); and now AI developers are here to play.

When it comes to an outsider trying to establish a foothold within the healthcare industry, it’s clear that it’s a very big gamble. If you’re on the outside, you are often unaware of what’s required to find success on the inside.

Healthcare does not operate like other industries. You cannot streamline a patient visit while ignoring a living, breathing, independently thinking, and likely frustrated-with-healthcare human being. You cannot set limits and protocolize every single thing related to patient testing, diagnosis, or treatment. You absolutely cannot cut corners and eliminate budgets for backup equipment, safety equipment, or operational needs without compromising the integrity of care.

A mistake in healthcare does not lead to a faulty product. It leads to patient harm. Every healthcare worker who took an oath can tell you that that’s a big no-no.

What makes AI developers like OpenAI and Anthropic different? They’re still outsiders, but instead of tackling access to care or operational issues, they’re empowering patients to understand their health better. By assisting with communication and explanation, AI agents could potentially save clinicians a lot of time and make patient visits more productive and effective.

If it works (still a big IF), it could actually make a difference for all stakeholders, provided that privacy is maintained.

Conclusion

These tools still can’t replace your physician, and I wouldn’t recommend that you allow them to. Health is human. Tools should be used as tools. Patients and clinicians should take the opportunity to use the tools to enhance their human interactions.

At the rate the world is changing, not using AI could mean that you fall behind the curve. For anyone interested in taking a chance, go into it fully aware of the possibilities and shortcomings.

Sanjana Vig, MD, MBA