Notes from Day 1 of the 44th Annual J.P. Morgan Healthcare Conference

By Eric Klein on January 13, 2026

At the 44th Annual J.P. Morgan Healthcare Conference, Jamie Dimon (CEO of J.P. Morgan) told the audience that we should be teaching healthcare in school from kindergarten to 12th grade. He was right…but we don’t. And, therefore, artificial intelligence will come to dominate the U.S. healthcare system, on which over $5.6 trillion was spent in 2025. Wait, you might say, how did you jump from kindergarten straight to artificial intelligence (AI)? What is the connection between healthcare education, rising healthcare costs that put financial pressure on American citizens and employers, and economic transformation?

The answer is framed by competing announcements made in the past week: OpenAI’s ChatGPT Health, a consumer-facing product, and two business-facing enterprise offerings, OpenAI’s ChatGPT for Healthcare and Anthropic’s Claude for Healthcare. The latter two are HIPAA-compliant large language model (LLM) offerings intended to assist healthcare enterprises with billing, coding, eligibility determinations, pre-authorizations, care coordination and other administrative tasks. Anthropic’s leadership has said that it is focused on supporting AI for healthcare enterprise customers rather than consumer health applications. What is OpenAI’s ChatGPT Health intended to do for Americans? As announced, and assuming the current launch pilot continues as initially publicized, ChatGPT Health users can seek health information and connect their medical records and wellness apps so that ChatGPT Health can walk them through test results, help them understand or prepare for medical visits, and, most importantly, help them manage their health.

We thus have a dichotomy between what healthcare businesses need from AI and what the American public needs from AI. Let’s look at the second part first. How is America using AI for healthcare now? In the first week of January 2026, OpenAI reported that more than 40 million people each day ask ChatGPT healthcare questions, that 200 million regular users of ChatGPT submit a healthcare-related prompt each week, and that more than 5% of ChatGPT’s messages globally are about healthcare. People try to manage their health, to check or explore symptoms, to understand medical terms or instructions, and to understand treatment options or costs.

This is not surprising: a February 2025 study suggested that 5% of Google searches are about healthcare and that these searches offer opportunities for early disease detection and diagnosis prior to a visit to a physician or medical facility.[1] This type of questioning has been going on for years with Google, as people try to become better educated about their own health and their family’s health. They also are trying to figure out the cost of healthcare and what costs could arise in various disease and treatment scenarios. That is not an easy lift in our healthcare system today, and it is hard even to discuss with your doctor in sufficient detail during the time-limited consultations common in our healthcare delivery system.

Ann Somers Hogg wrote in an article in 4sight Health this month that the real “job” of the healthcare delivery system, from an educational perspective, could be identified in two parts:

  1. “When I’m unable to maintain my health on my own, help me get customized guidance and support to get on track and stay on track for my future.”
  2. “When I am scared about my future, provide a trusted solution so I can address my problem(s) ASAP and be there for my family.”

She further distills this to “personalized, proactive and preventive.”

So, where in our healthcare system today does a person get this type of education and support? Arguably, this is the job of a value-based or risk-based reimbursement system, where goals are better aligned and providers and plans are more profitable when patients maintain better health. But, as Intermountain Health CEO Rob Allen said today at the JPM Healthcare Conference, “we lost our way as an industry with value-based care; we talked about it as a financing mechanism,” not as an alignment with our patients that prioritized their education and support. Rob Allen also said that 25% of healthcare spend in the United States is preventable. But achieving that cost reduction requires proactive management of a person’s health, ideally both by the healthcare system and by the person themselves, who takes agency through education and appropriate support.

Wouldn’t it be nice if we could return to the local general practitioner (the GP) who knew you, your family and everyone in your small town, who was always available, and who gave you advice you could rely on? I remember my GP chasing me around the exam room when I was five, trying to give me a vaccination if I only would hold still for a second. Ah, the good old days…

Most Americans today don’t have their own primary care physician (PCP) who spends time with them and really gets to know them, but the model still remains in our societal memory as an effective one. Given the shortage of primary care physicians, access and cost issues, and the rising tsunami of chronic conditions (per Baylor Scott & White CEO Pete McCanna today, three-quarters of the U.S. population has at least one chronic condition), the PCP model is unfortunately not likely to be a viable short-term fix for Americans’ needs for healthcare education and guidance, even with the hopes placed on effective population health management. Remember what Ann Somers Hogg said above about “personalized, proactive and preventive”? Pete McCanna shared the following concern about American healthcare today: “Our [American] health system has some of the most personal data, but we treat people impersonally. We have to move from being paternalistic to recognizing it’s about you.” Healthcare is incredibly personal – it’s our lives and our bodies and our minds. As Inova Health CEO Dr. Stephen Jones said today at the conference, “for our patients, we are in the trust business,” not just the healthcare business.

So, perhaps the always-available, “good enough,” and intimate relationship-based mechanism of a healthcare chatbot really is the lowest cost, least difficult solution for people to learn about their own health and to try to obtain guidance. As NVIDIA Vice President of Healthcare Kimberley Powell said, open reasoning models are more relatable to humans and can break down complex tasks.

It was interesting to note in the OpenAI report that ChatGPT healthcare inquiries were particularly heavy in rural and underserved areas, in hospital “deserts,” and after normal clinic hours (such as when your child spikes a fever at night). Many health plans have for years sponsored an after-hours nurse triage line to help members evaluate and seek care, but frankly utilization of those lines has always been underwhelming. Again, is it because it is not “personal” and comes without a true relationship? People really do develop relationships with AI LLMs, with articles abounding online and in the media about people falling in love with their chatbot or sharing deep, dark secrets no other human knows. It has to be the tone, the conversation, the responsiveness of the AI models, as the same information is available through a Google search. But Google’s effort years ago to build a consumer-facing electronic health records system failed amid an uproar over the threat to patient privacy. Why is this different now? Has the American population changed, trained by years of social media disclosures and changing circumstances? Or is it instead that a minimum level of personal relationship needs to be built up before a person gives the “gift of trust” to a technology-enabled recipient?

Is it simply the authenticity of the chatbot’s conversation, its use of tone or sass, that conveys a sense of personhood and sufficient reliability that a “corporate” offering could not? There is much to think about here. Many health systems and health plans are considering offering LLM AI products to their patients as part of a patient engagement program, and a lack of effective connection or authenticity could doom the large investment of time and capital by those healthcare enterprises.

Also, does the consumer want to interact and share with more than one AI “person” at a time? If they are going to share with Claude, Gemini or ChatGPT, then what is the role of the health system’s patient engagement LLM? Are Anthropic, Google and OpenAI offering a service (as a health system or health plan would through its app), or are they offering a continuing relationship and engagement to the consumer?

We also hear the concern raised by many in the healthcare industry that the AIs are not “good enough” today to take on the role of healthcare educator or diagnostician. We saw that concern play out this past weekend, when Google removed some of its artificial intelligence health summaries after a Guardian investigation found that people were being put at risk of harm by false and misleading information. But ultimately, it is not healthcare providers’ choice where people obtain their healthcare information or what counts as “good enough” information. These days, people will make their own choices as to information sources. Two points can be made. First, the information provided doesn’t have to be perfect; it just has to be “good enough.” Second, what is “good enough” should be assessed not only today, in light of people’s lack of options, cost pressures and access issues, but realistically over the next several years as the technology improves. As Hartford Healthcare CEO Jeff Flaks said at the conference today, “this is the beginning of the beginning, similar to the first iPhone.” He then reminded us of the many perceived weaknesses of that first iPhone model and contrasted them with where the iPhone is today. This technology will progress as well, and early adopters of AI will both suffer its limitations and gain from its benefits in the first part of the innovation adoption curve.

Remember, too, the disruptive innovation paradigm taught by the late Harvard professor Clayton Christensen. As he taught, disruptive innovations are NOT breakthrough technologies that make good products better; rather, they are innovations that make products and services more accessible and affordable, thereby making them available to a larger population. In healthcare, the information the American population needs is there already, but it sits effectively behind the “paywall” of today’s healthcare system. The AI LLM models are making that healthcare information more accessible and affordable.

Note that the above relates to the education and guidance of people as to their healthcare status and needs, and not to the delivery of care itself. So, it is perfectly consonant for people to go to ChatGPT, Gemini, Claude or Google for information, but then to go to a doctor or a hospital for care. The question, however, is which doctor or hospital, and when? Given the cost of care, many people are Googling or asking ChatGPT how long they can defer care for their particular situation. That unfortunately bodes poorly for outcomes for the people deferring care, but, if we remain in a fee-for-service environment, it means more revenue for healthcare providers as they deal with more acute situations and the effects of “deferred maintenance.” Healthcare costs will continue to rise over the next five years no matter what, absent massive systemic or political change, as insurance premiums increase and healthcare industry disruption continues. Why is that? Multiple factors are creating the perfect storm – the aging of America; the higher disease burden in America today from obesity, diabetes, musculoskeletal (MSK) issues and other chronic conditions; the shocking rise in cancer incidence in people under the age of 50; the cost of innovative new technologies such as GLP-1s; and the fact that better pharmaceuticals and medical devices are turning former “death sentence” diseases into chronic conditions that can be managed but that incur ongoing costs to the system.

If costs continue to rise and Medicaid and ACA insurance coverage are reduced, American patients will face increased pressure to find alternatives and work-around solutions, the most significant of which will be using AI as their new PCP. In the last five years, some health plans offered “digital first” or “virtual first” health insurance products in which the patient first went to a telehealth provider or triage process (and multiple health systems at the conference today, such as ChristianaCare, talked about virtual-first primary care offerings). The next step for the healthcare industry may well be AI-first models of care, in which the AI LLM does the triage work and “minds” the patient over time, looking for key indicators of disease progression or acute episodes, at which point the AI connects the patient back to human-driven care (whether by telehealth or in person). With proper reporting back to the PCP or the coordinated care oversight program, this could allow much greater scaling of existing resources, much as anesthesiologists today supervise multiple CRNAs rather than one doctor providing anesthesia in a single operating room. To do this right, though, the LLMs will need to be trained on appropriate clinical pathways and protocols, with greater standardization combined with greater personalization.
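To make that triage-and-escalate pattern concrete, here is a minimal sketch in Python. Every check-in field, threshold and disposition below is an illustrative assumption for discussion purposes only; nothing here reflects any vendor’s product, any announced offering, or an actual clinical protocol.

# Hypothetical sketch of an "AI-first" triage-and-escalate loop.
# Every field, threshold and rule here is an illustrative assumption,
# not a real clinical protocol or any vendor's product behavior.

from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    SELF_CARE = "continue AI guidance and monitoring"
    TELEHEALTH = "connect to a telehealth clinician"
    IN_PERSON = "refer to in-person care"
    EMERGENCY = "direct to emergency services"


@dataclass
class CheckIn:
    """A patient-reported check-in that the AI 'minds' over time."""
    symptom_severity: int      # 0-10 self-reported scale (illustrative)
    red_flag_symptoms: bool    # e.g., chest pain or confusion (illustrative)
    days_trending_worse: int   # consecutive check-ins with a worsening trend


def triage(check_in: CheckIn) -> Disposition:
    """Map a check-in to a disposition using illustrative escalation rules."""
    if check_in.red_flag_symptoms:
        return Disposition.EMERGENCY
    if check_in.symptom_severity >= 7:
        return Disposition.IN_PERSON
    if check_in.symptom_severity >= 4 or check_in.days_trending_worse >= 3:
        return Disposition.TELEHEALTH
    return Disposition.SELF_CARE


# Example: mild symptoms that have worsened for three straight days escalate
# to telehealth, with the encounter reported back to the PCP or the
# coordinated care oversight program.
print(triage(CheckIn(symptom_severity=3, red_flag_symptoms=False, days_trending_worse=3)).value)

The point of the sketch is the shape of the workflow, namely continuous low-cost monitoring by the AI with a defined hand-off back to human-driven care and reporting to the PCP, not the particular rules, which in practice would have to come from validated clinical pathways and protocols.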

Healthcare is incredibly personal – it’s our lives and our bodies and our minds. The key to many successful treatments is really understanding the patient – not just their statistics, but also their family history, their history of trauma (which can impede normal development and create a predisposition for various illnesses, whether physical, mental or psychosomatic), their current life circumstances, and the way in which they frame and address their own health and life goals. This allows for effective diagnosis and engagement, with care that really works for that patient rather than care that ultimately does not make a difference. We see this all the time in so-called “medical” specialties, where the diagnosis is made and the treatment is brilliantly prescribed by the doctor, but the patient doesn’t comply, often because of resistance, lack of understanding, resources or education, or an inability to incorporate the prescribed treatment into their life circumstances. How many patients remain sick, and how many dollars get wasted on prescriptions written and filled but not fully complied with? To really get to this level of understanding requires multiple visits and many hours with a patient. It is effective and can make a difference, but our healthcare system is not set up for this and, as importantly, doesn’t pay for it. As we say in music, how do you get to Carnegie Hall? Practice, practice, practice. But that kind of repeated iteration doesn’t work well in our resource-constrained healthcare environment. If we want to give effective care, it has to be personal, and each of us has a long story and a lot to say. Who is there today to listen and to partner with us?

So, perhaps AI really is the answer to our healthcare cost and efficacy conundrums here in the United States? All we have to do is come together to create one or more LLMs that can truly be the front line of medical care and that can apply the vast learning of medical science today to analyze, sort and predict when the intervention of precious healthcare resources can best be applied to improve quality of life, outcomes and cost. Perhaps we can even have affinity-branded healthcare LLMs that increase identification and trust – not necessarily with a hospital’s brand, but perhaps one offered by your favorite sports team, fashion brand, political party or musician? Trust is needed for success, so the options must be analyzed and evaluated.

Turning quickly now to the question of what healthcare businesses need from AI, we can draw good insight from the top-notch healthcare executives who shared in great detail today at the J.P. Morgan Healthcare Conference what their businesses are doing with AI and where it is working. The clear winners so far are:

  • Ambient listening, in which patient/provider interactions are transcribed and appropriately entered into the electronic health record, allowing the physician to actually look at and interact more with the patient than with the keyboard, and returning significant incremental productivity to the healthcare employer and “pajama time” (evenings otherwise spent charting) to the physician.
  • Call center automation, with AI agents being able to effectively interact with many patients on scheduling and other routine matters, with improved NPS scores, fewer dropped or abandoned calls, and lower rates of patient “no shows.”
  • Revenue cycle management, to reduce errors, process denials faster, increase revenue through appropriate billing code substitutions and over time reduce workforce headcount. This may become an AI arms race between payors and providers.
  • Faster identification of strokes and pulmonary embolism, with AI radiology algorithms currently saving about 40 minutes on average, allowing faster proper treatment and better outcomes.
  • Sepsis identification and reduction. Cleveland Clinic recounted its successful use of Bayesian Health for sepsis reduction.
  • AI-enabled imaging reviews to screen for other health risks (there was a recent article in the New York Times about this occurring in China and finding cancers).
  • Pancreatic cancer detection, normally a very difficult cancer to diagnose early.
  • Reduction of “excess days” in hospital, increasing case mix index (CMI) and better managing bed inventory.
  • Supply chain management to reduce costs and purchasing variation.

Dr. Richard Gray, CEO of Mayo Clinic, shared a datapoint, an example and a story. He reported that Mayo has 155 AI applications in use currently, with a further 310 in development. He also shared that Mayo has been working with an AI avatar of a patient’s surgeon that educates the patient on post-surgery care, expectations and follow-up, with good reception by patients to this time-saving and engaging modality. The story related to glioblastoma, a very deadly form of brain cancer with very low survival rates. Mayo used AI to evaluate all of the variables associated with brain cancer surgery, including the drugs administered post-surgery. It found unexpectedly that one specific antiseizure drug – and only that specific drug – doubled the survival rate as compared to the administration of other drugs. Mayo is now doing further studies to validate these results, but if confirmed, this AI analysis could save the lives of many glioblastoma patients.

So, while clinical applications hold much promise, the first fruits of AI return on investment will come from administrative task and cost reduction, with many health systems moving through their wish lists of tasks to be automated. Depending on each health system’s balance sheet and profit generation, this may be an accelerated process. Per ChristianaCare CEO Janice Nevin, by 2028 any administrative task that can be automated with AI will be automated.

Quick Hits:

’Tis the Season…for raising financing. A large number of nonprofit health systems will be going to market to raise bond financing, many in this first quarter. Much of it is targeted at growth funding, as well as restructuring existing financing entered into earlier at higher cost levels.

Turkey Bingo Phrase of the Day: “Meeting people where they are,” as used to describe moves into ambulatory care, digital-first strategies, digital apps, community-based imaging or clinics, etc. We heard this over and over today – which is good, as it would get awfully lonely trying to meet people where they are not.

On to Day 2, with more updates on the health of the largest hospital systems and other fun topics.

FOOTNOTES

[1] “Harnessing Internet Search Data as a Potential Tool for Medical Diagnosis,” JMIR Mental Health, February 2025.

Eric Klein

Eric Klein is a partner in the Century City office of Sheppard, Mullin, Richter & Hampton LLP and Leader of the Healthcare Team.

