How AI/LLMs Are Impacting Healthcare & What It Means For The Future

Key Takeaways

  • Large Language Models (LLMs), like GPT, represent a natural advancement of existing AI technologies, enabled by the ready availability of large-scale computation and storage.
  • These tools excel at managing, analyzing, and summarizing textual or semi-structured data, and companies that embrace them to amplify productivity will have an advantage over those that don’t.
  • Beyond these near-term gains, the long-term impact will be difficult to assess until we see how LLMs reshape business and technical workflows and decision-making.

Digital tools like ChatGPT have been all over the news recently, and questions are emerging about how these types of AI systems and models can impact healthcare and healthtech companies. The US artificial intelligence (AI) in healthcare market was valued at $1.3 billion in 2021, was expected to reach $1.87 billion in 2023, and is projected to grow to $6.7 billion by 2030.

The fast adoption of AI illustrates the pressure on healthcare companies to streamline various operational aspects of their businesses. Examples include accelerating drug discovery, managing large data sets, predicting optimal drug candidates, diagnosing major diseases, operating autonomous robotics, and enabling personalized medicine.

A big part of AI today is Large Language Models (LLMs), the most familiar example being ChatGPT. While popular LLMs are best known for answering questions and generating text, their capabilities go far beyond summarizing documents and auto-completing text.

LLMs for healthcare have advanced significantly over the last couple of years, producing newer, more powerful tools. With upwards of hundreds of billions of parameters supporting sophisticated text creation, curation, and connection, these tools are already being deployed at forward-thinking healthcare companies.

What Are Large Language Models (LLMs)?

LLMs are computer programs that learn the probability of word sequences from enormous bodies of text and use those probabilities to generate unique new content. This can fundamentally change how computers work with text and data.

Let’s explain.

In the past, computers required structured, system-specific data before hand-written code could manipulate it and extract information. In healthcare, structured data includes lab values, medication dosages, and coded patient histories. LLMs go beyond structured data, enabling analytics, manipulation, and content creation over unstructured text. This makes the computer sharper, more agile, and more adept at its tasks without formal software engineering.
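
To illustrate, here is a hedged sketch of that workflow in Python. The `call_llm` helper is a hypothetical stand-in for whatever provider client an organization actually uses; the point is the pattern of prompting a model to turn free-text notes into structured records.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM provider's API;
    swap in the actual client library your organization uses."""
    raise NotImplementedError

def extract_medications(clinical_note: str) -> list[dict]:
    """Ask an LLM to turn a free-text clinical note into structured rows."""
    prompt = (
        "Extract every medication and dosage from the note below. "
        "Respond only with a JSON list of objects with keys "
        "'medication' and 'dosage'.\n\nNote: " + clinical_note
    )
    return json.loads(call_llm(prompt))

note = "Pt started on metformin 500 mg BID; continue lisinopril 10 mg daily."
# extract_medications(note) might plausibly return:
# [{"medication": "metformin", "dosage": "500 mg BID"},
#  {"medication": "lisinopril", "dosage": "10 mg daily"}]
```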

The size of these systems is substantial: models of tens of gigabytes trained on hundreds of terabytes of text. NVIDIA’s BioMegatron, with variants of up to 1.2 billion parameters, was trained on 6.1 billion words from PubMed, the standard repository of abstracts and full-text journal articles on biomedical topics. ChatGPT’s 175-billion-parameter model, trained on more than 45 terabytes (TB) of text, extends what can be done with text and data: reading, translating, summarizing, rewriting, classifying, searching, generating, and even clustering.
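
A bit of back-of-the-envelope Python shows why models at this scale need datacenter-class hardware. The 2-bytes-per-parameter figure assumes 16-bit weights; real deployments need more once activations and serving overhead are included.

```python
# Rough sizing of a GPT-3-scale model (assumption: fp16, 2 bytes/parameter).
params = 175e9           # 175 billion parameters
bytes_per_param = 2      # 16-bit floating point
gib = params * bytes_per_param / 2**30
print(f"~{gib:,.0f} GiB just to hold the weights")  # prints ~326 GiB
```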

In other words, LLM technologies create a unique opportunity for domain-specific healthcare data.

Do LLMs Think & Reason?

LLMs are excellent at prediction but do not comprehend the way humans think and understand. Instead, LLMs exploit the probability distribution of word sequences, learned from sufficiently large volumes of text, to give surprisingly reasonable answers to questions and problems.
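
A toy sketch in Python makes the mechanism concrete. This bigram model estimates next-word probabilities from raw counts over a three-sentence corpus; an actual LLM applies the same statistical principle with billions of parameters and far longer context, but no more "understanding."

```python
from collections import Counter, defaultdict

# Toy illustration, not a real LLM: estimate next-word probabilities
# from bigram counts over a tiny corpus.
corpus = (
    "the patient reported chest pain . "
    "the patient reported shortness of breath . "
    "the nurse reported the patient was stable ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(word: str) -> dict[str, float]:
    """Return P(next word | previous word) from raw counts."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("patient"))
# Roughly {'reported': 0.67, 'was': 0.33} -- text is "completed" by
# sampling from this distribution, with no grasp of what the words mean.
```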

It’s important to note that these answers are not always correct, but they generally sound correct. LLMs become more fluent as their training data grows more robust and complete. While LLMs may look and act like humans in terms of output, it is a surface resemblance, in the same way a counterfeit looks and feels like the original: similar in what it is, but not in how it is made or how it reasons.

An Example From Healthcare

Healthcare companies generate large volumes of documents that follow consistent formats and language, including patient records, hospital operations data, and lab results. Since LLMs can produce high-quality documents from structured data, they are an ideal tool for further streamlining and automating this documentation.

However, since LLMs cannot reason, these polished-looking documents can mask critical mistakes. The scientific and technical content of these documents therefore demands a structured, formal review before release, analogous to software code review and quality assurance/quality control (QA/QC).
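
As a minimal sketch of what one automated check in such a review might look like (illustrative only, not a complete QA/QC pipeline): flag any numeric value in a generated document that cannot be traced back to the source record, so human reviewers know where to look first.

```python
import re

def numbers_in(text: str) -> set[str]:
    """Collect every numeric token in a document."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def flag_unsupported_numbers(generated: str, source: str) -> set[str]:
    """Numbers in the generated document that never appear in the source
    data -- candidate hallucinations a human reviewer must check."""
    return numbers_in(generated) - numbers_in(source)

source = "Hemoglobin 13.2 g/dL, glucose 98 mg/dL, BP 120/80."
draft = "Labs unremarkable: hemoglobin 13.2 g/dL, glucose 98 mg/dL; BP 130/80."
print(flag_unsupported_numbers(draft, source))  # {'130'}
```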

Real-World Implications

Such a review step may seem obvious, but because people generally associate high-quality written materials with accuracy, there is a real temptation to cut expert reviewers out of the process. How companies create and streamline the QA/QC process will be critical.

Business areas outside software engineering have struggled to incorporate other proven software engineering practices effectively (think Agile management techniques), so expect documentation review oversight to face similar challenges.

This issue also extends beyond healthcare to other fields where LLMs could be deployed: law, government, and engineering all have similar demands, are likely to be early adopters of LLM-derived technologies, and share the same risks.

Why Are Healthcare Companies Investing In LLMs?

There is an outsized opportunity for LLMs within healthcare. Work within the field is sophisticated, with highly complex data sets and processes that require computer and automation tooling to understand and complete. 

LLMs are most effective at performing repetitive tasks at scale. They can evaluate, analyze, extract, and identify opportunities in areas such as drug discovery or automation. They can also provide tooling and capabilities that allow organizations to deploy resources effectively and efficiently.

Healthcare, with its dependence on large data sets and the associated complexity of analyzing and reporting on that data, is a particularly important customer, as LLMs may reduce the need for certain types of labor and save some healthcare dollars.

How Can LLMs Decrease Organizational Costs?

In the past, software engineers wrote almost every single line of computer code. Over time, computers have taken on an increasing amount of these functions. Artificial intelligence tools allow computers to take more responsibility for translating requirements into code – a function formerly limited to human programmers. 

GitHub’s Copilot is an example of how software engineers’ productivity can be improved. By reading and analyzing large volumes of historical source code, it can suggest new lines of code, application programming interface (API) calls, and even full functions, based on the preceding lines of code and a model trained on massive volumes of code written by other programmers.
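
A stylized example of that workflow (the suggested body below is representative of assistant output, not a recorded Copilot suggestion): the engineer supplies intent as a signature and docstring, and the tool proposes the implementation.

```python
# The engineer writes the signature and docstring...
def bmi(weight_kg: float, height_m: float) -> float:
    """Return body mass index from weight in kilograms and height in meters."""
    # ...and an assistant like Copilot typically completes the body:
    return weight_kg / (height_m ** 2)

assert round(bmi(70, 1.75), 1) == 22.9  # the engineer still verifies it
```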

One of the key challenges for software engineers is keeping up with releases of new frameworks and APIs. Because LLM-based tools like Copilot are trained on these technologies, they improve engineers’ productivity by reducing the time and effort required to master new commands, structures, and functions.

An LLM’s underlying model can also be continuously refined and updated with new coding patterns, enabling better tradeoffs between execution speed and memory. This affects the time and cost of running systems, helping control cloud storage and other traditional information technology costs. Additionally, LLMs can reduce time-to-market for software products and shorten release cycles.

While improved productivity, saved time, and higher efficiency are worthwhile pursuits, they also mean that fewer engineers are needed to deploy a product.

Even though this is a nascent technology, it requires changes to QA and compliance processes to ensure that the algorithms provide reliable data. The curation of data models and ontologies is part of the investment in LLMs. When done well, the LLMs become significant intellectual property for organizations and allow the data to be mined for targets and novel insights.

This need for agility and speed with large ontologies and similar data components makes the healthcare industry a particularly strong potential consumer of emerging LLM technologies like Copilot.

Which Areas Of Healthcare Are Deploying LLMs?

Initiatives are underway across the healthcare industry using LLMs to aid in computer engineering, automated registry reporting, drug discovery target identification, and documentation.

In addition to automating text mining on unstructured data, LLMs also hold promise for computer-assisted coding, from automated data processing to software programming.

What Are the Drawbacks To LLMs?

As mentioned, LLMs do not think and reason, which means all LLM output may have trust and reliability issues. This is particularly true in regulated settings like healthtech and drug development. Appropriately leveraging testing, QA/QC, and compliance is essential to managing the effectiveness of any AI/LLM initiative.
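
One concrete control, sketched here under the assumption that the LLM is asked to return JSON conforming to a fixed contract (the schema and field names are illustrative): validate every response before anything downstream consumes it.

```python
import json
from jsonschema import ValidationError, validate  # pip install jsonschema

# Machine-checkable contract for one extracted medication record.
SCHEMA = {
    "type": "object",
    "properties": {
        "medication": {"type": "string"},
        "dosage_mg": {"type": "number", "minimum": 0},
    },
    "required": ["medication", "dosage_mg"],
    "additionalProperties": False,
}

def accept_llm_output(raw: str) -> dict:
    """Parse and validate an LLM response; reject anything off-contract."""
    record = json.loads(raw)
    try:
        validate(instance=record, schema=SCHEMA)
    except ValidationError as err:
        raise ValueError(f"LLM output failed validation: {err.message}") from err
    return record

print(accept_llm_output('{"medication": "metformin", "dosage_mg": 500}'))
```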

Organizations that wait to incorporate LLMs into their workflows will shortly find themselves at a disadvantage. However, doing so safely will be critical for effectively using these new tools and capabilities. 

Standing up these technologies in a production environment requires federated partnerships involving data ownership, tooling, and projects beyond the scope of traditional software implementations. Users and experts must learn with the LLMs as they grow and expand. Organizations overseen by federal agencies such as the FDA face additional complexity and must ensure that their LLMs comply with regulations and provide reliable, valid information.

All of these checks and balances will require both proactive efforts by early adopters and clear guidance and monitoring from the regulatory entities to ensure the process goes smoothly.

What Does The Future Hold For AI/LLMs?

Companies that invest in these approaches are likely to see a benefit in time and cost to market for their products – even more so for companies that develop and deploy significant amounts of software as part of their business. Later entrants may find themselves compelled to follow along to stay competitive.

There are enticing possibilities for companies that quickly adapt to these new realities and risks for those who don’t. The most interesting potential opportunities are around leveraging AI/LLM tools themselves as a means to transition to new technologies, like AI-assisted training and data management. 

Tools like NVIDIA’s BioMegatron represent important emerging uses of AI/LLM capabilities, which will likely continue to grow. Successful healthcare companies will likely partner with industry experts – Microsoft, for example, is making substantial investments in AI/LLM, as evidenced by its investment in OpenAI, Copilot, and other projects. Life science companies can leverage this expertise to pull ahead of rivals.

The 10-Year Point of View

The next ten years will be fascinating as LLMs and associated technologies mature. The impact is tough to predict, as it depends on the degree to which each industry embraces these tools – and on the regulatory environment for doing so. Provided that the regulatory environment remains relatively open and access to funding continues, we will likely see a class of workers arise whose primary responsibility is supervising LLM-derived tools as they rapidly draft pre-release versions of many common work products across various industries.

Examples within healthcare include:

  • Generation of clinical trial planning materials
  • Consent documentation
  • Regulatory documentation
  • Experimental data and many other similar materials

In other areas, like finance, we expect to see a significant portion of analysis/summarization and similar tasks done (at least in the first draft) by AI systems. Costs may shift from headcount to tooling in the short term, and in the longer term, you may see a higher demand for experienced rather than entry-level employees. 

The industry’s challenge will be ensuring that the appropriate level of supervision for these “hybrid” efforts is maintained. The arms race to reduce costs through this kind of automation will be intense. Still, ensuring that staff is trained appropriately to take on the higher-level responsibilities of review and release to reduce risk is critical. 

Expect to see some spectacular failures as industries come to terms with how these systems work in practice and how to manage the associated risks appropriately.

Recognizing the unanticipated issues that deploying these tools will raise, and remediating them quickly, will define success in healthcare and other industries.

Mark Adams, Ph.D.

Dr. Mark Adams is the Chief Operating Officer of Adaptive Biotechnologies, a commercial-stage biotech company that aims to translate the genetics of the adaptive immune system into clinical products to diagnose and treat disease. Before Adaptive, Mark served as Managing Director, Healthcare Advanced Analytics at SVB Leerink, a health and life sciences-focused investment bank. Mark has also served as Chief Information Officer at Celmatix, a women’s health precision medicine company, and at Good Start Genetics, a molecular diagnostics company. Mark holds a BA from Oberlin College and a Ph.D. from Baylor College of Medicine.

Pam Adams, MBA

Pam Adams is a former CFO and Technology Director with experience in various industries, including life sciences, advanced manufacturing, and education. She has worked as a management consultant, both independently and for PriceWaterhouse and Booz Allen Hamilton. She advises both private and nonprofit organizations. She has recently developed and delivered a curriculum for technical managers to explore management science and learn how to develop leadership skills, build high-performing teams, and achieve effective results. Pam holds a BA from Oberlin College and an MBA from Babson, with a focus on entrepreneurship.
