The role of AI-driven natural language processing technologies in healthcare amidst the rise of ChatGPT
Rapidly developing natural language processing (NLP) technologies have already become a natural part of our everyday lives. Whether it’s interacting with an online chatbot or asking the voice-controlled device in your home for a weather update, their influence has become all but inescapable.
NLP describes a branch of artificial intelligence (AI) concerned with technologies that can understand and interpret text and the spoken word much as humans do. Using rule-based modelling of human language combined with machine learning, they are ever-learning tools able to make sense of a user’s intent, adapt to it and provide guidance.
These capable AI-driven language processing technologies have naturally spilled into several industries, including the healthcare sector. Their capacity to sift through large text-based datasets, identify patterns in patient notes and detect important points within documentation at speed has illuminated great potential for NLP technologies.
Transformer language models such as Microsoft’s BioGPT and the pioneering NLP algorithm developed by the winning team of IntelliHQ’s 2022 National AI in Healthcare Training Program Datathon have already shown that language-driven AI insights can truly transform our medical world.
However, it’s not all bells and whistles. The successful implementation of AI-driven NLP technology in the healthcare space currently faces a multitude of ethical and governance concerns, as well as issues of misinformation and malfunction.
The NLP innovations empowering the next generation of healthcare
In January 2023, Microsoft announced BioGPT – a transformer language model trained on 15 million published biomedical research articles. With the ability to answer questions, analyse datasets and documentation, generate biomedical texts and uncover new insights, it works to empower biologists in scientific discovery.
A transformer language model is a form of NLP model that can weigh the relative significance of the different parts of its input. This means it can detect which keywords or data points are most relevant to a user’s query, and also take outliers into account in its decision-making.
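The weighting idea at the heart of a transformer can be sketched in a few lines: each piece of the input is scored against the query, and a softmax turns those scores into attention weights that sum to one. This is an illustrative toy with made-up numbers, not BioGPT’s actual implementation.

```python
import math

def attention_weights(query, keys):
    """Score each key against the query (scaled dot product),
    then softmax the scores into weights that sum to 1."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy two-dimensional embeddings: the second "token" aligns most
# closely with the query, so it receives the largest weight.
query = [1.0, 0.0]
keys = [[0.2, 0.9], [0.9, 0.1], [0.1, 0.1]]
weights = attention_weights(query, keys)
```

In a real transformer these weights are learned over thousands of dimensions and many attention heads, but the principle is the same: the model attends most to the parts of the input that matter for the task at hand.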
Microsoft states that BioGPT has already achieved human parity on certain biomedical text tasks, meaning its accuracy is on par with that of human experts. The technology has already been trialled to develop digital biomarkers, predict treatment outcomes and guide clinical therapies.
ICU pressure injury identification algorithm
As part of IntelliHQ’s 2022 National AI in Healthcare Training Program Datathon, a Sydney-based team developed an algorithm able to identify and mitigate pressure injuries within ICU settings. The NLP solution monitored for keywords among clinician notes, detecting instances, symptoms, signs and suspicions of pressure injury.
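The core keyword-monitoring step can be sketched very simply: scan each clinician note for a vocabulary of pressure-injury terms and surface any matches. The term list below is hypothetical, as the Datathon team’s actual vocabulary and model are not public.

```python
import re

# Hypothetical term list; the Datathon team's actual vocabulary is not public.
PRESSURE_INJURY_TERMS = [
    "pressure injury", "pressure ulcer", "bedsore",
    "sacral wound", "skin breakdown",
]

def flag_note(note: str) -> list[str]:
    """Return the pressure-injury terms mentioned in a clinician note."""
    text = note.lower()
    return [t for t in PRESSURE_INJURY_TERMS
            if re.search(r"\b" + re.escape(t) + r"\b", text)]

notes = [
    "Pt repositioned 2-hourly, no skin breakdown observed.",
    "Sacral wound dressed; suspect early pressure ulcer.",
    "Routine obs, nil issues.",
]
flagged = [(n, hits) for n in notes if (hits := flag_note(n))]
```

Note that naive matching also flags negated mentions (“no skin breakdown observed”), so a production system would layer negation detection and context handling on top of a sketch like this.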
Pressure injuries affect over 4,000 Australians every year at a cost of $820 million. Mainly occurring over bony parts of the body, they are defined as localised damage to the skin or underlying tissue caused by pressure.
The team built and tested the NLP model within Datarwe’s Clinical Data Nexus platform using the Kaggle open-source medical notes dataset. When applied in clinical settings, its findings empower ICU units and clinicians to mitigate risks, increase resource allocation where needed and ultimately improve patient outcomes in real-time.
A pragmatic look into healthcare NLP technologies
NLP technologies are reliant on large amounts of text-based data. This means they generally align with the information-collecting reality of the modern, western healthcare system, where data collection and use (numerical or textual) is embedded in almost every medical practice. This includes documentation in the form of patient notes and imaging reports, and the information available within medical journals and papers.
However, this reliance on available data raises a considerable concern in contexts where complete data, or data representing the true diversity of patient populations, is unavailable. This is where functional bias can arise, as the capabilities of an NLP tool reflect the datasets it is exposed to during development.
Leo Anthony Celi, a principal research scientist at MIT’s Institute for Medical Engineering and Science and founding partner of IntelliHQ’s AI in Healthcare Training Program, said recently in an MIT interview that such functional bias is undermining the expansive capabilities of NLP tools such as ChatGPT.
“There is no question that language models such as ChatGPT are very powerful tools in sifting through content beyond the capabilities of experts, or even groups of experts, and extracting knowledge,” says Celi. “However, we will need to address the problem of data bias before we can leverage [language models] and other artificial intelligence technologies.
“The body of knowledge that LLMs train on, both medical and beyond, is dominated by content and research from well-funded institutions in high-income countries. It is not representative of most of the world.”
Dr. Ashwani Kumar of The George Institute is currently travelling throughout India to educate healthcare practitioners and executives about the importance and benefits of data collection. He mirrors Celi’s concern, noting the massive amounts of documentation and data NLP technologies draw on are simply not available in many developing countries.
“There just is no data,” admits Dr. Kumar. “And in the rare case that there is, it is collected with no standardised structure or format that would allow technologies to make sense of it.”
He notes that for AI and NLP technologies to be truly effective, there must be quality data to draw on. Without it, the tools naturally develop biases and provide direction not applicable to the wider population.
There is significant work underway to create guardrails to prevent these issues occurring. This includes the promotion of fair community representation at the AI governance level through leadership initiatives like IntelliHQ’s Diversity in AI program, and the constant recalibration and retraining of NLP systems throughout their lifetimes.
Enter the future
The potential of AI-driven NLP technologies to revolutionise healthcare is undeniable, yet tempered with apprehension. At IntelliHQ, we believe limited access to training, to truly representative AI leadership and to comprehensive governance frameworks is actively contributing to this doubt. That’s why, to truly harness the power of AI and ensure its success, ongoing training and opportunity creation are required.
IntelliHQ are passionate about making AI education and training accessible to all healthcare professionals. With a series of training programs for both healthcare executives and clinicians, collaborative data-driven events and diversity programs, we believe a more accessible, affordable and efficient healthcare system uplifted by AI is within our sights.