AutoPrompt aims to improve ChatGPT’s analysis of clinical data
Research project develops more targeted prompting.
Clinical studies include large amounts of data and text. Language models such as ChatGPT help doctors and clinical staff retrieve specific information using natural language. But how well can AI chatbots analyze logical relationships and draw the right inferences? This is where the AutoPrompt research project comes in. It aims to counteract the errors and hallucinations that can occur when these systems draw inferences. To this end, the researchers are developing a system that combines the capabilities of large language models with human interaction. The goal is to improve ChatGPT's performance in natural language understanding and inference in the healthcare context.
The ‘AutoPrompt’ project was initiated at DFKI thanks to a grant from Accenture Labs. Dr Bogdan E. Sacaleanu, Principal Director and Global AI Research Lead at Accenture Labs (on the left) and Prof Daniel Sonntag, DFKI, have made the collaboration possible. Credit: Jaron Hollax / DFKI
In healthcare, language models are gaining increasing attention due to their ability to automatically process large amounts of unstructured or semi-structured data. “With their emergence, our interest in understanding their capabilities for tasks such as inference with natural language as a data basis is growing,” says scientist Siting Liang, who is advancing the AutoPrompt project in the Interactive Machine Learning research department at DFKI Lower Saxony. According to Liang, Natural Language Inference (NLI) is about determining “whether a statement is consistent with or contradicts the premise”. The AutoPrompt project runs from January to December 2024 and is funded by a grant from Accenture, one of the world’s leading consulting, technology and outsourcing companies.
Siting Liang explains her approach using an example. The starting point is the statement that patients with hemophilia are excluded from a study if certain premises apply, such as an increased risk of bleeding. "This task requires the models to understand the content of the statement and to identify and extract relevant information from clinical trial data. The model evaluates whether the evidence supports, contradicts, or is neutral towards (i.e., neither supports nor contradicts) the statement. Finally, based on this evaluation, the model infers the logical relationship between the statement and the evidence," she explains.
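Liang's example amounts to a three-way natural language inference classification. The snippet below is a minimal sketch of how such a statement-evidence pair and its label set might be represented; the class and function names are illustrative assumptions, not part of the project's codebase.

```python
from enum import Enum

class NLILabel(Enum):
    """The three possible relations between evidence and statement."""
    ENTAILMENT = "supports"
    CONTRADICTION = "contradicts"
    NEUTRAL = "neutral"

def build_nli_example(statement: str, evidence: str) -> dict:
    """Pair a hypothesis statement with premise evidence, as an NLI
    model would receive it. (Hypothetical helper for illustration.)"""
    return {"premise": evidence, "hypothesis": statement}

example = build_nli_example(
    statement="Patients with hemophilia are excluded from this trial.",
    evidence="Exclusion criteria: known bleeding disorders, including hemophilia.",
)
# Here the evidence lists hemophilia as an exclusion criterion, so the
# expected label would be NLILabel.ENTAILMENT ("supports").
```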
Optimize the prompting
As a first step, the computational linguist wants to optimize the prompting, i.e. the instructions given to the chatbot to elicit a specific answer. To this end, she is researching various strategies such as chain-of-thought methods, which give instructions with intermediate steps that follow certain paths and trigger chains of reasoning. The aim is to elicit a certain degree of reasoning ability from the bot. "ChatGPT may be able to recognize relevant sentences from a context, but drawing precise logical inferences requires a deeper understanding of domain knowledge and natural written language," says Liang. In a second step, she will evaluate ChatGPT's performance on NLI tasks using different datasets and suggest improvements. "Our goal is to provide the language models with more domain-specific sources as context," she says. Ultimately, she wants to implement the most suitable prompting strategies and a generation framework that enables more efficient access to additional knowledge.
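A chain-of-thought prompt of the kind described might be structured along these lines. This is a hedged sketch: the template wording and the function name are illustrative assumptions, not the project's actual prompts.

```python
# Illustrative chain-of-thought prompt template for clinical NLI.
# The intermediate steps ask the model to reason before answering.
COT_TEMPLATE = """You are analysing clinical trial eligibility data.

Premise (trial criteria):
{premise}

Statement:
{statement}

Think step by step:
1. Identify the sentences in the premise that are relevant to the statement.
2. Compare each relevant sentence with the statement.
3. Decide whether the premise supports, contradicts, or is neutral
   towards the statement.

Answer with one word: Entailment, Contradiction, or Neutral."""

def build_cot_prompt(premise: str, statement: str) -> str:
    """Fill the template with a concrete premise/statement pair."""
    return COT_TEMPLATE.format(premise=premise, statement=statement)
```

The filled-in prompt would then be sent to the language model; the numbered intermediate steps are what distinguishes this from a plain classification prompt.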
Study with medical students
AI-human collaboration, in this case between the system and medical students, plays a major role in the project. To this end, Siting Liang has set up a study within the project, for which she is currently looking for around ten participants. The given statement is that patients diagnosed with a malignant brain tumor are excluded from a primary study if criteria such as chemotherapy apply. The participants are divided into two groups and contribute their knowledge for two hours each, making decisions by comparing the statement with the clinical trial eligibility data. Group 1 evaluates the decisions of the AI system, while group 2 corrects the system's errors.
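The two-group split could be organized as in the following minimal sketch, which assumes a simple random assignment of roughly ten participants into the two conditions; the actual study protocol may differ.

```python
import random

def assign_groups(participants: list[str], seed: int = 0) -> dict:
    """Randomly split participants into the two study conditions:
    group 1 evaluates the AI system's decisions, group 2 corrects
    its errors. (Illustrative sketch, not the study's real protocol.)"""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "evaluate_decisions": shuffled[:half],   # group 1
        "correct_errors": shuffled[half:],       # group 2
    }
```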
“If we want to improve AI systems, we need feedback from humans,” says Siting Liang, who has already worked with medical data in previous projects of the research department. Liang knows that systems can usually analyze medical texts and data very well: “But it is also possible that they hallucinate and give us wrong results. AutoPrompt is supposed to help achieve greater accuracy in the answers.”
Scientific contacts:
Siting Liang
siting.liang@dfki.de
Prof. Dr. Daniel Sonntag
Daniel.Sonntag@dfki.de
https://www.dfki.de/en/web/news/autoprompt-aims-to-improve-chatgpts-analysis-of-clinical-data