Niu, Shuai; Ma, Jing; Bai, Liang; Wang, Zhihua; Guo, Li (ORCID: https://orcid.org/0000-0003-1272-8480); Yang, Xian (2024) EHR-KnowGen: Knowledge-enhanced multimodal learning for disease diagnosis generation. Information Fusion, 102, Article 102069. ISSN 1566-2535
Published Version
Available under License Creative Commons Attribution.
Abstract
Electronic health records (EHRs) contain diverse patient information, including medical notes, clinical events, and laboratory test results. Integrating this multimodal data with deep learning models can improve disease diagnosis. However, effectively combining different modalities for diagnosis remains challenging. Previous approaches, such as attention mechanisms and contrastive learning, have attempted to address this but do not fully integrate the modalities into a unified feature space. This paper presents EHR-KnowGen, a multimodal learning model enhanced with external domain knowledge, for improved disease diagnosis generation from diverse patient information in EHRs. Unlike previous approaches, our model integrates different modalities into a unified feature space via soft prompt learning and leverages large language models (LLMs) to generate disease diagnoses. By incorporating external domain knowledge at different levels of granularity, we enhance the extraction and fusion of multimodal information, resulting in more accurate diagnosis generation. Experimental results on real-world EHR datasets demonstrate the superiority of our generative model over comparative methods and provide explainable evidence that enhances the understanding of diagnosis results.
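The core mechanism the abstract describes, projecting each EHR modality into a shared feature space and prepending learnable soft prompts before a generative language model, can be illustrated with a minimal sketch. This is not the authors' implementation: the class name SoftPromptFusion, the dimensions, and every parameter name below are assumptions made purely for illustration, using plain PyTorch.

# Minimal, illustrative sketch of soft-prompt multimodal fusion.
# NOT the EHR-KnowGen code; all names and dimensions are hypothetical.
import torch
import torch.nn as nn

class SoftPromptFusion(nn.Module):
    """Projects per-modality feature sequences into a shared space and
    prepends learnable soft-prompt vectors, producing one unified
    sequence that a generative LLM could attend over."""
    def __init__(self, modality_dims, d_model=512, n_prompts=8):
        super().__init__()
        # One linear projection per modality (e.g. notes, events, labs).
        self.projections = nn.ModuleList(
            [nn.Linear(d, d_model) for d in modality_dims]
        )
        # Learnable soft prompts, shared across all examples.
        self.soft_prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)

    def forward(self, modality_feats):
        # modality_feats: list of (batch, seq_i, dim_i) tensors.
        projected = [proj(x) for proj, x in zip(self.projections, modality_feats)]
        fused = torch.cat(projected, dim=1)        # (batch, sum seq_i, d_model)
        batch = fused.size(0)
        prompts = self.soft_prompts.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompts, fused], dim=1)  # soft prompts prepended

# Toy usage: three modalities with different feature widths.
fusion = SoftPromptFusion(modality_dims=[768, 128, 64])
notes  = torch.randn(2, 32, 768)   # medical-note token embeddings
events = torch.randn(2, 16, 128)   # clinical-event embeddings
labs   = torch.randn(2, 10, 64)    # lab-test embeddings
unified = fusion([notes, events, labs])
print(unified.shape)               # torch.Size([2, 66, 512])

In the full model, the unified sequence would condition an LLM decoder that generates the diagnosis as text; how the paper injects external domain knowledge at different granularities is not shown here.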