Global Research Trends
Nuclear Medicine
- February 2025 Issue
[PLoS One] Multimodal feature fusion-based graph convolutional networks for Alzheimer's disease stage classification using F-18 florbetaben brain PET images and clinical indicators
Pukyong National University, Dong-A University College of Medicine / Gyu-Bin Lee, Young-Jin Jeong*, Hyun-Jin Yun, Min Yoon, Do-Young Kang*
- Source
- PLoS One
- Publication Date
- 2024 Dec 23
- Journal Issue Number
- 19(12):e0315809.
- Contents
Abstract
Alzheimer's disease (AD), the most prevalent degenerative brain disease associated with dementia, requires early diagnosis to alleviate worsening of symptoms through appropriate management and treatment. Recent studies on AD stage classification are increasingly using multimodal data. However, few studies have applied graph neural networks to multimodal data comprising F-18 florbetaben (FBB) amyloid brain positron emission tomography (PET) images and clinical indicators. The objective of this study was to demonstrate the effectiveness of graph convolutional networks (GCN) for AD stage classification using multimodal data, specifically FBB PET images and clinical indicators, collected from Dong-A University Hospital (DAUH) and the Alzheimer's Disease Neuroimaging Initiative (ADNI).

The effectiveness of GCN was demonstrated through comparisons with the support vector machine, random forest, and multilayer perceptron across four classification tasks (normal control (NC) vs. AD, NC vs. mild cognitive impairment (MCI), MCI vs. AD, and NC vs. MCI vs. AD). As input, all models received the same combined feature vectors, created by concatenating PET imaging feature vectors extracted by a 3D dense convolutional network with non-imaging feature vectors consisting of clinical indicators, using a multimodal feature fusion method. An adjacency matrix for the population graph was constructed using cosine similarity or the Euclidean distance between subjects' PET imaging feature vectors and/or non-imaging feature vectors. The usage ratio of these different modal data and the edge assignment threshold were tuned as hyperparameters. In this study, GCN-CS-com and GCN-ED-com were the GCN models that received the adjacency matrix constructed using cosine similarity (CS) and the Euclidean distance (ED), respectively, between the subjects' PET imaging feature vectors and non-imaging feature vectors.
In modified nested cross-validation, GCN-CS-com and GCN-ED-com achieved average test accuracies of 98.40%, 94.58%, 94.01%, 82.63% and 99.68%, 93.82%, 93.88%, 90.43%, respectively, for the four aforementioned classification tasks on the DAUH dataset, outperforming the other models. Furthermore, GCN-CS-com and GCN-ED-com achieved average test accuracies of 76.16% and 90.11%, respectively, for NC vs. MCI vs. AD classification on the ADNI dataset, outperforming the other models. These results demonstrate that GCN could be an effective model for AD stage classification using multimodal data.

Affiliations
Gyu-Bin Lee 1 2, Young-Jin Jeong 1 3, Do-Young Kang 1 3, Hyun-Jin Yun 1, Min Yoon 2
1Department of Nuclear Medicine, Dong-A University College of Medicine and Medical Center, Busan, Korea.
2Department of Applied Mathematics, Pukyong National University, Busan, Korea.
3Institute of Convergence Bio-Health, Dong-A University, Busan, Korea.
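The fusion and graph-construction steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array sizes, random features, and threshold value are illustrative assumptions standing in for the DenseNet-extracted PET features and clinical indicators.

```python
import numpy as np

# Hypothetical toy data standing in for the paper's features: per-subject
# PET imaging feature vectors (e.g. from a 3D DenseNet) and clinical indicators.
rng = np.random.default_rng(0)
pet_features = rng.normal(size=(5, 8))       # 5 subjects, 8 imaging features
clinical_features = rng.normal(size=(5, 3))  # 5 subjects, 3 clinical indicators

# Multimodal feature fusion by concatenation, as described in the abstract.
combined = np.concatenate([pet_features, clinical_features], axis=1)

def build_adjacency(features, threshold=0.5):
    """Population-graph adjacency from pairwise cosine similarity.

    An edge is kept only where similarity meets `threshold`; the edge
    assignment threshold is a tuned hyperparameter in the paper, and the
    value used here is an arbitrary placeholder.
    """
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T          # pairwise cosine similarity
    adj = np.where(sim >= threshold, sim, 0.0)
    np.fill_diagonal(adj, 1.0)       # self-loops, standard GCN preprocessing
    return adj

adj = build_adjacency(combined, threshold=0.3)
```

The same construction with a Euclidean-distance kernel in place of cosine similarity would correspond to the GCN-ED-com variant.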
- Research Introduction
- Alzheimer's disease (AD) is a representative degenerative brain disease, and early diagnosis is important so that appropriate treatment and management can follow. Recent studies on AD stage classification have drawn attention to multimodal approaches that combine different kinds of data. In this study, we proposed a new AD classification approach that applies a 3D Dense Convolutional Network (DenseNet), similarity and distance measures, and a Graph Convolutional Network (GCN) to multimodal data consisting of F-18 florbetaben (FBB) amyloid PET images and clinical data. Through experiments on the Dong-A University Hospital and ADNI datasets, we confirmed that the proposed approach outperforms conventional machine learning models. We therefore believe this study can serve as a useful reference for researchers interested in multimodal data classification.
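For readers unfamiliar with GCNs, a single graph-convolution layer can be sketched with the standard symmetric-normalization propagation rule, ReLU(D^(-1/2) A D^(-1/2) X W). This is a generic sketch of how a GCN propagates fused features over the population graph, not the authors' exact architecture; the toy graph, feature dimensions, and random weights are illustrative assumptions.

```python
import numpy as np

def gcn_layer(adj, x, w):
    """One graph-convolution layer: ReLU(D^{-1/2} A D^{-1/2} X W).

    `adj` is assumed to already include self-loops, so every degree is >= 1.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ adj @ d_inv_sqrt  # symmetrically normalized adjacency
    return np.maximum(a_norm @ x @ w, 0.0)  # linear transform + ReLU

# Toy forward pass: 4 subjects on a small chain graph, 6-dim fused feature
# vectors, and 3 output classes (e.g. NC vs. MCI vs. AD). All numbers are
# illustrative only.
rng = np.random.default_rng(1)
adj = np.eye(4) + np.array([[0, 1, 0, 0],
                            [1, 0, 1, 0],
                            [0, 1, 0, 1],
                            [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 6))
w = rng.normal(size=(6, 3))
logits = gcn_layer(adj, x, w)
```

Each subject's output row mixes its own fused features with those of its graph neighbors, which is why the choice of similarity measure and edge threshold for the population graph matters so much in this setting.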