Nuclear Medicine

  • September 2025 issue
    [Eur J Nucl Med Mol Imaging] Automated quantification of brain PET in PET/CT using deep learning-based CT-to-MR translation: a feasibility study

    Yonsei University College of Medicine, University of Texas Southwestern / Daesung Kim, Kyobin Choo, Mijin Yun*, Jaewon Yang*

  • Source
    Eur J Nucl Med Mol Imaging
  • Date listed
    2025 Jul
  • Journal issue
    52(8):2959-2967.
  • Content


    Abstract
    Purpose: Quantitative analysis of PET images in brain PET/CT relies on MRI-derived regions of interest (ROIs). However, the pairs of PET/CT and MR images are not always available, and their alignment is challenging if their acquisition times differ considerably. To address these problems, this study proposes a deep learning framework for translating CT of PET/CT to synthetic MR images (MRSYN) and performing automated quantitative regional analysis using MRSYN-derived segmentation.

    Methods: In this retrospective study, 139 subjects who underwent brain [18F]FBB PET/CT and T1-weighted MRI were included. A U-Net-like model was trained to translate CT images to MRSYN; subsequently, a separate model was trained to segment MRSYN into 95 regions. Regional and composite standardised uptake value ratio (SUVr) was calculated in [18F]FBB PET images using the acquired ROIs. For evaluation of MRSYN, quantitative measurements including structural similarity index measure (SSIM) were employed, while for MRSYN-based segmentation evaluation, Dice similarity coefficient (DSC) was calculated. Wilcoxon signed-rank test was performed for SUVrs computed using MRSYN and ground-truth MR (MRGT).
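As a rough illustration of the quantification step described above, the sketch below computes a regional SUVr (mean uptake in a labelled region divided by mean uptake in a reference region) and a per-region Dice similarity coefficient. All array shapes, label IDs, and uptake values are hypothetical toy data, not taken from the study.

```python
import numpy as np

def regional_suvr(pet, labels, region_id, reference_id):
    """Mean uptake in a labelled region divided by mean uptake in the
    reference region (e.g. cerebellar grey matter for amyloid PET)."""
    region_mean = pet[labels == region_id].mean()
    reference_mean = pet[labels == reference_id].mean()
    return region_mean / reference_mean

def dice_coefficient(seg_a, seg_b, region_id):
    """Dice similarity coefficient for one label between two segmentations."""
    a = seg_a == region_id
    b = seg_b == region_id
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3-D volume: label 1 = target region, label 2 = reference region
labels = np.zeros((4, 4, 4), dtype=int)
labels[:2] = 1
labels[2:] = 2
pet = np.where(labels == 1, 3.0, 2.0)

print(regional_suvr(pet, labels, region_id=1, reference_id=2))  # 1.5
print(dice_coefficient(labels, labels, region_id=1))            # 1.0
```

In practice the label volume would come from the MRSYN-based segmentation resampled into PET space, with one SUVr per anatomical ROI.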

    Results: Compared to MRGT, the mean SSIM of MRSYN was 0.974 ± 0.005. The MRSYN-based segmentation achieved a mean DSC of 0.733 across 95 regions. No statistical significance (P > 0.05) was found for SUVr between the ROIs from MRSYN and those from MRGT, excluding the precuneus.
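The paired comparison of SUVrs reported above can be sketched with SciPy's Wilcoxon signed-rank test. The SUVr values below are made-up illustrative numbers, not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired regional SUVr values from MRSYN- and MRGT-derived ROIs
suvr_mrsyn = [1.42, 1.18, 1.55, 1.30, 1.07, 1.61, 1.250, 1.480]
suvr_mrgt  = [1.40, 1.21, 1.51, 1.31, 1.02, 1.67, 1.235, 1.445]

# P > 0.05 would indicate no significant difference between the paired SUVrs
stat, p = wilcoxon(suvr_mrsyn, suvr_mrgt)
print(f"W={stat}, p={p:.3f}")
```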

    Conclusion: We demonstrated a deep learning framework for automated regional brain analysis in PET/CT with MRSYN. Our proposed framework can benefit patients who have difficulties in performing an MRI scan.


    Affiliations

    Daesung Kim # 1, Kyobin Choo # 2, Sangwon Lee 3, Seongjin Kang 3, Mijin Yun # 4, Jaewon Yang # 5
    1Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea.
    2Department of Computer Science, Yonsei University, Seoul, Republic of Korea.
    3Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea.
    4Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea. yunmijin@yuhs.ac.
    5Department of Radiology, University of Texas Southwestern, Dallas, TX, USA.
    #Contributed equally.

  • Keywords
    Amyloid; Deep learning; PET/CT; Quantification; Segmentation.
  • Editorial reviewer

    Both CT and MRI provide anatomical images, but CT cannot match MRI in depicting soft tissue smoothly and in fine detail. For patients who have difficulty undergoing MRI in particular, obtaining MRI-like images from CT would be of great value. This study develops a deep learning framework that translates the CT of [18F]FBB PET/CT into synthetic MRI (MRSYN), so that regional quantitative brain analysis, which normally requires MRI, can be performed with PET/CT alone. The results suggest that reliable quantitative brain analysis is feasible from PET/CT alone, even in patients for whom an MRI scan is difficult.

    Posted 2025-08-28 16:51:09

  • Editorial reviewer 2

    Translating one imaging modality (CT) into another (MR) with deep learning is a highly challenging problem in imaging science, with great potential. As an attempt to automate brain quantification of PET/CT data, the ability to obtain precise quantification without additional equipment makes the approach highly accessible and useful for scientific research, and this is an interesting study with strong potential for future development.

    Posted 2025-08-28 16:52:30
