Medical Physics

  • [Phys Med Biol.] Joint correction of attenuation and scatter in image space using deep convolutional neural networks for dedicated brain 18F-FDG PET.

    University of California San Francisco / 양재원*

  • Source
    Phys Med Biol.
  • Publication date
    2019 Apr 4
  • Journal issue number
    64(7):075019. doi: 10.1088/1361-6560/ab0606.
  • Content


    Abstract
    Dedicated brain positron emission tomography (PET) devices can provide higher-resolution images with much lower doses compared to conventional whole-body PET systems, which is important to support PET neuroimaging and particularly useful for the diagnosis of neurodegenerative diseases. However, when a dedicated brain PET scanner does not come with a combined CT or transmission source, there is no direct solution for accurate attenuation and scatter correction, both of which are critical for quantitative PET. To address this problem, we propose joint attenuation and scatter correction (ASC) in image space for non-corrected PET (PETNC) using deep convolutional neural networks (DCNNs). This approach is a one-step process, distinct from conventional methods that rely on generating attenuation maps first that are then applied to iterative scatter simulation in sinogram space. For training and validation, time-of-flight PET/MR scans and additional helical CTs were performed for 35 subjects (25/10 split for training and test datasets). A DCNN model was proposed and trained to convert PETNC to DCNN-based ASC PET (PETDCNN) directly in image space. For quantitative evaluation, uptake differences between PETDCNN and reference CT-based ASC PET (PETCT-ASC) were computed for 116 automated anatomical labels (AALs) across 10 test subjects (1160 regions in total). MR-based ASC PET (PETMR-ASC), a current clinical protocol in PET/MR imaging, was another reference for comparison. Statistical significance was assessed using a paired t test. The performance of PETDCNN was comparable to that of PETMR-ASC, in comparison to reference PETCT-ASC. The mean SUV differences (mean ± SD) from PETCT-ASC were 4.0% ± 15.4% (P < 0.001) and -4.2% ± 4.3% (P < 0.001) for PETDCNN and PETMR-ASC, respectively. The overall larger variation of PETDCNN (15.4%) was largely attributable to the subject with the highest mean difference (48.5% ± 10.4%). The mean difference of PETDCNN excluding this subject improved substantially to -0.8% ± 5.2% (P < 0.001), which was lower than that of PETMR-ASC (-5.07% ± 3.60%, P < 0.001). In conclusion, we demonstrated the feasibility of directly producing PET images corrected for attenuation and scatter using a DCNN (PETDCNN) from PETNC in image space, without requiring conventional attenuation map generation and time-consuming scatter correction. Additionally, our DCNN-based method provides a possible alternative to MR-ASC for simultaneous PET/MRI.
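
    The abstract describes the method only at a high level: a DCNN that maps non-corrected PET (PETNC) directly to attenuation- and scatter-corrected PET (PETDCNN) in image space, trained against CT-based ASC PET. The network architecture and loss are not specified here, so the following is only a minimal sketch of that idea, assuming a small residual 3D CNN in PyTorch trained with an L1 loss; the class name JointASCNet, the layer sizes, and the train_step helper are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumed, not the paper's architecture): a residual 3D CNN that
    # maps a non-corrected PET volume to an attenuation/scatter-corrected estimate.
    import torch
    import torch.nn as nn

    class JointASCNet(nn.Module):
        """Image-space mapping from PET_NC to an ASC PET estimate (hypothetical design)."""
        def __init__(self, channels=32, depth=4):
            super().__init__()
            layers = [nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv3d(channels, channels, 3, padding=1),
                           nn.BatchNorm3d(channels), nn.ReLU(inplace=True)]
            layers += [nn.Conv3d(channels, 1, 3, padding=1)]
            self.body = nn.Sequential(*layers)

        def forward(self, pet_nc):
            # Residual formulation: the network predicts the correction added to PET_NC.
            return pet_nc + self.body(pet_nc)

    def train_step(model, optimizer, pet_nc, pet_ct_asc):
        """One optimization step against a CT-based ASC reference volume (L1 loss assumed)."""
        model.train()
        optimizer.zero_grad()
        pred = model(pet_nc)                        # PET_DCNN estimate
        loss = nn.functional.l1_loss(pred, pet_ct_asc)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Shape convention: (batch, channel, z, y, x); random tensors stand in for real volumes.
    if __name__ == "__main__":
        model = JointASCNet()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        pet_nc, pet_ct = torch.rand(1, 1, 32, 64, 64), torch.rand(1, 1, 32, 64, 64)
        print(train_step(model, opt, pet_nc, pet_ct))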

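    The quantitative evaluation pools mean SUV differences between PETDCNN and PETCT-ASC over 116 AAL regions for each of the 10 test subjects (1160 regions in total) and tests significance with a paired t test. Below is a rough sketch of that kind of regional comparison, assuming co-registered NumPy volumes and an integer AAL label map as inputs; the function names and data layout are assumptions for illustration, not the study's actual pipeline.

    # Regional mean SUV percent differences over AAL labels, plus a paired t test.
    # Inputs are assumed to be co-registered 3D NumPy arrays (hypothetical I/O).
    import numpy as np
    from scipy import stats

    def regional_mean_suv(volume, aal_labels, label_ids):
        """Mean SUV inside each AAL region; label_ids is an iterable of region label values."""
        return np.array([volume[aal_labels == lab].mean() for lab in label_ids])

    def percent_difference(test_means, ref_means):
        """Signed percent difference of the test reconstruction vs. the reference."""
        return 100.0 * (test_means - ref_means) / ref_means

    def evaluate_subject(pet_dcnn, pet_ct_asc, aal_labels):
        """Per-subject regional comparison of PET_DCNN against CT-based ASC PET."""
        label_ids = np.unique(aal_labels)
        label_ids = label_ids[label_ids != 0]       # drop the background label
        dcnn_means = regional_mean_suv(pet_dcnn, aal_labels, label_ids)
        ref_means = regional_mean_suv(pet_ct_asc, aal_labels, label_ids)
        return percent_difference(dcnn_means, ref_means), dcnn_means, ref_means

    def paired_test(all_test_means, all_ref_means):
        """Paired t test across all pooled regions (e.g. 116 regions x 10 subjects)."""
        t, p = stats.ttest_rel(all_test_means, all_ref_means)
        return t, p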


    Author information

    Yang J1, Park D, Gullberg GT, Seo Y.
    1
    Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, United States of America. UCSF Physics Research Laboratory, 185 Berry Street, Suite 350, San Francisco, CA 94143-0946, United States of America. Author to whom any correspondence should be addressed.

  • Editorial board member

    This is a study that uses deep learning to improve PET image quality, and it appears to be a paper in an area that has been attracting a great deal of interest recently.

    2019-05-28 17:43:55
