Deep-learning-based Attenuation Correction for 68Ga-DOTATATE Whole-body PET Imaging: A Dual-center Clinical Study
Original Article
P: 138-146
October 2024

Mol Imaging Radionucl Ther 2024;33(3):138-146
1. Tabriz University of Medical Sciences School of Medicine, Department of Medical Physics, Tabriz, Iran
2. Iran University of Medical Sciences, Nursing and Midwifery Care Research Center, Tehran, Iran
3. Shahid Beheshti University, Department of Medical Radiation Engineering, Tehran, Iran
4. Tehran University of Medical Sciences Faculty of Medicine, Imam Khomeini Hospital Complex, Department of Nuclear Medicine, Tehran, Iran
5. Tehran University of Medical Sciences Faculty of Medicine, Department of Medical Physics and Biomedical Engineering, Tehran, Iran
Received Date: 24.02.2024
Accepted Date: 05.06.2024
Online Date: 07.10.2024
Publish Date: 07.10.2024

Abstract

Objectives

Attenuation correction is a critical step in quantitative positron emission tomography (PET) imaging and poses its own special challenges. However, the computed tomography (CT) scan used for attenuation correction and anatomical localization increases the patient's radiation dose. This study aimed to develop a deep learning model for attenuation correction of whole-body 68Ga-DOTATATE PET images.

Methods

Non-attenuation-corrected (NAC) and computed tomography-based attenuation-corrected (CTAC) whole-body 68Ga-DOTATATE PET images of 118 patients from two imaging centers were used. We implemented a residual deep learning model using the NiftyNet framework. The model was trained four times and evaluated six times using the test data from both centers. The quality of the synthesized PET images was compared with that of the PET-CTAC images using several evaluation metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), mean squared error (MSE), and root mean squared error (RMSE).

Results

Quantitative analysis of the four network training sessions and six evaluations revealed highest and lowest PSNR values of 52.86±6.6 and 47.96±5.09, respectively, and highest and lowest SSIM values of 0.99±0.003 and 0.97±0.01, respectively. The highest RMSE and MSE values were 0.0117±0.003 and 0.0015±0.000103, and the lowest were 0.01072±0.002 and 0.000121±5.07×10⁻⁵, respectively. Training and testing with datasets from the same center resulted in the highest PSNR, whereas using datasets from different centers led to lower PSNR and SSIM values. Scenarios combining datasets from both centers achieved the best SSIM and the lowest MSE and RMSE.

Conclusion

The acceptable accuracy of attenuation correction of 68Ga-DOTATATE PET images using a deep learning model could potentially eliminate the need for additional X-ray imaging, which imposes a high radiation dose on the patient.

Keywords:
Deep-learning, attenuation correction, PET/CT, 68Ga-DOTATATE, medical imaging

Introduction

68Ga-DOTATATE positron emission tomography/computed tomography (PET/CT) has emerged as a sensitive and accurate functional imaging method with significant advantages over conventional imaging in the diagnosis and management of neuroendocrine tumors (1,2). In PET imaging, a positron-emitting radiopharmaceutical is administered to the patient; following positron annihilation, two 511-keV gamma photons are emitted in opposite directions. However, the gamma pair can undergo photoelectric and Compton interactions before reaching the detector, leading to photon attenuation, poor contrast, and errors in quantitative calculations (3,4).

If the attenuation of PET images is adequately corrected, quantitative measures such as the standardized uptake value (SUV), which are used for diagnosis, prognosis, and treatment-related decisions, can be obtained with considerable accuracy (5). CT-based attenuation correction (CTAC) is one of the most common and well-established methods of attenuation correction (AC) in PET (4). The main drawback of these methods is the high effective dose imposed on patients. An early report after the introduction of PET/CT showed that the average effective dose from a whole-body 18F-fluorodeoxyglucose (18F-FDG) PET/CT examination was approximately 25 mSv (6), whereas PET radiopharmaceuticals alone usually deliver an effective dose of about 10 mSv (7). Therefore, the majority of the radiation dose from imaging is attributable to the CT scan. Because obtaining the tissue attenuation map directly from magnetic resonance imaging (MRI) signals poses challenges, various methods have been employed to address this issue (8,9,10,11,12). One of the most commonly used methods for AC in PET/MRI scanners is the Dixon-based method (9); however, a major drawback of this method is its failure to account for bone tissue (10). Consequently, a model-based approach was adopted to address this limitation (11), but it introduces quantification errors due to inconsistent registration (12). The inconsistency and the small field of view of MRI compared with PET can also result in the loss of information from certain body parts (13). The maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm can be used to recover missing information and create an attenuation map from the PET emission data (14); however, it suffers from high noise and induced cross-talk artifacts (15).
Additionally, atlas-based segmentation methods (16,17,18) have been employed, but they suffer from incorrect tissue classification, anatomic abnormalities, noise, and metal-induced artifacts, making AC a challenging issue in PET/MRI (19). In recent years, deep learning has demonstrated great potential in enhancing medical image quality, denoising, and artifact reduction (20,21). To date, deep learning has been used to produce synthetic CT from MRI images for AC in PET (22), including direct transformation to pseudo-CT from T1-weighted MR, ultrashort echo time, zero-TE MR, and Dixon sequences; estimation of AC factors from time-of-flight data (23,24,25,26,27); and generation of synthetic CT images from non-AC (NAC) PET images in whole-body PET/MRI imaging and from MLAA-based AC maps (28,29,30). However, these approaches require structural images, and their accuracy is compromised by image artifacts from misregistration and inter-modality errors (31). Several studies have therefore attempted to directly convert NAC PET images to corrected PET images without the need for additional imaging modalities such as MRI and CT (31,32), employing different approaches and models in different areas of the body and with different radiopharmaceuticals.

In the present study, we aimed to develop an optimal deep-learning model for AC of whole-body 68Ga-DOTATATE PET images without relying on anatomical structures.

Materials and Methods

Data Acquisition

68Ga-DOTATATE whole-body PET images of 118 patients from two imaging centers (59 images from center 1 and 59 images from center 2) were retrospectively included in the study. This study was approved by the Research Ethics Committee of Tabriz University of Medical Sciences (approval no.: IR.TBZMED.REC.1401.584, approval date: 03.10.2022), which ensures adherence to ethical standards. The examinations were performed using 5-ring BGO-based PET/CT and 3-ring LSO-based PET/CT scanners. PET imaging was performed approximately 60 min after injection of 1.85 MBq 68Ga-DOTATATE per kilogram of patient weight. Before radiotracer injection, a low-dose CT scan was performed for AC and anatomical localization.

Data Preprocessing

From the 118 68Ga-DOTATATE PET images, 85% of the data from each center were used for training the model, and 15% were used for external validation. In addition, 15% of the training dataset was set aside for validation during the training process to monitor the loss function and prevent overfitting. To reduce the dynamic range of image intensity, all PET images, including the CTAC and NAC images, were converted to SUVs. In addition, to reduce the computational load, the CTAC and NAC image intensities were normalized by empirical fixed values of 9 and 3, respectively.
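The preprocessing above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the SUV formula is the standard body-weight definition, and the function names are ours; only the divisors (9 for CTAC, 3 for NAC) come from the text.

```python
import numpy as np

def to_suv(activity_bq_ml, injected_dose_bq, weight_kg):
    # Standard body-weight SUV: tissue activity concentration (Bq/mL)
    # scaled by body weight (in grams) over the injected dose (Bq).
    return activity_bq_ml * (weight_kg * 1000.0) / injected_dose_bq

def normalize(suv_img, divisor):
    # Fixed-value normalization to compress the dynamic range,
    # with divisors of 9 (CTAC) and 3 (NAC) as stated in the text.
    return np.asarray(suv_img) / divisor
```

For a 70-kg patient injected with 1.85 MBq/kg (129.5 MBq total), a tissue concentration of 1850 Bq/mL corresponds to an SUV of exactly 1.0.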

Network Architecture

A deep learning algorithm based on the NiftyNet platform was used to generate attenuation-corrected PET images, with PET-CTAC images as the reference. NiftyNet is an infrastructure built upon the TensorFlow library and designed for a variety of medical image analysis applications; it supports segmentation, regression, image generation, and reconstruction tasks and thus plays a fundamental role in speeding up clinical workflows, including diagnostic and therapeutic procedures (33). The network used was the high-resolution residual neural network (HighResNet) (34), composed of 20 residual layers. In the first seven layers, a 3x3x3 voxel kernel encodes low-level image features such as edges and corners; this kernel is dilated by factors of 2 and 4 in subsequent layers to extract mid- and high-level features. Residual connections link every two layers, and within the residual blocks each layer comprises an element-wise rectified linear unit (ReLU) and batch normalization. The structural details of the model are shown in Figure 1.
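The point of the dilated kernels is to enlarge the receptive field without pooling. A quick back-of-the-envelope check (the per-stage layer counts below are our assumption based on the description, not the exact HighResNet layout) shows how dilation factors of 2 and 4 grow the context each output voxel sees:

```python
def receptive_field(layers):
    # 1-D receptive field of a stack of stride-1 convolutions:
    # each layer adds (kernel - 1) * dilation voxels of context.
    rf = 1
    for kernel, dilation in layers:
        rf += (kernel - 1) * dilation
    return rf

# Illustrative layout loosely following the description above:
# seven 3x3x3 layers at dilation 1, then blocks at dilations 2 and 4.
stack = [(3, 1)] * 7 + [(3, 2)] * 6 + [(3, 4)] * 6
print(receptive_field(stack))  # prints 87
```

With only dilation-1 layers, the same 19 convolutions would see just 39 voxels; the dilated stack more than doubles that context at no extra parameter cost.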

Implementation Details

In this study, the following hyperparameters were used to train the network: learning rate=0.001, activation function=leaky ReLU, loss function=L2 loss, optimizer=Adam, decay factor=0.00001, batch size=12, queue length=480. The model was trained four times and evaluated six times using test datasets with different matrix sizes.
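For orientation, these hyperparameters would be expressed in a NiftyNet-style ini configuration file. The section and key names below only approximate NiftyNet's conventions and are illustrative, not the authors' actual configuration; only the values are taken from the text above.

```ini
; Hypothetical NiftyNet-style configuration (key names approximate)
[NETWORK]
name = highres3dnet
activation_function = leakyrelu
batch_size = 12
queue_length = 480
decay = 0.00001

[TRAINING]
optimiser = adam
lr = 0.001
loss_type = L2Loss
```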

Initially, a dataset comprising 50 samples from center 1 with a matrix size of 192x192 was used to train the network. The model was then tested separately on 9 images from each center (the center 1 and center 2 test datasets, constituting the first and second evaluations) with a matrix size of 192x192; the 9 test datasets from center 2 were resized from 200x200 to 192x192.

The second training was performed using only 50 samples from center 2, with a matrix size of 192x192. The model was then tested on 9 datasets from each center (the center 1 and center 2 test datasets, constituting the third and fourth evaluations) with a matrix size of 192x192. For the third training session, a total of 100 samples (50 from each center) with a matrix size of 192x192 were used to train the network, and 18 samples (9 datasets from each center) served as the test dataset in the fifth evaluation. For the fourth training session, a dataset of 100 samples from both centers was used with a matrix size of 200x200; the 50 images from center 1 were resized from 192x192 to 200x200. Eighteen datasets from both centers, with a matrix size of 200x200, were used for network testing in the sixth evaluation.
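Because the pairing of training sessions and evaluations is easy to lose track of, the design can be summarized in a small lookup table (contents transcribed from the text above; the dictionary structure itself is just for illustration):

```python
# Four training sessions and six evaluations, as described in the text.
trainings = {
    1: {"train": "center 1 (n=50)",    "matrix": (192, 192)},
    2: {"train": "center 2 (n=50)",    "matrix": (192, 192)},
    3: {"train": "centers 1+2 (n=100)", "matrix": (192, 192)},
    4: {"train": "centers 1+2 (n=100)", "matrix": (200, 200)},
}
evaluations = {
    1: {"training": 1, "test": "center 1 (n=9)"},
    2: {"training": 1, "test": "center 2 (n=9, resized 200x200 -> 192x192)"},
    3: {"training": 2, "test": "center 1 (n=9)"},
    4: {"training": 2, "test": "center 2 (n=9)"},
    5: {"training": 3, "test": "centers 1+2 (n=18)"},
    6: {"training": 4, "test": "centers 1+2 (n=18, 200x200)"},
}
```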

Statistical Analysis

In this study, statistical analyses were performed to explore the relationship between the reference and predicted measurements. Specifically, the Pearson correlation coefficient was used to assess the strength and direction of the association, and the paired-sample t-test was used to calculate p-values. Additionally, several evaluation metrics were computed, including the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), mean squared error (MSE), and root mean squared error (RMSE).

Evaluation Strategy

The performance of the prepared model was assessed using quantitative metrics: PSNR (Eq. 1), MSE (Eq. 2), RMSE (Eq. 3), and SSIM (Eq. 4). The metrics were computed by comparing the reference PET-CTAC images with the images generated by the network [PET-deep learning AC (PET-DLAC)]. The metrics are defined as follows:
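The original equation images are not reproduced here; the standard definitions, written to match the symbols used in the surrounding text, are:

```latex
\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{R^{2}}{\mathrm{MSE}}\right) \quad (1)

\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{PET}_{\mathrm{predict},i} - \mathrm{PET}_{\mathrm{ref},i}\right)^{2} \quad (2)

\mathrm{RMSE} = \sqrt{\mathrm{MSE}} \quad (3)

\mathrm{SSIM} = \frac{\left(2\mu_{\mathrm{ref}}\mu_{\mathrm{pre}} + c_{1}\right)\left(2\sigma_{\mathrm{ref,pre}} + c_{2}\right)}{\left(\mu_{\mathrm{ref}}^{2} + \mu_{\mathrm{pre}}^{2} + c_{1}\right)\left(\sigma_{\mathrm{ref}}^{2} + \sigma_{\mathrm{pre}}^{2} + c_{2}\right)} \quad (4)
```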

where in Equation (Eq.) (1), R represents the maximum value of the PET-CTAC images as the reference image and MSE denotes the mean squared error. In Eq. (2), "n" indicates the number of voxels inside the region of interest, "i" denotes the voxel index, PETpredict stands for the attenuation-corrected PET images predicted by the network, and PETref stands for the reference PET-CTAC images.

In Eq. (4), µref and µpre represent the mean values of the reference and predicted PET images, respectively; σref² and σpre² are the variances of the PETref and PETpredict images; and σref,pre represents their covariance. Additionally, c1 = 0.01 and c2 = 0.02 are two constants included in Eq. (4) to avoid division by very small values.

Furthermore, to illustrate the voxel-wise distribution of radiotracer uptake correlation between PET-CTAC and PET-DLAC images, a joint histogram analysis was performed for SUV values ranging from 0.1 to 18 using 200 bins.
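A joint-histogram analysis of this kind can be sketched with NumPy. This is an illustrative implementation under the stated settings (SUV range 0.1 to 18, 200 bins); the slope and R² are obtained from an ordinary least-squares line fit, and the function name is ours, not from the paper:

```python
import numpy as np

def joint_histogram_fit(suv_ref, suv_pred, lo=0.1, hi=18.0, bins=200):
    # Keep only voxels whose reference SUV lies in the analyzed range.
    mask = (suv_ref >= lo) & (suv_ref <= hi)
    x, y = suv_ref[mask], suv_pred[mask]
    # 2-D joint histogram of reference vs. predicted SUVs.
    hist2d, _, _ = np.histogram2d(x, y, bins=bins, range=[[lo, hi], [lo, hi]])
    # Least-squares line y = a*x + b and coefficient of determination R^2.
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    r2 = 1.0 - resid.var() / y.var()
    return hist2d, a, r2
```

A perfectly calibrated model would yield a slope of 1 and R² of 1; a slope above 1 (as in the first evaluation reported below) indicates a tendency to overestimate uptake relative to the reference.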

Results

A summary of the mean ± standard deviation of the quantitative image assessment parameters (MSE, PSNR, RMSE, and SSIM), calculated between the SUVs of the reference PET-CTAC images and the 18 test datasets predicted by the model for the six evaluations, is presented in Table 1. Although the values of these parameters were acceptable in all evaluations, there were some variations among them. The fourth evaluation obtained the highest PSNR value (52.86±6.6), indicating better image quality, whereas the third evaluation showed the lowest PSNR (47.96±5.09). The fifth evaluation had the lowest MSE (0.000121±5.07×10⁻⁵) and RMSE (0.01072±0.002) values, indicating a smaller deviation from the reference images, while the second evaluation demonstrated the highest MSE (0.0015±0.000103) and RMSE (0.0117±0.003). Additionally, the sixth evaluation showed the highest SSIM (0.99±0.003) among all evaluations, and the second evaluation showed the lowest SSIM (0.97±0.01) relative to the reference images. A box plot comparing the parameters across the six evaluations is shown in Figure 2. Furthermore, we calculated the maximum SUV (SUVmax) difference between PET-CTAC and PET-DLAC images in 20 superficial regions of interest (ROIs) and 20 deep ROIs in the axial section along the x-axis for each evaluation. The p-value was <0.05 for all evaluations, except for some evaluations in which the original image size had been changed, such as the sixth evaluation, for which the p-value was >0.05. The SUVmax difference was also calculated for 5 ROIs within the tumor volumes for each evaluation in the axial section along the y-axis; here the p-value was <0.05 for the first, fourth, and fifth evaluations. The coronal views of the NAC, PET-CTAC, and PET-DLAC images, as well as the bias maps between PET-CTAC and PET-DLAC images, are shown in Figures 3 and 4.
These figures show the results of the four training sessions applied to the nine test datasets from each of the two imaging centers. In all nine test datasets across the four training sets, errors and underestimation were visually observed relative to the reference images. In particular, when the matrix size was set to 200x200, the rate of underestimation increased; it should be noted that the center 2 data natively had this matrix size. Conversely, when the matrix size was 192x192, underestimation was at its lowest level. Additionally, reducing the image size significantly decreased the errors observed in the lungs; thus, the center 2 images in the second training set, which were created using only data from the same center and resized to 192x192, showed the least error and underestimation. In general, most images exhibited the greatest error in the lungs, whereas the liver, kidneys, and bladder exhibited the greatest underestimation. The joint histogram in Figure 5 reveals the highest voxel-wise similarity between PET-CTAC and PET-DLAC images in the first evaluation of the first training, using data from center 1, with R²=0.95 and a curve slope of 1.10. In contrast, for the fourth evaluation, related to the second training, the correlation coefficient remained high at R²=0.95, but the slope was slightly lower at 0.95. The lowest R² value, 0.82, was observed in the second evaluation, related to the first training, in which the training dataset was from center 1 and the test dataset from center 2, with the datasets resized to match the size of the training dataset. In summary, the joint histogram analysis revealed a high level of similarity between PET-CTAC and PET-DLAC images.

Discussion

In this study, we used a deep learning model for the AC of whole-body 68Ga-DOTATATE PET images without the need for structural information. The model was also evaluated using training and test datasets from two distinct imaging centers to assess and enhance its performance. In recent years, AC of PET images using deep learning methods has attracted considerable attention. Many studies have generated pseudo-CT images from MRI (22,23,24,25,26,27) or NAC images (28,29) for AC purposes, but these methods require an additional modality and can lack accuracy owing to large mismatches between the two modalities, with many artifacts and errors observed between them (31). Hence, several studies have addressed PET image AC based on NAC images alone, without the need for structural images (CT or MRI). Shiri et al. (32) used a deep convolutional encoder-decoder (Deep-DAC) network for direct AC of 18F-FDG brain PET images and achieved promising results on 18 images, with a PSNR of 38.7±3.54 and an SSIM of 0.988±0.006. Dong et al. (31) proposed a 3D patch-based cycle-consistent generative adversarial network (CycleGAN) for AC of whole-body 18F-FDG PET images (n=30) and reported an average PSNR of 44.3±3.5 and an NMSE of 0.72±0.34. Likewise, Mostafapour et al. (35) applied a ResNet model for AC of 46 68Ga-PSMA PET images and reported PSNR and SSIM values of 48.17±2.96 and 0.973±0.034, respectively. However, further studies are needed to enhance the accuracy of these outcomes. In the present study, we used a ResNet model for AC of whole-body 68Ga-DOTATATE PET images. Our proposed model was trained four times and evaluated six times using 18 test datasets from two imaging centers with different matrix sizes.
In all 18 test-data bias maps across the six evaluations, high error rates were observed in the lungs, whereas the liver, bladder, and kidneys displayed a marked tendency toward underestimation. Notably, the magnitude of these errors was substantially diminished by decreasing the dimensions of the images. Although the evaluations did not show significant differences, certain errors undoubtedly stemmed from the incomplete AC of the reference images and cannot be overlooked. To achieve optimal AC at a specific center, it may be advisable to train the model with data from that same center. Additionally, the results indicate that reducing, rather than increasing, the image matrix size can improve model performance. In terms of image quality, although our model was not comparable with the CTAC approach, it eliminates the radiation dose from CT. Our promising findings reveal the potential of the model for further exploration on larger datasets, with possibly enhanced accuracy, in future studies.

Conclusion

This study demonstrated the performance and feasibility of a deep learning model for AC in whole-body 68Ga-DOTATATE PET images. The results indicate the accuracy and high performance of the model, demonstrating its potential for effectively correcting attenuation in PET imaging. It appears that the model can reduce the reliance on CT images for AC of PET images, thereby minimizing additional radiation exposure to the patient.

References

1
Sanli Y, Garg I, Kandathil A, Kendi T, Zanetti MJB, Kuyumcu S, Subramaniam RM. Neuroendocrine Tumor Diagnosis and Management: 68Ga-DOTATATE PET/CT. AJR Am J Roentgenol. 2018;211:267-277.
2
Tirosh A, Kebebew E. The utility of 68Ga-DOTATATE positron-emission tomography/computed tomography in the diagnosis, management, follow-up and prognosis of neuroendocrine tumors. Future Oncol. 2018;14:111-122.
3
Zaidi H, Koral KF. Scatter modelling and compensation in emission tomography. Eur J Nucl Med Mol Imaging. 2004;31:761-782.
4
Kinahan PE, Hasegawa BH, Beyer T. X-ray-based attenuation correction for positron emission tomography/computed tomography scanners. Semin Nucl Med. 2003;33:166-179.
5
Zaidi H, Karakatsanis N. Towards enhanced PET quantification in clinical oncology. Br J Radiol. 2018;91:20170508.
6
Brix G, Lechel U, Glatting G, Ziegler SI, Münzing W, Müller SP, Beyer T. Radiation exposure of patients undergoing whole-body dual-modality 18F-FDG PET/CT examinations. J Nucl Med. 2005;46:608-613.
7
Mattsson S, Johansson L, Leide Svegborn S, Liniecki J, Noßke D, Riklund KÅ, Stabin M, Taylor D, Bolch W, Carlsson S, Eckerman K, Giussani A, Söderberg L, Valind S; ICRP. Radiation Dose to Patients from Radiopharmaceuticals: a Compendium of Current Information Related to Frequently Used Substances. Ann ICRP. 2015;44:7-321.
8
Martinez-Möller A, Nekolla SG. Attenuation correction for PET/MR: problems, novel approaches and practical solutions. Z Med Phys. 2012;22:299-310.
9
Berker Y, Franke J, Salomon A, Palmowski M, Donker HC, Temur Y, Mottaghy FM, Kuhl C, Izquierdo-Garcia D, Fayad ZA, Kiessling F, Schulz V. MRI-based attenuation correction for hybrid PET/MRI systems: a 4-class tissue segmentation technique using a combined ultrashort-echo-time/Dixon MRI sequence. J Nucl Med. 2012;53:796-804.
10
Koesters T, Friedman KP, Fenchel M, Zhan Y, Hermosillo G, Babb J, Jelescu IO, Faul D, Boada FE, Shepherd TM. Dixon Sequence with Superimposed Model-Based Bone Compartment Provides Highly Accurate PET/MR Attenuation Correction of the Brain. J Nucl Med. 2016;57:918-924.
11
Paulus DH, Quick HH, Geppert C, Fenchel M, Zhan Y, Hermosillo G, Faul D, Boada F, Friedman KP, Koesters T. Whole-Body PET/MR Imaging: Quantitative Evaluation of a Novel Model-Based MR Attenuation Correction Method Including Bone. J Nucl Med. 2015;56:1061-1066.
12
Hwang D, Kang SK, Kim KY, Seo S, Paeng JC, Lee DS, Lee JS. Generation of PET Attenuation Map for Whole-Body Time-of-Flight 18F-FDG PET/MRI Using a Deep Neural Network Trained with Simultaneously Reconstructed Activity and Attenuation Maps. J Nucl Med. 2019;60:1183-1189.
13
Tang J, Haagen R, Blaffert T, Renisch S, Blaeser A, Salomon A, Schweizer B, Hu Z. Effect of MR truncation compensation on quantitative PET image reconstruction for whole-body PET/MR. 2011 IEEE Nucl Sci Symp Conf Rec. 2011:2506-2509.
14
Nuyts J, Bal G, Kehren F, Fenchel M, Michel C, Watson C. Completion of a truncated attenuation image from the attenuated PET emission data. IEEE Trans Med Imaging. 2013;32:237-246.
15
Nuyts J, Dupont P, Stroobants S, Benninck R, Mortelmans L, Suetens P. Simultaneous maximum a posteriori reconstruction of attenuation and activity distributions from emission sinograms. IEEE Trans Med Imaging. 1999;18:393-403.
16
Ribeiro AS, Kops ER, Herzog H, Almeida P. Hybrid approach for attenuation correction in PET/MR scanners. Nucl Instrum Methods Phys Res A. 2014;734:166-170.
17
Mérida I, Reilhac A, Redouté J, Heckemann RA, Costes N, Hammers A. Multi-atlas attenuation correction supports full quantification of static and dynamic brain PET data in PET-MR. Phys Med Biol. 2017;62:2834-2858.
18
Hofmann M, Bezrukov I, Mantlik F, Aschoff P, Steinke F, Beyer T, Pichler BJ, Schölkopf B. MRI-based attenuation correction for whole-body PET/MRI: quantitative evaluation of segmentation- and atlas-based methods. J Nucl Med. 2011;52:1392-1399.
19
Mehranian A, Arabi H, Zaidi H. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities. Med Phys. 2016;43:1130-1155.
20
Chan HP, Samala RK, Hadjiiski LM, Zhou C. Deep Learning in Medical Image Analysis. Adv Exp Med Biol. 2020;1213:3-21.
21
Ghafari A, Sheikhzadeh P, Seyyedi N, Abbasi M, Farzenefar S, Yousefirizi F, Ay MR, Rahmim A. Generation of 18F-FDG PET standard scan images from short scans using cycle-consistent generative adversarial network. Phys Med Biol. 2022;67.
22
Han X. MR-based synthetic CT generation using a deep convolutional neural network method. Med Phys. 2017;44:1408-1419.
23
Liu F, Jang H, Kijowski R, Bradshaw T, McMillan AB. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging. Radiology. 2018;286:676-684.
24
Jang H, Liu F, Zhao G, Bradshaw T, McMillan AB. Technical Note: Deep learning based MRAC using rapid ultrashort echo time imaging. Med Phys. 2018:10.1002/mp.12964.
25
Gong K, Yang J, Kim K, El Fakhri G, Seo Y, Li Q. Attenuation correction for brain PET imaging using deep neural network based on Dixon and ZTE MR images. Phys Med Biol. 2018;63:125011.
26
Leynes AP, Yang J, Wiesinger F, Kaushik SS, Shanbhag DD, Seo Y, Hope TA, Larson PEZ. Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI Attenuation Correction Using Deep Convolutional Neural Networks with Multiparametric MRI. J Nucl Med. 2018;59:852-858.
27
Arabi H, Zaidi H. Deep learning-guided estimation of attenuation correction factors from time-of-flight PET emission data. Med Image Anal. 2020;64:101718.
28
Dong X, Wang T, Lei Y, Higgins K, Liu T, Curran WJ, Mao H, Nye JA, Yang X. Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging. Phys Med Biol. 2019;64:215016.
29
Reimold M, Nikolaou K, la Fougère C, Gatidis S. Independent brain 18F-FDG PET attenuation correction using a deep learning approach with Generative Adversarial Networks. Hell J Nucl Med. 2019;22:179-186.
30
Shi L, Onofrey JA, Revilla EM, Toyonaga T, Menard D, Ankrah J, Carson RE, Liu C, Lu Y. A novel loss function incorporating imaging acquisition physics for PET attenuation map generation using deep learning. MICCAI. 2019.
31
Dong X, Lei Y, Wang T, Higgins K, Liu T, Curran WJ, Mao H, Nye JA, Yang X. Deep learning-based attenuation correction in the absence of structural information for whole-body positron emission tomography imaging. Phys Med Biol. 2020;65:055011.
32
Shiri I, Ghafarian P, Geramifar P, Leung KH, Ghelichoghli M, Oveisi M, Rahmim A, Ay MR. Direct attenuation correction of brain PET images using only emission data via a deep convolutional encoder-decoder (Deep-DAC). Eur Radiol. 2019;29:6867-6879.
33
Gibson E, Li W, Sudre C, Fidon L, Shakir DI, Wang G, Eaton-Rosen Z, Gray R, Doel T, Hu Y, Whyntie T, Nachev P, Modat M, Barratt DC, Ourselin S, Cardoso MJ, Vercauteren T. NiftyNet: a deep-learning platform for medical imaging. Comput Methods Programs Biomed. 2018;158:113-122.
34
Li W, Wang G, Fidon L, Ourselin S, Cardoso MJ, Vercauteren T. On the compactness, efficiency, and representation of 3D convolutional networks: brain parcellation as a pretext task. IPMI. 2017:348-360.
35
Mostafapour S, Gholamiankhah F, Dadgar H, Arabi H, Zaidi H. Feasibility of Deep Learning-Guided Attenuation and Scatter Correction of Whole-Body 68Ga-PSMA PET Studies in the Image Domain. Clin Nucl Med. 2021;46:609-615.