A DATA-CENTRIC DEEP LEARNING METHOD FOR PULMONARY NODULE DETECTION
Keywords: Data-centric learning, Deep learning, Pulmonary nodule detection.
Lung cancer is among the most lethal cancers in Vietnam and worldwide. Early detection of lung nodules can substantially increase the survival rate of lung cancer patients, and computer-aided diagnosis (CAD) systems have been proposed in the literature for this purpose. However, most current CAD systems focus on building high-quality machine learning models for a fixed dataset rather than accounting for dataset properties, which are very important for lung cancer diagnosis. In this paper, we follow a data-centric approach and propose a method to improve the detection performance for lung nodules on CT scans. Our method exploits dataset-specific characteristics (nodule sizes and aspect ratios) to train detection models, and augments the training data with additional scans from a local Vietnamese hospital. We evaluate our method on three widely used object detection networks (Faster R-CNN, YOLOv3, and RetinaNet). The experimental results show that the proposed method improves the detection sensitivity of these models by up to 4.24%.
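The paper's exact procedure is not reproduced here, but a common way to exploit dataset-specific nodule sizes and aspect ratios is to cluster the ground-truth bounding boxes and use the cluster centers as detector anchors, as YOLO-style models do. The sketch below illustrates this idea on synthetic data; the function `derive_anchors`, the box sizes, and the cluster count are illustrative assumptions, not the authors' code.

```python
import numpy as np

def derive_anchors(boxes, k=3, iters=100):
    """Cluster the (width, height) pairs of ground-truth nodule boxes
    with k-means to obtain dataset-specific anchor sizes (an assumed
    stand-in for the paper's dataset-specific tuning)."""
    boxes = np.asarray(boxes, dtype=float)
    # Deterministic init: pick boxes at evenly spaced area quantiles.
    order = np.argsort(boxes.prod(axis=1))
    centers = boxes[order[np.linspace(0, len(boxes) - 1, k).astype(int)]]
    for _ in range(iters):
        # Assign each box to its nearest center in (w, h) space.
        dists = np.linalg.norm(boxes[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([
            boxes[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers[np.argsort(centers.prod(axis=1))]  # sorted by area

# Synthetic example: three nodule-size populations (in pixels).
rng = np.random.default_rng(0)
boxes = np.concatenate([
    8.0 + rng.normal(0, 1.0, (50, 2)),
    16.0 + rng.normal(0, 2.0, (50, 2)),
    32.0 + rng.normal(0, 3.0, (50, 2)),
])
anchors = derive_anchors(boxes, k=3)
aspect_ratios = anchors[:, 0] / anchors[:, 1]  # per-anchor w/h ratio
```

The recovered anchors track the three underlying size populations, which is exactly the kind of dataset statistic a data-centric detector configuration would feed into the region-proposal or anchor settings of Faster R-CNN, YOLOv3, or RetinaNet.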
R. L. Siegel, K. D. Miller, H. E. Fuchs, and A. Jemal, “Cancer statistics, 2021,” CA: a cancer journal for clinicians, vol. 71, no. 1, pp. 7–33, 2021. DOI: https://doi.org/10.3322/caac.21654
H. Sung, J. Ferlay, R. L. Siegel, M. Laversanne, I. Soerjomataram, A. Jemal, and F. Bray, “Global cancer statistics 2020: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: a cancer journal for clinicians, vol. 71, no. 3, pp. 209–249, 2021. DOI: https://doi.org/10.3322/caac.21660
American Cancer Society, “Non-small cell lung cancer survival rates,” 2021.
J. Ding, A. Li, Z. Hu, and L. Wang, “Accurate pulmonary nodule detection in computed tomography images using deep convolutional neural networks,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2017, pp. 559–567. DOI: https://doi.org/10.1007/978-3-319-66179-7_64
J. Wang, J. Wang, Y. Wen, H. Lu, T. Niu, J. Pan, and D. Qian, “Pulmonary nodule detection in volumetric chest ct scans using cnns-based nodule-size-adaptive detection and classification,” IEEE Access, vol. 7, pp. 46033–46044, 2019. DOI: https://doi.org/10.1109/ACCESS.2019.2908195
Y. Li and Y. Fan, “Deepseed: 3d squeeze-and-excitation encoder-decoder convolutional neural networks for pulmonary nodule detection,” in 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, 2020, pp. 1866–1869. DOI: https://doi.org/10.1109/ISBI45749.2020.9098317
Z. Guo, L. Zhao, J. Yuan, and H. Yu, “Msanet multi-scale aggregation network integrating spatial and channel information for lung nodule detection,” IEEE Journal of Biomedical and Health Informatics, 2021. DOI: https://doi.org/10.1109/JBHI.2021.3131671
A. A. A. Setio, A. Traverso, T. De Bel, M. S. Berens, C. van den Bogaard, P. Cerello, H. Chen, Q. Dou, M. E. Fantacci, B. Geurts et al., “Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the luna16 challenge,” Medical image analysis, vol. 42, pp. 1–13, 2017. DOI: https://doi.org/10.1016/j.media.2017.06.015
S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980–2988. DOI: https://doi.org/10.1109/ICCV.2017.324
S. G. Armato III, G. McLennan, L. Bidaut, M. F. McNitt-Gray, C. R. Meyer, A. P. Reeves, B. Zhao, D. R. Aberle, C. I. Henschke, E. A. Hoffman et al., “The lung image database consortium (lidc) and image database resource initiative (idri): a completed reference database of lung nodules on ct scans,” Medical physics, vol. 38, no. 2, pp. 915–931, 2011.
W. Li, P. Cao, D. Zhao, and J. Wang, “Pulmonary nodule classification with deep convolutional neural networks on computed tomography images,” Computational and mathematical methods in medicine, vol. 2016, 2016. DOI: https://doi.org/10.1155/2016/6215085
J. George, S. Skaria, V. Varun et al., “Using yolo based deep learning network for real time detection and localization of lung nodules from low dose ct scans,” in Medical Imaging 2018: Computer-Aided Diagnosis, vol. 10575. International Society for Optics and Photonics, 2018, p. 105751I.
W. Fan, H. Jiang, L. Ma, J. Gao, and H. Yang, “A modified faster r-cnn method to improve the performance of the pulmonary nodule detection,” in Tenth International Conference on Digital Image Processing (ICDIP 2018), vol. 10806. International Society for Optics and Photonics, 2018, p. 108065A.
I. W. Harsono, S. Liawatimena, and T. W. Cenggoro, “Lung nodule detection and classification from thorax ct-scan using retinanet with transfer learning,” Journal of King Saud University- Computer and Information Sciences, 2020.
C. Liu, S.-C. Hu, C. Wang, K. Lafata, and F.-F. Yin, “Automatic detection of pulmonary nodules on ct images with yolov3: development and evaluation using simulated and patient data,” Quantitative Imaging in Medicine and Surgery, vol. 10, no. 10, 2020. DOI: https://doi.org/10.21037/qims-19-883
C. Tong, B. Liang, M. Zhang, R. Chen, A. K. Sangaiah, Z. Zheng, T. Wan, C. Yue, and X. Yang, “Pulmonary nodule detection based on isodata-improved faster rcnn and 3d-cnn with focal loss,” ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 16, no. 1s, pp. 1–9, 2020. DOI: https://doi.org/10.1145/3365445
A. Traoré, A. O. Ly, and M. A. Akhloufi, “Evaluating deep learning algorithms in pulmonary nodule detection,” in 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2020, pp. 1335–1338. DOI: https://doi.org/10.1109/EMBC44109.2020.9175152
H. Xie, D. Yang, N. Sun, Z. Chen, and Y. Zhang, “Automated pulmonary nodule detection in ct images using deep convolutional neural networks,” Pattern Recognition, vol. 85, pp. 109–119, 2019. DOI: https://doi.org/10.1016/j.patcog.2018.07.031
J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788. DOI: https://doi.org/10.1109/CVPR.2016.91
J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7263–7271. DOI: https://doi.org/10.1109/CVPR.2017.690
T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2117–2125. DOI: https://doi.org/10.1109/CVPR.2017.106
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778. DOI: https://doi.org/10.1109/CVPR.2016.90
F. Liao, M. Liang, Z. Li, X. Hu, and S. Song, “Evaluate the malignancy of pulmonary nodules using the 3-d deep leaky noisy-or network,” IEEE transactions on neural networks and learning systems, 2017.
N. Khosravan and U. Bagci, “S4nd: Single-shot single-scale lung nodule detection,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2018, pp. 794–802. DOI: https://doi.org/10.1007/978-3-030-00934-2_88
D. Wang, Y. Zhang, K. Zhang, and L. Wang, “Focalmix: Semi-supervised learning for 3d medical image detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 3951–3960. DOI: https://doi.org/10.1109/CVPR42600.2020.00401
W. Zuo, F. Zhou, and Y. He, “An embedded multi-branch 3d convolution neural network for false positive reduction in lung nodule detection,” Journal of digital imaging, vol. 33, no. 4, pp. 846–857, 2020. DOI: https://doi.org/10.1007/s10278-020-00326-0
Q. Dou, H. Chen, L. Yu, J. Qin, and P.-A. Heng, “Multilevel contextual 3-d cnns for false positive reduction in pulmonary nodule detection,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 7, pp. 1558–1567, 2016. DOI: https://doi.org/10.1109/TBME.2016.2613502
A. Masood, P. Yang, B. Sheng, H. Li, P. Li, J. Qin, V. Lanfranchi, J. Kim, and D. D. Feng, “Cloud-based automated clinical decision support system for detection and diagnosis of lung cancer in chest ct,” IEEE journal of translational engineering in health and medicine, vol. 8, pp. 1–13, 2019. DOI: https://doi.org/10.1109/JTEHM.2019.2955458
License
1. We hereby assign copyright of our article (the Work) in all forms of media, whether now known or hereafter developed, to the Journal of Computer Science and Cybernetics. We understand that the Journal of Computer Science and Cybernetics will act on our behalf to publish, reproduce, distribute and transmit the Work.
2. This assignment of copyright to the Journal of Computer Science and Cybernetics is done so on the understanding that permission from the Journal of Computer Science and Cybernetics is not required for me/us to reproduce, republish or distribute copies of the Work in whole or in part. We will ensure that all such copies carry a notice of copyright ownership and reference to the original journal publication.
3. We warrant that the Work is our own work, has not been published before in its current or a substantially similar form, is not under consideration for another publication, does not contain any unlawful statements, and does not infringe any existing copyright.
4. We also warrant that We have obtained the necessary permission from the copyright holder/s to reproduce in the article any materials including tables, diagrams or photographs not owned by me/us.