Peer-Reviewed

Overview of the Three-dimensional Convolutional Neural Networks Usage in Medical Computer-aided Diagnosis Systems

Received: 4 August 2020    Accepted: 17 August 2020    Published: 27 August 2020
Abstract

Medical computer-aided diagnosis systems are essential applications that help doctors speed up, standardize, and improve the quality of disease prediction. Nevertheless, implementing a high-accuracy diagnosis system is hard because of complex medical data structures that are difficult to interpret even for an experienced radiologist, the lack of labeled data, and the high-resolution three-dimensional nature of the data. Meanwhile, modern deep learning methods have achieved significant breakthroughs in various computer vision tasks, so the same methods have begun to gain popularity in the community working on computer-aided diagnosis systems. Most modern diagnosis systems work with three-dimensional medical images, which traditional two-dimensional convolutional neural networks cannot process with sufficiently accurate predictions. Hence, medical research has introduced new methods that use three-dimensional neural networks to work with medical images. Even though these networks are usually adapted versions of state-of-the-art two-dimensional networks, they still have their own specifics and modifications that help achieve human-level accuracy and should be considered separately. This article overviews three-dimensional convolutional neural networks and how they differ from their two-dimensional counterparts. Moreover, the article examines the most influential systems that achieve human-level accuracy in predicting specific diseases. The networks are discussed from the perspective of two basic tasks, segmentation and classification, because simple end-to-end classification neural networks usually do not work well on the amounts of data available in the medical domain.
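To make the difference between two- and three-dimensional convolutions concrete, the following is a minimal, illustrative sketch (not taken from the article) of a volumetric convolutional block written with PyTorch; the names block3d and volume, and the 64-voxel cube standing in for a CT or MRI scan, are hypothetical choices for illustration only.

```python
import torch
import torch.nn as nn

# A minimal 3D convolutional block: structurally the same as a typical 2D block,
# but the kernels and pooling windows gain a depth dimension, so the input is a
# volume of shape (batch, channels, depth, height, width) instead of an image.
block3d = nn.Sequential(
    nn.Conv3d(in_channels=1, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool3d(kernel_size=2),
)

# A synthetic single-channel volume (1 sample, 1 channel, 64x64x64 voxels),
# standing in for a CT or MRI scan.
volume = torch.randn(1, 1, 64, 64, 64)
features = block3d(volume)
print(features.shape)  # torch.Size([1, 16, 32, 32, 32])
```

The only structural change relative to a two-dimensional block is the extra depth axis in the kernels and pooling windows, which is why three-dimensional medical-imaging networks are commonly obtained by adapting their two-dimensional counterparts rather than being designed from scratch.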

Published in American Journal of Neural Networks and Applications (Volume 6, Issue 2)
DOI 10.11648/j.ajnna.20200602.12
Page(s) 22-28
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2020. Published by Science Publishing Group

Keywords

Three-dimensional Convolutional Neural Networks, Medical Imaging, Deep Learning

Cite This Article
  • APA Style

    Bohdan Chapaliuk. (2020). Overview of the Three-dimensional Convolutional Neural Networks Usage in Medical Computer-aided Diagnosis Systems. American Journal of Neural Networks and Applications, 6(2), 22-28. https://doi.org/10.11648/j.ajnna.20200602.12


Author Information
  • Department of Mathematical Methods of Systems Analysis, Institute for Applied System Analysis, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine
