References
1. H. Sung, J. Ferlay, R. L. Siegel, M. Laversanne, I. Soerjomataram, A. Jemal, and F. Bray, “Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA Cancer J. Clin., vol. 71, no. 3, pp. 209–249, 2021.
2. M. Brisson, J. J. Kim, K. Canfell, et al., “Impact of HPV vaccination and cervical screening on cervical cancer elimination: A comparative modelling analysis in 78 low-income and lower-middle-income countries,” Lancet, vol. 395, no. 10224, pp. 575–590, 2020.
3. M. Schiffman, P. E. Castle, J. Jeronimo, A. C. Rodriguez, and S. Wacholder, “Human papillomavirus and cervical cancer,” Lancet, vol. 370, no. 9590, pp. 890–907, 2007.
4. S. Moradi, “Modelling suggests that high human papilloma virus (HPV) vaccination coverage in combination with high uptake screening can lead to cervical cancer elimination in most low-income and lower-middle-income countries (LMICs) by the end of the century,” Evid.-Based Nurs., vol. 24, no. 3, 2020.
5. C. W. E. Redman, V. Kesic, M. E. Cruickshank, et al., “European consensus statement on essential colposcopy,” Eur. J. Obstet. Gynecol. Reprod. Biol., vol. 256, pp. 57–62, 2021.
6. F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA Cancer J. Clin., vol. 68, no. 6, pp. 394–424, 2018.
7. A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp. 115–118, 2017.
8. A. Janowczyk and A. Madabhushi, “Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases,” J. Pathol. Inform., vol. 7, no. 1, p. 29, 2016.
9. A. Hosny, C. Parmar, J. Quackenbush, L. H. Schwartz, and H. J. W. L. Aerts, “Artificial intelligence in cancer imaging: Clinical challenges and applications,” CA Cancer J. Clin., vol. 68, no. 2, pp. 127–157, 2018.
10. Y. Yuan, H. Failmezger, M. O. Rueda, et al., “Artificial intelligence for cancer-associated fibroblasts,” J. Am. Med. Inform. Assoc., vol. 21, no. 4, pp. 657–665, 2014.
11. A. N. Ramesh, C. Kambhampati, J. R. T. Monson, and P. J. Drew, “Artificial intelligence in medicine,” Ann. R. Coll. Surg. Engl., vol. 86, pp. 334–338, 2004.
12. A. T. Greenhill and B. R. Edmunds, “A primer of artificial intelligence in medicine,” Tech. Gastrointest. Endosc., vol. 22, pp. 85–89, 2020.
13. Amisha, P. Malik, M. Pathania, and V. K. Rathaur, “Overview of artificial intelligence in medicine,” J. Family Med. Prim. Care, vol. 8, pp. 2328–2331, 2019.
14. P. Hamet and J. Tremblay, “Artificial intelligence in medicine,” Metabolism, vol. 69S, pp. S36–S40, 2017.
15. P. P. Shinde and S. Shah, “A review of machine learning and deep learning applications,” in 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 2018, pp. 1–6.
16. F. Emmert-Streib, Z. Yang, H. Feng, S. Tripathi, and M. Dehmer, “An introductory review of deep learning for prediction models with big data,” Front. Artif. Intell., vol. 3, pp. 1–9, 2020.
17. R. Hamamoto, K. Suvarna, M. Yamada, et al., “Application of artificial intelligence technology in oncology: Towards the establishment of precision medicine,” Cancers (Basel), vol. 12, no. 12, pp. 1–21, 2020.
18. K. H. Yu, A. L. Beam, and I. S. Kohane, “Artificial intelligence in healthcare,” Nat. Biomed. Eng., vol. 2, pp. 719–731, 2018.
19. K. Yu, N. Hyun, B. Fetterman, T. Lorey, T. R. Raine-Bennett, H. Zhang, R. E. Stamps, N. E. Poitras, W. Wheeler, B. Befano, J. C. Gage, P. E. Castle, N. Wentzensen, and M. Schiffman, “Automated cervical screening and triage, based on HPV testing and computer-interpreted cytology,” JNCI, vol. 110, no. 11, pp. 1222–1228, 2018.
20. N. Wentzensen, B. Lahrmann, M. A. Clarke, et al., “Accuracy and efficiency of deep-learning-based automation of dual stain cytology in cervical cancer screening,” JNCI, vol. 113, no. 1, pp. 72–79, 2021.
21. L. R. Long, “Introduction to neural networks and deep learning,” 2021.
22. J. Melnikow, J. T. Henderson, B. U. Burda, C. A. Senger, S. Durbin, and M. S. Weyrich, “Screening for cervical cancer with high-risk human papillomavirus testing: Updated evidence report and systematic review for the US Preventive Services Task Force,” JAMA, vol. 320, pp. 687–705, 2018.
23. G. S. Ogilvie, D. van Niekerk, M. Krajden, L. W. Smith, D. Cook, L. Gondara, K. Ceballos, D. Quinlan, M. Lee, R. E. Martin, L. Gentile, S. Peacock, G. C. E. Stuart, E. L. Franco, and A. J. Coldman, “Effect of screening with primary cervical HPV testing vs cytology testing on high-grade cervical intraepithelial neoplasia at 48 months: The HPV FOCAL randomized clinical trial,” JAMA, vol. 320, pp. 43–52, 2018.
24. “Human papillomavirus-associated cancers - United States, 2004-2008,” MMWR Morb. Mortal. Wkly. Rep., 2012.
25. P. E. Castle, M. H. Stoler, T. C. J. Wright, A. Sharma, T. L. Wright, and C. M. Behrens, “Performance of carcinogenic human papillomavirus (HPV) testing and HPV16 or HPV18 genotyping for cervical cancer screening of women aged 25 years and older: A subanalysis of the ATHENA study,” Lancet Oncol., vol. 12, no. 9, pp. 880–890, 2011.
26. “Guidelines for Cervical Cancer Prevention and Screening,” 2016. [Online]. Available: http://www.hkcog.org.hk/hkcog/Download/Cervical_Cancer_Prevention_and_Screening_revised_November_2016.pdf
27. C. M. Castro, H. Im, H. Lee, M. Avila-Wallace, R. Weissleder, and T. Randall, “Harnessing artificial intelligence and digital diffraction to advance point-of-care HPV 16 and 18 detection,” Gynecol. Oncol., vol. 154, p. 38, 2019.
28. Y. Miyagi, K. Takehara, Y. Nagayasu, and T. Miyake, “Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images combined with HPV types,” Oncol. Lett., vol. 19, no. 2, pp. 1602–1610, 2019.
29. G. Bogani, A. Ditto, F. Martinelli, S. Mauroa, C. Valentina, L. R. M. Umberto, T. Francesca, L. Claudia, B. Chiarae, S. Cono, L. Domenica, and R. Francesco, “Artificial intelligence estimates the impact of human papillomavirus types in influencing the risk of cervical dysplasia recurrence: Progress toward a more personalized approach,” Eur. J. Cancer Prev., vol. 28, no. 2, pp. 81–86, 2018.
30. X. M. He, T. Huang, T. Wang, H. Du, M. Y. Jiang, and H. Liang, “Analysis of artificial intelligence-assisted cervical cytology screening combined with HPV detection in cervical cancer screening,” Acta Acad. Med. Xuzhou, vol. 42, pp. 273–278, 2022.
31. A. A. Hashmi, S. Naz, O. Ahmed, S. R. Yaqeen, M. Irfan, M. G. Asif, A. Kamal, and N. Faridi, “Comparison of liquid-based cytology and conventional Papanicolaou smear for cervical cancer screening: An experience from Pakistan,” Cureus, vol. 12, no. 12, 2020.
32. S. Cox, “Guidelines for Papanicolaou test screening and follow-up,” J. Midwifery Womens Health, vol. 57, no. 1, pp. 86–89, 2012.
33. P. Phaliwong, P. Pariyawateekul, N. Khuakoonratt, W. Sirichai, K. Bhamarapravatana, and K. Suwannarurk, “Cervical cancer detection between conventional and liquid based cervical cytology: A 6-year experience in northern Bangkok Thailand,” Asian Pac. J. Cancer Prev., vol. 19, no. 5, pp. 1331–1336, 2018.
34. R. S. Hoda, K. Loukeris, and F. W. Abdul-Karim, “Gynecologic cytology on conventional and liquid-based preparations: A comprehensive review of similarities and differences,” Diagn. Cytopathol., vol. 41, no. 3, pp. 257–278, 2013.
35. A. M. Marchevsky and P. Bartels, “Image analysis: A primer for pathologists,” 1994.
36. D. Saslow, D. Solomon, H. W. Lawson, M. Killackey, S. L. Kulasingam, J. Cain, F. A. Garcia, A. T. Moriarty, A. G. Waxman, and D. C. Wilbur, “American Cancer Society, American Society for Colposcopy and Cervical Pathology, and American Society for Clinical Pathology screening guidelines for the prevention and early detection of cervical cancer,” CA Cancer J. Clin., vol. 62, pp. 147–172, 2012.
37. E. Davey, A. Barratt, L. Irwig, S. F. Chan, P. Macaskill, P. Mannes, and A. M. Saville, “Effect of study design and quality on unsatisfactory rates, cytology classifications, and accuracy in liquid-based versus conventional cervical cytology: A systematic review,” Lancet, vol. 367, no. 9505, pp. 122–132, 2006.
38. A. Gençtav, S. Aksoy, and S. Önder, “Unsupervised segmentation and classification of cervical cell images,” Pattern Recogn., vol. 45, no. 12, pp. 4151–4168, 2012.
39. T. M. Elsheikh, R. M. Austin, D. F. Chhieng, F. S. Miller, A. T. Moriarty, and A. A. Renshaw, “American Society of Cytopathology workload recommendations for automated Pap test screening: Developed by the Productivity and Quality Assurance in the Era of Automated Screening Task Force,” Diagn. Cytopathol., vol. 41, no. 2, pp. 174–178, 2013.
40. R. Lozano, “Comparison of computer-assisted and manual screening of cervical cytology,” Gynecol. Oncol., vol. 104, no. 1, pp. 134–138, 2007.
41. M. A. Aswathy and M. Jagannath, “Detection of breast cancer on digital histopathology images: Present status and future possibilities,” Inf. Med. Unlocked, vol. 8, pp. 74–79, 2017.
42. Y. Tan, GPU-Based Parallel Implementation of Swarm Intelligence Algorithms. San Mateo, CA, USA: Morgan Kaufmann, 2016. [Online]. Available: https://dl.acm.org/doi/10.5555/3033080
43. Y. Y. Song, L. Zhu, J. Qin, B. Y. Lei, B. Sheng, and K. S. Choi, “Segmentation of overlapping cytoplasm in cervical smear images via adaptive shape priors extracted from contour fragments,” IEEE Trans. Med. Imaging, vol. 38, no. 12, pp. 2849–2862, 2019.
44. T. Wan, S. Xu, C. Sang, et al., “Accurate segmentation of overlapping cells in cervical cytology with deep convolutional neural networks,” Neurocomputing, vol. 365, pp. 157–170, 2019.
45. C. W. Wang, Y. A. Liou, Y. J. Lin, C. C. Chang, P. H. Chu, Y. C. Lee, C. H. Wang, and T. K. Chao, “Artificial intelligence-assisted fast screening cervical high grade squamous intraepithelial lesion and squamous cell carcinoma diagnosis and treatment planning,” Sci. Rep., vol. 11, 2021.
46. Y. Zhao, C. Fu, S. Xu, L. Cao, and H. F. Ma, “LFANet: Lightweight feature attention network for abnormal cell segmentation in cervical cytology images,” Comput. Biol. Med., vol. 145, p. 105500, 2022.
47. E. Zawadzka-Gosk, K. Wolk, and W. Czarnowski, “Deep learning in state-of-the-art image classification exceeding 99 percent accuracy,” in New Knowledge in Information Systems and Technologies. WorldCIST’19 2019. Advances in Intelligent Systems and Computing. Springer, pp. 946–957, 2019.
48. P. Wang, J. X. Wang, Y. M. Li, L. Y. Li, and H. H. Zhang, “Adaptive pruning of transfer learned deep convolutional neural network for classification of cervical Pap smear images,” IEEE Access, vol. 8, pp. 50674–50683, 2020.
49. P. Huang, S. L. Zhang, M. Li, J. Wang, C. L. Ma, B. W. Wang, and X. Y. Lv, “Classification of cervical biopsy images based on LASSO and EL-SVM,” IEEE Access, vol. 8, pp. 24219–24228, 2020.
50. N. Dong, L. Zhao, C. H. Wu, and J. F. Chang, “Inception v3 based cervical cell classification combined with artificially extracted features,” Appl. Soft Comput., vol. 93, p. 106311, 2020.
51. N. Dong, M. D. Zhai, L. Zhao, and C. H. Wu, “Cervical cell classification based on the CART feature selection algorithm,” J. Ambient Intell. Humaniz. Comput., vol. 12, pp. 1837–1849, 2020.
52. S. Liu, Z. Yuan, X. Qiao, Q. Liu, K. Song, B. H. Kong, and X. T. Su, “Light scattering pattern specific convolutional network static cytometry for label-free classification of cervical cells,” Cytom. Part A, vol. 99, no. 6, pp. 610–621, 2021.
53. M. Rahaman, C. Li, Y. Yao, F. Kulwa, X. C. Wu, X. Y. Li, and Q. Wang, “DeepCervix: A deep learning-based framework for the classification of cervical cells using hybrid deep feature fusion techniques,” Comput. Biol. Med., vol. 136, 2021.
54. W. Chen, W. M. Shen, L. Gao, and X. Y. Li, “Hybrid loss-constrained lightweight convolutional neural networks for cervical cell classification,” Sensors, vol. 22, no. 9, p. 3272, 2022.
55. B. J. Cho, J. W. Kim, J. Park, G. Y. Kwon, M. Hong, S. H. Jang, H. Bang, G. Kim, and S. T. Park, “Automated diagnosis of cervical intraepithelial neoplasia in histology images via deep learning,” Diagnostics, vol. 12, no. 2, p. 548, 2022.
56. F. Kanavati, N. Hirose, T. Ishii, A. Fukuda, S. Ichihara, and M. Tsuneki, “A deep learning model for cervical cancer screening on liquid-based cytology specimens in whole slide images,” Cancers, vol. 14, no. 5, p. 1159, 2022.
57. O. Yaman and T. Tuncer, “Exemplar pyramid deep feature extraction based cervical cancer image classification model using pap-smear images,” Biomed. Signal Process. Control, vol. 73, p. 103428, 2022.
58. M. Khan, C. Werner, T. Darragh, et al., “ASCCP colposcopy standards: Role of colposcopy, benefits, potential harms, and terminology for colposcopic practice,” J. Lower Genit. Tract Dis., vol. 21, no. 4, pp. 223–229, 2017.
59. Y. Miyagi, K. Takehara, and T. Miyake, “Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images,” Mol. Clin. Oncol., vol. 11, no. 6, 2019.
60. G. Ogilvie, C. Nakisige, W. Huh, R. Mehrotra, E. Franco, and J. Jeronimo, “Optimizing secondary prevention of cervical cancer: Recent advances and future challenges,” Int. J. Gynaecol. Obstet., vol. 138, no. Suppl 1, pp. 15–19, 2017.
61. M. Schiffman, J. Doorbar, N. Wentzensen, S. de Sanjose, C. Fakhry, B. J. Monk, M. A. Stanley, and S. Franceschi, “Carcinogenic human papillomavirus infection,” Nat. Rev. Dis. Primers, vol. 2, p. 16086, 2016.
62. F. Zhao and Y. Qiao, “Cervical cancer prevention in China: A key to cancer control,” Lancet, vol. 393, no. 10175, pp. 969–970, 2019.
63. W. L. Bi, A. Hosny, M. B. Schabath, et al., “Artificial intelligence in cancer imaging: Clinical challenges and applications,” CA Cancer J. Clin., vol. 69, no. 2, pp. 127–157, 2019.
64. C. J. Kelly, A. Karthikesalingam, M. Suleyman, G. Corrado, and D. King, “Key challenges for delivering clinical impact with artificial intelligence,” BMC Med., vol. 17, no. 1, p. 195, 2019.
65. C. Yuan, Y. Yao, B. Cheng, et al., “The application of deep learning based diagnostic system to cervical squamous intraepithelial lesions recognition in colposcopy images,” Sci. Rep., vol. 10, no. 1, 2020.
66. P. Guo, Z. Xue, L. R. Long, and S. Antani, “Cross-dataset evaluation of deep learning networks for uterine cervix segmentation,” Diagnostics, vol. 10, no. 1, p. 44, 2020.
67. Z. Yue, S. Ding, X. Li, S. Yang, and Y. Zhang, “Automatic acetowhite lesion segmentation via specular reflection removal and deep attention network,” IEEE J. Biomed. Health Inform., vol. 25, no. 9, pp. 3529–3540, 2021.
68. J. Liu, T. Liang, Y. Peng, G. Peng, L. Sun, L. Li, and H. Dong, “Segmentation of acetowhite region in uterine cervical image based on deep learning,” Technol. Health Care, vol. 30, no. 2, pp. 469–482, 2022.
69. V. Kudva, K. Prasad, and S. Guruvare, “Hybrid transfer learning for classification of uterine cervix images for cervical cancer screening,” J. Digit. Imaging, vol. 33, pp. 619–631, 2020.
70. C. Buiu, V. R. Dănăilă, and C. N. Răduţă, “MobileNetV2 ensemble for cervical precancerous lesions classification,” Processes, vol. 8, no. 5, p. 595, 2020.
71. S. K. Saini, V. Bansal, R. Kaur, and M. Juneja, “ColpoNet for automated cervical cancer screening using colposcopy images,” Mach. Vis. Appl., vol. 31, p. 15, 2020.
72. Y. M. Luo, T. Zhang, P. Li, P. M. Sun, B. H. Dong, and G. Ruan, “MDFI: Multi-CNN decision feature integration for diagnosis of cervical precancerous lesions,” IEEE Access, pp. 29616–29626, 2020.
73. Y. Yu, J. Ma, W. D. Zhao, Z. M. Li, and S. Ding, “MSCI: A multistate dataset for colposcopy image classification of cervical cancer screening,” Int. J. Med. Inform., vol. 146, no. 1, p. 104352, 2020.
74. K. Adweb, N. Cavus, and B. Sekeroglu, “Cervical cancer diagnosis using very deep networks over different activation functions,” IEEE Access, vol. 9, pp. 46612–46625, 2021.
75. Y. R. Park, Y. J. Kim, W. Ju, K. Nam, S. Kim, and K. G. Kim, “Comparison of machine and deep learning for the classification of cervical cancer based on cervicography images,” Sci. Rep., vol. 11, 2021.
76. L. Liu, Y. Wang, X. L. Liu, S. Han, L. Jia, L. H. Meng, Z. Y. Yang, W. Chen, Y. Z. Zhang, and X. Qiao, “Computer-aided diagnostic system based on deep learning for classifying colposcopy images,” Ann. Transl. Med., vol. 9, no. 13, 2021.
77. C. Bourgioti, K. Chatoupis, and L. Moulopoulos, “Current imaging strategies for the evaluation of uterine cervical cancer,” World J. Radiol., vol. 8, no. 4, 2016.
78. S. H. Choi, S. H. Kim, H. J. Choi, B. K. Park, and H. J. Lee, “Preoperative magnetic resonance imaging staging of uterine cervical carcinoma: Results of prospective study,” J. Comput. Assist. Tomogr., vol. 28, no. 5, pp. 620–627, 2004.
79. H. Hricak, C. Gatsonis, D. S. Chi, M. A. Amendola, K. Brandt, L. H. Schwartz, S. Koelliker, E. S. Siegelman, J. J. Brown, R. B. McGhee Jr, R. Iyer, K. M. Vitellas, B. Snyder, H. J. Long III, J. V. Fiorica, and D. G. Mitchell, “Role of imaging in pretreatment evaluation of early invasive cervical cancer: Results of the intergroup study American College of Radiology Imaging Network 6651-Gynecologic Oncology Group 183,” J. Clin. Oncol., vol. 23, no. 36, pp. 9329–9337, 2005.
80. J. Merz, M. Bossart, F. Bamberg, and M. Eisenblaetter, “Revised FIGO staging for cervical cancer - a new role for MRI,” Rofo, vol. 192, no. 10, pp. 937–944, 2020.
81. P. Cohen, A. Jhingran, A. Oaknin, and L. Denny, “Cervical cancer,” Lancet, vol. 393, no. 10167, pp. 169–182, 2019.
82. E. Kim and X. Huang, “A data driven approach to cervigram image analysis and classification,” in Lecture Notes in Computational Vision and Biomechanics, Netherlands: Springer, 2013, pp. 1–13.
83. T. Wang, T. Gao, H. Guo, X. B. Zhou, J. Tian, L. Y. Huang, and M. Zhang, “Preoperative prediction of parametrial invasion in early-stage cervical cancer with MRI-based radiomics nomogram,” Eur. Radiol., vol. 30, no. 3, 2020.
84. Y. Lin, C. Lin, H. Y. Lu, H. J. Chiang, H. K. Wang, Y. T. Huang, S. H. Ng, J. H. Hong, T. C. Yen, C. H. Lai, and G. G. Lin, “Deep learning for fully automated tumor segmentation and extraction of magnetic resonance radiomics features in cervical cancer,” Eur. Radiol., vol. 30, no. 3, pp. 1297–1305, 2019.
85. B. Wang, Y. Y. Zhang, C. Y. Wu, and F. Wang, “Multimodal MRI analysis of cervical cancer on the basis of artificial intelligence algorithm,” Contrast Media Mol. Imaging, vol. 2021, 2021.
86. A. Cibi and R. J. Rose, “Classification of stages in cervical cancer MRI by customized CNN and transfer learning,” Cogn. Neurodyn., vol. 2022, pp. 1–9, 2022.
87. Y. Yan, T. Yu, R. Zhang, R. T. Dong, Q. Y. Hu, T. Yu, F. Liu, Y. H. Luo, and Y. Dong, “Feasibility of an ADC-based radiomics model for predicting pelvic lymph node metastases in patients with stage IB–IIA cervical squamous cell carcinoma,” Br. J. Radiol., vol. 2019, 2019.
88. T. Wang, T. T. Gao, J. B. Yang, X. J. Yan, Y. B. Wang, X. B. Zhou, J. Tian, L. Y. Huang, and M. Zhang, “Preoperative prediction of pelvic lymph nodes metastasis in early-stage cervical cancer using radiomics nomogram developed based on T2-weighted MRI and diffusion-weighted imaging,” Eur. J. Radiol., vol. 114, pp. 128–135, 2019.
89. Q. X. Wu, S. Wang, S. X. Zhang, et al., “Development of a deep learning model to identify lymph node metastasis on magnetic resonance imaging in patients with cervical cancer,” JAMA Netw. Open, vol. 3, no. 7, p. e2011625, 2020.
90. P. Xue, C. Tang, Q. Li, et al., “Development and validation of an artificial intelligence system for grading colposcopic impressions and guiding biopsies,” BMC Med., vol. 18, no. 1, p. 406, 2020.
91. N. Bhatla, J. S. Berek, M. C. Fredes, et al., “Revised FIGO staging for carcinoma of the cervix uteri,” Int. J. Gynaecol. Obstet., vol. 145, no. 1, pp. 129–135, 2019.
92. J. Guiot, A. Vaidyanathan, L. Deprez, et al., “A review in radiomics: Making personalized medicine a reality via routine imaging,” Med. Res. Rev., vol. 42, no. 1, pp. 426–440, 2022.
93. C. Marth, F. Landoni, S. Mahner, M. McCormack, A. Gonzalez-Martin, and N. Colombo, “Cervical cancer: ESMO clinical practice guidelines for diagnosis, treatment and follow-up,” Ann. Oncol., vol. 28, pp. iv72–iv83, 2017.
94. M. A. Gold, “PET in cervical cancer - implications for staging, treatment planning, assessment of prognosis, and prediction of response,” J. Natl. Compr. Canc. Netw., vol. 6, no. 1, pp. 37–45, 2008.
95. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
96. B. Ma, X. Yin, D. Wu, H. Shen, X. Ban, and Y. Wang, “End-to-end learning for simultaneously generating decision map and multi-focus image fusion result,” Neurocomputing, vol. 470, pp. 204–216, 2022.
97. S. Anwar, M. Majid, A. Qayyum, M. Awais, M. Alnowami, and S. Khan, “Medical image analysis using convolutional neural networks: A review,” J. Med. Syst., vol. 42, pp. 1–13, 2018.
98. D. S. Kermany, M. Goldbaum, W. J. Cai, et al., “Identifying medical diagnoses and treatable diseases by image-based deep learning,” Cell, vol. 172, no. 5, pp. 1122–1131, 2018.
99. W. C. Shen, S. W. Chen, K. C. Wu, T. C. Hsieh, J. A. Liang, Y. C. Hung, L. S. Yeh, W. C. Chang, W. C. Lin, K. Y. Yen, and C. H. Kao, “Prediction of local relapse and distant metastasis in patients with definitive chemoradiotherapy-treated cervical cancer by deep learning from [18F]-fluorodeoxyglucose positron emission tomography/computed tomography,” SSRN, 2018.
100. Z. Liu, W. Chen, H. Guan, et al., “An adversarial deep-learning-based model for cervical cancer CTV segmentation with multicenter blinded randomized controlled validation,” Front. Oncol., vol. 11, p. 702270, 2021.
101. Y. Ming, X. Dong, J. Zhao, Z. Chen, H. Wang, and N. Wu, “Deep learning-based multimodal image analysis for cervical cancer detection,” Methods, vol. 205, pp. 46–52, 2022.
Open Access
Research article

Artificial Intelligence in Cervical Cancer Research and Applications

Chunhui Liu 1, Jiahui Yang 2, Ying Liu 3*, Ying Zhang 2, Shuang Liu 2, Tetiana Chaikovska 4, Chan Liu 5

1 Department of Gynecology, Affiliated Hospital of Hebei University, 071002 Baoding, China
2 College of Quality and Technical Supervision, Hebei University, 071002 Baoding, China
3 Lecturer in Strategy, Bournemouth University, BH12 5BB Bournemouth, United Kingdom
4 Department of Medical Imaging, Clinical Infectious Diseases Hospital N1, 02154 Kryvyi Rih, Ukraine
5 Radiotherapy Department, Affiliated Hospital of Hebei University, 071002 Baoding, China
Acadlore Transactions on AI and Machine Learning | Volume 2, Issue 2, 2023 | Pages 99-115
Received: 03-24-2023 | Revised: 05-11-2023 | Accepted: 05-27-2023 | Available online: 06-13-2023

Abstract:

Cervical cancer remains one of the leading causes of cancer death among women and poses a severe threat to women's health. Because medical resources are unevenly distributed across regions, physicians' experience, numbers, and working conditions vary widely, and early screening, diagnosis, and treatment of cervical cancer still face significant obstacles. In recent years, artificial intelligence (AI) has been increasingly applied to the screening, diagnosis, and treatment of various diseases. AI is now widely studied in cervical cancer screening, diagnosis, treatment, and prognosis, where it assists doctors and clinical experts in decision-making and improves efficiency and accuracy. This study discusses the application of AI in cervical cancer screening, including HPV typing and detection, cervical cytology screening, and colposcopy screening, as well as in cervical cancer diagnosis and treatment, including magnetic resonance imaging (MRI) and computed tomography (CT). Finally, the study briefly describes the current challenges faced by AI applications in cervical cancer and proposes future research directions.

Keywords: Cervical cancer, Cervical intraepithelial neoplasia (CIN), Artificial intelligence, Deep learning, Cervical cancer early screening, Cervical cancer diagnosis

1. Introduction

Cervical cancer is one of the most common malignant tumors in women. On the basis of GLOBOCAN estimates for 185 countries in 2020, there were 604,000 new cases of cervical cancer and 342,000 deaths [1]. It is also the only cancer that could be eliminated through primary prevention strategies, namely nine-valent human papillomavirus (HPV) vaccination, early detection, and timely treatment [2].

Nearly all cases of cervical cancer are caused by persistent infection with one of roughly 15 carcinogenic HPV genotypes. There are four main stages in the development of cervical cancer: infection of the metaplastic epithelium in the cervical transformation zone, persistent HPV infection, progression of the persistently infected epithelium to precancerous cervical lesions, and invasion through the epithelial basement membrane [3]. The HPV vaccine can protect age-appropriate females from HPV infection; however, even in some developed countries, vaccine coverage remains very low [2], [3], [4]. The slow progression of cervical lesions offers valuable windows for detection and treatment; for example, about 30% of cervical intraepithelial neoplasia (CIN) grade 3 lesions progress to invasive cancer within 30 years [5]. As screening techniques have improved, cancer detection rates have increased and death rates have declined, yet most deaths still occur in low- and middle-income countries [6]. Despite advances in effective screening, diagnosis, and treatment programs, the accuracy and generalizability of these procedures remain relatively low because of limitations in physicians' experience, numbers, and working conditions, posing significant challenges for early cervical cancer screening, diagnosis, and subsequent individualized treatment and prognosis. Therefore, it is crucial to develop more accurate and cost-effective methods for cervical cancer screening, diagnosis, and treatment.

In recent years, artificial intelligence (AI) has been increasingly used in the diagnosis of various diseases, such as skin cancer classification [7], [8], retinal disease diagnosis and classification [9], and tumor imaging diagnosis [10], demonstrating good application value. In cervical cancer screening, diagnosis, and treatment, AI is likewise being used to compensate for limited human resources and to improve diagnostic accuracy. As described below, AI has produced many research achievements in cervical cancer screening, including HPV typing and detection, cervical cytology screening, and colposcopy screening, and in diagnosis and treatment, including MRI and CT. These advances greatly improve the accuracy and specificity of cervical cancer screening, diagnosis, and treatment, assist doctors and clinical experts in diagnosis and decision-making, and help overcome the problems of inadequate accuracy and generalizability caused by limitations in physicians' experience, numbers, and working conditions.

This study introduces the latest AI research and applications in cervical cancer, such as the integration of AI with HPV typing and detection, cervical cytology screening, colposcopy screening, cervical cancer lesion segmentation and local staging on MRI, diagnosis of lymph node metastasis, and diagnosis and treatment of cervical cancer on CT. It thereby demonstrates the practicality, potential, and future challenges of AI in the early screening, diagnosis, and treatment of cervical cancer.

2. Artificial Intelligence in Cervical Cancer Screening, Diagnosis, and Treatment

Alan Turing first described the concept of simulating intelligent behavior and critical thinking in computers in 1950 [11]. In his paper “Computing Machinery and Intelligence,” he described a simple test to determine whether a computer possesses human intelligence, which later became known as the “Turing Test” [12]. Six years later, John McCarthy defined artificial intelligence (AI) as the “science and engineering of making intelligent machines” [13], [14]. Over the following decades, AI evolved into increasingly complex algorithms that approach human-like capabilities. AI encompasses several subfields, such as machine learning (ML), deep learning (DL), and computer vision.

Machine learning refers to analytical techniques that learn patterns and derive criteria from data and then use these criteria to predict and classify unknown objects [15]. ML is mainly divided into three types: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning builds models from training data for which the desired results or answers are provided; it is mainly used for regression and classification [16]. In supervised learning, the training data serve as the known information from which regression or classification models are built that can then respond to unknown inputs. Representative techniques include decision tree-based methods, such as random forests, and regression analysis. Unsupervised learning does not require training data with correct answers; it is used for grouping and summarizing data [17].
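As a minimal illustration of these two learning modes (using synthetic, hypothetical "cell feature" data rather than any dataset cited in this review), a supervised classifier can be trained on labeled examples, while a clustering algorithm groups the same samples without labels:

```python
# Minimal sketch of supervised vs. unsupervised learning on synthetic data.
# The two "cell measurements" below are made up for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical features, e.g., nucleus area and nucleus/cytoplasm ratio.
normal = rng.normal(loc=[1.0, 0.2], scale=0.10, size=(100, 2))
abnormal = rng.normal(loc=[1.6, 0.6], scale=0.15, size=(100, 2))
X = np.vstack([normal, abnormal])
y = np.array([0] * 100 + [1] * 100)  # labels: 0 = normal, 1 = abnormal

# Supervised learning: learn a decision rule from labeled training data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("supervised test accuracy:", clf.score(X_te, y_te))

# Unsupervised learning: group the same samples without using the labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```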

Deep learning is the form of machine learning that drives the current AI boom, and more than 90% of it is supervised learning [16], [17]. A neural network in deep learning consists of three types of layers: an input layer, intermediate (hidden) layers, and an output layer. It uses mathematical models to simulate the neurons of human neural networks [15], [16]. The network's output is compared with the training data, and the connection weights are adjusted to increase the accuracy of the output. The development of deep learning has made anomaly detection, image processing, natural language processing, and speech recognition practical [16], [17].
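The layer structure and weight-update loop described above can be sketched in a few lines of PyTorch; the toy data and network sizes here are illustrative assumptions, not taken from any study cited in this review:

```python
# Toy fully connected network: input layer -> intermediate (hidden) layer -> output layer.
# Weights are adjusted so the network's output moves closer to the training labels.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(64, 10)            # 64 synthetic samples with 10 features each
y = (X.sum(dim=1) > 0).long()      # arbitrary binary labels for demonstration

model = nn.Sequential(
    nn.Linear(10, 16),  # input layer -> intermediate layer
    nn.ReLU(),
    nn.Linear(16, 2),   # intermediate layer -> output layer (2 classes)
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # compare the output with the training data
    loss.backward()              # compute gradients of the error
    optimizer.step()             # adjust the weights to reduce the error
print("final training loss:", loss.item())
```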

The relationship between artificial intelligence, machine learning, and deep learning is shown in Figure 1.

Figure 1. Relationship between artificial intelligence, machine learning, and deep learning

Artificial intelligence, particularly deep learning, has many applications in medicine, such as automatic visual evaluation (AVE) of cervical images, automated dual-staining cytology, diagnostic radiology, and automated diabetic retinopathy screening [18], [19], [20]. Deep learning-based AVE is emerging as an alternative, low-cost solution for screening, diagnosis, and treatment [21]. Currently, AI is extensively applied in cervical cancer diagnosis and treatment: machine learning SVM models and deep learning neural networks are widely used in cervical cancer screening, including HPV typing and detection, cervical cytology screening, and colposcopy, and AI is also used in cervical cancer diagnosis and treatment with magnetic resonance imaging (MRI) and computed tomography, as shown in Figure 2. However, challenges remain, such as the scarcity of high-quality clinical data and the protection of patient data privacy, and there is broad scope for further research.

Figure 2. Application of artificial intelligence in screening, diagnosis and treatment of cervical cancer

3. Artificial Intelligence in Early Screening of Cervical Cancer

Nearly all cervical cancers are caused by persistent cervical epithelial infection with one of approximately 15 high-risk HPV genotypes. The slow progression from infection to invasive disease provides valuable opportunities for detection and treatment, so early screening is very important for the prevention, diagnosis, and early cure of cervical cancer. The related detection and diagnostic methods include HPV genotyping and testing, cervical cytology screening, and colposcopy screening.

Deep learning-based automatic visual evaluation (AVE) of cervical images is becoming an alternative, low-cost screening and diagnostic solution. By leveraging big data and advanced computing resources, it can provide accurate screening for cervical cancer and precancerous lesions. The following subsections introduce the three main methods of early cervical cancer screening: HPV genotyping and testing, cervical cytology screening, and colposcopy screening, together with related AI research and progress, and discuss the potential benefits, limitations, and challenges of using AI in cervical cancer screening as well as future prospects and research directions.

3.1 HPV Genotyping and Testing

In recent years, primary screening for cervical cancer has relied increasingly on high-risk human papillomavirus (hrHPV) testing, which has been shown to have higher sensitivity and a higher negative predictive value than cytological examination [22], [23].

ATHENA, a large trial, showed that patients infected with HPV16 or HPV18, which cause most cervical cancers, are more likely to develop CIN3 or higher-grade lesions [24], [25]. Therefore, if HPV testing is used as the primary screening tool for young women, those positive for HPV16 or HPV18 should be promptly referred for colposcopy, while those positive for other HPV types should undergo further cytological triage [26]. HPV genotyping is therefore highly useful for cervical cancer screening and management. Many scholars are devoted to research combining AI with HPV testing. Deep learning methods such as CNN and ANN models have achieved accuracies of over 90% on the internal datasets of individual research units, showing good application prospects for deep learning models in HPV detection, as summarized in Table 1.

Table 1. Relevant research on AI and HPV testing

Author | Year | Method | Case load | Classification category | Results | Highlights
Castro et al. [27] | 2019 | CNN | 96 people | Binary classification | Precision: 100% | Developed a digital-diffraction-based DNA detection method
Miyagi et al. [28] | 2019 | CNN | 253 people | Binary classification | Accuracy: 94.1%; sensitivity: 95.6%; specificity: 83.3% | Explored the feasibility of classifying cervical squamous epithelial lesions using deep learning from colposcopy images combined with HPV types
Bogani et al. [29] | 2019 | ANN | 5104 people | - | Accuracy of the AI classifier and of gynecological oncologists: 0.941 and 0.843, respectively | Studied whether the pretreatment human papillomavirus (HPV) genotype predicts the risk of persistence or recurrence of cervical dysplasia; artificial neural network (ANN) analysis was used to assess the importance of different HPV genotypes in predicting persistence and recurrence
He et al. [30] | 2022 | - | 25971 people | - | - | Analyzed the feasibility and application value of AI-assisted cervical cytology screening combined with human papillomavirus (HPV) triage in free cervical cancer screening for rural women; AI-assisted cytology with high-risk HPV triage reduced the colposcopy referral rate and improved the diagnostic agreement between cytology and biopsy pathology

3.2 Cervical Cytology Screening

There are two methods of cervical cytology: the traditional Pap smear and liquid-based cytology (LBC) [31], [32], [33], [34]. Over the past 60 years, Pap smear screening has had a significant effect on reducing cervical cancer mortality [35]. Because of cost considerations, cervical cytology is currently the most common method for diagnosing malignant cervical tumors [36], [37]. These tests are carried out by specialist cytologists, who analyse a sample of cervical cells taken from the patient's cervix under a microscope to detect the effects of HPV. However, because each slide contains about three million cells with varying orientations and extensive overlap [38], manual screening is difficult, time-consuming, expensive, and error-prone [39], [40].

In medical image analysis, DL has become a recurring and successful class of machine learning algorithms, and cervical cytology image analysis is no exception. The convolutional neural network (CNN), the most popular deep architecture, is widely used in this field and has produced good results in cell detection, cell segmentation, cell classification, and cell region of interest (ROI) extraction [41].
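To make the CNN idea concrete, a deliberately small convolutional network for single-cell image patches might look as follows; the patch size, channel counts, and two-class output are assumptions made for illustration, and the published models discussed later are far larger:

```python
# Minimal CNN sketch for scoring single-cell image patches (e.g., normal vs. abnormal).
import torch
import torch.nn as nn

class SmallCellCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A batch of 8 hypothetical 64x64 RGB cell patches.
patches = torch.randn(8, 3, 64, 64)
logits = SmallCellCNN()(patches)
print(logits.shape)  # torch.Size([8, 2])
```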

3.2.1 Segmentation of cervical cells

The main goal of segmentation is to partition medical images into multiple regions for rapid analysis of cells [42]. Accurate segmentation of the cell nucleus and cytoplasm is critical because the nucleus carries the most reliable information for cancer detection. In the medical field, automatic segmentation can save patients' lives by providing fast, reliable, and accurate disease diagnosis. Because of these advantages, many researchers use deep learning (such as convolutional neural networks and feature attention networks) for cervical cell segmentation, achieving precision and recall of over 90% on internal datasets. This demonstrates the good application prospects of deep learning models in cervical cell segmentation, which can greatly improve the screening efficiency of experts. Relevant research is shown in Table 2.

Table 2. Relevant research on cervical cell segmentation using deep learning

Author | Year | Method | Dataset (cases) | Results | Highlights
Song et al. [43] | 2019 | Adaptive shape priors extracted from contour fragments and shape statistics to segment overlapping cytoplasm in cervical smear images | - | - | Experimental results show that the proposed method is general enough to be applied to other, similar microscopy image segmentation tasks with a large number of overlapping objects
Wan et al. [44] | 2019 | TernausDet; DeepLab V2 | ISBI2014 (945); ISBI2015 (210); internal dataset (580) | ISBI2014: DSC 93%; ISBI2015: DSC 92%; internal dataset: 92% | A new framework based on deep convolutional neural networks (DCNN) is proposed for automatic segmentation of overlapping cells
Wang et al. [45] | 2021 | VGG16 + SGD | Internal dataset (143) | Precision: 93%; recall: 90% | The proposed method processes a whole Pap smear in only 210 seconds, 20 times faster than U-Net and 19 times faster than SegNet
Zhao et al. [46] | 2022 | LFANet | Herlev dataset (917); WBC dataset (400); Warwick-QU dataset (165) | Precision: 93.01%; recall: 96.1% | A lightweight feature attention network (LFANet) is proposed to accurately segment the nucleus and cytoplasm regions in cervical images
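The segmentation studies in Table 2 report overlap metrics such as the Dice similarity coefficient (DSC) alongside precision and recall. A minimal NumPy sketch of the DSC for two binary masks (with a toy example mask, not data from any cited study) is:

```python
# Dice similarity coefficient (DSC) between two binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2*|P intersect T| / (|P| + |T|) for binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy example: a predicted nucleus mask that mostly overlaps the reference mask.
truth = np.zeros((64, 64), dtype=np.uint8)
truth[20:40, 20:40] = 1
prediction = np.zeros_like(truth)
prediction[22:42, 22:42] = 1
print(f"DSC = {dice_coefficient(prediction, truth):.3f}")
```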

3.2.2 Classification of cervical cells

Image classification, a core task in computer vision, is a major area of research in medical image analysis, and deep learning-based methods have made enormous contributions by providing state-of-the-art accuracy [47]. Computer vision is likewise an indispensable part of cervical cell classification. A large number of studies have shown that accuracies of 95% or more can be achieved in binary and multi-class cervical cell classification tasks on both self-built and public datasets, providing an effective tool for cervical cancer classification in clinical settings. Relevant research is shown in Table 3.
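Many of the models in Table 3 (Inception v3, DenseNet, EfficientNet) are pretrained networks fine-tuned on cervical cell images. The following sketch shows the general transfer-learning pattern with a ResNet-18 backbone and a hypothetical five-class output (similar in spirit to SIPaKMeD-style tasks); the random batch stands in for real image data, and none of this reproduces a specific cited pipeline:

```python
# Transfer-learning sketch: fine-tune a pretrained backbone for cervical cell classification.
# Requires torchvision >= 0.13 and a one-time download of the ImageNet weights.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # e.g., a five-class task similar to SIPaKMeD (assumption)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():          # freeze the pretrained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for real image data.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_classes, (4,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("loss after one step:", loss.item())
```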

Table 3. Relevant research on cervical cell classification using deep learning

Author | Year | Method | Dataset | Classification category | Results | Highlights
Wang et al. [48] | 2020 | BsiNet-TAP | Self-made dataset (389) | Three-class | Precision: 98.49% | An adaptive pruning deep transfer learning model is proposed for Pap smear image classification
Huang et al. [49] | 2020 | LASSO + EL-SVM | Self-made dataset (468) | Seven-class | Normal accuracy: 99.64%; HSIL accuracy: 87.4%; LSIL accuracy: 91.88%; cancer accuracy: 81.4% | A cervical biopsy tissue image classification method based on the least absolute shrinkage and selection operator (LASSO) and an ensemble learning support vector machine is proposed
Dong et al. [50] | 2020 | Inception v3 | Herlev dataset (917) | Binary | Accuracy: 98.23%; sensitivity: 99.44%; specificity: 96.73% | A cell classification algorithm combining Inception v3 and hand-crafted features is proposed
Dong et al. [51] | 2021 | PSO-SVM | Herlev dataset (917) | Seven-class | Accuracy: 99.81%; sensitivity: 99.89%; recall: 99.26% | A machine learning method for cervical cell classification based on a feature selection algorithm is proposed
Liu et al. [52] | 2021 | LSPS-net | Self-made dataset (2119) | Two-class and three-class | Three-class accuracy: 90.90% | Developed an LSPS-net integrated 2D light-scattering static flow cytometer for single cervical cell analysis
Rahaman et al. [53] | 2021 | DeepCervix | SIPaKMeD dataset (4049); Herlev dataset (917) | SIPaKMeD: two-, three-, and five-class; Herlev: two- and seven-class | SIPaKMeD accuracy: 99.85% (two-class), 99.38% (three-class), 99.14% (five-class); Herlev accuracy: 98.32% (two-class), 90.32% (seven-class) | A DeepCervix algorithm based on DL hybrid deep feature fusion (HDFF) is proposed
Chen et al. [54] | 2022 | HL + GhostNet | SIPaKMeD dataset (4049) | Five-class | Accuracy: 96.39%; sensitivity: 96.42%; specificity: 99.09%; recall: 96.39% | Proposed a hybrid loss function (HL) with label smoothing
Cho et al. [55] | 2022 | DenseNet-161; EfficientNet-B7 | Self-made dataset (1106) | Three-class | DenseNet-161 accuracy: 91.4%; EfficientNet-B7 accuracy: 92.6% | Evaluated the performance of two pre-trained convolutional neural network (CNN) models based on the DenseNet-161 and EfficientNet-B7 architectures
Kanavati et al. [56] | 2022 | CNN + RNN | Self-made dataset (1468) | Binary | Accuracy: 90.7%; sensitivity: 85%; specificity: 91.1% | A dataset of 1,605 cervical WSIs was used; the model was evaluated on three test sets with ROC AUC in the range 0.89-0.96
Yaman and Tuncer [57] | 2022 | DarkNet | SIPaKMeD dataset (4049); Mendeley dataset | Five-class | SIPaKMeD accuracy: 95.43%; Mendeley accuracy: 99.23% | A cervical cancer detection method based on exemplar pyramid deep feature extraction is proposed

3.3 Colposcopy Screening

Colposcopy magnifies the fully exposed cervix 5 to 40 times with a dedicated instrument so that the cervix, especially the transformation zone, can be visually assessed in real time to detect cervical intraepithelial neoplasia (CIN) or squamous intraepithelial lesions (SIL) and invasive cancer [58]. A comprehensive colposcopy should document visibility of the cervix, visibility of the squamocolumnar junction, presence or absence of acetowhitening, presence or absence of lesions, visibility, size, and location of lesions, vascular changes, other characteristics of lesions, and the colposcopic impression [59].

3.3.1 Segmentation of colposcopy images

Automatic segmentation of acetowhite lesions in colposcopy images is critical for assisting gynecologists in grading cervical intraepithelial neoplasia and cervical cancer [60]. Research on colposcopy image segmentation using deep learning is still relatively sparse compared with research on colposcopy image classification, but existing studies have shown considerable results [61], [62]. They report high accuracy and specificity on both self-made and public datasets, which can help experts improve the efficiency and accuracy of detection and screening [63], [64]. Relevant research results are shown in Table 4.

Table 4. Relevant research on colposcopy image segmentation using deep learning

Author | Year | Method | Dataset | Results | Highlights
Yuan et al. [65] | 2020 | U-Net | Self-made dataset (22330) | Average accuracy on acetic acid images: 95.59%; on iodine images: 95.70% | An independent dataset of HD images was collected, and diagnostic accuracy was compared between colposcopists and the model
Guo et al. [66] | 2020 | Mask R-CNN; MaskX R-CNN | CVT (Costa Rica Vaccine Trial) dataset (3398); ALTS dataset (939); MobileODT dataset (1960) | Dice: 0.947; IoU: 0.901 | Two state-of-the-art deep learning-based object localization and segmentation methods, Mask R-CNN and MaskX R-CNN, were evaluated for automated cervix segmentation on three datasets
Yue et al. [67] | 2021 | AWL-CNN | Self-made dataset (3045) | Dice: 0.823 ± 0.129; precision: 0.928 ± 0.139 | A novel acetowhite (AW) lesion-sensing convolutional neural network (AWL-CNN) for segmentation of AW lesions in cervical images is presented
Liu et al. [68] | 2022 | DeepLab V3+ | Self-made dataset (280) | Average specificity: 94.9%; average accuracy: 91.2%; average sensitivity: 78.2% | The cervical region was first extracted from the original colposcopy images with the k-means clustering algorithm, and the AW region was then segmented from the cervical region with DeepLabV3+

3.3.2 Classification of colposcopy images

Colposcopy is prone to misdiagnosis and missed diagnosis because of its poor consistency with pathology. In addition, colposcopy performed by an inexperienced clinician can cause potential harm (including vaginal discharge, pain or even bleeding, and infection), so physicians need adequate training to reach a sufficient level of proficiency. However, the long training time and the shortage of qualified or skilled personnel pose a great challenge to the application of colposcopic diagnosis. Deep learning has already been applied widely and effectively in medical imaging, so it can also be applied to colposcopy classification tasks, helping to overcome the bottlenecks of traditional colposcopy and thereby significantly improving its diagnostic performance. Many studies have shown that deep learning achieves good results in classifying colposcopy images on both self-made and public datasets, which can greatly improve the classification and diagnostic efficiency of doctors and experts. Relevant research is shown in Table 5.
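The colposcopy classification studies in Table 5 report accuracy, sensitivity, and specificity; for a binary task these follow directly from the confusion matrix, as in this small sketch with made-up labels:

```python
# Accuracy, sensitivity, and specificity from a binary confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth and predicted labels (1 = lesion, 0 = normal).
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true positive rate (recall)
specificity = tn / (tn + fp)   # true negative rate
print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```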

Table 5. Relevant research on colposcopy image classification using deep learning

Author | Year | Method | Dataset | Classification category | Results | Highlights
Kudva et al. [69] | 2019 | AlexNet; VGG-16 | IEEE Dataport cervigram (3339) | Two-class and four-class | Two-class accuracy: 91.66%; four-class accuracy: 83.33% | Proposed a novel hybrid transfer learning technique
Buiu et al. [70] | 2020 | MobileNetV2 | 253 people | Binary | Accuracy: 94.1%; sensitivity: 95.6%; specificity: 83.3% | Proposed an automated colposcopy image analysis framework based on an ensemble of MobileNetV2 networks
Saini et al. [71] | 2020 | ColpoNet | Self-made dataset (400) | Three-class | Precision: 81.353% | Proposed a deep learning-based ColpoNet network for cervical cancer classification from colposcopy images
Luo et al. [72] | 2020 | DenseNet121; ResNet50 | Self-made dataset (3920) | Four-class | Accuracy: 79%; sensitivity: 70.4%; specificity: 82.2% | A multiple-CNN (DenseNet121, ResNet50) decision feature integration system (MDFI) for diagnosis of cervical precancerous lesions is proposed
Yu et al. [73] | 2021 | C-GCNN | Self-made MSCI dataset (679) | Four-class | Accuracy: 96.87%; sensitivity: 95.68%; specificity: 98.72% | A multi-state colposcopy image dataset (MSCI) is presented, and a CIN grading model, C-GCNN, is established on it
Adweb et al. [74] | 2021 | PReLU-ResNet | Intel and MobileODT cervical cancer screening datasets | Three-class | Accuracy: 100%; sensitivity: 97.8%; specificity: 98.1% | Three residual networks of the same structure were constructed using different activation functions
Park et al. [75] | 2021 | ResNet-50; XGB; SVM; RF | Self-made dataset (4119) | Binary | Accuracy: ResNet-50 91%, XGB 74%, SVM 76%, RF 71% | Compared the performance of machine learning and deep learning models
Liu et al. [76] | 2021 | ResNet | Self-made dataset (15276) | Binary | NC vs. LSIL+: accuracy 88.6%, sensitivity 93.2%, specificity 84.6%; HSIL- vs. HSIL+: accuracy 80.7%, sensitivity 82.3%, specificity 80% | A residual neural network (ResNet) score was calculated for each patient, and the results were compared with the diagnoses of a senior and a junior colposcopist

Artificial intelligence applied to early cervical cancer screening, including HPV genotyping and testing, cervical cytology screening, and colposcopy screening, has achieved good research results and has promising application prospects. It can greatly improve the screening efficiency and accuracy of doctors and experts and helps to reduce missed diagnoses and misdiagnoses caused by limitations in physicians' experience, numbers, and medical conditions. However, the clinical data currently available are generally of limited quality, and privacy issues such as the protection of patient data also need to be considered. In the future, improving the quality of clinical image data could further improve the accuracy and applicability of AI in early cervical cancer screening. The field has good research prospects and is expected to further improve the availability and diagnostic accuracy of early cervical cancer screening, especially in underdeveloped regions.

4. Artificial Intelligence in the Diagnosis and Treatment of Cervical Cancer

Cross-sectional imaging modalities generally include computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET-CT). They are important tools for studying prognostic factors of cervical cancer, such as lymph node status, parametrial invasion, endocervical canal extension, tumor size, and pelvic sidewall involvement. Imaging indications also include cervical cancer follow-up, assessment of tumor response to treatment, and selection of appropriate candidates for less radical surgery, such as radical trachelectomy to preserve fertility. MRI is the preferred imaging method for local assessment of cervical cancer, while CT is also effective for assessing extrauterine spread of the disease [77].

At present, however, because of limitations in physicians' experience, numbers, and medical conditions, the accuracy and generalizability of existing diagnosis and treatment are relatively low, so early screening, diagnosis, and subsequent individualized treatment and prognosis assessment of cervical cancer still face enormous challenges. This section mainly introduces the application of AI to cervical cancer diagnosis and treatment on magnetic resonance imaging (MRI), including lesion segmentation, local staging, and diagnosis of lymph node metastasis (LNM), as well as applications on computed tomography (CT) images.

4.1 Magnetic Resonance Imaging (MRI)

MRI has been shown to be highly accurate in preoperative staging of cervical cancer [78], [79]. Therefore, MRI is the preferred method for local staging, treatment response evaluation, tumor recurrence detection, and follow-up of cervical cancer patients [80]. The main purpose of MRI is to determine the presence of parametrial infiltration and lymph node metastasis (LNM) [81].

4.1.1 Segmentation of cervical cancer lesions and local staging on MRI

MRI has higher soft tissue resolution than CT; it can determine tumor size and its relation to adjacent pelvic structures and assess parametrial invasion and involvement of the uterus and vagina [82]. Models such as SVM, U-Net, 3D-CNN, and CapsNet have reached accuracies of more than 90% on self-built datasets. Related research is shown in Table 6.
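As an architectural illustration of the U-Net-style segmentation models listed in Table 6, the sketch below shows a heavily reduced encoder-decoder with a single skip connection; the channel counts and input size are arbitrary choices and not those of any cited study:

```python
# Heavily reduced U-Net-style encoder-decoder for 2D MRI slice segmentation.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)                        # encoder
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)   # decoder upsampling
        self.dec = conv_block(32, 16)                       # after skip concatenation
        self.head = nn.Conv2d(16, 1, 1)                     # 1-channel tumor mask logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))     # skip connection
        return self.head(d)

# One hypothetical 128x128 single-channel MR slice.
slice_ = torch.randn(1, 1, 128, 128)
print(TinyUNet()(slice_).shape)  # torch.Size([1, 1, 128, 128])
```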

Table 6. Related research on segmentation of cervical cancer lesions and local staging, as well as diagnosis of cervical cancer LNM on MRI

| Author | Publication year | Learning goals | Data set | Method | Results |
| --- | --- | --- | --- | --- | --- |
| Wang et al. [83] | 2020 | Prediction of parametrial invasion | 137 patients | SVM model | Training set AUC: T2WI 0.797; T2WI + DWI 0.780 (95% CI). Validation set AUC: T2WI 0.946 (95% CI); T2WI + DWI 0.921 (95% CI) |
| Lin et al. [84] | 2020 | Evaluation of U-Net for fully automated localization and segmentation of cervical tumors in magnetic resonance (MR) images | 169 patients | U-Net model | Dice coefficient: 82%; sensitivity: 89%; positive predictive value: 92% |
| Wang et al. [85] | 2021 | Identification and segmentation of cervical cancer lesions | 80 patients | 3D-CNN model | Precision: 93.11% |
| Cibi et al. [86] | 2022 | Local staging of cervical cancer | 12,771 images | CapsNet model | Precision: 90.28% |
| Yan et al. [87] | 2019 | Assisted diagnosis of lymph node metastasis | 153 patients | Radiomics model | Accuracy: 78.4%; sensitivity: 86.7%; specificity: 75% |
| Wang et al. [88] | 2019 | Assisted diagnosis of lymph node metastasis | 96 patients | SVM model | C-index: 0.922 (P = 3.412 × 10⁻²) |
| Wu et al. [89] | 2020 | Assisted diagnosis of lymph node metastasis | 479 patients | DL model | AUC: 0.933 (95% CI) |
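The segmentation results in Table 6 are reported with overlap metrics such as the Dice coefficient, sensitivity and positive predictive value. As a reference for how these can be computed from a predicted and a ground-truth binary mask, here is a small NumPy sketch; the helper name segmentation_metrics and the synthetic masks are hypothetical and shown only for illustration.

```python
# Overlap metrics for binary segmentation masks (NumPy sketch).
import numpy as np

def segmentation_metrics(pred, truth):
    """pred, truth: boolean arrays of the same shape (2D slice or 3D volume)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0   # recall of lesion voxels
    ppv = tp / (tp + fp) if (tp + fp) else 1.0           # positive predictive value
    return dice, sensitivity, ppv

# Toy example with a synthetic 64x64 mask pair.
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True                 # ground-truth lesion region
pred = np.zeros_like(truth)
pred[22:42, 18:38] = True                  # slightly shifted prediction
print(segmentation_metrics(pred, truth))   # Dice, sensitivity and PPV around 0.81
```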

4.1.2 Diagnosis of cervical cancer LNM

MRI also assists in the early diagnosis of cervical cancer LNM. Although CT and MRI achieve an accuracy of only 83% to 85% in assessing lymph node involvement, their specificity is particularly high, ranging from 66% to as much as 93% [90]. In 2018, the cervical cancer staging system was revised and lymph node status was included as a staging criterion for the first time; cervical cancer with lymph node involvement on imaging or pathology is classified as stage IIIC [91]. Radiomics has now advanced to the point where it can bridge the gap between medical imaging and precision medicine, extracting the wealth of information hidden in medical images by combining statistical analysis with sophisticated image analysis tools [92]. Many scholars have achieved good accuracy using radiomics models, SVM models, DL models and others on self-built datasets. Related research is shown in Table 6.
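To make the general workflow behind such radiomics/SVM classifiers concrete, the sketch below trains a linear SVM on standardized features and reports the ROC AUC. It is not the pipeline of any cited study: the cohort size, feature count and synthetic "radiomic" feature matrix are assumptions used purely for demonstration.

```python
# Hedged sketch of a radiomics-style LNM classifier: standardized features
# fed to an SVM, evaluated with ROC AUC (synthetic data, illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_patients, n_features = 200, 30                 # hypothetical cohort and feature count
X = rng.normal(size=(n_patients, n_features))    # stand-in for MRI-derived radiomic features
# Synthetic binary label loosely tied to two features, mimicking LNM status.
y = (0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Test ROC AUC: {auc:.3f}")
```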

4.2 Computed Tomography (CT)

Accurate detection of cervical cancer plays a crucial role in treatment and prognosis prediction, so the accuracy and timeliness of detection are very important [93]. Computed tomography (CT) and fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) play an important role in cervical cancer detection because of their superior sensitivity and specificity [94]. However, traditional FDG-PET/CT analysis is time-consuming and inefficient, as it requires the interpretation of hundreds of images for each patient. With advances in computer hardware and algorithms, especially the development of deep learning [95], image processing techniques [96] now play an indispensable role in many fields of clinical medicine [97]. Applying these techniques to the diagnosis of cervical cancer can help clinicians make judgements, reduce workload and improve diagnostic accuracy [98]. Many scholars have achieved high accuracy using CNN, DpnUNet, YoloV5 and other deep learning models on self-built datasets. Related research is shown in Table 7.

Table 7. Related research on the application of computed tomography in the diagnosis of cervical cancer

| Author | Publication year | Learning goals | Case load | Method | Results |
| --- | --- | --- | --- | --- | --- |
| Shen et al. [99] | 2019 | Early prediction of local and distant failure in patients with locally advanced cervical cancer | 142 | CNN | Local relapse: sensitivity 71%, specificity 93%; distant metastasis: sensitivity 77%, specificity 90% |
| Liu et al. [100] | 2021 | Automatic segmentation of the clinical target volume (CTV) contour of cervical cancer | CT: 237 | DpnUNet | Dice similarity coefficient (DSC): 0.88; 95th percentile Hausdorff distance (95HD): 3.46 mm |
| Ming et al. [101] | 2022 | Image registration, multimodal image fusion, and detection of lesion objects | CT/PET: 220 | YoloV5 | Average precision at an IoU threshold of 0.5 (AP50): 84.3 |
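The CTV segmentation result in Table 7 is reported with the Dice similarity coefficient and the 95th percentile Hausdorff distance (95HD). The sketch below shows one common way 95HD can be computed between two binary masks: boundary voxels are extracted by erosion and surface-to-surface distances are found with a KD-tree. This is an illustrative implementation choice rather than the method of the cited study, and the isotropic 1 mm voxel spacing and the toy masks are assumptions.

```python
# Sketch: 95th percentile Hausdorff distance (95HD) between two binary masks.
# Boundary voxels via erosion; nearest-surface distances via a KD-tree.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def boundary_points(mask, spacing):
    # Boundary = mask voxels removed by one erosion step, scaled to mm.
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary) * np.asarray(spacing)

def hd95(pred, truth, spacing=(1.0, 1.0, 1.0)):
    p = boundary_points(pred.astype(bool), spacing)
    t = boundary_points(truth.astype(bool), spacing)
    d_pt = cKDTree(t).query(p)[0]   # pred boundary -> nearest truth boundary
    d_tp = cKDTree(p).query(t)[0]   # truth boundary -> nearest pred boundary
    return np.percentile(np.hstack([d_pt, d_tp]), 95)

# Toy 3D example: two slightly offset cubes, assumed 1 mm isotropic voxels.
truth = np.zeros((40, 40, 40), dtype=bool)
truth[10:30, 10:30, 10:30] = True
pred = np.zeros_like(truth)
pred[12:32, 10:30, 10:30] = True
print(f"95HD: {hd95(pred, truth):.2f} mm")
```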

Artificial intelligence has been extensively studied and applied in the diagnosis and treatment of cervical cancer. It has achieved good results in lesion segmentation and local staging on MRI, early diagnosis of cervical cancer LNM, and CT-based analysis, greatly improving the accuracy and specificity of early prediction and diagnosis, increasing the efficiency of cervical cancer diagnosis and treatment, helping clinicians make decisions, reducing the workload of physicians and lowering misdiagnosis rates. However, problems remain, such as the lack of high-quality clinical data and the need to improve the reliability and stability of models; issues such as the protection and security of patient data must also be considered. Research on artificial intelligence for the treatment and prognosis prediction of cervical cancer is still relatively scarce, posing greater challenges but also offering considerable research prospects.

5. Conclusions

In summary, artificial intelligence has performed well in computer vision and imaging, especially in the medical field, helping clinicians make decisions, reducing the workload of physicians and lowering misdiagnosis rates. It has achieved good results in early screening, diagnosis, treatment and prognosis prediction of cervical cancer, improving the specificity and accuracy of screening and diagnosis, and has shown good applicability. Overall, while improving the specificity and accuracy of screening and diagnosis, it has overcome problems such as time constraints, limited numbers of specialists and the subjective bias of physicians, which will enable cervical cancer screening to be implemented in resource-poor areas and thereby significantly reduce the incidence of cervical cancer.

However, the application of artificial intelligence still faces a series of problems: a lack of high-quality clinical data, obstacles in the management of medical data, insufficient technical maintenance, the need to improve the reliability and stability of models, and the fact that models have not yet been widely adopted in clinical practice. At the same time, the use of artificial intelligence in cervical cancer screening and diagnosis raises ethical and privacy issues, such as the protection and security of patient data.

Artificial intelligence has promising prospects in cervical cancer screening, especially in cervical cytology, where the application of convolutional neural networks (CNNs) is relatively mature. CNNs have achieved great success in cell detection, segmentation, classification and region of interest (ROI) extraction [34], and the assistance of related models can greatly improve the efficiency of cervical cytology experts. Segmentation techniques, however, still face many challenges and may be an important direction of future development; effective segmentation can further improve the accuracy and reliability of artificial intelligence in cervical cancer screening and diagnosis. Beyond early screening and diagnosis, artificial intelligence can also be applied to the treatment, prognosis prediction and prevention of cervical cancer, which will be an important future research direction with good application prospects. In the future, artificial intelligence is expected to greatly improve predictive capability for cervical cancer, further strengthen screening and diagnosis, optimize the staging system, improve patient prognosis, and be fully applied to the early diagnosis, treatment and prognosis prediction of cervical cancer.

Funding
This paper was funded by Baoding Science and Technology Planning Project (Grant No.: 2141ZF306, 2141ZF135); Youth Foundation of Affiliated Hospital of Hebei University (Grant No.: 2022QC54).
Data Availability

The data used to support the research findings are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References
1.
H. Sung, J. Ferlay, R. L. Siegel, M. Laversanne, I. Soerjomataram, A. Jemal, and F. Bray, “Global cancer statistics 2020: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA Cancer J. Clin., vol. 71, no. 3, pp. 209–249, 2021. [Google Scholar] [Crossref]
2.
M. Brisson, J. J. Kim, K. Canfell, et al., “Impact of HPV vaccination and cervical screening on cervical cancer elimination: A comparative modelling analysis in 78 low-income and lower-middle-income countries,” Lancet, vol. 395, no. 10224, pp. 575–590, 2020. [Google Scholar] [Crossref]
3.
M. Schiffman, P. E. Castle, J. Jeronimo, A. C. Rodriguez, and S. Wacholder, “Human papillomavirus and cervical cancer,” Lancet, vol. 370, no. 9590, pp. 890–907, 2007. [Google Scholar] [Crossref]
4.
S. Moradi, “Modelling suggests that high human papilloma virus (HPV) vaccination coverage in combination with high uptake screening can lead to cervical cancer elimination in most low-income and lower-middle-income countries (LMICs) by the end of the century,” Evid.-Based Nurs., vol. 24, no. 3, 2020. [Google Scholar] [Crossref]
5.
C. W. E. Redman, V. Kesic, M. E. Cruickshank, et al., “European consensus statement on essential colposcopy,” Eur. J. Obstet. Gynecol. Reprod. Biol., vol. 256, pp. 57–62, 2021. [Google Scholar] [Crossref]
6.
F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA Cancer J. Clin., vol. 68, no. 6, pp. 394–424, 2018. [Google Scholar] [Crossref]
7.
A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp. 115–118, 2017. [Google Scholar] [Crossref]
8.
A. Janowczyk and A. Madabhushi, “Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases,” J. Pathol. Inform., vol. 7, no. 1, p. 29, 2016. [Google Scholar] [Crossref]
9.
A. Hosny, C. Parmar, J. Quackenbush, L. H. Schwartz, and H. J. W. L. Aerts, “Artificial intelligence in cancer imaging: Clinical challenges and applications,” CA Cancer J. Clin., vol. 68, no. 2, pp. 127–157, 2018. [Google Scholar] [Crossref]
10.
Y. Yuan, H. Failmezger, M. O. Rueda, et al., “Artificial intelligence for cancer-associated fibroblasts,” J. Am. Med. Inform. Assoc., vol. 21, no. 4, pp. 657–665, 2014. [Google Scholar] [Crossref]
11.
A. N. Ramesh, C. Kambhampati, J. R. T. Monson, and P. J. Drew, “Artificial intelligence in medicine,” Ann R Coll Surg Engl., vol. 86, pp. 334–338, 2004. [Google Scholar] [Crossref]
12.
A. T. Greenhill and B. R. Edmunds, “A primer of artificial intelligence in medicine,” Techn Gastrointest Endosc, vol. 22, pp. 85–89, 2020. [Google Scholar] [Crossref]
13.
Amisha, P. Malik, M. Pathania, and V. K. Rathaur, “Overview of artificial intelligence in medicine,” J. Family Med Prim Care, vol. 8, pp. 2328–2331, 2019. [Google Scholar] [Crossref]
14.
P. Hamet and J. Tremblay, “Artificial intelligence in medicine,” Metabolism, vol. 69S, pp. S36–S40, 2017. [Google Scholar] [Crossref]
15.
P. P. Shinde and S. Shah, “A review of machine learning and deep learning applications,” in 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 2018, pp. 1–6. [Google Scholar] [Crossref]
16.
F. Emmert-Streib, Z. Yang, H. Feng, S. Tripathi, and M. Dehmer, “An introductory review of deep learning for prediction models with big data,” Front. Artif. Intell., vol. 3, pp. 1–9, 2020. [Google Scholar] [Crossref]
17.
R. Hamamoto, K. Suvarna, M. Yamada, et al., “Application of artificial intelligence technology in oncology: towards the establishment of precision medicine,” Cancers (Basel), vol. 12, no. 12, pp. 1–21, 2020. [Google Scholar] [Crossref]
18.
K. H. Yu, A. L. Beam, and I. S. Kohane, “Artificial intelligence in healthcare,” Nat. Biomed. Eng., vol. 2, pp. 719–731, 2018. [Google Scholar] [Crossref]
19.
K. Yu, N. Hyun, B. Fetterman, T. Lorey, T. R. Raine-Bennett, H. Zhang, R. E. Stamps, N. E. Poitras, W. Wheeler, B. Befano, J. C. Gage, P. E. Castle, N. Wentzensen, and M. Schiffman, “Automated cervical screening and triage, based on HPV testing and computer-interpreted cytology,” JNCI, vol. 110, no. 11, pp. 1222–1228, 2018. [Google Scholar] [Crossref]
20.
N. Wentzensen, B. Lahrmann, M. A. Clarke, et al., “Accuracy and efficiency of deep-learning-based automation of dual stain cytology in cervical cancer screening,” JNCI, vol. 113, no. 1, pp. 72–79, 2021. [Google Scholar] [Crossref]
21.
L. R. Long, “Introduction to neural networks and deep learning,” 2021. [Google Scholar]
22.
J. Melnikow, J. T. Henderson, B. U. Burda, C. A. Senger, S. Durbin, and M. S. Weyrich, “Screening for cervical cancer with high-risk human papillomavirus testing: updated evidence report and systematic review for the us preventive services task force,” JAMA, vol. 320, pp. 687–705, 2018. [Google Scholar] [Crossref]
23.
G. S. Ogilvie, D. van Niekerk, M. Krajden, L. W. Smith, D. Cook, L. Gondara, K. Ceballos, D. Quinlan, M. Lee, R. E. Martin, L. Gentile, S. Peacock, G. C. E. Stuart, E. L. Franco, and A. J. Coldman, “Effect of screening with primary cervical HPV testing vs cytology testing on high-grade cervical intraepithelial neoplasia at 48 months: the HPV focal randomized clinical trial,” JAMA, vol. 320, pp. 43–52, 2018. [Google Scholar] [Crossref]
24.
“Human papillomavirus-associated cancers-united states, 2004-2008,” MMWR Morb Mortal Wkly Rep., 2012. [Google Scholar]
25.
P. E. Castle, M. H. Stoler, T. C. J. Wright, A. Sharma, T. L. Wright, and C. M. Behrens, “Performance of carcinogenic human papillomavirus (HPV) testing and HPV16 or HPV18 genotyping for cervical cancer screening of women aged 25 years and older: a subanalysis of the Athena study,” Lancet Oncol., vol. 12, no. 9, pp. 880–890, 2011. [Google Scholar] [Crossref]
26.
“Guidelines for Cervical Cancer Prevention and Screening,” 2016, [Online]. Available: http://www.hkcog.org.hk/hkcog/Download/Cervical_Cancer_Prevention_and_Screening_revised_November_2016.pdf [Google Scholar]
27.
C. M. Castro, H. Im, H. Lee, M. Avila-Wallace, R. Weissleder, and T. Randall, “Harnessing artificial intelligence and digital diffraction to advance pointof-care HPV 16 and 18 detection,” Gynecol. Oncol., vol. 154, p. 38, 2019. [Google Scholar] [Crossref]
28.
Y. Miyagi, K. Takehara, Y. Nagayasu, and T. Miyake, “Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images combined with HPV types,” Oncol. Lett., vol. 19, no. 2, pp. 1602–1610, 2019. [Google Scholar] [Crossref]
29.
G. Bogani, A. Ditto, F. Martinelli, S. Mauroa, C. Valentina, L. R. M. Umberto, T. Francesca, L. Claudia, B. Chiarae, S. Cono, L. Domenica, and R. Francesco, “Artificial intelligence estimates the impact of human papillomavirus types in influencing the risk of cervical dysplasia recurrence: progress toward a more personalized approach,” Eur. J. Cancer Prev., vol. 28, no. 2, pp. 81–86, 2018. [Google Scholar] [Crossref]
30.
X. M. He, T. Huang, T. Wang, H. Du, M. Y. Jiang, and H. Liang, “Analysis of artificial intelligence-assisted cervical cytology screening combined with HPV detection in cervical cancer screening,” Acta Acad. Med. Xuzhou, vol. 42, pp. 273–278, 2022. [Google Scholar] [Crossref]
31.
A. A. Hashmi, S. Naz, O. Ahmed, S. R. Yaqeen, M. Irfan, M. G. Asif, A. Kamal, and N. Faridi, “Comparison of liquid-based cytology and conventional papanicolaou smear for cervical cancer screening: An experience from Pakistan,” Cureus, vol. 12, no. 12, 2020. [Google Scholar] [Crossref]
32.
S. Cox, “Guidelines for papanicolaou test screening and follow-up,” J. Midwifery Womens Health, vol. 57, no. 1, pp. 86–89, 2012. [Google Scholar] [Crossref]
33.
P. Phaliwong, P. Pariyawateekul, N. Khuakoonratt, W. Sirichai, K. Bhamarapravatana, and K. Suwannarurk, “Cervical cancer detection between conventional and liquid based cervical cytology: A 6-year experience in northern Bangkok Thailand,” Asian Pac. J. Cancer Prev., vol. 19, no. 5, pp. 1331–1336, 2018. [Google Scholar] [Crossref]
34.
R. S. Hoda, K. Loukeris, and F. W. Abdul-Karim, “Gynecologic cytology on conventional and liquid-based preparations: A comprehensive review of similarities and differences,” Diagn. Cytopathol., vol. 41, no. 3, pp. 257–278, 2013. [Google Scholar] [Crossref]
35.
A. M. Marchevsky and P. Bartels, “Image analysis: A primer for pathologists,” 1994. [Google Scholar]
36.
D. Saslow, D. Solomon, H. W. Lawson, M. Killackey, S. L. Kulasingam, J. Cain, F. A. Garcia, A. T. Moriarty, A. G. Waxman, and D. C. Wilbur, “American cancer society, American society for colposcopy and cervical pathology, and American society for clinical pathology screening guidelines for the prevention and early detection of cervical cancer,” CA Cancer J. Clin., vol. 62, pp. 147–172, 2012. [Google Scholar] [Crossref]
37.
E. Davey, A. Barratt, L. Irwig, S. F. Chan, P. Macaskill, P. Mannes, and A. M. Saville, “Effect of study design and quality on unsatisfactory rates, cytology classifications, and accuracy in liquid-based versus conventional cervical cytology: A systematic review,” Lancet, vol. 367, no. 9505, pp. 122–132, 2006. [Google Scholar] [Crossref]
38.
A. Gençtav, S. Aksoy, and S. Önder, “Unsupervised segmentation and classification of cervical cell images,” Pattern Recogn., vol. 45, no. 12, pp. 4151–4168, 2012. [Google Scholar] [Crossref]
39.
T. M. Elsheikh, R. M. Austin, D. F. Chhieng, F. S. Miller, A. T. Moriarty, and A. A. Renshaw, “American society of cytopathology workload recommendations for automated pap test screening: Developed by the productivity and quality assurance in the era of automated screening task force,” Diagn. Cytopathol., vol. 41, no. 2, pp. 174–178, 2013. [Google Scholar] [Crossref]
40.
R. Lozano, “Comparison of computer-assisted and manual screening of cervical cytology,” Gynecol. Oncol., vol. 104, no. 1, pp. 134–138, 2007. [Google Scholar] [Crossref]
41.
M. A. Aswathy and M. Jagannath, “Detection of breast cancer on digital histopathology images: Present status and future possibilities,” Inf. Med. Unlocked, vol. 8, pp. 74–79, 2017. [Google Scholar] [Crossref]
42.
Y. Tan, GPU-Based Parallel Implementation of Swarm Intelligence Algorithms. San Mateo, CA, USA: Morgan Kaufmann, 2016. [Online]. Available: https://dl.acm.org/doi/10.5555/3033080 [Google Scholar]
43.
Y. Y. Song, L. Zhu, J. Qin, B. Y. Lei, B. Sheng, and K. S. Choi, “Segmentation of overlapping cytoplasm in cervical smear images via adaptive shape priors extracted from contour fragments,” IEEE Trans. Med. Imaging, vol. 38, no. 12, pp. 2849–2862, 2019. [Google Scholar] [Crossref]
44.
T. Wan, S. Xu, C. Sang, et al., “Accurate segmentation of overlapping cells in cervical cytology with deep convolutional neural networks,” Neurocomputing, vol. 365, pp. 157–170, 2019. [Google Scholar] [Crossref]
45.
C. W. Wang, Y. A. Liou, Y. J. Lin, C. C. Chang, P. H. Chu, Y. C. Lee, C. H. Wang, and T. K. Chao, “Artificial intelligence-assisted fast screening cervical high grade squamous intraepithelial lesion and squamous cell carcinoma diagnosis and treatment planning,” Sci. Rep., vol. 11, 2021. [Google Scholar] [Crossref]
46.
Y. Zhao, C. Fu, S. Xu, L. Cao, and H. F. Ma, “LFANet: Lightweight feature attention network for abnormal cell segmentation in cervical cytology images,” Comput. Biol. Med., vol. 145, p. 105500, 2022. [Google Scholar] [Crossref]
47.
E. Zawadzka-Gosk, K. Wolk, and W. Czarnowski, “Deep learning in state-of-the-art image classification exceeding 99 percent accuracy,” in New Knowledge in Information Systems and Technologies. WorldCIST’19 2019. Advances in Intelligent Systems and Computing. Springer, pp. 946–957, 2019. [Google Scholar] [Crossref]
48.
P. Wang, J. X. Wang, Y. M. Li, L. Y. Li, and H. H. Zhang, “Adaptive pruning of transfer learned deep convolutional neural network for classification of cervical pap smear images,” IEEE Access, vol. 8, pp. 50674–50683, 2020. [Google Scholar] [Crossref]
49.
P. Huang, S. L. Zhang, M. Li, J. Wang, C. L. Ma, B. W. Wang, and X. Y. Lv, “Classification of cervical biopsy images basedon lasso and EL-SVM,” IEEE Access, vol. 8, pp. 24219–24228, 2020. [Google Scholar] [Crossref]
50.
N. Dong, L. Zhao, C. H. Wu, and J. F. Chang, “Inception v3 based cervical cell classification combined with artificially extracted features,” Appl. Soft Comput., vol. 93, p. 106311, 2020. [Google Scholar] [Crossref]
51.
N. Dong, M. D. Zhai, L. Zhao, and C. H. Wu, “Cervical cell classification based on the cart feature selection algorithm,” J. Ambient Intell. Humaniz. Comput., vol. 12, pp. 1837–1849, 2020. [Google Scholar] [Crossref]
52.
S. Liu, Z. Yuan, X. Qiao, Q. Liu, K. Song, B. H. Kong, and X. T. Su, “Light scattering pattern specific convolutional network static cytometry for label-free classification of cervical cells,” Cytom. Part A, vol. 99, no. 6, pp. 610–621, 2021. [Google Scholar] [Crossref]
53.
M. Rahaman, C. Li, Y. Yao, F. Kulwa, X. C. Wu, X. Y. Li, and Q. Wang, “Deepcervix: A deep learning-based framework for the classification of cervical cells using hybrid deep feature fusion techniques,” Comput. Biol. Med., vol. 136, 2021. [Google Scholar] [Crossref]
54.
W. Chen, W. M. Shen, L. Gao, and X. Y. Li, “Hybrid loss-constrained lightweight convolutional neural networks for cervical cell classification,” Sensors, vol. 22, no. 9, p. 3272, 2022. [Google Scholar] [Crossref]
55.
B. J. Cho, J. W. Kim, J. Park, G. Y. Kwon, M. Hong, S. H. Jang, H. Bang, G. Kim, and S. T. Park, “Automated diagnosis of cervical intraepithelial neoplasia in histology images via deep learning,” Diagnostics, vol. 12, no. 2, p. 548, 2022. [Google Scholar] [Crossref]
56.
F. Kanavati, N. Hirose, T. Ishii, A. Fukuda, S. Ichihara, and M. Tsuneki, “A deep learning model for cervical cancer screening on liquid-based cytology specimens in whole slide images,” Cancers, vol. 14, no. 5, p. 1159, 2022. [Google Scholar] [Crossref]
57.
O. Yaman and T. Tuncer, “Exemplar pyramid deep feature extraction based cervical cancer image classification model using pap-smear images,” Biomed. Signal Process. Control, vol. 73, p. 103428, 2022. [Google Scholar] [Crossref]
58.
M. Khan, C. Werner, T. Darragh, et al., “ASCCP colposcopy standards: Role of colposcopy, benefits, potential harms, and terminology for colposcopic practice,” J. Lower Genit. Tract Dis., vol. 21, no. 4, pp. 223–229, 2017. [Google Scholar] [Crossref]
59.
Y. Miyagi, K. Takehara, and T. Miyake, “Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images,” Mol. Clin. Oncol., vol. 11, no. 6, 2019. [Google Scholar] [Crossref]
60.
G. Ogilvie, C. Nakisige, W. Huh, R. Mehrotra, E. Franco, and J. Jeronimo, “Optimizing secondary prevention of cervical cancer: recent advances and future challenges,” Int. J. Gynaecol. Obstet., vol. 138, no. Suppl 1, pp. 15–19, 2017. [Google Scholar] [Crossref]
61.
M. Schiffman, J. Doorbar, N. Wentzensen, S. de Sanjose, C. Fakhry, B. J. Monk, M. A. Stanley, and S. Franceschi, “Carcinogenic human papillomavirus infection,” Nat. Rev. Dis. Primers, vol. 2, p. 16086, 2016. [Google Scholar] [Crossref]
62.
F. Zhao and Y. Qiao, “Cervical cancer prevention in China: A key to cancer control,” Lancet, vol. 393, no. 10175, pp. 969–970, 2019. [Google Scholar] [Crossref]
63.
W. L. Bi, A. Hosny, M. B. Schabath, et al., “Artificial intelligence in cancer imaging: clinical challenges and applications,” CA Cancer J. Clin., vol. 69, no. 2, pp. 127–157, 2019. [Google Scholar] [Crossref]
64.
C. J. Kelly, A. Karthikesalingam, M. Suleyman, G. Corrado, and D. King, “Key challenges for delivering clinical impact with artificial intelligence,” BMC Med., vol. 17, no. 1, p. 195, 2019. [Google Scholar] [Crossref]
65.
C. Yuan, Y. Yao, B. Cheng, et al., “The application of deep learning based diagnostic system to cervical squamous intraepithelial lesions recognition in colposcopy images,” Sci. Rep., vol. 10, no. 1, 2020. [Google Scholar] [Crossref]
66.
P. Guo, Z. Xue, L. R. Long, and S. Antani, “Cross-dataset evaluation of deep learning networks for uterine cervix segmentation,” Diagnostics, vol. 10, no. 1, p. 44, 2020. [Google Scholar] [Crossref]
67.
Z. Yue, S. Ding, X. Li, S. Yang, and Y. Zhang, “Automatic acetowhite lesion segmentation via specular reflection removal and deep attention network,” IEEE J. Biomed. Health Inform., vol. 25, no. 9, pp. 3529–3540, 2021. [Google Scholar] [Crossref]
68.
J. Liu, T. Liang, Y. Peng, G. Peng, L. Sun, L. Li, and H. Dong, “Segmentation of acetowhite region in uterine cervical image based on deep learning,” Technol. Health Care, vol. 30, no. 2, pp. 469–482, 2022. [Google Scholar] [Crossref]
69.
V. Kudva, K. Prasad, and S. Guruvare, “Hybrid transfer learning for classification of uterine cervix images for cervical cancer screening,” J. Digit. Imaging, vol. 33, pp. 619–631, 2020. [Google Scholar] [Crossref]
70.
C. Buiu, V. R. Dănăilă, and C. N. Răduţă, “Mobilenetv2 ensemble for cervical precancerous lesions classification,” Processes, vol. 8, no. 5, p. 595, 2020. [Google Scholar] [Crossref]
71.
S. K. Saini, V. Bansal, R. Kaur, and M. Juneja, “Colponet for automated cervical cancer screening using colposcopy images,” Mach. Vis. Appl., vol. 31, p. 15, 2020. [Google Scholar] [Crossref]
72.
Y. M. Luo, T. Zhang, P. Li, P. M. Sun, B. H. Dong, and G. Ruan, “Mdfi: Multi-CNN decision feature integration for diagnosis of cervical precancerous lesions,” IEEE Access, pp. 29616–29626, 2020. [Google Scholar] [Crossref]
73.
Y. Yu, J. Ma, W. D. Zhao, Z. M. Li, and S. Ding, “MSCI: A multistate dataset for colposcopy image classification of cervical cancer screening,” Int. J. Med. Inform., vol. 146, no. 1, p. 104352, 2020. [Google Scholar] [Crossref]
74.
K. Adweb, N. Cavus, and B. Sekeroglu, “Cervical cancer diagnosis using very deep networks over different activation functions,” IEEE Access, vol. 9, pp. 46612–46625, 2021. [Google Scholar] [Crossref]
75.
Y. R. Park, Y. J. Kim, W. Ju, K. Nam, S. Kim, and K. G. Kim, “Comparison of machine and deep learning for the classification of cervical cancer based on cervicography images,” Sci. Rep., vol. 11, 2021. [Google Scholar] [Crossref]
76.
L. Liu, Y. Wang, X. L. Liu, S. Han, L. Jia, L. H. Meng, Z. Y. Yang, W. Chen, Y. Z. Zhang, and X. Qiao, “Computer-aided diagnostic system based on deep learning for classifying colposcopy images,” Ann. Transl. Med., vol. 9, no. 13, 2021. [Google Scholar] [Crossref]
77.
C. Bourgioti, K. Chatoupis, and L. Moulopoulos, “Current imaging strategies for the evaluation of uterine cervical cancer,” World J. Radiol., vol. 8, no. 4, 2016. [Google Scholar] [Crossref]
78.
S. H. Choi, S. H. Kim, H. J. Choi, B. K. Park, and H. J. Lee, “Preoperative magnetic resonance imaging staging of uterine cervical carcinoma: Results of prospective study,” J. Comput. Assist. Tomogr., vol. 28, no. 5, pp. 620–627, 2004. [Google Scholar] [Crossref]
79.
H. Hricak, C. Gatsonis, D. S. Chi, M. A. Amendola, K. Brandt, L. H. Schwartz, S. Koelliker, E. S. Siegelman, J. J. Brown, R. B. McGhee Jr, R. Iyer, K. M. Vitellas, B. Snyder, H. J. Long III, J. V. Fiorica, and D. G. Mitchell, “Role of imaging in pretreatment evaluation of early invasive cervical cancer: Results of the intergroup study American college ofradiology imaging network 6651-gynecologic oncology group 183,” J. Clin. Oncol., vol. 23, no. 36, pp. 9329–9337, 2005. [Google Scholar] [Crossref]
80.
J. Merz, M. Bossart, F. Bamberg, and M. Eisenblaetter, “Revised figo staging for cervical cancer - a new role for MRI,” Rofo, vol. 192, no. 10, pp. 937–944, 2020. [Google Scholar] [Crossref]
81.
P. Cohen, A. Jhingran, A. Oaknin, and L. Denny, “Cervical cancer,” Lancet, vol. 393, no. 10167, pp. 169–182, 2019. [Google Scholar] [Crossref]
82.
E. Kim and X. Huang, “A data driven approach to cervigram image analysis and classification,” in Lecture Notes in Computational Vision and Biomechanics, Netherlands: Springer, 2013, pp. 1–13. [Google Scholar] [Crossref]
83.
T. Wang, T. Gao, H. Guo, X. B. Zhou, J. Tian, L. Y. Huang, and M. Zhang, “Preoperative prediction of parametrial invasion in early-stage cervical cancer with MRI-based radiomics nomogram,” Eur. Radiol., vol. 30, no. 3, 2020. [Google Scholar] [Crossref]
84.
Y. Lin, C. Lin, H. Y. Lu, H. J. Chiang, H. K. Wang, Y. T. Huang, S. H. Ng, J. H. Hong, T. C. Yen, C. H. Lai, and G. G. Lin, “Deep learning for fully automated tumor segmentation and extraction of magnetic resonance radiomics features in cervical cancer,” Eur. Radiol., vol. 30, no. 3, pp. 1297–1305, 2019. [Google Scholar] [Crossref]
85.
B. Wang, Y. Y. Zhang, C. Y. Wu, and F. Wang, “Multimodal MRI analysis of cervical cancer on the basis of artificial intelligence algorithm,” Contrast Media Mol. Imaging, vol. 2021, 2021. [Google Scholar] [Crossref]
86.
A. Cibi and R. J. Rose, “Classification of stages in cervical cancer MRI by customized CNN and transfer learning,” Cogn. Neurodyn., vol. 2022, pp. 1–9, 2022. [Google Scholar] [Crossref]
87.
Y. Yan, T. Yu, R. Zhang, R. T. Dong, Q. Y. Hu, T. Yu, F. Liu, Y. H. Luo, and Y. Dong, “Feasibility of an ADC-based radiomics model for predicting pelvic lymph node metastases in patients with stage IB–IIA cervical squamous cell carcinoma,” Br. J. Radiol., vol. 2019, 2019. [Google Scholar] [Crossref]
88.
T. Wang, T. T. Gao, J. B. Yang, X. J. Yan, Y. B. Wang, X. B. Zhou, J. Tian, L. Y. Huang, and M. Zhang, “Preoperative prediction of pelvic lymph nodes metastasis in early-stage cervical cancer using radiomics nomogram developed based on T2-weighted MRI and diffusion-weighted imaging,” Eur. J. Radiol., vol. 114, pp. 128–135, 2019. [Google Scholar] [Crossref]
89.
Q. X. Wu, S. Wang, S. X. Zhang, et al., “Development of a deep learning model to identify lymph node metastasis on magnetic resonance imaging in patients with cervical cancer,” JAMA Netw. Open, vol. 3, no. 7, p. e2011625, 2020. [Google Scholar] [Crossref]
90.
P. Xue, C. Tang, Q. Li, et al., “Development and validation of an artificial intelligence system for grading colposcopic impressions and guiding biopsies,” BMC Med., vol. 18, no. 1, p. 406, 2020. [Google Scholar] [Crossref]
91.
N. Bhatla, J. S. Berek, M. C. Fredes, et al., “Revised Figo staging for carcinoma of the cervix uteri,” Int. J. Gynaecol. Obstet., vol. 145, no. 1, pp. 129–135, 2019. [Google Scholar] [Crossref]
92.
J. Guiot, A. Vaidyanathan, L. Deprez, et al., “A review in radiomics: Making personalized medicine a reality via routine imaging,” Med. Res. Rev., vol. 42, no. 1, pp. 426–440, 2022. [Google Scholar] [Crossref]
93.
C. Marth, F. Landoni, S. Mahner, M. McCormack, A. Gonzalez-Martin, and N. Colombo, “Cervical cancer: Esmo clinical practice guidelines for diagnosis, treatment and follow-up,” Ann. Oncol., vol. 28, pp. iv72–iv83, 2017. [Google Scholar] [Crossref]
94.
M. A. Gold, “Pet in cervical cancer-implications for staging, treatment planning, assessment of prognosis, and prediction of response,” J. Natl. Compr. Canc. Netw., vol. 6, no. 1, pp. 37–45, 2008. [Google Scholar] [Crossref]
95.
Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015. [Google Scholar] [Crossref]
96.
B. Ma, X. Yin, D. Wu, H. Shen, X. Ban, and Y. Wang, “End-to-end learning for simultaneously generating decision map and multi-focus image fusion result,” Neurocomputing, vol. 470, pp. 204–216, 2022. [Google Scholar] [Crossref]
97.
S. Anwar, M. Majid, A. Qayyum, M. Awais, M. Alnowami, and S. Khan, “Medical image analysis using convolutional neural networks: A review,” J. Med. Syst., vol. 42, pp. 1–13, 2018. [Google Scholar] [Crossref]
98.
D. S. Kermany, M. Goldbaum, W. J. Cai, et al., “Identifying medical diagnoses and treatable diseases by image-based deep learning,” Cell, vol. 172, no. 5, pp. 1122–1131, 2018. [Google Scholar] [Crossref]
99.
W. C. Shen, S. W. Chen, K. C. Wu, T. C. Hsieh, J. A. Liang, Y. C. Hung, L. S. Yeh, W. C. Chang, W. C. Lin, K. Y. Yen, and C. H. Kao, “Prediction of local relapse and distant metastasis in patients with definitive chemoradiotherapy-treated cervical cancer by deep learning from [18f]-fluorodeoxyglucose positron emission tomography/computed tomography,” SSRN, 2018. [Google Scholar] [Crossref]
100.
Z. Liu, W. Chen, H. Guan, et al., “An adversarial deep-learning-based model for cervical cancer CTV segmentation with multicenter blinded randomized controlled validation,” Front. Oncol., vol. 11, p. 702270, 2021. [Google Scholar] [Crossref]
101.
Y. Ming, X. Dong, J. Zhao, Z. Chen, H. Wang, and N. Wu, “Deep learning-based multimodal image analysis for cervical cancer detection,” Methods, vol. 205, pp. 46–52, 2022. [Google Scholar] [Crossref]
