References
1. C. D. Lehman, R. D. Wellman, D. S. M. Buist, K. Kerlikowske, A. N. A. Tosteson, and D. L. Miglioretti, “Diagnostic accuracy of digital screening mammography with and without computer-aided detection,” JAMA Intern. Med., vol. 175, no. 11, pp. 1828–1837, 2015.
2. J. J. Fenton, L. Abraham, S. H. Taplin, B. M. Geller, P. A. Carney, C. D’Orsi, J. G. Elmore, and W. E. Barlow, “Effectiveness of computer-aided detection in community mammography practice,” J. Natl. Cancer Inst., vol. 103, no. 15, pp. 1152–1161, 2011.
3. J. D. Keen, J. M. Keen, and J. E. Keen, “Utilization of computer-aided detection for digital screening mammography in the United States, 2008 to 2016,” J. Am. Coll. Radiol., vol. 15, no. 1, pp. 44–48, 2018.
4. Food and Drug Administration, “Mammography quality standards act and program,” 2023. https://www.fda.gov/radiation-emitting-products/mammography-quality-standards-act-and-program
5. S. P. Zuckerman, B. L. Sprague, D. L. Weaver, S. D. Herschorn, and E. F. Conant, “Survey results regarding uptake and impact of synthetic digital mammography with tomosynthesis in the screening setting,” J. Am. Coll. Radiol., vol. 17, no. 1, pp. 31–37, 2020.
6. American Cancer Society, “Breast cancer statistics - How common is breast cancer,” 2023. https://www.cancer.org/cancer/types/breast-cancer/about/how-common-is-breast-cancer.html
7. Food and Drug Administration, “MQSA national statistics,” 2023. https://www.fda.gov/radiation-emitting-products/mqsa-insights/mqsa-national-statistics
8. B. Abhisheka, S. K. Biswas, and B. Purkayastha, “A comprehensive review on breast cancer detection, classification and segmentation using deep learning,” Arch. Comput. Methods Eng., vol. 30, pp. 5023–5052, 2023.
9. C. R. Taylor, N. Monga, C. Johnson, J. R. Hawley, and M. Patel, “Artificial intelligence applications in breast imaging: Current status and future directions,” Diagnostics, vol. 13, no. 12, p. 2041, 2023.
10. J. J. J. Condon, L. Oakden-Rayner, K. A. Hall, M. Reintals, A. Holmes, G. Carneiro, and L. J. Palmer, “Replication of an open-access deep learning system for screening mammography: Reduced performance mitigated by retraining on local data,” medRxiv 2021.05.28.21257892, 2021.
11. W. Hsu, D. S. Hippe, N. Nakhaei, P. C. Wang, B. Zhu, N. Siu, M. E. Ahsen, W. Lotter, A. G. Sorensen, A. Naeim, D. S. M. Buist, T. Schaffter, J. Guinney, J. G. Elmore, and C. I. Lee, “External validation of an ensemble model for automated mammography interpretation by artificial intelligence,” JAMA Netw. Open, vol. 5, no. 11, p. e2242343, 2022.
12. A. D. Lauritzen, A. Rodríguez-Ruiz, M. C. von Euler-Chelpin, E. Lynge, I. Vejborg, M. Nielsen, N. Karssemeijer, and M. Lillholm, “An artificial intelligence–based mammography screening protocol for breast cancer: Outcome and radiologist workload,” Radiology, vol. 304, no. 1, pp. 41–49, 2022.
13. K. Dembrower, E. Wåhlin, Y. Liu, M. Salim, K. Smith, P. Lindholm, M. Eklund, and F. Strand, “Effect of artificial intelligence-based triaging of breast cancer screening mammograms on cancer detection and radiologist workload: A retrospective simulation study,” Lancet Digit. Health, vol. 2, no. 9, pp. e468–e474, 2020.
14. E. F. Conant, A. Y. Toledano, S. Periaswamy, S. V. Fotin, J. Go, J. E. Boatsman, and J. W. Hoffmeister, “Improving accuracy and efficiency with concurrent use of artificial intelligence for digital breast tomosynthesis,” Radiol. Artif. Intell., vol. 1, no. 4, p. e180096, 2019.
15. A. Singh, A. Dhillon, N. Kumar, M. S. Hossain, G. Muhammad, and M. Kumar, “eDiaPredict: An ensemble-based framework for diabetes prediction,” ACM Trans. Multimed. Comput. Commun. Appl., vol. 17, no. 2s, pp. 1–26, 2021.
16. X. F. Qi, F. S. Yi, L. Zhang, Y. Chen, Y. Pi, Y. Y. Chen, J. X. Guo, J. Y. Wang, Q. Guo, J. L. Li, Y. Chen, Q. Lv, and Z. Yi, “Computer-aided diagnosis of breast cancer in ultrasonography images by deep learning,” Neurocomputing, vol. 472, pp. 152–165, 2022.
17. J. L. Thompson and G. P. Wright, “The role of breast MRI in newly diagnosed breast cancer: An evidence-based review,” Am. J. Surg., vol. 221, no. 3, pp. 525–528, 2021.
18. G. Piantadosi, M. Sansone, R. Fusco, and C. Sansone, “Multi-planar 3D breast segmentation in MRI via deep convolutional neural networks,” Artif. Intell. Med., vol. 103, p. 101781, 2020.
19. C. R. Saccarelli, A. G. V. Bitencourt, and E. A. Morris, “Breast cancer screening in high-risk women: Is MRI alone enough?,” J. Natl. Cancer Inst., vol. 112, no. 2, pp. 121–122, 2019.
20. E. Desperito, L. Schwartz, K. M. Capaccione, B. T. Collins, S. Jamabawalikar, B. Peng, R. Patrizio, and M. M. Salvatore, “Chest CT for breast cancer diagnosis,” Life (Basel), vol. 12, no. 11, p. 1699, 2022.
21. E. Nicolas, N. Khalifa, C. Laporte, S. Bouhroum, and Y. Kirova, “Safety margins for the delineation of the left anterior descending artery in patients treated for breast cancer,” Int. J. Radiat. Oncol. Biol. Phys., vol. 109, no. 1, pp. 267–272, 2021.
22. J. Koh, Y. Y. Yoon, S. Kim, K. Han, and E. Kim, “Deep learning for the detection of breast cancers on chest computed tomography,” Clin. Breast Cancer, vol. 22, no. 1, pp. 26–31, 2022.
23. M. Benjelloun, M. E. Adoui, M. A. Larhmam, and S. A. Mahmoudi, “Automated breast tumor segmentation in DCE-MRI using deep learning,” in 2018 4th International Conference on Cloud Computing Technologies and Applications (Cloudtech), Brussels, Belgium, 2018, pp. 1–6.
24. R. Thawani, L. Gao, A. Mohinani, A. Tudorica, X. Li, Z. Mitri, and W. Huang, “Quantitative DCE-MRI prediction of breast cancer recurrence following neoadjuvant chemotherapy: A preliminary study,” BMC Med. Imaging, vol. 22, no. 1, p. 182, 2022.
25. T. F. Majeed, N. Al-Jawad, and H. Sellahewa, “Breast border extraction and pectoral muscle removal in MLO mammogram images,” in 2013 5th Computer Science and Electronic Engineering Conference (CEEC), Colchester, UK, 2013, pp. 119–124.
26. R. L. Birdwell, D. M. Ikeda, K. F. O’Shaughnessy, and E. A. Sickles, “Mammographic characteristics of 115 missed cancers later detected with screening mammography and the potential utility of computer-aided detection,” Radiology, vol. 219, no. 1, pp. 192–202, 2001.
27. N. Roussel, J. Sprenger, S. J. Tappan, and J. R. Glaser, “Robust tracking and quantification of C. elegans body shape and locomotion through coiling, entanglement, and omega bends,” Worm, vol. 3, no. 4, p. e982437, 2015.
28. J. Jebamony and D. Jacob, “Classification of benign and malignant breast masses on mammograms for large datasets using core vector machines,” Curr. Med. Imaging, vol. 16, no. 6, pp. 703–710, 2020.
29. H. Abdellatif, T. E. Taha, O. F. Zahran, W. Al-Nauimy, and F. E. Abd El-Samie, “K9. Automatic segmentation of digital mammograms to detect masses,” in 2013 30th National Radio Science Conference (NRSC), Cairo, Egypt, 2013, pp. 557–565.
30. A. K. Chaubey, “Breast segmentation in mammograms using manual thresholding,” World J. Res. Rev., vol. 1, no. 1, p. 262995, 2015.
31. S. J. S. Gardezi, A. Elazab, B. Y. Lei, and T. F. Wang, “Breast cancer detection and diagnosis using mammographic data: Systematic review,” J. Med. Internet Res., vol. 21, no. 7, p. e14464, 2019.
32. S. H. Gu, Y. Ji, Y. J. Chen, J. Wang, and J. U. Kim, “Study on breast mass segmentation in mammograms,” in 2015 3rd International Conference on Computer, Information and Application, Yeosu, Korea (South), 2015, pp. 22–25.
33. D. Ribli, A. Horvath, Z. Unger, P. Pollner, and I. Csabai, “Detecting and classifying lesions in mammograms with deep learning,” Sci. Rep., vol. 8, no. 1, 2018.
34. D. L. Pham, C. Xu, and J. L. Prince, “Current methods in medical image segmentation,” Annu. Rev. Biomed. Eng., vol. 2, pp. 315–337, 2000.
35. E. Michael, H. J. Ma, H. Li, F. Kulwa, and J. Li, “Breast cancer segmentation methods: Current status and future potentials,” BioMed. Res. Int., vol. 2021, p. e9962109, 2021.
36. S. Dehghani and M. A. Dezfooli, “A method for improve preprocessing images mammography,” Int. J. Inf. Educ. Technol., vol. 1, no. 1, p. 90, 2011.
37. K. Loizidou, G. Skouroumouni, C. Nikolaou, and C. Pitris, “Automatic breast mass segmentation and classification using subtraction of temporally sequential digital mammograms,” IEEE J. Transl. Eng. Health Med., vol. 10, pp. 1–11, 2022.
38. S. S. Mohamed, G. Behiels, and P. Dewaele, “Mass candidate detection and segmentation in digitized mammograms,” in 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), Toronto, ON, Canada, 2009, pp. 557–562.
39. S. H. Gu, Y. Chen, F. Q. Sheng, T. M. Zhan, and Y. J. Chen, “A novel method for breast mass segmentation: From superpixel to subpixel segmentation,” Mach. Vis. Appl., vol. 30, no. 7, pp. 1111–1122, 2019.
40. N. Ramadijanti, A. R. Barakbah, and F. A. Husna, “Automatic breast tumor segmentation using hierarchical K-means on mammogram,” in 2018 International Electronics Symposium on Knowledge Creation and Intelligent Computing (IES-KCIC), Bali, Indonesia, 2018, pp. 170–175.
41. B. Senthilkumar and G. Umamaheswari, “Combination of novel enhancement technique and fuzzy C means clustering technique in breast cancer detection,” Biomed. Res., vol. 24, no. 2, 2013.
42. N. Dhungel, G. Carneiro, and A. P. Bradley, “Deep structured learning for mass segmentation from mammograms,” arXiv preprint arXiv:1410.7454, 2014.
43. A. Oliver, M. Tortajada, X. Lladó, J. Freixenet, S. Ganau, L. Tortajada, M. Vilagran, M. Sentís, and R. Martí, “Breast density analysis using an automatic density segmentation algorithm,” J. Digit. Imaging, vol. 28, no. 5, pp. 604–612, 2015.
44. T. T. Wirtti and E. O. T. Salles, “Segmentation of masses in digital mammograms,” in ISSNIP Biosignals and Biorobotics Conference, Vitoria, Brazil, 2011, pp. 1–7.
45. T. Y. Shen, C. Gou, J. G. Wang, and F. Y. Wang, “Simultaneous segmentation and classification of mass region from mammograms using a mixed-supervision guided deep model,” IEEE Signal Process. Lett., vol. 27, pp. 196–200, 2020.
46. N. Saffari, H. A. Rashwan, M. Abdel-Nasser, V. K. Singh, M. Arenas, E. Mangina, B. Herrera, and D. Puig, “Fully automated breast density segmentation and classification using deep learning,” Diagnostics (Basel), vol. 10, no. 11, p. 988, 2020.
47. Q. Y. Hu, H. M. Whitney, H. Li, Y. Ji, P. F. Liu, and M. L. Giger, “Improved classification of benign and malignant breast lesions using deep feature maximum intensity projection MRI in breast cancer diagnosis using dynamic contrast-enhanced MRI,” Radiol. Artif. Intell., vol. 3, no. 3, p. e200159, 2021.
48. K. K. Dewangan, D. K. Dewangan, S. P. Sahu, and R. Janghel, “Breast cancer diagnosis in an early stage using novel deep learning with hybrid optimization technique,” Multimed. Tools Appl., vol. 81, no. 10, pp. 13935–13960, 2022.
49. L. Tsochatzidis, P. Koutla, L. Costaridou, and I. Pratikakis, “Integrating segmentation information into CNN for breast cancer diagnosis of mammographic masses,” Comput. Methods Programs Biomed., vol. 200, p. 105913, 2021.
50. D. Abdelhafiz, S. Nabavi, R. Ammar, C. Yang, and J. Bi, “Residual deep learning system for mass segmentation and classification in mammography,” in Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, Niagara Falls, USA, 2019, pp. 475–484.
51. M. S. Hossain, “Microcalcification segmentation using modified U-net segmentation network from mammogram images,” J. King Saud Univ. Comput. Inf. Sci., vol. 34, no. 2, pp. 86–94, 2022.
52. H. Sun, C. Li, B. Q. Liu, Z. Y. Liu, M. Y. Wang, H. R. Zheng, D. D. Feng, and S. S. Wang, “AUNet: Attention-guided dense-upsampling networks for breast mass segmentation in whole mammograms,” Phys. Med. Biol., vol. 65, no. 5, p. 055005, 2020.
53. J. E. Ball, T. W. Butler, and L. M. Bruce, “Towards automated segmentation and classification of masses in mammograms,” in The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 2004, pp. 1814–1817.
54. K. C. Zhou, W. Li, and D. Z. Zhao, “Deep learning-based breast region extraction of mammographic images combining pre-processing methods and semantic segmentation supported by Deeplab v3+,” Technol. Health Care, vol. 30, no. S1, pp. 173–190, 2022.
55. V. K. Singh, H. A. Rashwan, S. Romani, F. Akram, N. Pandey, M. M. K. Sarker, A. Saleh, M. Arenas, M. Arquez, D. Puig, and J. Torrents-Barrena, “Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network,” arXiv preprint arXiv:1809.01687, 2018.
56. C. R. Taylor, N. Monga, C. Johnson, J. R. Hawley, and M. Patel, “Artificial intelligence applications in breast imaging: Current status and future directions,” Diagnostics, vol. 13, no. 12, p. 2041, 2023.
57. S. M. Carter, W. Rogers, K. T. Win, H. Frazer, B. Richards, and N. Houssami, “The ethical, legal and social implications of using artificial intelligence systems in breast cancer care,” The Breast, vol. 49, pp. 25–32, 2020.
Open Access
Review article

Advances in Breast Cancer Segmentation: A Comprehensive Review

Ayah Abo-El-Rejal*,
Shehab Eldeen Ayman,
Farah Aymen*
Faculty of Informatics and Computer Science, The British University in Egypt, 11837 Cairo, Egypt
Acadlore Transactions on AI and Machine Learning | Volume 3, Issue 2, 2024 | Pages 70-83
Received: 01-07-2024, Revised: 03-03-2024, Accepted: 03-11-2024, Available online: 03-20-2024

Abstract:

The diagnosis and treatment of breast cancer (BC) are significantly subject to medical imaging techniques, with segmentation being crucial in delineating pathological regions for precise diagnosis and treatment planning. This comprehensive analysis explores a variety of segmentation methodologies, encompassing classical, machine learning, deep learning (DL), and manual segmentation, as applied in the medical imaging field for BC detection. Classical segmentation techniques, which include edge-driven and threshold-driven segmentation, are highlighted for their utilization of filters and region-based methods to achieve precise delineation. Emphasis is placed on the establishment of clear guidelines for the selection and comparison of these classical approaches. Segmentation through machine learning is discussed, encompassing both unsupervised and supervised techniques that leverage annotated images and pathology reports for model training, with a focus on their efficacy in BC segmentation tasks. DL methods, especially models such as U-Net and convolutional neural networks (CNNs), are underscored for their remarkable efficiency in segmenting BC images, with U-Net models noted for their minimal requirement for annotated images and achieving accuracy levels up to 99.7%. Manual segmentation, though reliable, is identified as time-consuming and susceptible to errors. Various metrics, such as Dice, F-score, Intersection over Union (IOU), and Area Under the Curve (AUC), are used for assessing and comparing the segmentation techniques. The analysis acknowledges the challenges posed by limited dataset availability, data range inadequacy, and confidentiality concerns, which hinder the broader integration of segmentation methods into clinical practice. Solutions to overcome these challenges are proposed, including the promotion of partnerships to develop and distribute extensive datasets for BC segmentation. This approach would necessitate the pooling of resources from multiple organizations and the adoption of anonymization techniques to safeguard data privacy. Through this lens, the analysis aims to provide a thorough analysis of the practical implications of segmentation methods in BC diagnosis and management, paving the way for future advancements in the field.

Keywords: Breast cancer, Segmentation, Deep learning, Diagnosis

1. Introduction

Breast imaging presents a unique landscape characterized by both opportunities and challenges for the development and integration of artificial intelligence (AI). BC screening initiatives globally rely heavily on mammography to mitigate the morbidity and mortality associated with BC. Beyond detection alone, AI holds vast potential for diverse applications within breast imaging, including decision support, risk assessment, quantification of breast density, workflow optimization, triage, quality evaluation, assessment of responses to neoadjuvant chemotherapy (NAC), and image enhancement.

Various modalities have been employed in BC screening, detection, and diagnosis, including mammography, positron emission tomography (PET), breast ultrasound (BUS), computed tomography (CT), magnetic resonance imaging (MRI) [1], [2], [3], and digital breast tomosynthesis (DBT). Notably, mammography is frequently employed due to its proven efficacy in early-stage tumor detection [4], [5]. In the breast, precise segmentation of the region of interest (ROI) enables accurate tumor detection. ROI segmentation represents a pivotal component of computer-aided diagnosis (CAD), with segmentation quality closely tied to the efficacy of the filters employed for artifact removal from mammographic images.

The challenge of limited radiologist resources amidst a deluge of daily mammogram images can lead to diagnostic inaccuracies with potentially significant implications for patients. False-negative diagnoses can delay cancer detection until advanced stages, which can be life-threatening. Conversely, false-positive diagnoses can trigger unwarranted expenses and distress stemming from invasive biopsy procedures and treatments for non-existent cancer.

Despite notable advancements in medical image segmentation, the practical utility of segmentation methodologies still cannot meet clinical demands. Collaborative endeavors between medical professionals and machine learning experts should be intensified to bridge this gap and cater to clinical requirements. This synergy empowers machine learning experts to devise DL models tailored to clinical needs, ultimately alleviating the workload on medical practitioners.

This study aims to comprehensively review and analyze the diverse segmentation methodologies employed in medical imaging, particularly focusing on BC detection and diagnosis. Through a detailed examination of classical, machine learning, DL, and manual segmentation techniques, this study aims to elucidate the strengths, limitations, and potential use of those methods in BC imaging. By synthesizing insights from current research findings, challenges, and implications, this study helps comprehensively understand the role of segmentation in facilitating the accurate identification of breast tumors, guiding treatment planning, and enhancing overall patient care. Ultimately, the aim is to contribute to the advancement of segmentation methodologies and their integration into clinical practice for improved BC detection and management.

This study provides a comprehensive survey of advanced image segmentation methods across various breast screening techniques. What makes this survey unique is that it combines coverage of the technical screening and segmentation techniques with the relevant medical background. The aim is to give readers from either a medical or a technical background an overview of the subject.

2. Background

Early detection of BC is a paramount concern on a global scale, as it substantially enhances patient survival rates. Mammography stands as a pivotal tool for early-stage BC detection. Effective tumor identification within breast images hinges on robust segmentation techniques. Segmentation plays a pivotal role in image analysis, encompassing detection, classification, feature extraction, and treatment planning, enabling the quantification of breast tissue volume.

In previous eras, the timely identification of BC predominantly depended on manual assessment and mammography imaging. Manual examination entailed the involvement of healthcare professionals and patients, who engaged in tactile examinations to detect anomalies such as lumps or alterations in breast tissue. BC ranks as the most prevalent cancer among women in the United States, excluding skin cancers, constituting nearly one-third of all new female cancer cases annually. In 2023, it is anticipated that nearly 300,000 cases of invasive BC and over 50,000 cases of ductal carcinoma in situ will be diagnosed, with over 43,000 BC-related deaths expected in the United States alone [6]. The World Health Organization (WHO) emphasizes that BC ranks as the most commonly diagnosed cancer among women globally, leading to around 626,700 female deaths annually from cancer-related causes [6], [7]. In mammographic images, breast tissue is typically categorized as normal, benign, or malignant, with abnormalities often appearing as masses, microcalcifications, or architectural distortions. Figure 1 illustrates the incidence and mortality rates for the top ten most prevalent cancers among women in 2020, with BC occupying the highest percentages at 24.48% and 15.52% in the two pie charts, respectively.

Figure 1. Cancer distribution types among women in 2020 [8]
2.1 Cancer Risk Assessment

Contemporary risk assessment models are designed to estimate the collective risk of BC occurrence among individuals who share similar risk-related characteristics rather than providing individualized assessments. These models incorporate a multitude of factors into their calculations, including but not limited to age, age of menarche, obstetric history, familial BC history spanning both first-degree and multi-generational relatives, genetic information, number of prior biopsies, racial and ethnic background, and body mass index, among other pertinent variables. The output of these models typically quantifies the likelihood of developing BC over specific time intervals, such as 5 years, 10 years, or over one's lifetime. These risk assessments serve a crucial role in the identification of women who may derive benefits from supplemental high-risk BC screening, chemo-preventive interventions, or lifestyle modifications.

2.2 Cancer Detection

BC screening initiatives are predominantly anchored in the utilization of screening mammography to curtail the morbidity and mortality associated with BC. Beyond its primary role in cancer detection, the domain of breast imaging offers a spectrum of prospective applications for AI. These span both interpretive and non-interpretive domains, encompassing decision support, risk assessment, quantification of breast density, workflow optimization, triage facilitation, quality evaluation, assessment of responses to NAC, and image enhancement. These multifaceted AI applications augment the comprehensive landscape of breast imaging [9], contributing to advancements in patient care and diagnostic efficacy.

External assessments aiming to gauge the performance of AI algorithms in breast imaging have yielded variable outcomes. Notably, a well-performing AI model displayed diminished performance levels when applied at an external site in its native configuration [10]. A concerning issue further arises from the observation that AI algorithms, once developed and made available, may not exhibit consistent performance across diverse patient subpopulations or demographic groups. It is noteworthy that the development and evaluation of AI tools for BC detection in imaging has predominantly concentrated on mammography.

A previous external validation endeavor scrutinizing a high-performing AI algorithm, using an independent and diverse population, disclosed markedly reduced performance levels in specific patient cohorts, in stark contrast to previously reported performances. These concerns underscore the potential unintended consequences stemming from the insufficient inclusion of diverse patient groups in testing and validation datasets [11].

Comparative assessments between AI-based screening and radiologist-led screening have unveiled non-inferior sensitivity and superior specificity, accompanied by a significant 25.1% reduction in false-positive findings [12]. Remarkably, these findings were achieved concomitantly with a substantial 62.6% reduction in workload. Furthermore, a retrospective simulation study showcased that AI-based triage of mammograms into “no-radiologist assessment” and “enhanced assessment” categories had the potential to reduce workloads by over 50% while preemptively identifying a substantial portion of cancers that might otherwise be diagnosed at later stages [13]. Another investigation focused on an AI system employed for lesion detection in Digital Breast Tomosynthesis (DBT) and disclosed that the algorithm, when integrated into mammogram interpretation, led to a nearly 50% reduction in reading times. Notably, this efficiency enhancement was achieved without compromising diagnostic accuracy, as indicated by a statistically significant average improvement of 0.057 in the AUC [14].
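
As a rough illustration of how such score-based triage works, the sketch below routes exams into reading streams according to an AI malignancy score; the threshold values and the three-way split are assumptions for illustration, not the operating points used in the cited studies [12], [13].

```python
import numpy as np

def triage_fractions(scores, low_thr=0.05, high_thr=0.9):
    """Route exams by AI malignancy score (thresholds are illustrative).

    Scores below `low_thr` are auto-cleared without radiologist reading,
    scores at or above `high_thr` are flagged for enhanced assessment,
    and everything in between is read as usual.
    """
    scores = np.asarray(scores, dtype=float)
    no_read = float(np.mean(scores < low_thr))
    enhanced = float(np.mean(scores >= high_thr))
    standard = 1.0 - no_read - enhanced
    return {"no_radiologist": no_read, "standard": standard, "enhanced": enhanced}

# Example with synthetic scores: most screening exams receive very low scores,
# so a large fraction can be auto-cleared, which is where the reported
# workload reductions come from.
# fractions = triage_fractions(np.random.default_rng(0).beta(0.5, 10, size=10_000))
```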

3. BC Screening Techniques

To detect and classify BC in its early stages, several imaging modalities are employed. BC diagnosis and detection rely on the analysis of medical images. Below are some of the most widely used techniques.

3.1 Mammogram

Mammography stands as the prevailing and foremost imaging modality employed in the diagnosis of BC. The American Cancer Society (ACS) deems mammograms a standard protocol for the early detection of BC [15]. In this method, X-ray imaging is harnessed to generate detailed images of the internal structures of both breasts. For each breast, a pair of mammographic views is acquired: the cranio-caudal perspective (top-down view) and the mediolateral oblique perspective (a side view captured from a specific angle). These views are obtained by carefully compressing the breast in a nearly vertical orientation. Mammography is still the most reliable and affordable method of BC screening. It is especially efficient for recognizing microcalcifications, a prevalent early indicator of BC. However, it faces difficulties because of its limited sensitivity in dense breast tissue. False positives and false negatives are possible, and additional images are often required for confirmation.

3.2 Ultrasound

Ultrasound serves as a prevalent modality in medical diagnosis and is frequently employed as a complementary screening technique alongside mammography. It uses sound waves to produce real-time images that provide a dynamic view of the breast tissue. The combined use of both modalities enhances sensitivity and specificity in BC detection. This approach holds promise as it is potentially effective, noninvasive, safe, cost-effective, and widely accessible for BC detection. Notably, it exhibits greater sensitivity than mammography in identifying anomalies within dense breast tissue, rendering it particularly valuable for women under the age of 35 [16]. It is radiation-free, which makes it well suited not only for dense breast tissue but also for characterizing lesions that are detected on mammography.

The process of ultrasonography entails the transmission of high-frequency sound waves through the breast, with the returning signals transformed into visual images displayed on a monitor. A transducer is utilized to define tissue boundaries and interpret the reflected signals, allowing for real-time visualization of internal organ shapes and movements. Despite its numerous advantages in BC diagnosis, ultrasound is seldom used as a standalone primary modality due to its reliance on operator skill and limited resolution quality. Since a portable transducer is employed for scanning, image quality predominantly hinges on the clinician's proficiency in performing the scan. Consequently, the accuracy of BC diagnosis, especially in assessing lesion size and shape, is significantly influenced by the clinician's transducer placement and the pressure applied to the breast. Ultrasound imaging has found application in various methods, such as Generative Adversarial Networks (GANs), Stacked Denoising Autoencoders (SDAEs), Spatially-Uniform Approximation Surfaces (SUAS), DL Reconstruction (DLR), Non-Uniform Rational B-Spline (NURBS), Image Reconstruction Diagnostics (IRDx), Residual Networks with Global Average Pooling (ResNet-GAP), Hybrid Multimodal Breast-Density Learning with Generative Adversarial Hybrid Architecture (HMB-DLGAHA), Automated Breast Ultrasound Systems (ABUS), and so on. Beyond these advantages, ultrasound is also useful for directing biopsies.

3.3 Breast MRI (BMRI)

BMRI is a technology that harnesses a strong magnetic field, typically 1.5 Tesla, along with radio waves to produce highly detailed breast images [17]. The visibility of lesions is further improved by contrast-enhanced MRI. In recent times, MRI has been integrated into BC diagnosis as an initial screening tool for women who have been newly diagnosed with the condition. It proves invaluable in uncovering cancerous cells that traditional imaging methods may overlook. Additionally, BMRI finds significant application in screening individuals at high risk for BC, assessing disease staging, evaluating genetic mutations, aiding in surgical planning, and monitoring patients following NAC [18]. The technique offers support for multi-planar scanning and 3D reconstruction methods, as seen in Figure 2, enabling precise depiction of the size, structure, and location of breast lesions [18]. Various approaches, such as Enhanced Screening for BC (ESBC), Abbreviated MRI (Ab-MRI), Multiparametric MRI (mpMRI), the European Society of Breast Imaging (EUSOBI) protocols, Diffusion-Weighted Imaging (DWI), T1-Weighted MRI (T1-W MRI), and Dynamic Contrast-Enhanced MRI (DCE-MRI), have been employed in combination with BMRI.

Figure 2. 3D BMRI volume [18]

Nonetheless, BMRI does come with certain limitations. It is time-intensive and expensive, and its Positive Predictive Value (PPV) is compromised by a relatively high rate of false-positive findings, potentially leading to unnecessary breast biopsies. Moreover, its capability to detect microcalcifications is limited. Pregnant individuals are advised against undergoing this examination due to the use of powerful magnets and contrast agents [19].

3.4 Computed Tomography (CT)

CT offers exceptional three-dimensional visualization of anatomical structures, allowing for precise assessment of soft tissue lesions, including their location and size [20]. While not commonly employed as an early BC detection tool, CT plays a crucial role in staging BC cases that have already been diagnosed, aiding healthcare professionals in treatment planning.

CT scans typically exhibit low contrast, necessitating the administration of contrast agents such as iodine-based compounds, barium sulfate, gadolinium, or air mixtures. These agents enhance visibility and facilitate distinguishing malignant lesions from benign ones [21]. It is worth noting that patients undergoing multiple CT scans for screening purposes may potentially be exposed to higher levels of radiation, as CT scans generally entail greater radiation exposure than digital mammograms (DMs). Consequently, CT scans may be recommended for individuals who are contraindicated for MRI. Furthermore, CT is less sensitive than mammography for detecting microcalcifications. In the preoperative evaluation of BC, CT has the potential to serve as an alternative to 3D MRI. Various strategies have been proposed to leverage this technique in the context of BC assessment [22]. However, its routine usage in BC screening is constrained by its limited sensitivity for tiny breast lesions, possible radiation risks, and costs. Figure 3 presents a visual representation of two distinct DCE-MRI slices. On the left-hand side, a solid tumor is clearly displayed, revealing a concentrated mass of abnormal tissue within the breast. On the other hand, a non-mass tumor can be seen on the right side of Figure 3. In this case, the cancerous growth appears more diffuse.

Figure 3. Two DCE-MRI slices of patients with BC: solid tumor (left) and non-mass tumor (right) [23]

4. Modalities for Diagnosis and Treatment

BC diagnosis primarily relies on two fundamental imaging techniques: radiography and histology. The field of radiology is dedicated to the acquisition of internal body structure images to diagnose and treat patients by assessing the presence or absence of diseases, injuries, and abnormalities. Radiographic imaging methods employed in BC diagnosis encompass Digital Mammography (DM), Breast Ultrasound (BUS), MRI, CT, and PET scans.

Histopathology, on the other hand, involves the meticulous examination of cancer cells and tissues at an ultrahigh magnification level under a microscope. A small segment of breast tissue is extracted for this analysis, serving as a confirmatory diagnostic test for BC. Various biopsy types are available, contingent upon the patient's condition at the time of the procedure. Each of the imaging modalities plays an integral role within the BC diagnostic toolkit. Once a tumor has been accurately identified, and its characteristics delineated through imaging techniques, the subsequent step involves selecting the most suitable treatment approach. Effective BC treatments encompass surgical interventions, radiation therapy, and chemotherapy. These treatment modalities aim to eradicate cancer cells from their original sites [24].

In a research endeavor, an AI algorithm was developed employing a dataset comprising BMRI images, encompassing both contrast-enhanced and non-contrast acquisitions. Subsequently, this AI model was subjected to inputs exclusively comprising non-contrast images extracted from BMRI studies. Remarkably, the AI model demonstrated the capability to generate simulated contrast-enhanced BMRI images, showcasing its potential for image enhancement and synthesis [25].

A notable illustration of AI-driven methodology involves a process wherein suspicious regions of interest identified within DBT data are amalgamated or consolidated to create maximum suspicion projections. These synthesized images serve to accentuate and highlight the suspicious findings, rendering them more discernible. Subsequently, these novel synthesized images are employed as input data for an AI-driven cancer detection model, effectively mitigating the demands associated with image and data preparation and streamlining the process of BC detection [26].

5. Segmentation in BC

In the realm of image processing, segmentation refers to the process of partitioning an image into distinct regions with the primary objective of isolating the ROI within mammographic images, specifically targeting the identification of masses [27], [28]. It is noteworthy that the accuracy of this detection can be influenced by the presence of pectoral muscles, necessitating the prior removal of artifacts and pectoral muscles to facilitate accurate segmentation. Additionally, when segmentation is directly applied to raw images characterized by noise and inadequate contrast, the potential for over-segmentation and erroneous identification of breast tumors becomes a concern. Therefore, it becomes imperative to employ filtering techniques to eliminate noise and address local irregularities within noisy images, thereby enhancing their overall quality [29], [30]. The fundamental objective of segmentation lies in the extraction of ROIs harboring potential masses, involving the partitioning of breast images into discrete, non-overlapping regions [31]. Nonetheless, it is crucial to acknowledge that segmentation methodologies may encounter various influencing factors that can impede the identification of abnormalities within images, including pixel resolution, integration scale, and preprocessing steps.
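
To make the preprocessing described above concrete, the following minimal sketch (an illustration, not the pipeline of any cited study) applies median filtering to suppress noise and keeps only the largest bright connected component, so that labels and background artifacts are removed before segmentation; the file name and kernel size are illustrative assumptions.

```python
import cv2
import numpy as np

def preprocess_mammogram(path, median_ksize=5):
    """Denoise a grayscale mammogram and mask out background/artifacts.

    The breast is assumed to be the largest bright connected component;
    labels, tape artifacts, and background are removed by keeping only
    that component. Parameter values are illustrative.
    """
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)

    # Median filtering suppresses impulse noise while preserving edges.
    denoised = cv2.medianBlur(img, median_ksize)

    # Otsu's threshold separates breast tissue from the dark background.
    _, mask = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Keep only the largest connected component (the breast), discarding
    # labels and small artifacts.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n_labels > 1:
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)

    return cv2.bitwise_and(denoised, denoised, mask=mask), mask

# Example (the file name is a placeholder):
# breast, breast_mask = preprocess_mammogram("mlo_view.png")
```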

The automation of image analysis and segmentation processes implies a minimal degree or complete absence of human intervention. Computer-aided diagnosis (CAD) is widely embraced within the medical domain to support radiologists in detecting and characterizing breast masses [26], [32], [33]. Once incorporated into clinical practice, CAD serves to diminish the occurrence of misclassifications, consequently enhancing diagnostic accuracy and optimizing time management [25]. The pivotal roles of image segmentation within the ambit of image analysis encompass the processes of detection, feature extraction, and classification. Within breast imaging, segmentation is used as follows:

• Detection: Segmentation aids radiologists in the facile identification of BC, given the dissimilar morphologies of benign and malignant tumors.

• Feature extraction: Segmentation serves as a pivotal preprocessing step, enhancing image suitability for specific applications by facilitating subsequent feature extraction.

• Classification: Contour-based segmentation assumes a pivotal role in CAD systems, particularly in mass classification. Segmented images can be categorized as normal, benign, or malignant.

• Treatment: Segmentation directly influences BC treatment dosage determination, as tumor size is a critical output. Segmentation of breast and lymph node volumes is integral to defining irradiation volumes in the context of BC treatment.

5.1 Segmentation Methods

Various segmentation methodologies are presently employed, encompassing classical, machine learning, and DL techniques. The primary objective of segmentation lies in facilitating medical professionals in quantifying tissue volumes, identifying pathological regions, conducting diagnoses and anatomical studies, and devising treatment strategies [34]. The process of segmentation facilitates the partitioning of image regions [35], thereby simplifying the task of medical experts in distinguishing between normal and abnormal characteristics within a given medical image.

5.1.1 Classical segmentation

Among various filtering techniques, the median filter stands out as the predominant choice in image processing, being used more often than alternative filter types. Furthermore, in the domain of image segmentation, the region-growing method has demonstrated superior performance when compared with alternative segmentation methodologies. Notably, an accuracy of 99.0% has been attained using a specified threshold limit [36]. The highest accuracy achieved in threshold-based segmentation reached 100.0%, a result accomplished with the morphological filter [37]. Moreover, in the context of edge-based segmentation, the Gabor filter emerged as the filter of choice, attaining the highest sensitivity level at 100.0% [38]. Some of the most commonly used segmentation techniques are listed below (a minimal code sketch follows the list):

• Edge-driven segmentation: Techniques such as Canny edge detection, active contour models, Sobel operators, minimization-based approaches, and contour-based methods fall under this category.

• Threshold-driven segmentation: This category encompasses various methods, including Otsu thresholding, morphological thresholding, adaptive thresholding, manual thresholding, Kittler's optimal thresholding, as well as global and local thresholding techniques.

• Region-centric segmentation: Methods like watershed segmentation, rough set theory-based approaches, partial region growth, and marker-controlled segmentation are representative of this segmentation paradigm.
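
As an illustration of the region-centric family above, the sketch below implements a basic seeded region-growing routine in which neighboring pixels are absorbed while their intensity stays close to the running region mean; the seed location and tolerance are illustrative assumptions rather than values taken from the cited work.

```python
from collections import deque
import numpy as np

def region_growing(image, seed, tolerance=15):
    """Seeded region growing on a grayscale image (2D uint8 array).

    Pixels are added while their intensity stays within `tolerance`
    of the running mean of the region. The seed may come from a
    detected suspicious location or a radiologist's annotation.
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    region_sum, region_size = float(image[seed]), 1

    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                # Accept the neighbor if it is close to the current region mean.
                if abs(float(image[ny, nx]) - region_sum / region_size) <= tolerance:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
                    region_sum += float(image[ny, nx])
                    region_size += 1
    return mask

# Example: grow a candidate mass region from a suspicious pixel.
# mass_mask = region_growing(roi, seed=(120, 95), tolerance=12)
```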

5.1.2 Machine learning segmentation

In medical image analysis using machine learning, the prerequisite typically entails the utilization of images annotated by qualified medical professionals, complemented by conclusive pathology reports delineating the benign or malignant nature of the depicted conditions. Notably, unsupervised machine learning techniques are prevalently favored over their supervised counterparts within this context. Nevertheless, it is worth highlighting that the best performance attained in supervised machine learning reached an impressive detection rate of 99.82% [39]. Machine learning techniques fall into two groups: unsupervised and supervised learning.

Within the realm of unsupervised machine learning, the repertoire comprises hierarchical k-means clustering, k-means clustering, and fuzzy c-means clustering, as well as techniques based on vector machines and naïve Bayes models.

In the realm of breast tumor segmentation on mammograms, a method employing hierarchical k-means was introduced by Ramadijanti et al. [40]. This approach incorporates automatic breast tumor detection through valley tracing, thereby determining the optimal quantity of clusters within mammographic images. Experimental outcomes revealed a 61.1% error detection rate, with an associated accuracy rate of 38.8%. Gu et al. [39] introduced a methodology for segmenting breast masses in mammogram images, based on a mathematical model designed to pinpoint the mass's location. The pixel values were subjected to classification via fuzzy c-means clustering, categorizing them into three distinct classes: background, initial mass, and boundary. The evaluation was conducted on 100 mammogram images sourced from the MIAS database, with noise removal achieved through median filtering. The experimental findings demonstrated a notable mass detection rate of 98.82%. An amalgamation of a novel enhancement technique and fuzzy c-means clustering for BC detection was proposed [41], which involves CAD and incorporates modifications to the local range modification (LRM), resulting in the modified LRM (MLRM) technique for noise reduction and enhancement. Mammogram images obtained from the MIAS database were employed in this endeavor. The integration of MLRM and FCMC yielded a commendable accuracy rate of 98.1%.
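
The clustering approaches above can be illustrated with a minimal k-means sketch that groups pixel intensities into a few classes and treats the brightest cluster as the mass candidate; the cluster count and the brightest-cluster rule are simplifying assumptions, not the exact procedures of [39], [40], or [41].

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_mass_candidates(roi, n_clusters=3, seed=0):
    """Cluster pixel intensities into background / tissue / candidate-mass classes.

    A simplified stand-in for the hierarchical k-means and fuzzy c-means
    pipelines discussed above; taking the brightest cluster as the mass
    candidate is an assumption rather than a general rule.
    """
    pixels = roi.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
    labels = km.labels_.reshape(roi.shape)
    brightest = int(np.argmax(km.cluster_centers_.ravel()))
    return labels == brightest   # boolean mask of candidate-mass pixels
```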

The prominent methods of supervised machine learning encompass support vector machines (SVMs), extreme learning machines, and k-means and fuzzy c-means methodologies.

A structured SVM for BC detection, utilizing mammography images sourced from the DDSM-BCRP and INbreast databases, was introduced by Dhungel et al. [42]. This method exhibited superior performance compared to contemporary approaches, with a runtime of 0.8 seconds and a Dice index of 87.0%. Figure 4 shows a sample from the DDSM dataset.

Figure 4. DDSM dataset sample [40]

Oliver et al. [43] presented an automatic breast density analysis technique validated through comparison with manual expert annotations and automatic estimations. A dataset comprising 130 mammogram images obtained from the Spanish screening program specifications (SSPS), including mediolateral and craniocaudal angles, was utilized. Noise mitigation was accomplished using a median filter. The study showed a strong correlation of 0.96 between the left and right breast mammographic density percentages. Additionally, a comparison of both mammogram views exhibited a correlation coefficient of 0.95, facilitated by the implementation of an SVM classifier. Wirtti and Salles [44] proposed a method for mammogram density assessment employing a multiscale wavelet transform. Using wavelet processing, density data were used to train a multilayer perceptron network (MLP). After being trained, this network was applied to mass detection in 19 mammography images, producing an 8.7% false-positive rate and a 68.2% true-positive rate (sensitivity).
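
A minimal sketch of this supervised route is shown below: hand-crafted intensity and size descriptors are computed from a segmented mass and fed to an SVM classifier. The feature set is an illustrative assumption and is much simpler than the wavelet and density descriptors used in the cited studies.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mass_features(roi, mask):
    """Very simple intensity/size descriptors of a segmented mass (illustrative)."""
    region = roi[mask > 0]
    if region.size == 0:
        return np.zeros(3)
    return np.array([region.mean(), region.std(), float(region.size)])

def train_mass_classifier(feature_vectors, labels):
    """Fit an RBF-kernel SVM on mass descriptors (0 = benign, 1 = malignant)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, probability=True))
    return clf.fit(feature_vectors, labels)

# In practice the feature vectors would come from an annotated dataset such as
# MIAS or DDSM, with labels taken from the accompanying pathology reports.
```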

5.1.3 DL segmentation

In medical image segmentation, the U-Net model has garnered significant attention due to its efficacy, particularly in scenarios with limited annotated data, a common constraint in medical imaging tasks. This model has proven especially effective, achieving a remarkable Dice coefficient of 98.87%, surpassing other competing models. Shen et al. [45] presented the Residual-Aided and Mixed-Supervision-Guided Classification U-Net model (ResCUNet), designed specifically for the joint segmentation and classification of mammography images. Utilizing the INbreast dataset and employing convolutional filters for noise reduction, the proposed MS-ResCUNet model exhibited an impressive accuracy rate of 94.16%, surpassing the performance of SegNet, U-Net, and BreastNet. Saffari et al. [46] proposed the Full-Resolution Convolutional Network (FrCN), a novel segmentation model specifically designed for mammogram image segmentation. In addition, the identified and segmented breast lesions were classified as benign or malignant using three traditional DL models: InceptionResNet-V2, ResNet-50, and a normal feedforward CNN. The FrCN-based breast lesion segmentation method, which used mammography images from the INbreast database, produced noteworthy results, such as an overall accuracy of 92.97%, a Dice coefficient of 92.69%, a Matthews Correlation Coefficient (MCC) of 85.93%, and a Jaccard similarity coefficient of 86.37%.
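
For orientation, the sketch below shows a heavily reduced two-level U-Net-style encoder-decoder with skip connections in PyTorch. It conveys the architecture pattern referred to throughout this subsection but is not the ResCUNet, FrCN, or any other cited model; the channel widths and depth are illustrative choices.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 conv + BatchNorm + ReLU blocks, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """A reduced two-level U-Net for single-channel mammogram mask prediction."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = double_conv(1, base)
        self.enc2 = double_conv(base, base * 2)
        self.bottleneck = double_conv(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)   # per-pixel mass logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Shape check with a dummy 256x256 mammogram patch:
# logits = MiniUNet()(torch.zeros(1, 1, 256, 256))  # -> torch.Size([1, 1, 256, 256])
```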

Hu et al. [47] presented a fully automated method for segmenting breast density utilizing Conditional Generative Adversarial Networks (cGAN), paired with DL for classification. The cGAN network was utilized to segment dense tissues in mammogram images, with an evaluation performed on the INbreast dataset and noise reduction achieved through median filtering. The findings demonstrated a segmentation accuracy of 98.0%, highlighting the efficacy of the cGAN-based strategy. Hu et al. [47] presented a novel method that utilizes deep transfer learning along with four-dimensional data to enhance the contrast of MRI images. This approach involves employing a CNN for Maximum Intensity Projection (MIP) of image features. The authors utilized established architectures, including Densenet169, Resnet50, and Resnet101. Dewangan et al. [48] introduced the BPBRW model, which incorporates a Hybrid Krill Herd African Buffalo Optimization (HKHABO) mechanism to enhance MRI images.

Dewangan et al. [48] introduced a modified convolutional layer within a CNN framework, inspired by the U-Net model. Their approach was evaluated using two distinct datasets: DDSM-400 and CBIS-DDSM. The results indicated a diagnostic performance of 89.8% and an AUC of 86.20% when utilizing ground-truth segmentation maps. Additionally, the method attained a maximum accuracy of 88.0% and 86.0% for U-Net-based segmentation on the DDSM-400 and CBIS-DDSM datasets, respectively. Tsochatzidis et al. [49] presented a DL model tailored for segmenting and classifying mammography images. They adapted the U-Net model, which effectively delineated the breast area within mammogram images. The model underwent thorough evaluation across three diverse mammographic datasets: MIAS, DDSM, and CBIS-DDSM. The results demonstrated exceptional performance metrics, with an impressive 98.87% accuracy, 98.88% AUC, 98.98% sensitivity, 98.79% precision, and 97.99% F1-score observed when tested on the DDSM dataset.

A DL system incorporating residual architecture was presented, which integrates mass segmentation using a residual attention U-Net model (RU-Net) and classification using the ResNet classifier [50]. Their approach was evaluated on three datasets: DDSM, BCDR-01, and INbreast, with noise reduction implemented through the cLare filter. The proposed model demonstrated outstanding performance metrics, achieving an average test pixel accuracy of 98.0%, a mean Dice coefficient index (DI) of 98.0%, and an average IOU of 94.0%. Hossain [51] presented a method for segmenting microcalcifications using a modified U-Net segmentation network applied to mammogram images. The model was trained on images sourced from the DDSM database, with noise reduction achieved through the Laplacian filter. This approach yielded promising results, including an F-measure of 98.50%, a Dice score of 97.80%, a Jaccard index of 97.40%, and an average accuracy rate of 98.20%. Sun et al. [52] developed a novel attention-guided dense-upsampling network called AUNet for segmenting breast masses in full mammograms. AUNet employs an asymmetrical encoder-decoder architecture and integrates an efficient upsampling block known as the attention-guided dense upsampling block (AUblock). A comprehensive evaluation was carried out on the publicly available datasets of CBIS-DDSM and INbreast. The method demonstrated significant performance, with an average Dice similarity coefficient of 81.80% for CBIS-DDSM and 79.10% for the INbreast datasets.
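
Because the Dice coefficient and IoU (Jaccard index) are quoted repeatedly in this subsection, the short sketch below shows how both are computed from binary masks; the epsilon smoothing term is a common convention rather than part of any cited method.

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU (Jaccard index) for binary segmentation masks.

    Both masks are arrays of the same shape; `eps` avoids division by zero
    when both masks are empty.
    """
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (np.logical_or(pred, target).sum() + eps)
    return float(dice), float(iou)

# Example: a 3-pixel overlap between a 4-pixel prediction and a 5-pixel ground
# truth gives Dice = 2*3/(4+5) ≈ 0.667 and IoU = 3/6 = 0.5.
```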

5.1.4 Manual segmentation

In clinical practice, the identification of breast tumors within the ROI is typically accomplished through manual examination by medical professionals. Subsequently, abnormal regions are discerned by juxtaposing these manually identified areas with the remaining anatomical components [53]. Manually scrutinizing numerous images from various imaging modalities poses significant inefficiencies and challenges, potentially leading to misdiagnoses and an increased false-positive rate. Consequently, there is a pressing need for an automated approach to address these issues. In the realm of early BC detection, medical image analysis facilitated by CAD systems has emerged as the most efficient methodology. However, it's important to note that CAD systems tend to identify a higher number of false features compared to genuine anomalies, necessitating the clinician's involvement in result interpretation. This characteristic of CAD systems extends the reading time. In addition, it imposes limitations on the volume of cases that can be effectively analyzed by radiologists.

6. Challenges and Implications

Recent advancements in AI, specifically DL methodologies, hold the promise of substantially expediting the image analysis procedure, thereby aiding radiologists in achieving earlier BC diagnoses. Empirical investigations have demonstrated that DL-based CAD systems can yield commendable results within the domain of medical image analysis. Nonetheless, several challenges hinder the incorporation of these approaches into clinical practice.

Despite the achievements garnered by DL models, certain challenges and limitations exist, which necessitate resolution in the context of BC detection, classification, and segmentation. Some of the challenges and implications that face the adoption of automated medical imaging-based diagnosis in the field of BC are as follows:

• Limited availability of comprehensive datasets: DL algorithms for medical imaging heavily rely on large, high-quality training datasets. However, creating such datasets is challenging because of the labor-intensive and error-prone nature of medical image annotation. Mammogram datasets have been more readily available than datasets for other modalities like MRI and PET/CT. Although some solutions achieved high accuracy and had a short runtime, such as the DeepLab v3+ model, the sample size of the dataset was insufficient [54].

• Inadequate range of data: DL models require extensive training data to achieve desirable results. Limited datasets hinder the effectiveness of DL algorithms. Strategies to address this issue include pooling data from multiple healthcare centers, with strict adherence to patient confidentiality guidelines.

• Reliance on private datasets: Many studies utilize confidential private datasets, making it difficult to compare model efficiency across different studies [55]. Although the private dataset allowed for the development of a strong segmentation model, it is specific to a single institution. This resulted in problems such as compatibility with other methods and generalizability. In addition to its dependency on a private dataset, the model's performance could not be assessed across a wide variety of patient populations, nor could its effectiveness be compared to models trained on other datasets. It was difficult to assess the model's generalizability to broader populations, alternative imaging techniques, or other clinical contexts because there was no benchmarking against publicly available datasets.

• Limitations of data augmentation: Some studies use data augmentation to expand their datasets artificially. However, this method has limitations, as it does not provide significant additional information compared to new independent images.

• Lack of completely labeled datasets: DL-based CAD systems often struggle with insufficient completely labeled data. Supervised DL approaches require extensive, accurately labeled data, which is challenging and time-consuming to acquire. Unsupervised techniques have been explored but tend to yield less reliable results.

• Class imbalance: The issue of class imbalance can skew results in favor of the majority class, necessitating the development of more diverse and representative datasets.

• Confidentiality concerns: Protecting sensitive medical information is crucial. Collaborative and autonomous training of CNNs without patient data disclosure is an emerging area of research. Integrating non-imaging data, such as cancer history and genetic information, with imaging data is an ongoing challenge.

• Label noise: BC can affect different regions of the same breast differently, leading to varying stages of cancer in a single image. This multi-classification scenario can pose challenges for DL models.

• Lack of transparency: DL algorithms are often considered “black boxes,” making decision-making processes less transparent. To enhance trust and reliability in DL tools, interpretable methodologies and explanations of DL algorithms need to be adopted.

• Potential use of omics data: Exploring omics data (proteomics, transcriptomics, genomics, etc.) as an alternative to imaging data may lead to improved classification accuracy. However, processing omics data is costlier than working with images, and comprehensive omics datasets are less readily available.

It is crucial to address these challenges, thereby advancing the capabilities and reliability of DL-based approaches for BC diagnosis and management.

The promise of better performance is finally beginning to emerge as a result of recent developments in machine learning and AI. At present, over 20 breast imaging applications using AI have been approved by the Food and Drug Administration (FDA); nevertheless, overall acceptance and use remain low and highly variable [56]. Because of its special characteristics, breast imaging presents both opportunities and difficulties for the development and application of AI. On a global scale, screening mammography is the main tool used in BC screening programs, as mentioned earlier, to lower the disease's morbidity and mortality. In fact, several of the most innovative research initiatives and AI applications already in use are centered around cancer detection in mammography. However, there are several other possible uses for AI in breast imaging, such as workflow and the triage phase, evaluation of performance, quantification of breast density, risk assessment, and decision assistance.

A significant number of mammograms are conducted each year as a result of population-based screening initiatives; in the United States alone, about 40 million mammograms are performed annually. Optimizing screening mammography performance is therefore crucial for BC screening programs, given the high volume of examinations involved. The FDA in the US strictly regulates this through the Mammography Quality Standards Act (MQSA), with a current focus on the Enhancing Quality Using the Inspection Program (EQUIP) procedure, which was started in 2017. These procedures have made screening mammography conducted in the US more consistent and of high quality. Nevertheless, there is opportunity for improvement in the screening mammography performance indicators despite these efforts. This need is illustrated by the 86.9% sensitivity and 88.9% specificity for screening mammography reported in a performance assessment conducted by the BC Surveillance Consortium. Of the radiologists analyzed, nearly half had concerning abnormal interpretation (false-positive) rates, which highlights areas for improvement.

The FDA approved CAD for mammography in 1998, and by 2002 the Centers for Medicare and Medicaid Services had begun reimbursing it; by 2008, CAD was used in about 74% of mammograms. In the field of breast imaging and AI applications, much work and enthusiasm have been devoted to cancer detection, particularly in screening mammography, as shown in Table 1. AI applications in breast imaging face challenges such as unreliable performance, high costs, and IT infrastructure requirements, and confidence among radiologists, patients, and providers remains low. Recent studies report a decline in AI algorithm performance when algorithms are tested on new or updated systems, raising concerns about their generalizability. Adoption is further hindered by the need for ongoing performance monitoring, a lack of reimbursement, and limited understanding among radiologists of how they should interact with AI tools. Additional concerns include bias and the potential impairment of clinical judgement through overreliance on AI.

High-level declarations on AI’s ethical, legal, and social implications (ELSI) have recently been released by governmental, international, professional, and private-sector organizations. These statements express excitement about AI’s potential benefits as well as concerns about potential risks and downsides. AI systems require high-quality data for training and validation, which raises questions of data ownership, consent, and privacy; the involvement of major technology firms and BC-focused start-ups exacerbates these challenges. Certain governments have released healthcare data to developers. For instance, without obtaining individual patient consent, the Italian government gave IBM Watson access to the anonymized health records of all 61 million Italians, including genetic data, together with exclusive use rights [57]. Such disclosures raise serious questions about what public or commercial value ought to be exchanged for granting private companies access to such rich data collections. The healthcare industry’s strict medicolegal and ethical standards may even lead to bans on non-explainable AI. Access to high-quality data is crucial for AI development, but it carries risks of data breaches and harm, and public support for AI use depends on privacy, control, governance, and the public good.

Table 1. Summary of FDA-approved AI applications for BC detection

Product Name           | Vendor                   | Country of Origin | Modality
cmAssist®              | CureMetrix               | United States     | Mammography
ProFound AI®           | iCAD, Inc.               | United States     | Mammography and tomosynthesis
Lunit INSIGHT MMG      | Lunit                    | South Korea       | Mammography
MammoScreen® 2.0       | Therapixel               | France            | Mammography and tomosynthesis
Genius AI™ Detection   | Hologic®, Inc.           | United States     | Mammography and tomosynthesis
Transpara®             | ScreenPoint Medical B.V. | Netherlands       | Mammography and tomosynthesis
Saige-Dx™              | DeepHealth, Inc.         | United States     | Mammography

7. Conclusion

In conclusion, the comprehensive analysis of segmentation methodologies presented in this paper underscores their crucial role in BC detection and diagnosis within medical imaging. Classical techniques such as region-growing and threshold-based segmentation offer robust performance, while supervised and unsupervised machine learning approaches demonstrate promising results in automated tumor identification. DL methods, particularly U-Net models, exhibit remarkable efficacy, especially in scenarios with limited annotated data, presenting opportunities for enhanced accuracy in segmentation tasks. Despite these advances, challenges such as the limited availability of comprehensive datasets, class imbalance, and confidentiality concerns persist, highlighting the need for continued research and innovation. Addressing these challenges is paramount to harnessing the full potential of segmentation techniques for improving early detection, treatment planning, and patient outcomes in BC care. Overall, this review underscores the importance of segmentation methodologies in facilitating precise and efficient analysis of medical images, ultimately contributing to advances in BC diagnosis and management.
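Segmentation accuracy in the work summarised above is typically reported with overlap metrics such as the Dice coefficient; the following minimal NumPy sketch, not taken from any particular study in this review, shows how it is computed for binary masks.

# Dice coefficient for two binary masks (illustrative only).
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|A∩B| / (|A| + |B|); eps guards against two empty masks.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two partially overlapping 4x4 masks.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[2:4, 1:3] = True
print(dice_coefficient(a, b))  # ≈ 0.5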

Beyond summarizing the current state of the field, this review offers ideas for overcoming obstacles, prospective solutions, and forward-looking insights into future directions. It serves as a roadmap for researchers and practitioners, pointing to areas that need further investigation and providing guidance to advance BC segmentation techniques in both clinical and research settings.

Further studies could concentrate on merging data from several imaging modalities, such as mammography, ultrasound, MRI, and potentially molecular imaging; multimodal techniques can yield a more thorough and precise assessment of BC. Interpretable and explainable AI models are also essential for clinical use, so subsequent work could focus on improving the transparency of DL models so that physicians can understand and trust their decision-making. Finally, AI systems may evolve to offer more individualized risk assessments based on patient-specific data, such as genetics, lifestyle, and other relevant health information, supporting more personalized screening and treatment regimens.

Data Availability

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References
1.
C. D. Lehman, R. D. Wellman, D. S. M. Buist, K. Kerlikowske, A. N. A. Tosteson, and D. L. Miglioretti, “Diagnostic accuracy of digital screening mammography with and without computer-aided detection,” JAMA Intern. Med., vol. 175, no. 11, pp. 1828–1837, 2015. [Google Scholar] [Crossref]
2.
J. J. Fenton, L. Abraham, S. H. Taplin, B. M. Geller, P. A. Carney, C. D’Orsi, J. G. Elmore, and W. E. Barlow, “Effectiveness of computer-aided detection in community mammography practice,” J. Natl. Cancer Inst., vol. 103, no. 15, pp. 1152–1161, 2011. [Google Scholar] [Crossref]
3.
J. D. Keen, J. M. Keen, and J. E. Keen, “Utilization of computer-aided detection for digital screening mammography in the United States, 2008 to 2016,” J. Am. Coll. Radiol., vol. 15, no. 1, pp. 44–48, 2018. [Google Scholar] [Crossref]
4.
American Cancer Society, “Mammography quality standards act and program,” 2023. https://www.fda.gov/radiation-emitting-products/mammography-quality-standards-act-and-program [Google Scholar]
5.
S. P. Zuckerman, B. L. Sprague, D. L. Weaver, S. D. Herschorn, and E. F. Conant, “Survey results regarding uptake and impact of synthetic digital mammography with tomosynthesis in the screening setting,” J. Am. Coll. Radiol., vol. 17, no. 1, pp. 31–37, 2020. [Google Scholar] [Crossref]
6.
American Cancer Society, “Breast cancer statistics-How common is breast cancer,” 2023. https://www.cancer.org/cancer/types/breast-cancer/about/how-common-is-breast-cancer.html [Google Scholar]
7.
Food and Drug Administration, “MQSA national statistics,” 2023. https://www.fda.gov/radiation-emitting-products/mqsa-insights/mqsa-national-statistics [Google Scholar]
8.
B. Abhisheka, S. K. Biswas, and B. Purkayastha, “A comprehensive review on breast cancer detection, classification and segmentation using deep learning,” Arch. Comput. Methods Eng., vol. 30, pp. 5023–5052, 2023. [Google Scholar] [Crossref]
9.
C. R. Taylor, N. Monga, C. Johnson, J. R. Hawley, and M. Patel, “Artificial intelligence applications in breast imaging: Current status and future directions,” Diagnostics, vol. 13, no. 12, p. 2041, 2023. [Google Scholar] [Crossref]
10.
J. J. J. Condon, L. Oakden-Rayner, K. A. Hall, M. Reintals, A. Holmes, G. Carneiro, and L. J. Palmer, “Replication of an open-access deep learning system for screening mammography: Reduced performance mitigated by retraining on local data,” medRxiv 2021.05.28.21257892, 2021. [Google Scholar] [Crossref]
11.
W. Hsu, D. S. Hippe, N. Nakhaei, P. C. Wang, B. Zhu, N. Siu, M. E. Ahsen, W. Lotter, A. G. Sorensen, A. Naeim, D. S. M. Buist, T. Schaffter, J. Guinney, J. G. Elmore, and C. I. Lee, “External validation of an ensemble model for automated mammography interpretation by artificial intelligence,” JAMA Netw. Open, vol. 5, no. 11, p. e2242343, 2022. [Google Scholar] [Crossref]
12.
A. D. Lauritzen, A. Rodríguez-Ruiz, M. C. von Euler-Chelpin, E. Lynge, I. Vejborg, M. Nielsen, N. Karssemeijer, and M. Lillholm, “An artificial intelligence–based mammography screening protocol for breast cancer: Outcome and radiologist workload,” Radiology, vol. 304, no. 1, pp. 41–49, 2022. [Google Scholar] [Crossref]
13.
K. Dembrower, E. Wåhlin, Y. Liu, M. Salim, K. Smith, P. Lindholm, M. Eklund, and F. Strand, “Effect of artificial intelligence-based triaging of breast cancer screening mammograms on cancer detection and radiologist workload: A retrospective simulation study,” Lancet Digit. Health, vol. 2, no. 9, pp. e468–e474, 2020. [Google Scholar] [Crossref]
14.
E. F. Conant, A. Y. Toledano, S. Periaswamy, S. V. Fotin, J. Go, J. E. Boatsman, and J. W. Hoffmeister, “Improving accuracy and efficiency with concurrent use of artificial intelligence for digital breast tomosynthesis,” Radiol. Artif. Intell., vol. 1, no. 4, p. e180096, 2019. [Google Scholar] [Crossref]
15.
A. Singh, A. Dhillon, N. Kumar, M. S. Hossain, G. Muhammad, and M. Kumar, “eDiaPredict: An ensemble-based framework for diabetes prediction,” ACM Trans. Multimed. Comput. Commun. Appl., vol. 17, no. 2s, pp. 1–26, 2021. [Google Scholar] [Crossref]
16.
X. F. Qi, F. S. Yi, L. Zhang, Y. Chen, Y. Pi, Y. Y. Chen, J. X. Guo, J. Y. Wang, Q. Guo, J. L. Li, Y. Chen, Q. Lv, and Z. Yi, “Computer-aided diagnosis of breast cancer in ultrasonography images by deep learning,” Neurocomputing, vol. 472, pp. 152–165, 2022. [Google Scholar] [Crossref]
17.
J. L. Thompson and G. P. Wright, “The role of breast MRI in newly diagnosed breast cancer: An evidence-based review,” Am. J. Surg., vol. 221, no. 3, pp. 525–528, 2021. [Google Scholar] [Crossref]
18.
G. Piantadosi, M. Sansone, R. Fusco, and C. Sansone, “Multi-planar 3D breast segmentation in MRI via deep convolutional neural networks,” Artif. Intell. Med., vol. 103, p. 101781, 2020. [Google Scholar] [Crossref]
19.
C. R. Saccarelli, A. G. V. Bitencourt, and E. A. Morris, “Breast cancer screening in high-risk women: Is MRI alone enough?,” J. Natl. Cancer Inst., vol. 112, no. 2, pp. 121–122, 2019. [Google Scholar] [Crossref]
20.
E. Desperito, L. Schwartz, K. M. Capaccione, B. T. Collins, S. Jamabawalikar, B. Peng, R. Patrizio, and M. M. Salvatore, “Chest CT for breast cancer diagnosis,” Life (Basel), vol. 12, no. 11, p. 1699, 2022. [Google Scholar] [Crossref]
21.
E. Nicolas, N. Khalifa, C. Laporte, S. Bouhroum, and Y. Kirova, “Safety margins for the delineation of the left anterior descending artery in patients treated for breast cancer,” Int. J. Radiat. Oncol. Biol. Phys., vol. 109, no. 1, pp. 267–272, 2021. [Google Scholar] [Crossref]
22.
J. Koh, Y. Y. Yoon, S. Kim, K. Han, and E. Kim, “Deep learning for the detection of breast cancers on chest computed tomography,” Clin. Breast Cancer, vol. 22, no. 1, pp. 26–31, 2022. [Google Scholar] [Crossref]
23.
M. Benjelloun, M. E. Adoui, M. A. Larhmam, and S. A. Mahmoudi, “Automated breast tumor segmentation in DCE-MRI using deep learning,” in 2018 4th International Conference on Cloud Computing Technologies and Applications (Cloudtech), Brussels, Belgium, 2018, pp. 1–6. [Google Scholar] [Crossref]
24.
R. Thawani, L. Gao, A. Mohinani, A. Tudorica, X. Li, Z. Mitri, and W. Huang, “Quantitative DCE-MRI prediction of breast cancer recurrence following neoadjuvant chemotherapy: A preliminary study,” BMC Med Imaging, vol. 22, no. 1, p. 182, 2022. [Google Scholar] [Crossref]
25.
T. F. Majeed, N. Al-Jawad, and H. Sellahewa, “Breast border extraction and pectoral muscle removal in MLO mammogram images,” in 2013 5th Computer Science and Electronic Engineering Conference (CEEC), Colchester, UK, 2013, pp. 119–124. [Google Scholar] [Crossref]
26.
R. L. Birdwell, D. M. Ikeda, K. F. O’Shaughnessy, and E. A. Sickles, “Mammographic characteristics of 115 missed cancers later detected with screening mammography and the potential utility of computer-aided detection,” Radiology, vol. 219, no. 1, pp. 192–202, 2001. [Google Scholar] [Crossref]
27.
N. Roussel, J. Sprenger, S. J. Tappan, and J. R. Glaser, “Robust tracking and quantification of C. elegans body shape and locomotion through coiling, entanglement, and omega bends,” Worm, vol. 3, no. 4, p. e982437, 2015. [Google Scholar] [Crossref]
28.
J. Jebamony and D. Jacob, “Classification of benign and malignant breast masses on mammograms for large datasets using core vector machines,” Curr. Med. Imaging, vol. 16, no. 6, pp. 703–710, 2020. [Google Scholar] [Crossref]
29.
H. Abdellatif, T. E. Taha, O. F. Zahran, W. Al-Nauimy, and F. E. Abd El-Samie, “K9. Automatic segmentation of digital mammograms to detect masses,” in 2013 30th National Radio Science Conference (NRSC), Cairo, Egypt, 2013, pp. 557–565. [Google Scholar] [Crossref]
30.
A. K. Chaubey, “Breast segmentation in mammograms using manual thresholding,” World J. Res. Rev., vol. 1, no. 1, p. 262995, 2015. [Google Scholar]
31.
S. J. S. Gardezi, A. Elazab, B. Y. Lei, and T. F. Wang, “Breast cancer detection and diagnosis using mammographic data: Systematic review,” J. Med. Internet Res., vol. 21, no. 7, p. e14464, 2019. [Google Scholar] [Crossref]
32.
S. H. Gu, Y. Ji, Y. J. Chen, J. Wang, and J. U. Kim, “Study on breast mass segmentation in mammograms,” in 2015 3rd International Conference on Computer, Information and Application, Yeosu, Korea (South), 2015, pp. 22–25. [Google Scholar] [Crossref]
33.
D. Ribli, A. Horvath, Z. Unger, P. Pollner, and I. Csabai, “Detecting and classifying lesions in mammograms with deep learning,” Sci. Rep., vol. 8, no. 1, 2018. [Google Scholar] [Crossref]
34.
D. L. Pham, C. Xu, and J. L. Prince, “Current methods in medical image segmentation,” Annu. Rev. Biomed. Eng., vol. 2, pp. 315–337, 2000. [Google Scholar] [Crossref]
35.
E. Michael, H. J. Ma, H. Li, F. Kulwa, and J. Li, “Breast cancer segmentation methods: Current status and future potentials,” BioMed. Res. Int., vol. 2021, p. e9962109, 2021. [Google Scholar] [Crossref]
36.
S. Dehghani and M. A. Dezfooli, “A method for improve preprocessing images mammography,” Int. J. Inf. Educ. Technol., vol. 1, no. 1, p. 90, 2011. [Google Scholar]
37.
K. Loizidou, G. Skouroumouni, C. Nikolaou, and C. Pitris, “Automatic breast mass segmentation and classification using subtraction of temporally sequential digital mammograms,” IEEE J. Transl. Eng. Health. Med., vol. 10, pp. 1–11, 2022. [Google Scholar] [Crossref]
38.
S. S. Mohamed, G. Behiels, and P. Dewaele, “Mass candidate detection and segmentation in digitized mammograms,” in 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), Toronto, ON, Canada, 2009, pp. 557–562. [Google Scholar] [Crossref]
39.
S. H. Gu, Y. Chen, F. Q. Sheng, T. M. Zhan, and Y. J. Chen, “A novel method for breast mass segmentation: From superpixel to subpixel segmentation,” Mach. Vis. Appl., vol. 30, no. 7, pp. 1111–1122, 2019. [Google Scholar] [Crossref]
40.
N. Ramadijanti, A. R. Barakbah, and F. A. Husna, “Automatic breast tumor segmentation using hierarchical K-means on mammogram,” in 2018 International Electronics Symposium on Knowledge Creation and Intelligent Computing (IES-KCIC), Bali, Indonesia, 2018, pp. 170–175. [Google Scholar] [Crossref]
41.
B. Senthilkumar and G. Umamaheswari, “Combination of novel enhancement technique and fuzzy C means clustering technique in breast cancer detection,” Biomed. Res., vol. 24, no. 2, 2013. [Google Scholar]
42.
N. Dhungel, G. Carneiro, and A. P. Bradley, “Deep structured learning for mass segmentation from mammograms,” arXiv preprint arXiv:1410.7454, 2014. [Google Scholar] [Crossref]
43.
A. Oliver, M. Tortajada, X. Lladó, J. Freixenet, S. Ganau, L. Tortajada, M. Vilagran, M. Sentís, and R. Martí, “Breast density analysis using an automatic density segmentation algorithm,” J. Digit. Imaging, vol. 28, no. 5, pp. 604–612, 2015. [Google Scholar] [Crossref]
44.
T. T. Wirtti and E. O. T. Salles, “Segmentation of masses in digital mammograms,” in ISSNIP Biosignals and Biorobotics Conference, Vitoria, Brazil, 2011, pp. 1–7. [Google Scholar] [Crossref]
45.
T. Y. Shen, C. Gou, J. G. Wang, and F. Y. Wang, “Simultaneous segmentation and classification of mass region from mammograms using a mixed-supervision guided deep model,” IEEE Signal Process. Lett., vol. 27, pp. 196–200, 2020. [Google Scholar] [Crossref]
46.
N. Saffari, H. A. Rashwan, M. Abdel-Nasser, V. K. Singh, M. Arenas, E. Mangina, B. Herrera, and D. Puig, “Fully automated breast density segmentation and classification using deep learning,” Diagnostics (Basel), vol. 10, no. 11, p. 988, 2020. [Google Scholar] [Crossref]
47.
Q. Y. Hu, H. M. Whitney, H. Li, Y. Ji, P. F. Liu, and M. L. Giger, “Improved classification of benign and malignant breast lesions using deep feature maximum intensity projection MRI in breast cancer diagnosis using dynamic contrast-enhanced MRI,” Radiol. Artif. Intell., vol. 3, no. 3, p. e200159, 2021. [Google Scholar] [Crossref]
48.
K. K. Dewangan, D. K. Dewangan, S. P. Sahu, and R. Janghel, “Breast cancer diagnosis in an early stage using novel deep learning with hybrid optimization technique,” Multimed. Tools Appl., vol. 81, no. 10, pp. 13935–13960, 2022. [Google Scholar] [Crossref]
49.
L. Tsochatzidis, P. Koutla, L. Costaridou, and I. Pratikakis, “Integrating segmentation information into CNN for breast cancer diagnosis of mammographic masses,” Comput. Methods Programs Biomed., vol. 200, p. 105913, 2021. [Google Scholar] [Crossref]
50.
D. Abdelhafiz, S. Nabavi, R. Ammar, C. Yang, and J. Bi, “Residual deep learning system for mass segmentation and classification in mammography,” in Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, Niagara Falls, USA, 2019, pp. 475–484. [Google Scholar] [Crossref]
51.
M. S. Hossain, “Microcalcification segmentation using modified U-net segmentation network from mammogram images,” J. King Saud Univ. Comput. Inf. Sci., vol. 34, no. 2, pp. 86–94, 2022. [Google Scholar] [Crossref]
52.
H. Sun, C. Li, B. Q. Liu, Z. Y. Liu, M. Y. Wang, H. R. Zheng, D. D. Feng, and S. S. Wang, “AUNet: Attention-guided dense-upsampling networks for breast mass segmentation in whole mammograms,” Phys. Med. Biol., vol. 65, no. 5, p. 055005, 2020. [Google Scholar] [Crossref]
53.
J. E. Ball, T. W. Butler, and L. M. Bruce, “Towards automated segmentation and classification of masses in mammograms,” in The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 2004, pp. 1814–1817. [Google Scholar] [Crossref]
54.
K. C. Zhou, W. Li, and D. Z. Zhao, “Deep learning-based breast region extraction of mammographic images combining pre-processing methods and semantic segmentation supported by Deeplab v3+,” Technol. Health Care, vol. 30, no. S1, pp. 173–190, 2022. [Google Scholar] [Crossref]
55.
V. K. Singh, H. A. Rashwan, S. Romani, F. Akram, N. Pandey, M. M. K. Sarker, A. Saleh, M. Arenas, M. Arquez, D. Puig, and J. Torrents-Barrena, “Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network,” arXiv preprint arXiv:1809.01687, 2018. [Google Scholar] [Crossref]
56.
C. R. Taylor, N. Monga, C. Johnson, J. R. Hawley, and M. Patel, “Artificial intelligence applications in breast imaging: Current status and future directions,” Diagnostics, vol. 13, no. 12, p. 2041, 2023. [Google Scholar] [Crossref]
57.
S. M. Carter, W. Rogers, K. T. Win, H. Frazer, B. Richards, and N. Houssami, “The ethical, legal and social implications of using artificial intelligence systems in breast cancer care,” The Breast, vol. 49, pp. 25–32, 2020. [Google Scholar] [Crossref]

©2024 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.