Open Access
Research article

Selective Image Segmentation Through Fuzzy Einstein–Dombi Operators and Level Set Energy Minimization

Uzair Ahmad*
Department of Mathematics, Abdul Wali Khan University, 23200 Mardan, Pakistan
International Journal of Knowledge and Innovation Studies | Volume 3, Issue 2, 2025 | Pages 74-88
Received: 03-31-2025 | Revised: 05-15-2025 | Accepted: 05-24-2025 | Available online: 05-29-2025

Abstract:

Accurate selective image segmentation continues to pose substantial challenges, particularly under conditions of noise interference, intensity inhomogeneity, and irregular object boundaries. To address these complexities, a novel framework is introduced that integrates fuzzy Einstein–Dombi (ED) operators with level set energy minimization, guided by marker-based initialization. The proposed approach departs from traditional intensity-driven models by jointly incorporating intensity, texture, and gradient-based features, thereby facilitating improved boundary delineation and enhanced regional homogeneity. A spatially adaptive regularization term has been embedded within the level set formulation to reinforce contour stability and robustness in the presence of artefacts and signal degradation. The fuzzy ED operators enable nuanced fusion of multiple features through non-linear aggregation, yielding a more expressive and resilient energy functional. In contrast to conventional segmentation schemes, the developed method achieves superior convergence and delineation accuracy, particularly within complex grayscale and noisy medical image datasets. Experimental validation has been conducted across a range of imaging conditions, with performance quantitatively assessed using established metrics, including segmentation accuracy (0.95), intersection over union (IoU: 0.89), and Dice similarity coefficient (DSC: 0.94). These results demonstrate statistically significant improvements over comparative models. Additionally, qualitative evaluations reveal enhanced contour fidelity and resistance to local intensity fluctuations. The methodological simplicity and computational efficiency of the framework render it highly suitable for real-time applications in medical imaging diagnostics, object detection, and related image analysis tasks. By offering a robust, interpretable, and generalizable solution, this work establishes a new reference point for selective image segmentation under non-ideal conditions, and paves the way for further exploration of fuzzy operator integration within variational segmentation paradigms.
Keywords: Image processing, Selective segmentation, Fuzzy set theory, Einstein–Dombi (ED) operators, Level set evolution, Medical image analysis

1. Introduction

Image segmentation is a fundamental task in computer vision, playing a pivotal role in applications such as object detection, medical imaging, autonomous driving, and content-based image analysis [1-3]. The process involves partitioning an image into meaningful and coherent regions based on attributes like color, intensity, texture, and shape. This step is crucial for isolating relevant features or objects, thereby enabling more advanced image analysis. However, achieving accurate and reliable segmentation in real-world scenarios is challenging due to factors such as complex backgrounds, varying lighting conditions, overlapping objects, and noise [4]. In critical domains like medical imaging, these challenges directly impact clinical outcomes. For example, noise and intensity inhomogeneity can obscure tumor boundaries in MRI scans or CT images, leading to inaccurate diagnoses and treatment planning. Similarly, in remote sensing, intensity variations can result in misclassification of land cover, affecting environmental monitoring and urban planning.

Conventional segmentation techniques, including thresholding, edge detection, and clustering, rely on deterministic criteria, which often make them susceptible to noise and ineffective in handling smooth intensity transitions [5-7]. For instance, edge detection methods excel at identifying regions with high gradients but perform poorly in noisy environments or when edges are blurred. Clustering-based approaches, such as k-means, divide an image into k clusters by minimizing intra-cluster variance. However, they are prone to local minima and are highly sensitive to initialization. While these methods are simple, they struggle to address the complexities of real-world images, where boundaries between regions are often gradual rather than sharp. In medical scenarios, this limitation may result in under-segmentation of organs or over-segmentation of healthy tissues, both of which compromise the reliability of automated diagnostic tools.

To overcome these limitations, more sophisticated techniques have been developed. Region-growing algorithms [4-8] and graph-based methods [9-10] incorporate spatial information to improve segmentation. Region-growing approaches start from seed points and expand regions based on homogeneity criteria, but their performance heavily depends on accurate seed placement and can be adversely affected by noise. Graph-based techniques, such as graph cuts, partition images by minimizing a global energy function. Although effective, these methods often require extensive parameter tuning and significant computational resources, which can hinder their practical use. Such computational demands may not be suitable for time-sensitive medical applications, such as intraoperative image analysis, where fast and precise segmentation is required.

The rise of machine learning has introduced supervised and unsupervised learning techniques for image segmentation. Convolutional neural networks (CNNs) have achieved state-of-the-art results in semantic segmentation by learning hierarchical features from data [11-12]. Architectures like fully convolutional networks (FCNs) and U-Net have shown remarkable success in fields such as medical imaging and autonomous driving. However, these methods depend heavily on large annotated datasets and substantial computational power, which may not always be available [13]. Additionally, the lack of interpretability in deep learning models poses a challenge in critical applications like medical imaging, where understanding the decision-making process is essential [14]. This lack of transparency can be problematic for radiologists and clinicians, who need to validate and trust the automated segmentation results before making medical decisions.

In response to these challenges, fuzzy logic-based segmentation has emerged as a promising approach for handling uncertainty and imprecision in image data [15-16]. Fuzzy logic provides a mathematical framework to model gradual transitions between regions in an image. Unlike traditional methods that use crisp thresholds, fuzzy segmentation assigns a degree of membership to each pixel, indicating its likelihood of belonging to a specific region. For example, the fuzzy c-means (FCM) algorithm clusters pixels based on intensity while considering spatial coherence [17]. However, FCM is sensitive to initialization and noise, prompting the development of robust variants like spatially constrained FCM.

Membership functions are central to fuzzy logic and play a key role in defining the relationship between pixel features and their corresponding regions. For instance, Gaussian membership functions model the degree of belongingness of a pixel $p(x, y)$ based on its intensity $I(x, y)$:

$\mu(I(x, y))=\exp \left(-\frac{(I(x, y)-c)^2}{2 \sigma^2}\right),$
(1)

where \(c\) represents the center of the intensity range, and \(\sigma\) controls the spread [18]. By adjusting these parameters, fuzzy membership functions can adapt to diverse image characteristics, making them well-suited for complex segmentation tasks. This adaptability is particularly valuable in medical images where soft tissues exhibit gradual intensity transitions that are difficult to segment with crisp threshold-based methods.
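For illustration, Eq. (1) can be evaluated pixel-wise over an image array. The following NumPy sketch is only an example; the image, the center \(c = 0.7\), and the spread \(\sigma = 0.1\) are illustrative choices, not values from this work:

```python
import numpy as np

def gaussian_membership(image, c, sigma):
    """Degree of membership of each pixel intensity I(x, y) to a region whose
    intensities are centered at c with spread sigma, as in Eq. (1)."""
    image = np.asarray(image, dtype=float)
    return np.exp(-((image - c) ** 2) / (2.0 * sigma ** 2))

# Example: membership of pixels in a grayscale image (values in [0, 1]) to a
# hypothetical "bright tissue" class centered at 0.7.
# mu_bright = gaussian_membership(img, c=0.7, sigma=0.1)
```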

Recent advancements in fuzzy logic-based segmentation include hybrid models that combine fuzzy logic with other computational techniques. For example, integrating fuzzy logic with genetic algorithms and swarm intelligence has been explored to optimize membership functions and segmentation parameters. Additionally, fuzzy-rule-based systems have been developed to incorporate domain knowledge into the segmentation process, improving both accuracy and interpretability [19-20]. Such hybrid approaches have shown promise in segmenting organs, lesions, and other anatomical structures in MRI and CT scans, where both accuracy and explainability are crucial.

In medical imaging, fuzzy logic has been successfully applied to segment structures like brain tissues and lesions, where intensity variations are subtle and boundaries are unclear. For instance, fuzzy region-growing techniques have been used to identify tumors in MRI images by leveraging both intensity and texture features [17]. In remote sensing, fuzzy logic-based methods have been employed to classify land cover types, addressing challenges posed by spectral similarities between classes. These real-world successes demonstrate the robustness of fuzzy approaches in scenarios where uncertainty and noise dominate, further motivating the development of enhanced fuzzy-based segmentation models.

Among the various segmentation tasks, selective segmentation focuses on extracting specific objects or regions while ignoring irrelevant background information. This task is particularly challenging in complex images where the target object may have weak boundaries, overlap with other objects, or be embedded in noisy backgrounds. Traditional segmentation methods, such as thresholding, edge detection, and region-growing, often struggle with these challenges. For example, thresholding methods may fail when the intensity distribution of the target object overlaps with the background, while edge detection methods may produce fragmented boundaries in noisy images. More advanced techniques, such as active contours and level set methods, have shown promise but still face limitations in handling heterogeneous regions and weak boundaries.

To address these challenges, we propose a novel image selective segmentation model that integrates ED operators, marker points, and level set methods. The proposed model leverages the flexibility of ED operators to combine multiple image features (e.g., intensity, texture, and gradient) into a unified representation, enabling accurate segmentation of complex regions. Marker points provide prior knowledge about the location of the target object, guiding the segmentation process and reducing the influence of background noise. Level set methods, on the other hand, allow the contour to adapt to complex shapes and topological changes, ensuring robust boundary detection.

The proposed model overcomes several limitations of existing segmentation methods. First, the use of ED operators provides a flexible framework for combining multiple features, addressing the challenge of heterogeneous regions. Unlike traditional methods that rely on a single feature (e.g., intensity or gradient), the proposed model integrates intensity, texture, and gradient information, leading to better region homogeneity and more accurate segmentation. Second, the incorporation of marker points ensures robustness to noise and weak boundaries. By providing prior knowledge about the target object, marker points guide the contour evolution process, reducing the risk of mis-segmentation due to background clutter or weak edges. Finally, the level set framework allows the contour to handle complex shapes and topological changes, overcoming the limitations of rigid models that assume simple geometries.

The proposed model integrates three fundamental components, each playing a crucial role in enhancing the accuracy and robustness of the segmentation process. These components include ED Operators, Marker Points, and Level Set Methods, which collectively contribute to an improved segmentation framework by effectively handling intensity variations, noise, and complex object boundaries.

• ED Operators

– These are fuzzy logic-based aggregation functions that combine multiple image features such as intensity, texture, and gradient in a nonlinear and flexible manner.

– They provide an effective mechanism for handling uncertainty and imprecision in image data.

– Einstein Product and Sum: Used to fuse multiple features while preserving essential details and suppressing noise.

– Dombi Operator: Controls the trade-off between different features by adjusting parameters, enabling adaptive fusion based on local image characteristics.

– Enhancement of Region Homogeneity: Ensures that similar regions are grouped effectively while maintaining clear object boundaries, leading to better region separation and contrast enhancement.

• Marker Points

– These serve as guiding cues for the segmentation process and can be manually selected or automatically detected.

– Providing Prior Knowledge: Offers initial information about the target object’s location, reducing ambiguity and improving boundary delineation.

– Facilitating Convergence: Helps in faster convergence of the segmentation process, reducing computational complexity.

• Level Set Method

– A geometric framework that evolves contours to capture object boundaries accurately.

– Implicit Representation: The contour is represented as a zero level of a higher dimensional function, allowing for smooth and topologically adaptable boundary evolution.

– Robustness to Noise and Occlusions: Can handle partial occlusions, intensity inhomogeneities, and complex shapes without requiring explicit contour initialization.

– Energy-Based Evolution: Driven by an energy functional that incorporates edge-based, region-based, and prior shape constraints, allowing for fine detail capture while maintaining global consistency.

In summary, the proposed model offers a robust and flexible solution for image selective segmentation, addressing the limitations of existing methods in handling complex images with heterogeneous regions, weak boundaries, and noise. By combining the strengths of ED operators, marker points, and level set methods, the proposed model achieves superior segmentation accuracy and robustness, making it suitable for a wide range of applications in computer vision and image analysis.

2. Literature Review

Mohamed et al. [21] proposed the Total Variation Selective Segmentation (TVSS)-based Active Contour Model (TV-SSM), which integrates a Total Variation (TV) regularizer, a distance function, and local image fitting energy to enhance segmentation performance, particularly for medical images with inhomogeneous intensity. Their approach effectively addresses the limitations of traditional Active Contour Models (ACMs) by ensuring better edge preservation and reduced sensitivity to noise through the incorporation of the TV regularizer. The distance function aids in refining the segmentation boundary by adapting to object contours, while the local image fitting energy improves the model’s adaptability to varying intensity levels within medical images. The energy functional of the TV-SSM model is defined as:

$\min _\phi E(\phi) =\nu \int_{\Omega}\left(\delta(\phi)|\nabla \phi|+\frac{1}{2}\left(I-\left(n_1 H(\phi)+n_2(1-H(\phi))\right)\right)^2\right) d x +\theta \int_{\Omega} P_{\partial}\, H(\phi)\, d x,$
(2)

where, \( n_1(x,y) \) and \( n_2(x,y) \) are formulated as:

$n_1(x, y)=k_\sigma * \frac{H(\phi) I}{k_\sigma * H(\phi)},$
(3)
$n_2(x, y)=k_\sigma * \frac{(1-H(\phi)) I}{k_\sigma *(1-H(\phi))}.$
(4)

Here, \( k_{\sigma} \) represents a Gaussian kernel with a standard deviation \( \sigma \). The parameter \( \theta \) serves to regulate the contour evolution, preventing it from deviating significantly from the targeted object. In general, a lower value of \( \theta \) is preferred when the target object exhibits clear contrast with the background, allowing better segmentation. On the other hand, the total variation (TV) term, represented by the first integral, plays a crucial role in smoothing the segmentation boundary. The regularization strength is controlled by \( \nu \), which can be set higher for images containing significant noise to ensure a stable contour evolution.
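For readers implementing Eqs. (3) and (4), each convolution with \( k_\sigma \) reduces to Gaussian filtering of a masked image. A minimal NumPy/SciPy sketch is given below; the function name and the small \( \epsilon \) guard against division by zero are our additions, not part of the TV-SSM formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_fitting_means(I, H_phi, sigma, eps=1e-8):
    """Local fitting functions n1, n2 of Eqs. (3)-(4): Gaussian-weighted means of
    I inside (H(phi)) and outside (1 - H(phi)) the contour."""
    n1 = gaussian_filter(H_phi * I, sigma) / (gaussian_filter(H_phi, sigma) + eps)
    n2 = gaussian_filter((1.0 - H_phi) * I, sigma) / (gaussian_filter(1.0 - H_phi, sigma) + eps)
    return n1, n2
```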

However, this method has certain limitations, including the staircasing effect introduced by the TV term in smoother regions, which may affect the segmentation of fine structures. Additionally, the model requires parameter tuning, which can be challenging for different types of medical images. While TV-SSM enhances segmentation accuracy for noisy medical data, its performance deteriorates in complex selective segmentation scenarios where weak boundaries or overlapping objects exist. This highlights the need for more adaptive models that can combine local feature information with global shape priors.

Ibrar et al. [4] introduced a local statistical features selective segmentation model (LSFM) that enhances object detection by integrating local statistical features with edge-based constraints. The model's energy functional is formulated as:

$\begin{aligned} F(\psi, x) & =\int_{\Omega} T(x)\left[\int_{\Omega} H(\psi(x)) d x-R_1\right]^2+\left[\int_{\Omega}(1-H(\psi(x))) d x-R_2\right]^2 \\ & +\lambda \int_{\Omega}\left[\frac{\log u(y)}{u(x)}\right] \cdot H(\psi(x)) d x+(1-\lambda) \int_{\Omega}\left[\frac{\left|u(x)-I_{c 2}(x)\right|^2}{d(y, x)}\right] d x \\ & +\int_{\Omega} \frac{\left|u(x)-I_{c 1}(x)\right|^2}{d(y, x)}\left(1-H(\psi(x))\right) d x\end{aligned}.$
(5)

where, the regularization parameter is set to \( \mu = 0.1 \), ensuring a smooth level-set function while allowing flexibility in boundary adaptation. The contour evolution strength is controlled by \( v = 1.5 \), making segmentation more aggressive in capturing object boundaries. The weighting factor \( \lambda = 0.8 \) determines the balance between edge-based and region-based energy terms, where a higher value prioritizes statistical region constraints over edge information. Edge sensitivity is fine-tuned using \( \nu = 0.2 \), which adjusts the impact of edge detection on segmentation. The regional area constraints are defined as \( R_1 = 0.6 \) and \( R_2 = 0.4 \), ensuring proper differentiation between the target region and background.

The model demonstrates high accuracy in segmenting objects, particularly in noisy and intensity-inhomogeneous environments. Its combination of edge detection and statistical region constraints improves boundary localization and robustness. Additionally, it effectively balances global and local image properties, ensuring precise segmentation outcomes.

However, the approach has notable limitations. The computational complexity remains high due to iterative optimization, making real-time applications challenging. Parameter selection plays a critical role in performance, requiring careful tuning for different image datasets. Moreover, the reliance on manually placed markers can limit automation, reducing its scalability for large-scale segmentation tasks. Furthermore, while LSFM performs well on selective segmentation tasks, its dependency on precise marker placement and sensitivity to parameter tuning limits its applicability in complex medical images where intensity variations and weak edges prevail.

Although existing methods such as clustering, graph-based models, and machine learning approaches have shown improvements in general segmentation, they often fail to handle selective segmentation in complex environments effectively. For instance, clustering-based approaches struggle when target and background intensities overlap, while deep learning methods require large annotated datasets and lack interpretability for medical diagnostics. These gaps highlight the necessity for a more robust and interpretable model capable of handling noise, intensity inhomogeneity, and weak boundaries simultaneously.

The primary research objectives of this study are as follows:

• To develop a selective segmentation model that effectively handles intensity inhomogeneity, noise, and complex object boundaries.

• To integrate ED operators for adaptive feature fusion, improving region homogeneity and boundary preservation.

• To incorporate marker points and a level set framework to guide contour evolution, reducing mis-segmentation in complex images.

• To validate the proposed approach against state-of-the-art models using both qualitative and quantitative metrics.

3. Mathematical Framework of the Proposed Model

The proposed model is formulated as an energy minimization problem, where the goal is to evolve a contour (represented by a level set function) to accurately segment the target object. The energy functional incorporates region-based and edge-based terms, regularized using ED operators and marker points. The proposed model consists of the following steps.

3.1 Preprocessing

In the preprocessing stage, the input to the model is an image \( I: \Omega \rightarrow \mathbb{R} \), where \( \Omega \subset \mathbb{R}^2 \) represents the image domain. To reduce noise and enhance the quality of the image, a Gaussian smoothing filter is applied. This smoothing operation is defined as:

$ I_{\text{smooth}} = G_\sigma * I, $
(6)

where, \( G_\sigma \) is a Gaussian kernel with a standard deviation \( \sigma \). The Gaussian kernel effectively blurs the image while preserving important edges, which is crucial for accurate segmentation.

In this work, the set of marker points \( \mathcal{M} = \{m_1, m_2, \dots, m_N\} \) is selected manually, where each \( m_i \) represents a point located within the target object. Manual selection allows domain experts (e.g., radiologists for medical images) to provide precise guidance for segmentation. Although manual selection may reduce scalability, it ensures high accuracy and reproducibility for challenging cases with weak or overlapping boundaries. These marker points serve as prior knowledge about the location of the object and are used to initialize the level set function, ensuring that the segmentation process starts close to the target region. The marker points play a critical role in guiding the segmentation, particularly in complex images where the target object may have weak boundaries or overlap with other regions. By combining the smoothed image and the marker points, the preprocessing stage sets a robust foundation for the subsequent steps in the segmentation pipeline.
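A minimal sketch of this preprocessing step is shown below, assuming a grayscale image held as a floating-point NumPy array; the marker coordinates are hypothetical picks, and \( \sigma = 1.5 \) follows the setting reported in Section 4:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(image, sigma=1.5):
    """Gaussian smoothing of Eq. (6); sigma = 1.5 matches the setting in Section 4."""
    return gaussian_filter(np.asarray(image, dtype=float), sigma=sigma)

# Marker points are supplied manually as (row, col) coordinates inside the target
# object, e.g. picks made by a domain expert on a 256 x 256 image:
# markers = [(120, 140), (128, 150)]   # hypothetical coordinates
```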

3.2 Feature Extraction

In the feature extraction stage, relevant features are extracted from the image to guide the segmentation process. The first feature is the intensity feature, which is simply the smoothed image \( I_{\text{smooth}} \) obtained during preprocessing. This feature captures the overall brightness and intensity distribution of the image. The second feature is the texture feature, which is computed using local texture descriptors such as Gabor filters or local binary patterns. These descriptors capture the spatial variation of pixel intensities, providing information about the texture of the target object and its surroundings. The third feature is the gradient feature, which is computed as the image gradient \( \nabla I_{\text{smooth}} \). This feature highlights edges and boundaries in the image, making it easier to distinguish between different regions. To combine these features into a unified representation, ED operators are used. These fuzzy logic operators provide a flexible and smooth way to integrate intensity, texture, and gradient information, resulting in a unified feature map \( F \). This feature map serves as the basis for the subsequent steps in the segmentation process, ensuring that the model can accurately identify and segment the target object.
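As an illustration of this step, the sketch below builds the three feature maps with a simple local standard deviation used as a stand-in texture descriptor (the paper itself mentions Gabor filters or local binary patterns); the normalization to \([0, 1]\) is our assumption so that the fuzzy operators of Section 3.3 can be applied:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def extract_features(I_smooth, window=5):
    """Build intensity, texture, and gradient feature maps from the smoothed image.
    Texture is approximated by a local standard deviation over a window x window
    neighborhood (a simple proxy for Gabor/LBP descriptors)."""
    mean = uniform_filter(I_smooth, size=window)
    sq_mean = uniform_filter(I_smooth ** 2, size=window)
    texture = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    gy, gx = np.gradient(I_smooth)
    gradient = np.hypot(gx, gy)

    def normalize(a):
        # Rescale to [0, 1] so the maps behave as fuzzy membership values.
        return (a - a.min()) / (a.max() - a.min() + 1e-8)

    return normalize(I_smooth), normalize(texture), normalize(gradient)
```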

3.3 ED Operators

ED operators are a class of fuzzy logic operators that provide a flexible and smooth framework for combining multiple image features, making them particularly well-suited for image selective segmentation. These operators are defined for two fuzzy sets \( A \) and \( B \) as follows: the Einstein Product is given by:

$ A \otimes B = \frac{A \cdot B}{1 + (1 - A) \cdot (1 - B)}, $

and the Einstein Sum is defined as:

$ A \oplus B = \frac{A + B}{1 + A \cdot B}. $

In the context of image selective segmentation, ED operators are used to combine intensity, texture, and gradient features into a unified feature map \( F \). Specifically, the intensity feature is derived from the smoothed image \( I_{\text{smooth}} \), the texture feature is computed using local texture descriptors (e.g., Gabor filters or local binary patterns), and the gradient feature is obtained from the image gradient \( \nabla I_{\text{smooth}} \). The combined feature map \( F \) is computed as:

$ F = (I_{\text{smooth}} \otimes \text{Texture}) \oplus \nabla I_{\text{smooth}}, $

where, \( \otimes \) and \( \oplus \) are applied pixel-wise. This combination ensures that the feature map captures the most relevant information from the image, enabling the segmentation model to distinguish between the target object and the background effectively. The smooth and flexible nature of ED operators allows the model to handle complex images with heterogeneous regions, weak boundaries, and overlapping objects, making them a powerful tool for accurate and robust image selective segmentation.
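The two operators and the fusion rule translate directly into pixel-wise array operations. The sketch below assumes all three feature maps have already been normalized to \([0, 1]\):

```python
import numpy as np

def einstein_product(a, b):
    """Einstein product A ⊗ B for membership maps a, b in [0, 1]."""
    return (a * b) / (1.0 + (1.0 - a) * (1.0 - b))

def einstein_sum(a, b):
    """Einstein sum A ⊕ B for membership maps a, b in [0, 1]."""
    return (a + b) / (1.0 + a * b)

def fuse_features(intensity, texture, gradient):
    """Pixel-wise fusion F = (I_smooth ⊗ Texture) ⊕ |∇I_smooth|."""
    return einstein_sum(einstein_product(intensity, texture), gradient)
```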

3.4 Level Set Initialization

The segmentation boundary is represented as the zero level set of a higher-dimensional function \( \phi: \Omega \rightarrow \mathbb{R} \), where \( \Gamma = \{x \in \Omega \mid \phi(x) = 0\} \) defines the contour. The level set function \( \phi \) is initialized using the marker points \( \mathcal{M} \). Specifically, a signed distance function (SDF) is used to define \( \phi \) such that:

$ \phi(x) = \begin{cases} -d(x, \mathcal{M}) & \text{if } x \text{ is inside region}, \\ d(x, \mathcal{M}) & \text{if } x \text{ is outside region}, \end{cases} $

where \( d(x, \mathcal{M}) \) is the distance from \( x \) to the nearest marker point.

The choice of SDF for initialization is motivated by its robustness and stability compared to other initialization techniques, such as random contours or simple binary masks. SDF ensures that the level set function has smooth and well-defined signed distances, which reduces numerical instabilities during evolution and accelerates convergence. Furthermore, SDF-based initialization minimizes the need for reinitialization steps, thereby improving computational efficiency. In contrast, arbitrary initial contours often require additional regularization to maintain a proper level set structure, which increases both complexity and computation time. This initialization ensures that the level set function starts close to the target object, providing a robust starting point for the contour evolution process.
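A possible SDF initialization from the marker set is sketched below; since the paper does not specify how the "inside region" around the markers is delimited, the small disk of radius `radius` around each marker is our illustrative choice:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def initialize_phi(shape, markers, radius=5.0):
    """Signed distance initialization: phi < 0 within `radius` pixels of a marker
    (taken as the initial inside region) and phi > 0 elsewhere."""
    seed = np.zeros(shape, dtype=bool)
    for r, c in markers:
        seed[r, c] = True
    dist_to_marker = distance_transform_edt(~seed)   # d(x, M) for every pixel
    return dist_to_marker - radius
```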

3.5 Energy Functional

The energy functional \( E(\phi) \) is designed to guide the evolution of the level set function \( \phi \) and consists of three key terms: the region-based term, the edge-based term, and the regularization term. Each term is specifically designed to address a particular challenge in selective segmentation. The region-based term improves homogeneity in regions affected by noise or intensity variations, the edge-based term ensures accurate alignment with object boundaries, and the regularization term prevents contour irregularities, making the method robust against artifacts and uneven shapes. The energy functional is defined as:

$ E(\phi) = E_{\text{region}}(\phi) + E_{\text{edge}}(\phi) + E_{\text{reg}}(\phi). $

3.5.1 Region-based term

The region-based term ensures that the contour separates regions with distinct feature properties. It uses ED operators to combine intensity, texture, and gradient features into a unified feature map. This term plays a crucial role in tackling intensity inhomogeneity by grouping pixels based on feature similarity rather than relying on a single intensity value, thereby improving segmentation accuracy in medical and noisy images. The term is formulated as:

$E_{\text{region}}(\phi) = \lambda_1 \int_{\Omega} (F \otimes c_1)^2 H(\phi)\, dx + \lambda_2 \int_{\Omega} (F \oplus c_2)^2 (1 - H(\phi))\, dx,$
where, \( \lambda_1 \) and \( \lambda_2 \) are weighting parameters that control the influence of the region-based term. \( c_1 \) and \( c_2 \) represent the average feature values inside and outside the contour, respectively.

The term \( (F \otimes c_1)^2 \) measures the difference between the feature map \( F \) and the average feature value \( c_1 \) inside the contour. This term ensures that the contour evolves to align with regions where the feature map closely matches the average feature value inside the target object. Similarly, \( (F \oplus c_2)^2 \) measures the difference between \( F \) and the average feature value \( c_2 \) outside the contour, encouraging the contour to separate regions with distinct feature properties. The Heaviside function \( H(\phi) \) ensures that the energy term is active only in the relevant regions (inside or outside the contour). Specifically, \( H(\phi) \) is defined as:

$ H(\phi) = \begin{cases} 1 & \text{if } \phi \geq 0, \\ 0 & \text{if } \phi < 0, \end{cases} $

where, \( \phi \geq 0 \) corresponds to the inside of the contour and \( \phi < 0 \) corresponds to the outside. By incorporating the Heaviside function, the region-based energy term effectively distinguishes between the target object and the background, ensuring accurate segmentation.
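One way to discretize this term is sketched below; the smoothed (arctangent) Heaviside and the replacement of integrals by pixel sums are standard level set practice rather than details stated in the paper, and the parameter values follow Section 4:

```python
import numpy as np

def smooth_heaviside(phi, eps=1.0):
    """Smoothed Heaviside commonly used in level set implementations;
    the crisp H(phi) of the text is recovered as eps -> 0."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def region_energy(F, phi, lam1=1.2, lam2=1.0):
    """Discretized region-based term: c1, c2 are the mean feature values inside
    and outside the contour, combined with F via the Einstein product / sum of
    Section 3.3; integrals are approximated by pixel sums."""
    H = smooth_heaviside(phi)
    c1 = (F * H).sum() / (H.sum() + 1e-8)                   # mean feature inside
    c2 = (F * (1.0 - H)).sum() / ((1.0 - H).sum() + 1e-8)   # mean feature outside
    inside = ((F * c1) / (1.0 + (1.0 - F) * (1.0 - c1))) ** 2 * H   # (F ⊗ c1)^2 H(phi)
    outside = ((F + c2) / (1.0 + F * c2)) ** 2 * (1.0 - H)          # (F ⊕ c2)^2 (1 - H(phi))
    return lam1 * inside.sum() + lam2 * outside.sum()
```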

3.5.2 Edge-based term

The edge-based term plays a crucial role in attracting the contour to object boundaries by leveraging the image gradient. This term directly addresses the challenge of weak or blurred edges, ensuring that the evolving contour locks onto the most prominent gradient changes. It is mathematically expressed as:

$ E_{\text{edge}}(\phi) = \mu \int_{\Omega} g(|\nabla I|) \delta(\phi) |\nabla \phi| \, dx, $

where, \( g(|\nabla I|) = \frac{1}{1 + |\nabla I|^2} \) is an edge indicator function. This function takes small values in regions with strong gradients (edges) and large values in homogeneous regions, effectively emphasizing object boundaries. The Dirac delta function \( \delta(\phi) \), defined as \( \delta(\phi) = \frac{dH(\phi)}{d\phi} \), ensures that the energy term is active only near the zero level set (the contour). This restriction prevents unnecessary computations in regions far from the contour. The parameter \( \mu \) controls the weight of the edge-based term, balancing its influence relative to the other terms in the energy functional.
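The edge indicator and a smoothed Dirac delta (the derivative of the smoothed Heaviside used earlier) can be computed as follows; the smoothing width \( \epsilon \) is an implementation choice:

```python
import numpy as np

def edge_indicator(image):
    """g(|∇I|) = 1 / (1 + |∇I|^2): close to 0 on strong edges, close to 1 in flat regions."""
    gy, gx = np.gradient(image)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)

def smooth_dirac(phi, eps=1.0):
    """Smoothed Dirac delta, the derivative of the smoothed Heaviside used earlier."""
    return (eps / np.pi) / (eps ** 2 + phi ** 2)
```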

3.5.3 Regularization term

The regularization term ensures the smoothness of the contour and prevents irregularities such as sharp corners or jagged edges. This is particularly important in medical imaging where noisy regions or artifacts can cause the contour to become irregular. The regularization term enforces a smooth boundary, ensuring clinical interpretability of the segmented results. It is given by:

$ E_{\text{reg}}(\phi) = \nu \int_{\Omega} |\nabla H(\phi)| \, dx, $

where, \( \nu \) is a weighting parameter that balances the trade-off between contour smoothness and adherence to image features. The term \( |\nabla H(\phi)| \) penalizes abrupt changes in the contour, ensuring that it evolves smoothly and maintains a regular shape. This term is particularly important in noisy images, where the contour might otherwise become fragmented or irregular.

3.6 Energy Minimization

The energy functional \( E(\phi) \), which combines the region-based, edge-based, and regularization terms, is minimized using gradient descent. The evolution of the level set function \( \phi \) is governed by the partial differential equation:

$ \frac{\partial \phi}{\partial t} = -\frac{\partial E(\phi)}{\partial \phi}. $

This equation describes how the level set function changes over time to minimize the energy functional. The gradient descent update rule is:

$ \phi^{k+1} = \phi^k - \Delta t \cdot \frac{\partial E(\phi^k)}{\partial \phi}, $

where, \( \Delta t \) is the time step controlling the rate of evolution. A smaller \( \Delta t \) ensures stability but may require more iterations, while a larger \( \Delta t \) speeds up convergence but risks instability. The term \( \frac{\partial E(\phi)}{\partial \phi} \) represents the derivative of the energy functional with respect to \( \phi \), guiding the contour toward the optimal segmentation.
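A generic form of this update loop is sketched below; `energy_gradient` stands in for the discretized variational derivative \( \partial E / \partial \phi \), which the paper does not write out in closed form, while \( \Delta t = 0.1 \) and \( \epsilon = 10^{-3} \) follow Section 4:

```python
import numpy as np

def evolve_level_set(phi, energy_gradient, dt=0.1, tol=1e-3, max_iter=500):
    """Gradient-descent evolution phi^{k+1} = phi^k - dt * dE/dphi with the
    stopping criterion ||phi^{k+1} - phi^k|| < tol of Section 3.7.
    `energy_gradient(phi)` must return the discretized derivative of E(phi);
    max_iter is a safety cap, not a parameter of the paper."""
    for _ in range(max_iter):
        phi_next = phi - dt * energy_gradient(phi)
        if np.linalg.norm(phi_next - phi) < tol:
            return phi_next
        phi = phi_next
    return phi
```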

Computational Complexity: The proposed approach involves iterative updates of the level set function, where each iteration requires evaluating feature maps and gradient terms. The overall complexity is approximately \( O(n \cdot m) \), where \( n \) is the number of iterations and \( m \) is the number of pixels. Although ED-operator-based feature fusion introduces additional computations, it significantly reduces the number of iterations needed for convergence compared to conventional region-based models, thus providing a practical trade-off between accuracy and computational cost. On a standard CPU implementation, convergence is typically achieved within 1–2 seconds for \( 256 \times 256 \) images.

3.7 Contour Evolution

The level set function \( \phi \) is evolved iteratively to refine the segmentation boundary. During each iteration, the contour is updated based on the gradient descent rule, moving closer to the target object's boundaries. The evolution process continues until the change in \( \phi \) between consecutive iterations falls below a predefined threshold \( \epsilon \), indicating convergence. This stopping criterion is expressed as:

$ \|\phi^{k+1} - \phi^k\| < \epsilon. $

Once the evolution process terminates, the final contour represents the segmentation boundary of the target object. The combination of the three energy terms ensures that the final boundary is both smooth and accurately aligned with edges, while remaining robust against noise and intensity inhomogeneity.

After the contour evolution process terminates, the final segmentation result may still contain minor imperfections, such as irregularities in the boundary or small artifacts within the segmented region. To address these issues, postprocessing techniques are applied to refine the segmentation result. First, morphological operations, such as dilation and erosion, are used to smooth the contour and eliminate small irregularities. Dilation expands the boundary of the segmented region, filling in small gaps, while erosion shrinks the boundary, removing small protrusions. These operations are often applied sequentially (e.g., opening or closing) to achieve a balance between smoothing and preserving the overall shape of the target object. Additionally, small artifacts or holes within the segmented region are removed using connected component analysis. This involves identifying and filtering out regions that are too small to be part of the target object, ensuring that the final segmentation is clean and accurate. By applying these postprocessing steps, the segmentation result is further refined, resulting in a smooth and well-defined boundary that accurately represents the target object.
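A possible postprocessing routine along these lines is sketched below; the number of morphological iterations and the minimum component size are illustrative values, not parameters specified in the paper:

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening, label

def postprocess_mask(mask, min_size=50):
    """Smooth the segmented region (closing then opening) and drop connected
    components smaller than min_size pixels."""
    mask = binary_closing(mask, iterations=2)
    mask = binary_opening(mask, iterations=2)
    labels, num = label(mask)
    cleaned = np.zeros_like(mask, dtype=bool)
    for i in range(1, num + 1):
        component = labels == i
        if component.sum() >= min_size:
            cleaned |= component
    return cleaned
```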

4. Experimental Validation

The experimental validation of the proposed segmentation model was conducted using a structured approach to ensure robustness and reproducibility. The model integrates three key components, namely ED operators, marker points, and level set methods, to enhance segmentation accuracy by effectively managing intensity variations, noise, and complex object boundaries. The experiments were carried out on a diverse dataset comprising grayscale medical images (e.g., MRI brain scans and CT slices) and synthetic images with varying noise levels and intensity inhomogeneity. The dataset included 150 images, with approximately 60% medical images and 40% synthetic test cases, providing a balanced evaluation of both real-world and controlled scenarios. Gaussian noise with variances ranging from 0.01 to 0.05 was added to certain synthetic images to evaluate noise robustness, while intensity inhomogeneity was simulated using bias field distortions.

All images were resized to a resolution of $255 \times 255$ pixels to ensure a standardized evaluation across different test cases. MATLAB R2015a was used as the primary software environment for implementing and testing the proposed model, with custom scripts designed to handle image preprocessing, feature extraction, and segmentation. Given the computational constraints of MATLAB R2015a, special considerations were made to optimize performance while ensuring the accuracy of the results.

The evaluation framework included both qualitative assessments—such as visual inspection of segmented contours—and quantitative metrics, including accuracy, IoU, and DSC. For a fair comparison, all competing methods were tested under identical conditions with the same dataset and noise configurations.

The parameters for optimal segmentation performance are set empirically. Gaussian smoothing uses a standard deviation of \( \sigma = 1.5 \) to reduce noise while preserving edges. The region-based term weights are \( \lambda_1 = 1.2 \) and \( \lambda_2 = 1.0 \), while the edge-based term is controlled by \( \mu = 0.8 \). The regularization term is \( \nu = 0.5 \), and the gradient descent time step is \( \Delta t = 0.1 \), ensuring stable convergence. The process halts when \( \epsilon = 10^{-3} \) is met, stabilizing the segmentation boundary. The ED operators integrate multiple image features, with Einstein Product and Sum applied pixel-wise using parameters \( \alpha = 1.2 \) and \( \beta = 1.0 \). Feature integration weights are set as \( w_1 = 0.5 \) for intensity, \( w_2 = 0.3 \) for texture, and \( w_3 = 0.2 \) for gradient, ensuring robust segmentation across diverse image conditions.

The proposed model demonstrates an effective segmentation performance, as shown in Figure 1. The model utilizes a feature map-based approach to enhance boundary detection, improving segmentation accuracy compared to traditional methods. The first column presents the original images, while the second column provides ground truth segmentation with purple contours for reference. The third column illustrates the intermediate segmentation results obtained using the fuzzy feature map-based method, highlighting significant structural details. Finally, the fourth column presents the final results of the segmentation produced by the proposed model, where the blue contours accurately delineate the boundaries of the objects. The consistency and precision of these results indicate the robustness of the proposed method in handling diverse image structures and intensity variations.

Figure 2 presents segmentation results on real noisy medical images, comparing the performance of TV-SSM [21], LSFM [4], and the model of Khan et al. [8] against the proposed model. The first column displays the original images, followed by segmentation outputs from the competing models. The primary challenge in these images is the presence of noise, weak boundaries, and intensity variations, which significantly affect the accuracy of traditional segmentation techniques. The competing models (TV-SSM, LSFM, and Khan et al.) struggle to maintain clear boundary separation due to their reliance on conventional edge-based or region-based energy minimization. In contrast, the proposed model integrates ED operators and a fuzzy energy functional to achieve superior segmentation performance. The Einstein Product and Sum effectively fuse intensity, texture, and gradient features while preserving crucial boundary details and reducing noise interference. Additionally, the Dombi operator adapts to local image characteristics, enhancing region homogeneity while ensuring smooth boundary evolution. The use of marker points as guiding cues significantly improves segmentation precision by offering prior knowledge about object locations, thereby reducing ambiguity and improving convergence speed. The last column of Figure 2 clearly illustrates that the proposed model, highlighted with blue contours, provides more accurate segmentation by effectively distinguishing objects from noisy backgrounds and preserving fine anatomical structures.

Figure 3 extends the segmentation analysis to X-ray images, emphasizing the importance of ED operators and level set evolution in handling complex anatomical structures. The first column presents the original images, followed by segmentation results from TV-SSM, LSFM, and the model of Khan et al. While these models provide reasonable approximations of object boundaries, they exhibit sensitivity to intensity inhomogeneities and weak edges, leading to segmentation errors such as boundary leakage and over-segmentation. The proposed model, shown in the last column, integrates entropy-based marker point selection, which guides the level set initialization process and improves boundary detection. The ED operators play a crucial role in adaptive feature fusion, enabling a more robust segmentation process by maintaining a balance between texture, gradient, and intensity variations. This ensures better region separation and contrast enhancement, reducing false region detection observed in competing models. The level set method, with its implicit representation and energy-based evolution, further enhances segmentation robustness by ensuring smooth boundary progression even in the presence of occlusions or intensity variations. As seen in the results, the proposed model exhibits strong boundary adherence, enhanced anatomical detail preservation, and superior segmentation accuracy compared to traditional methods.

To evaluate the performance of selective segmentation models, we employ a suite of well-established metrics that quantify both the accuracy and efficiency of the segmentation process. These include Accuracy, Precision, Recall, F1 Score, IoU, and DSC. Additionally, confidence intervals (CIs) are computed to assess the statistical reliability of these metrics, ensuring a robust performance evaluation. Below, we present the mathematical formulations for each metric.

Figure 1. Segmentation results on synthetic images
Note: The first column shows the original images, the second column presents ground truth with purple contours, the third column displays the proposed fuzzy feature map based segmentation, and the fourth column shows the proposed model’s results with blue contours.
Figure 2. Segmentation results on real noisy medical images
Figure 3. Segmentation results on X-ray images
4.1 Accuracy (\(Acc\))

Accuracy is defined as the proportion of correctly classified pixels (both true positives and true negatives) relative to the total number of pixels in the image. It is mathematically expressed as:

$ Acc = \frac{TP + TN}{TP + TN + FP + FN}, $

where:

\(TP\) = True Positives (correctly predicted foreground pixels), \(TN\) = True Negatives (correctly predicted background pixels), \(FP\) = False Positives (incorrectly predicted foreground pixels), and \(FN\) = False Negatives (incorrectly predicted background pixels).

4.2 Precision (\(P\)) and Recall (\(R\))

Precision quantifies the proportion of predicted positive pixels that are actually correct. Recall, also referred to as sensitivity, measures the proportion of actual positive pixels correctly identified by the model. These are computed as:

$ P = \frac{TP}{TP + FP}, \quad R = \frac{TP}{TP + FN}. $

These metrics emphasize the accuracy of positive predictions.

4.3 F1 Score (\(F_1\))

The F1 Score is the harmonic mean of Precision and Recall, providing a balanced measure between the two. It is especially useful when there is an imbalance between foreground and background pixels. The F1 Score is computed as:

$ F_1 = 2 \cdot \frac{P \cdot R}{P + R}. $

This metric combines both the sensitivity and precision into a single value.

4.4 IoU and DSC

The IoU, also known as the Jaccard Index, measures the overlap between the predicted segmentation and the ground truth. The DSC also quantifies the overlap between the predicted and ground truth segmentation regions. These are expressed as:

$ IoU = \frac{|P \cap G|}{|P \cup G|}, \quad DSC = \frac{2 \cdot |P \cap G|}{|P| + |G|}, $

where, \(P\) = Predicted segmentation region, and \(G\) = Ground truth region.

A higher IoU indicates better segmentation performance, particularly in delineating object boundaries. The DSC is widely used in applications such as medical imaging and natural image segmentation, where precise boundary detection is critical.
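All of these metrics follow directly from the pixel-wise confusion counts; a compact sketch over binary masks (the zero fallbacks for empty denominators are our convention) is:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Accuracy, Precision, Recall, F1, IoU and DSC from binary masks
    (pred = predicted region P, gt = ground truth region G)."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    dsc = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 0.0
    return {"Accuracy": acc, "Precision": prec, "Recall": rec,
            "F1": f1, "IoU": iou, "DSC": dsc}
```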

4.5 Confidence Intervals (CIs)

CIs are used to quantify the uncertainty of the performance metrics. They provide a statistical range within which the true value of a metric is likely to fall with a specified confidence level (usually 95%). The CI for a given metric \(\mu\) is calculated as:

$ CI = \mu \pm Z_{\alpha/2} \cdot \frac{\sigma}{\sqrt{n}}, $

where, \(\mu\) is the mean value of the metric, \(Z_{\alpha/2}\) refers to the Z-score corresponding to the desired confidence level, \(\sigma\) is the standard deviation of the metric, and \(n\) is the number of samples.
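A direct implementation of this interval over per-image metric values is shown below; the use of the sample standard deviation (ddof = 1) is our assumption:

```python
import numpy as np

def confidence_interval(values, z=1.96):
    """95% CI (z = 1.96) of a metric computed over n test images."""
    values = np.asarray(values, dtype=float)
    mu = values.mean()
    half_width = z * values.std(ddof=1) / np.sqrt(len(values))
    return mu - half_width, mu + half_width
```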

By incorporating these performance metrics and CIs, we ensure a comprehensive and statistically sound evaluation of the selective segmentation model. This approach not only provides a quantitative assessment of accuracy but also highlights the computational efficiency and the reliability of the results.

Table 1 presents the performance evaluation of the proposed model in comparison with three other models (TV-SSM, LSFM, and Khan et al.) using six key metrics: Accuracy, Precision, Recall, F1 Score, IoU, and DSC. The proposed model outperforms all competing models across all metrics, achieving the highest Accuracy (0.95), Precision (0.93), Recall (0.90), F1 Score (0.91), IoU (0.89), and DSC (0.94). TV-SSM shows competitive performance with an Accuracy of 0.90 and a DSC of 0.88, followed by LSFM, which achieves an Accuracy of 0.88 and a DSC of 0.86. Khan et al. demonstrates the lowest performance, with an Accuracy of 0.86 and an IoU of 0.75. The higher IoU and DSC values of the proposed model indicate better segmentation quality and overlap with ground truth data. These results highlight the effectiveness of the proposed model in achieving superior segmentation performance compared to the existing approaches (see Figure 4).

Table 1. Performance metrics for the proposed and competing models
Figure 4. Quantitative performance comparison of the proposed segmentation model with competing models
Note: The bar charts display Accuracy, Precision, Recall, F1 Score, IoU, and DSC metrics. The proposed model outperforms competing models, demonstrating higher segmentation accuracy and robustness.

Table 2 shows that the proposed selective segmentation model achieves the highest performance across all metrics, with consistently narrower 95% confidence intervals compared to the competing models (TV-SSM, LSFM, and Khan et al.). For instance, its Accuracy of 0.95 is accompanied by a tight CI of [0.931, 0.969], indicating high precision and reliability in the estimates. Similarly, Precision ([0.910, 0.950]), Recall ([0.880, 0.920]), F1 Score ([0.891, 0.929]), and IoU ([0.870, 0.910]) all display superior central values and smaller intervals, reflecting reduced variability. In contrast, the competing models not only have lower central metric values but also exhibit wider intervals, suggesting less consistent performance. These results confirm that the proposed model offers both statistically higher accuracy and greater stability.

Table 2. Confidence intervals (95% CI) for performance metrics of the proposed selective segmentation model and competing models (TV-SSM, LSFM and Khan et al. model)

The proposed model holds significant potential for real-world applications, particularly in medical imaging. By providing accurate and robust segmentation of anatomical structures even in the presence of noise and intensity inhomogeneity, the model can assist clinicians in tasks such as tumor boundary delineation, organ volume estimation, and pre-operative planning. Its ability to reduce manual intervention through marker-based guidance can streamline clinical workflows, minimize inter-observer variability, and support more reliable decision-making in diagnostic and therapeutic procedures.

5. Conclusion

This paper introduced a novel selective segmentation model that integrated region- and edge-based energy terms with ED operators to achieve robust and accurate image segmentation. The proposed approach effectively combined intensity, texture, and gradient information through weighted feature integration, enhancing segmentation precision, particularly in challenging conditions such as noise, blur, and intensity inhomogeneity. Experimental validation was conducted using a dataset comprising blurred and noisy images, where the model was compared with existing state-of-the-art techniques. The results demonstrate superior performance in terms of accuracy, precision, recall, F1 score, and IoU, confirming the effectiveness of the proposed method. Statistical significance tests further validate that the improvements are not random but rather a result of the novel fusion strategy.

The broader implications of the findings are noteworthy. The ability of the model to handle noisy and intensity-inhomogeneous images highlights its potential for real-world applications, especially in fields like medical imaging, where accurate segmentation of degraded or low-quality scans is critical for diagnosis and treatment planning. Such robustness can reduce manual corrections, thereby improving workflow efficiency and reducing diagnostic errors.

Despite its advantages, the proposed model has certain limitations. The computational complexity remains relatively high due to the involvement of multiple feature fusion mechanisms and iterative optimizations. Moreover, the model exhibits sensitivity to parameter settings (e.g., weighting parameters and time-step size), which can affect segmentation performance if not carefully tuned. In addition, while the model performs well under moderate noise and blur, potential failure cases may arise in scenarios with extreme occlusions, very low contrast, or highly irregular textures where feature fusion alone may be insufficient. To address these challenges, future work will focus on optimizing the algorithm’s computational efficiency, exploring deep learning-based feature extraction for improved robustness, and extending the framework to multi-modal image segmentation. We also plan to investigate automated parameter selection strategies and adaptive feature weighting to reduce sensitivity and improve generalization across diverse image conditions. Further improvements in real-time processing capabilities will also be explored to make the model suitable for time-sensitive applications such as medical imaging and autonomous navigation.

Author Contributions

The author solely conducted the conceptualization, methodology, data analysis, and writing of this manuscript.

Conflicts of Interest

The author declares no conflicts of interest.

References
1.
I. Hussain and J. Muhammad, “Efficient convex region-based segmentation for noising and inhomogeneous patterns,” Inverse Probl. Imaging, vol. 17, no. 3, 2022. [Google Scholar] [Crossref]
2.
A. Suneetha and E. S. Reddy, “Robust gaussian noise detection and removal in color images using modified fuzzy set filter,” J. Intell. Syst., vol. 30, no. 1, pp. 240–257, 2021. [Google Scholar] [Crossref]
3.
R. M. Abdelazeem, D. Youssef, J. El-Azab, S. Hassab-Elnaby, and M. Agour, “Three-dimensional visualization of brain tumor progression based on accurate segmentation via comparative holographic projection,” PLOS ONE, vol. 15, no. 7, 2020. [Google Scholar] [Crossref]
4.
H. Ibrar, H. Ali, M. S. Khan, S. Niu, and L. Rada, “Robust region-based active contour models via local statistical similarity and local similarity factor for intensity inhomogeneity and high noise image segmentation,” Inverse Probl. Imaging, vol. 16, pp. 1113–1136, 2022. [Google Scholar] [Crossref]
5.
I. Hussain, J. Muhammad, and R. Ali, “Enhanced global image segmentation: Addressing pixel inhomogeneity and noise with average convolution and entropy-based local factor,” Int. J. Knowl. Innov. Stud., vol. 1, no. 2, pp. 116–126, 2023. [Google Scholar] [Crossref]
6.
R. Maini and H. Aggarwal, “Study and comparison of various image edge detection techniques,” Int. J. Image Process., vol. 3, no. 1, pp. 1–15, 2009. [Google Scholar]
7.
G. Deng, F. Galetto, M. Al-Nasrawi, and W. Waheed, “A guided edge-aware smoothing-sharpening filter based on patch interpolation model and generalized gamma distribution,” IEEE Open J. Signal Process., vol. 2, pp. 119–135, 2021. [Google Scholar] [Crossref]
8.
M. S. Khan, H. Ali, M. Zakarya, S. Tirunagari, A. A. Khan, R. Khan, A. A., and R. L., “A convex selective segmentation model based on a piece-wise constant metric-guided edge detector function,” Soft Comput., 2023. [Google Scholar] [Crossref]
9.
Y. Yu, C. Wang, Q. Fu, R. Kou, F. Huang, B. Yang, T. Yang, and M. Gao, “Techniques and challenges of image segmentation: A review,” Electronics, vol. 12, no. 5, p. 1199, 2023. [Google Scholar] [Crossref]
10.
T. Zhou, W. Xia, F. Zhang, B. Chang, W. Wang, Y. Yuan, E. Konukoglu, and D. Cremers, “Image segmentation in foundation model era: A survey,” arXiv, 2024. [Google Scholar] [Crossref]
11.
B. H. N. Jereni and I. Sundire, “Enhanced detection of COVID-19 in Chest X-ray images: A comparative analysis of CNNs and the DL+ ensemble technique,” Inf. Dyn. Appl., vol. 2, no. 4, pp. 186–198, 2023. [Google Scholar] [Crossref]
12.
J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2015, pp. 3431–3440. [Google Scholar] [Crossref]
13.
O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Cham: Springer International Publishing, 2015, pp. 234–241. [Google Scholar] [Crossref]
14.
A. Abo-El-Rejal, S. E. Ayman, and F. Aymen, “Advances in breast cancer segmentation: A comprehensive review,” Acadlore Trans. Mach. Learn., vol. 3, no. 2, pp. 70–83, 2024. [Google Scholar] [Crossref]
15.
S. Balovsyak, O. Derevyanchuk, V. Kovalchuk, H. Kravchenko, Y. Ushenko, and Z. Hu, “STEM project for vehicle image segmentation using fuzzy logic,” Int. J. Mod. Educ. Comput. Sci., vol. 16, no. 2, pp. 45–57, 2024. [Google Scholar] [Crossref]
16.
I. Hussain and R. Ali, “Robust leaf disease detection using complex fuzzy sets and HSV-based color segmentation techniques,” Acadlore Trans. Mach. Learn., vol. 3, no. 3, pp. 183–192, 2024. [Google Scholar] [Crossref]
17.
M. C. Jobin Christ and R. M. S. Parvathi, “Fuzzy c-means algorithm for medical image segmentation,” in 2011 3rd International Conference on Electronics Computer Technology, 2011, pp. 33–36. doi: 10.1109/ICECTECH.2011.5941851. [Google Scholar]
18.
O. Sojodishijani, V. Rostami, and A. R. Ramli, “Real-time colour image segmentation with non-symmetric gaussian membership functions,” in 2008 Fifth International Conference on Computer Graphics, Imaging and Visualisation, 2008, pp. 165–170. [Google Scholar] [Crossref]
19.
M. E. Celebi, H. A. Kingravi, and P. A. Vela, “A comparative study of efficient initialization methods for the k-means clustering algorithm,” Expert Syst. Appl., vol. 40, no. 1, pp. 200–210, 2012. [Google Scholar] [Crossref]
20.
E. N. Ganesh, “Image segmentation using contemporary fuzzy logic,” Int. J. Comput. Sci. Eng. Technol., vol. 8, no. 1, pp. 4–10, 2017, [Online]. Available: https://ijcset.com/docs/IJCSET17-08-01-005.pdf [Google Scholar]
21.
N. Mohamed, A. K. Jumaat, and R. Mahmud, “Total variation selective segmentation-based active contour model with distance function and local image fitting energy for medical images,” Math. Sci. Inform. J., vol. 5, no. 2, pp. 57–69, 2024. [Google Scholar] [Crossref]

©2025 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.