References

1. R. M. Abdelazeem, D. Youssef, J. El-Azab, S. Hassab-Elnaby, and M. Agour, “Three-dimensional visualization of brain tumor progression based accurate segmentation via comparative holographic projection,” PLoS One, vol. 16, no. 5, p. e0251614, 2020.
2. J. Bertels, T. Eelbode, M. Berman, D. Vandermeulen, F. Maes, R. Bisschops, and M. B. Blaschko, “Optimizing the Dice score and Jaccard index for medical image segmentation: Theory and practice,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2019, pp. 92–100. [Online]. Available: https://arxiv.org/abs/1911.01685
3. L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, pp. 834–848, 2018.
4. E. Calli, E. Sogancioglu, B. van Ginneken, K. G. van Leeuwen, and K. Murphy, “Deep learning for chest X-ray analysis: A survey,” Med. Image Anal., vol. 72, p. 102125, 2021.
5. G. J. Deng, F. Galetto, M. Al-nasrawi, and W. Waheed, “A guided edge-aware smoothing-sharpening filter based on patch interpolation model and generalized gamma distribution,” IEEE Open J. Signal Process., vol. 2, pp. 119–135, 2021.
6. A. Distante and C. Distante, Handbook of Image Processing and Computer Vision. Springer, 2021. [Online]. Available: https://link.springer.com/book/10.1007/978-3-030-42378-0
7. K. Zhang, L. Zhang, and S. L. Zhang, “A variational multiphase level set approach to simultaneous segmentation and bias correction,” in Proceedings of the 17th IEEE International Conference on Image Processing, Hong Kong, China, 2010, pp. 4105–4108.
8. B. Peng, L. Zhang, and J. Yang, “Iterated graph cuts for image segmentation,” in Computer Vision - ACCV 2009, 9th Asian Conference on Computer Vision, Xi’an, China, 2009.
9. K. Zhang, L. Zhang, H. Song, and W. Zhou, “Active contours with selective local or global segmentation: A new formulation and level set method,” Image Vis. Comput., vol. 28, pp. 668–676, 2010.
10. S. Niu, Q. Chen, L. de Sisternes, Z. X. Ji, Z. Zhou, and D. L. Rubin, “Robust noise region-based active contour model via local similarity factor for image segmentation,” Pattern Recognit., vol. 61, pp. 104–119, 2017.
11. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, 2001.
12. A. Tsai, A. Yezzi, and A. S. Willsky, “Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification,” IEEE Trans. Image Process., vol. 10, pp. 1169–1186, 2001.
13. L. A. Vese and T. F. Chan, “A multiphase level set framework for image segmentation using the Mumford and Shah model,” Int. J. Comput. Vis., vol. 50, pp. 271–293, 2002.
14. H. Ibrar, H. Ali, M. S. Khan, S. Niu, and L. Rada, “Robust region-based active contour models via local statistical similarity and local similarity factor for intensity inhomogeneity and high noise image segmentation,” Inverse Probl. Imaging, vol. 16, pp. 1113–1136, 2022.
15. K. Zhang, H. Song, and L. Zhang, “Active contours driven by local image fitting energy,” Pattern Recognit., vol. 43, pp. 1199–1206, 2010.
16. V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” Int. J. Comput. Vis., vol. 22, pp. 61–79, 1997.
17. Y. F. Gao, M. Zhou, and D. N. Metaxas, “UTNet: A hybrid transformer architecture for medical image segmentation,” in Medical Image Computing and Computer Assisted Intervention - MICCAI 2021, 24th International Conference, Strasbourg, France, 2021, pp. 61–71.
18. C. L. Guyader and C. Gout, “Geodesic active contour under geometrical conditions: Theory and 3D applications,” Numer. Algorithms, vol. 48, pp. 105–133, 2008.
19. C. M. Li, C. Y. Kao, J. C. Gore, and Z. H. Ding, “Implicit active contours driven by local binary fitting energy,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 2007, pp. 1–7.
20. H. Ali, L. Rada, and N. Badshah, “Image segmentation for intensity inhomogeneity in presence of high noise,” IEEE Trans. Image Process., vol. 27, no. 8, pp. 3729–3738, 2018.
21. A. A. Kumar, N. Lal, and R. N. Kumar, “A comparative study of various filtering techniques,” in 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 2021, pp. 26–31.
22. S. Liu and Y. Peng, “A local region-based Chan-Vese model for image segmentation,” Pattern Recognit., vol. 45, pp. 2769–2779, 2012.
Open Access | Research Article

Enhanced Global Image Segmentation: Addressing Pixel Inhomogeneity and Noise with Average Convolution and Entropy-Based Local Factor

Ibrar Hussain 1*, Jan Muhammad 2, Rifaqat Ali 3

1 Department of Mathematics, University of Peshawar, 25120 Peshawar, Pakistan
2 Department of Mathematics, Shanghai University, 200444 Shanghai, China
3 Department of Mathematics, College of Science and Arts, Muhayil, King Khalid University, 61413 Abha, Saudi Arabia
International Journal of Knowledge and Innovation Studies | Volume 1, Issue 2, 2023 | Pages 116-126
Received: 10-24-2023 | Revised: 11-27-2023 | Accepted: 12-09-2023 | Available online: 12-30-2023

Abstract:

In the field of computer vision and digital image processing, the division of images into meaningful segments is a pivotal task. This paper introduces an innovative global image segmentation model, distinguished for its ability to segment pixels with intensity inhomogeneity and robustly handle noise. The proposed model leverages a combination of randomness measurement and spatial techniques to accurately segment regions within and outside contours in challenging conditions. Its efficacy is demonstrated through rigorous testing with images from the Berkeley image database. The results significantly surpass existing methods, particularly in the context of noisy and intensity inhomogeneous images. The model's proficiency lies in its unique ability to differentiate between minute, yet crucial, details and outliers, thus enhancing the precision of global segmentation in complex scenarios. This advancement is particularly relevant for images plagued by unknown noise distributions, overcoming limitations such as the inadequate handling of convex images at local minima and the segmentation of images corrupted by additive and multiplicative noise. The model's design integrates a region-based active contour method, refined through the incorporation of a local similarity factor, level set method, partial differential equations, and entropy considerations. This approach not only addresses the technical challenges posed by image segmentation but also sets a new benchmark for accuracy and reliability in the field.
Keywords: Image segmentation, Local-similarity factor, Level set method, Partial differential equations, Entropy, Objects

1. Introduction

Image segmentation, or partition, is the machine-assisted process of dividing a digital image into meaningful segments that correspond to objects or surfaces [1], [2], [3]. It is a fundamental problem in image processing and computer vision, as reconstruction and recognition often depend on this information. To address this problem, researchers have contributed their efforts and developed a wide diversity of segmentation frameworks for image partition [4], [5], [6], [7], [8].

Various active contour (AC) models studied under level-set techniques can be classified into region-based (statistical) models [9], [10], [11], [12], [13], [14], [15] and edge-based models [16], [17], [18]. Edge- or boundary-based techniques use image gradient information to construct forces that drive the active contour toward the edges of the desired regions in the image. These methods are not only sensitive to small amounts of noise but also struggle to capture the thin edges of objects. Region-based methods instead employ image statistics to build fidelity constraints, which offers multiple advantages over edge-based methods. First, region-based approaches do not rely on image gradient information and can adequately segment objects with thin boundaries. Second, by employing global region information, region-based models are generally robust to noise. Image processing must deal with noise and intensity inhomogeneity: random, statistical variations of brightness and color information that arise from a random process, as modeled by Eq. (1), where $I(x)$ is the clean image, corrupted by intensity inhomogeneity $h(x)$ and noise $\eta(x)$.

$T(x)=I(x) h(x)+\eta(x)$
(1)
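As a concrete illustration, the degradation model of Eq. (1) can be simulated in a few lines of NumPy; the square test image, the ramp-shaped bias field, and the noise level below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean image I(x): a bright square on a dark background.
I = np.zeros((64, 64))
I[16:48, 16:48] = 1.0

# Smooth multiplicative bias field h(x) modeling intensity inhomogeneity
# (a simple linear ramp here; real bias fields are unknown in advance).
xx, yy = np.meshgrid(np.linspace(0.5, 1.5, 64), np.linspace(0.5, 1.5, 64))
h = 0.5 * (xx + yy)

# Additive Gaussian noise eta(x).
eta = 0.1 * rng.standard_normal((64, 64))

# Observed image T(x) = I(x) h(x) + eta(x), as in Eq. (1).
T = I * h + eta
```

Recovering $I$ from $T$ without knowing $h$ or the noise distribution is precisely what makes segmenting such images difficult.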

Various techniques have been designed for global segmentation, among which active contour methods are of particular interest. Over the past decades, different framework models have been developed for this task. Among the many methods mentioned above, we highlight the level set variational model introduced by Chan and Vese (CV) [11], categorized as a region-based model. The simplicity of this variational model paved the way for a family of subsequent frameworks, enabling the partition of multiple phases and their corresponding images. The CV framework is very efficient for extracting homogeneous regions, but its boundary-based partitioning fails in the presence of high noise and intensity inhomogeneity; such images remain an uphill task for statistical frameworks. The CV framework [11] was further extended to a multi-phase framework [12], where a number of level-set functions (LSFs) are utilized to partition multiple regions. To further improve the CV framework [11] for images with noise and inhomogeneous pixels, Li et al. [19] developed an algorithm and its multi-phase functional by introducing a kernel operator (smoothing filter) into the energy functional. Ali et al. [20] presented a prominent partitioning algorithm whose core idea is to apply signed generalized pressure averages (SGPA) to extract the desired regions. An advantage of this method is that it copes with multiple inhomogeneous regions in an image in the presence of some level of noise, although it cannot freely handle topological changes of the evolving curve, such as overlapping and splitting, or increasing noise levels. More recently, Ibrar et al. [14] introduced a new region-based algorithm using a local similarity fidelity term, which segments rough regions in images exhibiting both noise and inhomogeneity; it derives the Euler-Lagrange equation governing the contour splitting of the given functional and implements a local denoising constraint. However, due to the non-convexity of its functional, this model [14] can become stuck in local minima and is time-consuming.

In this paper, we propose a new global segmentation algorithm based on average convolution local factor with entropy and pixel differentiation in the entire region. The key contributions are listed below:

- We present a global image segmentation model that uses spatial and relative entropy techniques to segment the image in the local sense.

- We capture all or certain objects of interest from the global segmentation by accurately guiding the level set function using some well-known techniques [14], [19], [21] and their fidelity terms.

- We produce fine and improved results by exploiting the relative entropy technique using the kernel operator and the spatial distance technique in a local sense.

- The relative entropy helps measure the randomness of the region to clear the objects and background of the image, respectively, with a noise factor by employing spatial distances in the local region.

- We evaluate the proposed approach against the latest competing baselines on outdoor and medical images from the Berkeley Image Database and show improved accuracy for global image segmentation.

This paper is structured as follows: Section 2 reviews related existing work. Section 3 presents the proposed segmentation model, including the derivation of its equation and the corresponding discretization. Section 4 presents comparisons on various datasets and shows the accuracy of the proposed framework against other existing methods. Section 5 concludes the work.

2. Previous Work

2.1 Region-Based Local Similarity Factor (RLSF) Segmentation Model

To segment images with high noise and intensity inhomogeneity, Niu et al. [10] designed a statistical framework model, given by the following minimization energy functional:

$\begin{aligned} F_{(\mathbf{x}, \zeta(\mathbf{x}))}^{R L S F} & =\int_{\Omega} \eta_1\left(\int_{\left(\mathbf{y} \in \aleph_x\right)} \frac{\left|u_0(\mathbf{y})-l c_1(\mathbf{x})\right|^2}{d_0(\mathbf{y}, \mathbf{x})} d(\mathbf{y})\right) H_\epsilon^{\prime}(\zeta(\mathbf{x})) d \mathbf{x} \\ & +\int_{\Omega} \eta_2\left(\int_{\left(\mathbf{y} \in \aleph_x\right)} \frac{\left|u_0(\mathbf{y})-l c_2(\mathbf{x})\right|^2}{d_0(\mathbf{y}, \mathbf{x})} d(\mathbf{y})\right)\left(1-H_\epsilon^{\prime}(\zeta(\mathbf{x}))\right) d \mathbf{x} \\ & +\mu \int_{\Omega} \delta_\epsilon(\zeta(\mathbf{x}))|\nabla \zeta(\mathbf{x})| d \mathbf{x}\end{aligned}$
(2)

with $M_0(\mathbf{x}, \mathbf{y})$ a logical matrix (mask) defined as:

$M_0(\mathbf{y}, \mathbf{x})= \begin{cases}1, & d_0(\mathbf{y}, \mathbf{x})<\mathrm{r} \\ 0, & \text { otherwise },\end{cases}$
(3)

with $d_0(\mathbf{y}, \mathbf{x})$ the Euclidean distance between two pixels and $r$ is a parameter specifying the maximum size of the local region, and

$\begin{aligned} l_{c 1}(\mathbf{x}) & =\frac{\int_{\left(\mathbf{y} \in \aleph_x\right)} M_0(\varsigma) u_0(\mathbf{y}) H_\epsilon^{\prime}(\zeta(\mathbf{x})) d \mathbf{x}}{\int_{\left(\mathbf{y} \in \aleph_x\right)} M_0(\varsigma) H_\epsilon^{\prime}(\zeta(\mathbf{x})) d \mathbf{x}}, \\ l_{c 2}(\mathbf{x}) & =\frac{\int_{\left(\mathbf{y} \in \aleph_x\right)} M_0(\varsigma) u_0(\mathbf{y})\left(1-H_\epsilon^{\prime}(\zeta(\mathbf{x}))\right) d \mathbf{x}}{\int_{\left(\mathbf{y} \in \aleph_x\right)} M_0(\varsigma)\left(1-H_\epsilon^{\prime}(\zeta(\mathbf{x}))\right) d \mathbf{x}} . \end{aligned}$
(4)

where, $\varsigma=(\mathbf{y}, \mathbf{x})$. The minimization of Eq. (2) with respect to $\zeta(\mathbf{x})$ leads to the following gradient descent flow:

$\begin{aligned} \frac{\partial \zeta(\mathbf{x})}{\partial t} & =\delta_\epsilon(\zeta(\mathbf{x}))\left[\lambda_2\left(\int_{\left(\mathbf{y} \in \aleph_x\right)} \frac{\left|u_0(\mathbf{y})-l c_2(\mathbf{x})\right|^2}{d_0(\mathbf{y}, \mathbf{x})} d \mathbf{y}\right)\right. \\ & \left.-\lambda_1\left(\int_{\left(\mathbf{y} \in \aleph_x\right)} \frac{\left|u_0(\mathbf{y})-l c_1(\mathbf{x})\right|^2}{d_0(\mathbf{y}, \mathbf{x})} d \mathbf{y}\right)\right] \\ & +\mu \delta_\epsilon(\zeta(\mathbf{x})) \operatorname{div}\left(\frac{\nabla \zeta(\mathbf{x})}{|\nabla \zeta(\mathbf{x})|}\right), \quad \text { in } \quad \Omega,\end{aligned}$

$\frac{\delta_\epsilon(\zeta(\mathbf{x}))}{|\nabla \zeta(\mathbf{x})|} \frac{\partial \zeta(\mathbf{x})}{\partial \vec{n}}=0 \quad on \quad \partial \Omega.$

Even though the RLSF model segments images in the presence of high noise and intensity inhomogeneity, it may become stuck in local minima.
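To make the role of the distance weighting concrete, the following sketch evaluates one RLSF-style data term for a single pixel with a direct, unoptimized loop; the function name and the default radius are illustrative choices, not from the paper:

```python
import numpy as np

def local_similarity_factor(u0, x, c, r=3):
    """Distance-weighted fidelity at pixel x, in the spirit of the RLSF
    data terms: sum over neighbors y with 0 < d0(y, x) < r of
    |u0(y) - c|^2 / d0(y, x)."""
    rows, cols = u0.shape
    xi, xj = x
    total = 0.0
    for yi in range(max(0, xi - r), min(rows, xi + r + 1)):
        for yj in range(max(0, xj - r), min(cols, xj + r + 1)):
            d0 = np.hypot(yi - xi, yj - xj)  # Euclidean pixel distance
            if 0 < d0 < r:                   # mask M0: inside radius, y != x
                total += (u0[yi, yj] - c) ** 2 / d0
    return total
```

A neighbor that deviates from the region value $c$ contributes $(u_0(\mathbf{y})-c)^2/d_0$, so distant outliers are down-weighted, which is what gives such models their robustness to isolated noisy pixels.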

2.2 The Local Region Based Chan-Vese (LRCV) Model

Liu and Peng [22] proposed a local region-based Chan-Vese (LRCV) model to efficiently segment images with intensity inhomogeneity. The energy functional of the LRCV model is given by:

$\begin{aligned} F^{L R C V}\left(\gamma_1, \gamma_2, \zeta\right)= & \lambda_1 \int_{\Omega}\left|I_0(x)-\gamma_1(x)\right|^2 H_\epsilon^{\prime}(\zeta(x)) d x \\ & +\lambda_2 \int_{\Omega}\left|I_0(x)-\gamma_2(x)\right|^2\left(1-H_\epsilon^{\prime}(\zeta(x))\right) d x \\ & +\mu \int_{\Omega} \delta_\epsilon(\zeta(x))|\nabla \zeta(x)| d x \end{aligned}$
(5)

where, $\gamma_1(x)$ and $\gamma_2(x)$ are spatially varying functions:

$\begin{aligned} \gamma_1(x) & =\frac{\int_{\Omega} g_k(y-x) I_0(x) H_\epsilon^{\prime}(\zeta(x)) d x}{\int_{\Omega} g_k(y-x) H_\epsilon^{\prime}(\zeta(x)) d x}, \\ \gamma_2(x) & =\frac{\int_{\Omega} g_k(y-x) I_0(x)\left(1-H_\epsilon^{\prime}(\zeta(x))\right) d x}{\int_{\Omega} g_k(y-x)\left(1-H_\epsilon^{\prime}(\zeta(x))\right) d x} . \end{aligned}$
(6)

In the above equations, $g_k(y-x)$ is the weight assigned to the pixel $I_0(x)$ at $x$. Because of the localization effect of the kernel $g_k$, the contribution of the pixel $I_0(x)$ to $\gamma_1(x)$ and $\gamma_2(x)$ decays to zero as the point $x$ moves away from the center point $y$.

Minimizing Eq. (5) with respect to $\zeta$, with $\gamma_1(x)$ and $\gamma_2(x)$ fixed, leads to the following equation:

$\delta(\zeta)\left[-\lambda_1\left(I_0(x)-\gamma_1(x)\right)^2+\lambda_2\left(I_0(x)-\gamma_2(x)\right)^2\right]=0$
(7)
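The spatially varying means of Eq. (6) are Gaussian-weighted averages restricted to the regions inside and outside the contour. A pure-NumPy sketch follows; the separable `gaussian_smooth` helper and the small `eps` guard against locally empty regions are implementation conveniences, not part of the model:

```python
import numpy as np

def gaussian_smooth(a, sigma=3.0, radius=9):
    """Separable Gaussian convolution g_k * a with reflect padding."""
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    conv = lambda v: np.convolve(np.pad(v, radius, mode="reflect"), g, mode="valid")
    out = np.apply_along_axis(conv, 0, a)   # smooth columns
    out = np.apply_along_axis(conv, 1, out) # smooth rows
    return out

def lrcv_means(I0, H, sigma=3.0):
    """Spatially varying means of Eq. (6):
    gamma1 = (g_k * (I0 H)) / (g_k * H), gamma2 analogously with 1 - H."""
    eps = 1e-8  # avoid division by zero where a region is locally absent
    g1 = gaussian_smooth(I0 * H, sigma) / (gaussian_smooth(H, sigma) + eps)
    g2 = gaussian_smooth(I0 * (1 - H), sigma) / (gaussian_smooth(1 - H, sigma) + eps)
    return g1, g2
```

Because each mean is a locally weighted average, $\gamma_1$ and $\gamma_2$ track slowly varying intensities, which is what lets the LRCV model follow an inhomogeneous bias field.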
2.3 Robust Region Based Active Contour Model (RRBAC)

For a given image $j_0: \Omega \rightarrow \mathbb{R}$, where $\Omega$ is the image domain, the energy functional of the RRBAC model [14] is given by:

$\begin{aligned} F_{\left(\zeta, c_1, c_2\right)}= & \lambda_1\left(\int_{\left(y \in N_x\right) \neq x} \frac{\left|g * j_0-j_0-c_1\right|^2}{d(x, y)} H_\epsilon^{\prime}(\zeta(x)) d x\right. \\ & \left.+\int_{\left(y \in N_x\right) \neq x} \frac{\left|g * j_0-j_0-c_2\right|^2}{d(x, y)}\left(1-H_\epsilon^{\prime}(\zeta(x))\right) d x\right) \\ & +\lambda_2\left(\int_{\left(y \in N_x\right) \neq x}\left|j_0-d_1\right|^2 H_\epsilon^{\prime}(\zeta(x)) d x\right. \\ & \left.+\int_{\left(y \in N_x\right) \neq x}\left|j_0-d_2\right|^2\left(1-H_\epsilon^{\prime}(\zeta(x))\right) d x\right) \\ & +\mu \int_{\left(y \in N_x\right) \neq x} \delta_\epsilon(\zeta(x))|\nabla \zeta(x)| d x . \end{aligned}$
(8)

where, $g$ is the average filter convolved with the image $j_0$, $N_x$ is a local window defined as a neighborhood of pixels surrounding the pixel $x$, and $d(x, y)$ is the spatial Euclidean distance between any two pixels. $c_1, c_2$ and $d_1, d_2$ are the average intensities inside and outside of the contour.

3. Energy Functional of Proposed Model

Our proposed setup incorporates the principles of entropy and relative entropy. Entropy quantifies the level of disorder in a region of identical pixel values within a given system. First, we work with an image possessing identical pixel values (see Figure 1). More precisely, we utilize the Shannon entropy of a probability distribution, as stated in Eq. (9), and additionally the concept of relative entropy. Let $A=\{a_1, a_2, \ldots, a_n\}$ and $B=\{b_1, b_2, \ldots, b_n\}$ be two distinct probability distributions. The Shannon entropy of the distribution $A$ is defined as:

$\operatorname{Entropy}(A)=-\sum_{i=1}^n a_i \log a_i$
(9)

The relative entropy between A and B is defined as:

$R(A, B)=\sum_{i=1}^n a_i \log \left(\frac{a_i}{b_i}\right)$
(10)
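Both quantities are straightforward to compute. In the sketch below the relative entropy is written with its conventional positive sign (the Kullback-Leibler divergence), and the standard convention $0 \log 0 = 0$ is assumed:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy -sum p_i log p_i, with 0 log 0 := 0, as in Eq. (9)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def relative_entropy(a, b):
    """Relative entropy (KL divergence) sum a_i log(a_i / b_i), Eq. (10)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    nz = a > 0
    return np.sum(a[nz] * np.log(a[nz] / b[nz]))
```

Relative entropy vanishes exactly when the two distributions agree and grows as they diverge, which is what makes it a usable measure of local randomness.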

Let $\operatorname{Img}\left(\xi_1, \xi_2\right)$, with $\xi_1=1,2, \ldots, M$ and $\xi_2=1,2, \ldots, N$, be the brightness of the pixel located at $\left(\xi_1, \xi_2\right)$ in the given image, where $\operatorname{Img}\left(\xi_1, \xi_2\right) \in\{0,1, \ldots, L-1\}$. Then the local relative entropy (LRE) of the pixel $\left(\xi_1, \xi_2\right)$ is calculated in an $n \times n$ neighborhood as:

$J\left(\xi_1, \xi_2\right)=\sum_{i=\frac{-n+1}{2}}^{\frac{n-1}{2}} \sum_{j=\frac{-n+1}{2}}^{\frac{n-1}{2}} \operatorname{Img}\left(\xi_1+i, \xi_2+j\right) \times \frac{\left|\log \left(\operatorname{Img}\left(\xi_1+i, \xi_2+j\right)\right)\right|}{C\left(\xi_1, \xi_2\right)} $
(11)

where, $C\left(\xi_1, \xi_2\right)$ is the mean gray level value of the pixels in the neighborhood, which is given as:

$C\left(\xi_1, \xi_2\right)=\frac{1}{n^2} \sum_{i=\frac{-n+1}{2}}^{\frac{n-1}{2}} \sum_{j=\frac{-n+1}{2}}^{\frac{n-1}{2}} \operatorname{Img}\left(\xi_1+i, \xi_2+j\right)$
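A direct (unvectorized) sketch of the LRE map of Eq. (11), computing the neighborhood mean $C$ per pixel, might look as follows; it assumes the image has been normalized to gray levels in $(0, 1]$, and the reflect padding at the border is an implementation choice:

```python
import numpy as np

def local_relative_entropy(img, n=3):
    """Local relative entropy J of Eq. (11): each pixel's n x n neighbors
    are weighted by |log(neighbor)| / C, where C is the local mean gray
    level. Assumes gray levels in (0, 1]."""
    rows, cols = img.shape
    r = (n - 1) // 2
    pad = np.pad(img, r, mode="reflect")
    J = np.zeros_like(img, dtype=float)
    for i in range(rows):
        for j in range(cols):
            patch = pad[i:i + n, j:j + n]
            C = patch.mean()  # local mean gray level C(xi1, xi2)
            J[i, j] = np.sum(patch * np.abs(np.log(patch)) / C)
    return J
```

On a homogeneous region the map is constant, while neighborhoods mixing different gray levels produce different values, which is what lets the model separate regions of identical pixels.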

By implementing this concept, the proposed model is able to partition regions of identical pixels in a region-based fashion (see Figure 2). For a given image $\operatorname{Img}: \Omega \rightarrow \mathbb{R}$, the energy functional of the proposed model is given by:

$\begin{aligned} F_{\left(\zeta, f_1, f_2\right)}= & \lambda_1 \int_{\left(y \in N_x\right) \neq x} G_\sigma\left(\frac{\left|J\left(\xi_1, \xi_2\right)-f_1(x)\right|^2}{d_0\left(\xi_1, \xi_2\right)}\right) H_\epsilon^{\prime}(\zeta(x)) d x \\ & +\lambda_2 \int_{\left(y \in N_x\right) \neq x} G_\sigma\left(\frac{\left|J\left(\xi_1, \xi_2\right)-f_2(x)\right|^2}{d_0\left(\xi_1, \xi_2\right)}\right)\left(1-H_\epsilon^{\prime}(\zeta(x))\right) d x \\ & +\mu \int_{\left(y \in N_x\right) \neq x} \delta_\epsilon(\zeta(x))|\nabla \zeta(x)| d x\end{aligned}$
(12)

where, $G_\sigma$ is a Gaussian filter, and defined as:

$G_\sigma(\varrho)=\frac{1}{(2 \pi)^{n / 2} \sigma^2} e^{-|\varrho|^2 / 2 \sigma^2}$
(13)

where, $\varrho=\xi_1-\xi_2$ and $\sigma>0$ is a scale parameter. $N_x$ is a local window defined as a neighborhood of pixels surrounding the pixel $x$, $d_0\left(\xi_1, \xi_2\right)$ is the Euclidean distance between any two pixels, and $f_1(x)$ and $f_2(x)$ reflect the local mean intensities of the given image $J\left(\xi_1, \xi_2\right)$. $\lambda_1, \lambda_2 \in[0,1]$ are parameters that balance the fidelity terms of the model, and $\mu>0$ is the weight of the length term, which controls the size of the captured object.

By minimizing Eq. (12), $f_1(x)$ and $f_2(x)$ are solved as follows:

$\begin{aligned} f_1(x) & =\frac{\int_{\left(y \in N_x\right) \neq x} G_\sigma J\left(\xi_1, \xi_2\right) H_\epsilon^{\prime}(\zeta(x)) d x}{\int_{\left(y \in N_x\right) \neq x} G_\sigma H_\epsilon^{\prime}(\zeta(x)) d x}, \\ f_2(x) & =\frac{\int_{\left(y \in N_x\right) \neq x} G_\sigma J\left(\xi_1, \xi_2\right)\left(1-H_\epsilon^{\prime}(\zeta(x))\right) d x}{\int_{\left(y \in N_x\right) \neq x} G_\sigma\left(1-H_\epsilon^{\prime}(\zeta(x))\right) d x} \end{aligned}$
(14)

Minimizing Eq. (12) with respect to $\zeta$, the following variational formulation is obtained:

$\begin{aligned} \frac{\partial \zeta}{\partial t} & =\delta_\epsilon(\zeta(x))\left[\mu \nabla \cdot \frac{\nabla \zeta(x)}{|\nabla \zeta(x)|}+\lambda_1\left(\frac{G_\sigma\left(J\left(\xi_1, \xi_2\right)-f_1(x)\right)^2}{d\left(\xi_1, \xi_2\right)}\right)\right. \\ & \left.-\lambda_2\left(\frac{G_\sigma\left(J\left(\xi_1, \xi_2\right)-f_2(x)\right)^2}{d\left(\xi_1, \xi_2\right)}\right)\right] \end{aligned}$
(15)
3.1 Numerical Scheme

To solve Eq. (15), we use central finite differences for the discretization as follows:

$\begin{aligned} \frac{\zeta_{i, j}^{n+1}-\zeta_{i, j}^n}{\Delta t} & =\delta_\epsilon\left(\zeta_{i, j}^n\right)\left[\mu \nabla \cdot \frac{\nabla \zeta(x)}{|\nabla \zeta(x)|}+\lambda_1\left(\frac{G_\sigma\left(J\left(\xi_1, \xi_2\right)-f_1(x)\right)^2}{d\left(\xi_1, \xi_2\right)}\right)\right. \\ & \left.-\lambda_2\left(\frac{G_\sigma\left(J\left(\xi_1, \xi_2\right)-f_2(x)\right)^2}{d\left(\xi_1, \xi_2\right)}\right)\right]\end{aligned}$

or

$\begin{aligned} \zeta_{i, j}^{n+1} & =\zeta_{i, j}^n+\Delta t \delta_\epsilon\left(\zeta_{i, j}^n\right)\left[\mu \nabla \cdot \frac{\nabla \zeta(x)}{|\nabla \zeta(x)|}+\lambda_1\left(\frac{G_\sigma\left(J\left(\xi_1, \xi_2\right)-f_1(x)\right)^2}{d\left(\xi_1, \xi_2\right)}\right)\right. \\ & \left.-\lambda_2\left(\frac{G_\sigma\left(J\left(\xi_1, \xi_2\right)-f_2(x)\right)^2}{d\left(\xi_1, \xi_2\right)}\right)\right] \end{aligned}$
(16)

where the curvature term is computed according to the following formula:

$\nabla \cdot \frac{\nabla \zeta(x)}{|\nabla \zeta(x)|}=\frac{\zeta_{x x} \zeta_y^2-2 \zeta_{x y} \zeta_x \zeta_y+\zeta_{y y} \zeta_x^2}{\left(\zeta_x^2+\zeta_y^2\right)^{3 / 2}}$
(17)

where, $\zeta_x, \zeta_y, \zeta_{x x}, \zeta_{y y}$ and $\zeta_{x y}$ are computed as follows:

$\begin{aligned} \zeta_x & =\frac{1}{2 h}\left(\zeta_{i+1, j}-\zeta_{i-1, j}\right), \zeta_y=\frac{1}{2 h}\left(\zeta_{i, j+1}-\zeta_{i, j-1}\right), \\ \zeta_{x x} & =\frac{1}{h^2}\left(\zeta_{i+1, j}+\zeta_{i-1, j}-2 \zeta_{i, j}\right), \zeta_{y y}=\frac{1}{h^2}\left(\zeta_{i, j+1}+\zeta_{i, j-1}-2 \zeta_{i, j}\right), \\ \zeta_{x y} & =\frac{1}{4 h^2}\left(\zeta_{i+1, j+1}-\zeta_{i-1, j+1}-\zeta_{i+1, j-1}+\zeta_{i-1, j-1}\right), \end{aligned}$
(18)

where, $h$ is the grid size. The algorithm of the proposed method is as follows:

Algorithm 1: The proposed method. $\zeta^{n+1}(\mathbf{x}) \leftarrow\left(\zeta^{(n)}, J\left(\xi_1, \xi_2\right), \lambda_1, \lambda_2, \mu, \text{maxit}, \text{tol}\right)$.

1. Initialize the level set function $\zeta(\mathrm{x})$ with $\zeta^0(\mathrm{x})$.

2. $f_1, f_2$ are updated via (14).

3. Calculate $\zeta^{n+1}(\mathrm{x})$ using Eq. (16).

4. Check whether the solution is stationary: if $\left|\zeta_{i, j}^{n+1}-\zeta_{i, j}^n\right| \geq$ tol, repeat from step 2.
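Steps 1-4 above, together with the curvature of Eqs. (17)-(18), can be sketched in NumPy. This is a simplified toy version: global means stand in for the Gaussian-windowed means of Eq. (14), the $1/d$ distance weighting is dropped, and the mixed derivative uses the standard $1/(4h^2)$ central-difference factor, so it illustrates the structure of Eq. (16) rather than the full model:

```python
import numpy as np

def curvature(zeta, h=1.0):
    """div(grad zeta / |grad zeta|) via central differences, Eqs. (17)-(18)."""
    zp = np.pad(zeta, 1, mode="edge")
    zx = (zp[1:-1, 2:] - zp[1:-1, :-2]) / (2 * h)
    zy = (zp[2:, 1:-1] - zp[:-2, 1:-1]) / (2 * h)
    zxx = (zp[1:-1, 2:] + zp[1:-1, :-2] - 2 * zp[1:-1, 1:-1]) / h**2
    zyy = (zp[2:, 1:-1] + zp[:-2, 1:-1] - 2 * zp[1:-1, 1:-1]) / h**2
    zxy = (zp[2:, 2:] - zp[:-2, 2:] - zp[2:, :-2] + zp[:-2, :-2]) / (4 * h**2)
    return (zxx * zy**2 - 2 * zxy * zx * zy + zyy * zx**2) / \
           (zx**2 + zy**2 + 1e-8) ** 1.5  # small guard against flat regions

def evolve(J, zeta0, lam1=1.0, lam2=1.0, mu=0.03, dt=0.1,
           maxit=200, tol=1e-3, eps=1.0):
    """Toy version of Algorithm 1 / Eq. (16)."""
    zeta = zeta0.copy()
    for _ in range(maxit):
        H = 0.5 * (1 + (2 / np.pi) * np.arctan(zeta / eps))  # smooth Heaviside
        delta = (eps / np.pi) / (eps**2 + zeta**2)           # smooth delta
        f1 = (J * H).sum() / (H.sum() + 1e-8)                # mean inside
        f2 = (J * (1 - H)).sum() / ((1 - H).sum() + 1e-8)    # mean outside
        force = mu * curvature(zeta) \
            + lam1 * (J - f1) ** 2 - lam2 * (J - f2) ** 2
        new = zeta + dt * delta * force
        if np.abs(new - zeta).max() < tol:                   # step 4: stationary?
            return new
        zeta = new
    return zeta
```

For a cone $\zeta=\sqrt{x^2+y^2}$ the discrete curvature approximates $1/r$, which is a quick way to sanity-check the finite-difference stencil.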

4. Experimental Results

This section demonstrates results on outdoor and synthetic images to show the performance of the proposed model. We compare our method with other well-known segmentation models, namely the region-based local similarity factor (RLSF) model [10], the local region-based Chan-Vese (LRCV) model [22], and the robust region-based active contour (RRBAC) model [14], on images exhibiting noise and intensity inhomogeneity at the same time. For a fair comparison, we utilize the parameters given in Table 1; a few important parameters are fine-tuned to obtain accurate results from the proposed model.

Table 1. The parameters used for the RLSF [10], LRCV [22], RRBAC [14] and proposed models

| RLSF | LRCV | RRBAC | Proposed Model |
|---|---|---|---|
| $\lambda_1$ = 1 | $\lambda_1$ = 1 | $\lambda$ = 0.99 | $\lambda_1$ = 1 |
| $\lambda_2$ = 1 | $\lambda_2$ = 1 | $\mu$ = 0.001 | $\lambda_2$ = 1 |
| $\mu$ = 0.06 | $\mu$ = 500 | $\epsilon$ = 0.06 | $\mu$ = 0.03 |
| $\epsilon$ = 0.01 | $\epsilon$ = 0.5 | $r$ = 10 | $\epsilon$ = 1 |
| $r$ = 7 | $h$ and $\Delta t$ = 0.01 and 0.1 | | $\sigma$ = 5 |

A local window of size 5×5 and images of size 110×110 are used for the proposed model. All experiments and comparisons were performed in Matlab 7.9 on a 2.0 GHz Intel Core i3 PC with 6 GB of RAM running Windows 10. The Matlab code of the proposed model will be provided for research purposes upon request through the email mentioned.

First, the proposed model uses the entropy and local relative entropy concepts in its energy functional to measure the disorder of the pixels (regions of identical pixel values) in the image. The local relative entropy (LRE) measures the difference in brightness between the mean brightness $C(x, y)$ of a pixel and its neighbors in the local sense: if the differences are uniform, the LRE is small; if they are not, the LRE is large.

Figure 1 illustrates the entropy filtering used in the proposed model, and Figure 2 shows a typical example of the proposed segmentation model on a synthetic image.

Figure 1. Results of the entropy filter technique used in the proposed method
Figure 2. Intensity distribution and segmentation results of the proposed model

We take the proposed global segmentation model further and compare its segmentation accuracy with different competing models, as shown in Figure 3. In this figure, both images exhibit intensity inhomogeneity, and the second row in particular contains regions of identical pixels. The images are segmented row-wise from left to right. The second column shows the RLSF model: owing to the high intensity inhomogeneity in the region, the RLSF model segments the entire region of the image poorly, and with identical pixel intensities in the second-row image it cannot detect the correct object boundary. The third column shows the LRCV model, which captures the object boundary properly, but because of the high intensity inhomogeneity in the background of the first-row image it is incapable of segmenting the image adequately, so various spurious layers are captured and segmented. The comparison continues with the RRBAC model and the proposed model (Figure 3, columns 4 and 5), respectively. The RRBAC model also gets stuck in the intensity inhomogeneity of the image background due to non-convexity, and it likewise cannot detect the correct object boundary. The proposed model segments the images locally, calculating the difference in brightness between the mean brightness $C(x, y)$ of a pixel and its neighbors in the local sense, and then balancing and controlling the pixel distances via $d(x, y)$.

Figure 3. Comparative performance of RLSF, LRCV, RRBAC, and proposed model across five columns
Figure 4. Performance of RLSF, LRCV, RRBAC and proposed model in first to fifth column respectively using window size 5 and $\sigma$ = 10 in proposed model

Due to the high noise in the images, we compare the results of the proposed method with those of the competing RLSF, LRCV, and RRBAC models. In the proposed method, we set the parameters $\lambda_1=1$, $\lambda_2=1$, and $\sigma=10$. For an accurate comparison with the competing models, we added Gaussian noise of variance $\delta=(0.01, 0.02, 0.02)$ to the images using the imnoise function, with the same initial contour, in Figure 4. We can conclude that the added noise impacts the segmentation outputs of the other methods: in this figure, all competing models fail to segment the images properly due to intensity inhomogeneity and strong noise.

Figure 5. Segmentation results of RLSF, LRCV, RRBAC and the proposed model in the second to fifth columns, respectively
Figure 6. Noise impact on LRCV, RRBAC, and proposed model segmentation performance
Note: The image is corrupted from top to bottom with Gaussian noise. The LRCV and RRBAC models segment the image in the first and second rows adequately but get stuck in the third row. The proposed model maintains the same fine performance in the fourth column.
Figure 7. Segmentation accuracy of proposed model with JSC metric
Note: The JSC measures the difference and similarity of the sample pixels or sets. The proposed model's segmentation accuracy lies in [0.9, 1], which reflects that the proposed framework has fine segmentation accuracy in contrast with the competing models.

Furthermore, in Figure 5 the image contains two objects of different pixel inhomogeneity; the competing models are incapable of tackling the intensity inhomogeneity, while the proposed model segments the image properly. The RLSF, LRCV, and RRBAC models are hindered by the strong noise and inhomogeneous intensity and fail to capture the correct object boundary.

In Figure 6, the image is corrupted by noise $\delta$ = (0.001, 0.01, 0.02). The competing LRCV and RRBAC models segment the images in the first and second rows, respectively, but once the noise level rises to $\delta$ = 0.02 they can no longer handle the noisy object boundary. The proposed model shows fine segmentation throughout this figure, indicating that the choice of entropy in the regularizing term has a marked positive impact on the segmentation result.
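To illustrate what an entropy-based local quantity looks like, the sketch below computes the Shannon entropy of the grey-level histogram over a sliding window. This is a generic illustration only, not the paper's exact regularizing term; `local_entropy`, the window size, and the bin count are all assumptions made for the example:

```python
import numpy as np

def local_entropy(image, window=5, bins=16):
    """Shannon entropy of the grey-level histogram in each local window (sketch)."""
    pad = window // 2
    padded = np.pad(image, pad, mode='reflect')  # mirror borders
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + window, j:j + window]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]  # skip empty bins so log2 is defined
            out[i, j] = -np.sum(p * np.log2(p))
    return out
```

Intuitively, a homogeneous region gives near-zero entropy while a noisy or textured region gives high entropy, which is why an entropy-driven term can help distinguish noise from genuine object boundaries.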

The proposed method achieves fine accuracy and is also faster than the existing competing methods. Table 2 shows the speed comparison: the efficiency of the proposed model is reflected in both its time cost and its iteration count relative to the competing models. To further assess the proposed algorithm's segmentation performance against the competing models, we use the Jaccard similarity coefficient (JSC). The JSC measures the similarity and difference of two sets of pixels and takes values in the interval [0, 1], where a value of 1 reflects perfect segmentation. In Figure 7, the proposed model's segmentation accuracy lies in [0.9, 1], which shows that the designed method has satisfactory segmentation accuracy in contrast with the competing models.
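The JSC computation described above amounts to intersection over union of binary segmentation masks; a minimal sketch (`jaccard` is an illustrative helper, not the paper's code):

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard similarity coefficient (intersection over union) of two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / union
```

A result mask identical to the ground truth yields 1.0; a mask sharing half its union with the ground truth yields 0.5, matching the [0, 1] scale used in Figure 7.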

Table 2. Efficiency comparison of the RLSF [19], LRCV [22], RRBAC [20] and proposed models

RLSF            LRCV            RRBAC           Proposed Model
Iter    CPU     Iter    CPU     Iter    CPU     Iter    CPU
500     80.52   150     30.91   200     45.45   15      6.91
500     60.87   150     29.75   200     42.97   15      7.34
1000    117.89  300     57.71   400     76.82   20      9.89
1000    112.15  300     59.29   400     80.52   40      10.72
1000    131.49  300     62.94   400     85.03   40      14.61
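As a quick sanity check of Table 2, the mean CPU times and the resulting speedup of the proposed model can be computed directly from the reported rows (the values below are transcribed from the table; the `cpu`, `mean`, and `speedup` names are just illustrative):

```python
# CPU times per test image, transcribed from Table 2
cpu = {
    "RLSF":     [80.52, 60.87, 117.89, 112.15, 131.49],
    "LRCV":     [30.91, 29.75, 57.71, 59.29, 62.94],
    "RRBAC":    [45.45, 42.97, 76.82, 80.52, 85.03],
    "Proposed": [6.91, 7.34, 9.89, 10.72, 14.61],
}

mean = {name: sum(t) / len(t) for name, t in cpu.items()}
speedup = {name: mean[name] / mean["Proposed"]
           for name in cpu if name != "Proposed"}
```

On these figures the proposed model is on average roughly 10x faster than RLSF, about 4.9x faster than LRCV, and about 6.7x faster than RRBAC.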

5. Conclusions

This study presents a new segmentation model that offers precise and validated outcomes for global image segmentation. After comparing the proposed model with other competing designs, we have confirmed that it can effectively process images containing Gaussian noise and intensity inhomogeneity during the iterative process. While many variational segmentation techniques are susceptible to significant noise, the proposed algorithm excels at accurately refining object edges, and its high accuracy comes with enhanced computational efficiency. The approach has the potential to benefit practical applications such as the segmentation of medical images and the identification of tumors or other cancer-affected regions. The project team is willing to share the code for research purposes upon request by email.

Despite its robustness, the proposed model has several drawbacks, one being its limitation in multi-phase segmentation. Moreover, by incorporating spatial distance, the model disregards numerous pixels, which can lead to the exclusion of significant regions. Researchers in computation, mathematics, and image processing who take these limits into account can turn them into valuable avenues for future study.

Data Availability

The data used to support the research findings are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References
1. R. M. Abdelazeem, D. Youssef, J. El-Azab, S. Hassab-Elnaby, and M. Agour, “Three-dimensional visualization of brain tumor progression based accurate segmentation via comparative holographic projection,” PLOS ONE, vol. 16, no. 5, p. e0251614, 2020.
2. J. Bertels, T. Eelbode, M. Berman, D. Vandermeulen, F. Maes, R. Bisschops, and M. B. Blaschko, “Optimizing the Dice score and Jaccard index for medical image segmentation: Theory and practice,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2019, pp. 92–100. [Online]. Available: https://arxiv.org/abs/1911.01685
3. L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, pp. 834–848, 2018.
4. E. Calli, E. Sogancioglu, B. van Ginneken, K. G. van Leeuwen, and K. Murphy, “Deep learning for chest X-ray analysis: A survey,” Med. Image Anal., vol. 72, p. 102125, 2021.
5. G. J. Deng, F. Galetto, M. Al-nasrawi, and W. Waheed, “A guided edge-aware smoothing-sharpening filter based on patch interpolation model and generalized gamma distribution,” IEEE Open J. Signal Process., vol. 2, pp. 119–135, 2021.
6. A. Distante and C. Distante, Handbook of Image Processing and Computer Vision. Springer, 2021. [Online]. Available: https://link.springer.com/book/10.1007/978-3-030-42378-0
7. K. Zhang, L. Zhang, and S. L. Zhang, “A variational multiphase level set approach to simultaneous segmentation and bias correction,” in Proceedings of the 17th IEEE International Conference on Image Processing, Hong Kong, China, 2010, pp. 4105–4108.
8. B. Peng, L. Zhang, and J. Yang, “Iterated graph cuts for image segmentation,” in Computer Vision - ACCV 2009, 9th Asian Conference on Computer Vision, Xi’an, China, 2009.
9. K. Zhang, L. Zhang, H. Song, and W. Zhou, “Active contours with selective local or global segmentation: A new formulation and level set method,” Image Vis. Comput., vol. 28, pp. 668–676, 2010.
10. S. Niu, Q. Chen, L. de Sisternes, Z. X. Ji, Z. Zhou, and D. L. Rubin, “Robust noise region-based active contour model via local similarity factor for image segmentation,” Pattern Recognit., vol. 61, pp. 104–119, 2017.
11. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, 2001.
12. A. Tsai, A. Yezzi, and A. S. Willsky, “Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification,” IEEE Trans. Image Process., vol. 10, pp. 1169–1186, 2001.
13. L. A. Vese and T. F. Chan, “A multiphase level set framework for image segmentation using the Mumford and Shah model,” Int. J. Comput. Vis., vol. 50, pp. 271–293, 2002.
14. I. Hussain, H. Ali, M. S. Khan, S. Niu, and L. Rada, “Robust region-based active contour models via local statistical similarity and local similarity factor for intensity inhomogeneity and high noise image segmentation,” Inverse Probl. Imaging, vol. 16, pp. 1113–1136, 2022.
15. K. Zhang, H. Song, and L. Zhang, “Active contours driven by local image fitting energy,” Pattern Recognit., vol. 43, pp. 1199–1206, 2010.
16. V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” Int. J. Comput. Vis., vol. 22, pp. 61–79, 1997.
17. Y. F. Gao, M. Zhou, and D. N. Metaxas, “UTNet: A hybrid transformer architecture for medical image segmentation,” in Medical Image Computing and Computer Assisted Intervention - MICCAI 2021, 24th International Conference, Strasbourg, France, 2021, pp. 61–71.
18. C. L. Guyader and C. Gout, “Geodesic active contour under geometrical conditions: Theory and 3D applications,” Numer. Algorithms, vol. 48, pp. 105–133, 2008.
19. C. M. Li, C. Y. Kao, J. C. Gore, and Z. H. Ding, “Implicit active contours driven by local binary fitting energy,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 2007, pp. 1–7.
20. H. Ali, L. Rada, and N. Badshah, “Image segmentation for intensity inhomogeneity in presence of high noise,” IEEE Trans. Image Process., vol. 27, no. 8, pp. 3729–3738, 2018.
21. A. A. Kumar, N. Lal, and R. N. Kumar, “A comparative study of various filtering techniques,” in 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 2021, pp. 26–31.
22. S. Liu and Y. Peng, “A local region-based Chan-Vese model for image segmentation,” Pattern Recognit., vol. 45, pp. 2769–2779, 2012.
Nomenclature

$\mu$: Length term parameter
$\sigma$: Scale parameter
$g$: Average filter
$\Omega$: Bounded open subset
$u_0, I_0, j_0$: Given images
$\zeta$: Level set function
$\delta$: Delta function
$\nabla$: Gradient
$\lambda$: Non-negative parameter of the fidelity term
$\epsilon$: Diffusion parameter

Cite this:
Hussain, I., Muhammad, J., & Ali, R. (2023). Enhanced Global Image Segmentation: Addressing Pixel Inhomogeneity and Noise with Average Convolution and Entropy-Based Local Factor. Int J. Knowl. Innov Stud., 1(2), 116-126. https://doi.org/10.56578/ijkis010204
©2023 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.