Hybridizing the Dwarf Mongoose Optimization (DMO) Algorithm to Obtain the Optimal Solution for Solving Optimization Problems
Abstract:
In this paper, two distinct strategies were used to enhance problem-solving ability. The first strategy involved developing a conjugate gradient algorithm in which several new parameters were derived and proposed. The second strategy involved hybridizing the dwarf mongoose optimization (DMO) algorithm in two ways. The first hybridization exploits the developed conjugate gradient algorithm obtained from the first strategy to improve the initial population of DMO, yielding the hybrid CG-DMO algorithm, which gives better results than the original algorithm. The second hybridization combines the sand cat swarm optimization (SCSO) algorithm with the dwarf mongoose optimization (DMO) algorithm to obtain the hybrid SCSO-DMO algorithm. The DMO algorithm models three mongoose social groups, the alpha group, the scout group, and the babysitter group, to replicate their foraging behavior. The alpha group was hybridized using the attack method of sand cats, which are known for their keen hearing of low-frequency sounds and their adeptness at detecting prey by digging. This hybrid approach led to the development of an equation for identifying candidate food sites within the alpha group. The proposed CG-DMO and SCSO-DMO algorithms underwent extensive testing on standard test functions and produced superior results compared to the original algorithm.
1. Introduction
The great diversity in how living organisms try to survive in nature, by searching for food, attacking, or hiding, has prompted many scientists to create mathematical models (algorithms) that simulate the social behavior of these organisms. These models now play a major role in solving optimization problems. However, due to the rapid development in various fields, individual algorithms do not always lead to optimal solutions, so it becomes necessary to combine algorithms to obtain a better mathematical model.
In the realm of solving optimization problems, methodologies are typically categorized into two primary types: deterministic algorithms and stochastic algorithms [1]. Within classical algorithmic frameworks, deterministic strategies have historically held precedence. However, in the realm of stochastic algorithms, two distinct sub-types emerge: heuristic and meta-heuristic algorithms [2]. Heuristic algorithms are grounded in a straightforward trial-and-error methodology. On the other hand, meta-heuristic algorithms commence with a diverse set of initial solutions, which are progressively refined through iterations. This approach is exemplified in various algorithms such as the particle swarm [3], gray wolf [4], bee colony [5], and genetic algorithms [6]. Nature's mechanisms have profoundly influenced the development of many meta-heuristic algorithms. Researchers have harnessed natural processes to create algorithms capable of solving complex problems across various domains, including but not limited to the traveling salesman problem, optimal control, and medical image processing [7], [8], [9], [10]. The efficacy of these nature-inspired algorithms stems from their capacity to replicate the most efficient characteristics observed in natural phenomena. An example of such innovation is the Dwarf Mongoose Optimization (DMO) algorithm, a meta-heuristic technique inspired by the foraging patterns observed in dwarf mongooses. This algorithm incorporates three primary social groups found within mongoose communities, the alpha group, the babysitter group, and the scout group, to mimic their natural foraging strategies. Dwarf mongooses exhibit specialized behavioral adaptations for effective feeding, including strategies related to prey size, spatial utilization, group dynamics, and food distribution. The algorithm models this by having the alpha female lead the group, while a subset of the population, the babysitters, cares for the young during foraging. These babysitters are dynamically replaced as the algorithm progresses. Real dwarf mongooses do not use permanent nests and frequently change their foraging locations, and the algorithm adapts these behaviors into its operational framework [11].
Our work hybridizes the Dwarf Mongoose Optimization (DMO) algorithm using two algorithms. The first is the developed conjugate gradient algorithm derived in the first part of this paper, which improves the initial population of the DMO and yields the CG-DMO algorithm. The second hybridization merges the DMO algorithm with the Sand Cat Swarm Optimization (SCSO) algorithm to develop the hybrid SCSO-DMO algorithm. The SCSO algorithm, a meta-heuristic technique, draws inspiration from the survival instincts observed in sand cats, particularly their ability to detect low-frequency sounds under 2 kilohertz and their exceptional digging skills for hunting prey [12], [13]. The newly formulated SCSO-DMO algorithm has demonstrated its effectiveness in solving complex optimization problems. Empirical results have shown its ability to achieve optimal solutions, as evidenced by its performance on five global test functions, on which it consistently reached the optimal value of zero.
The paper follows a structured framework outlined as follows: Section 2 presents the developed conjugate gradient algorithm. Section 3 introduces the sand cat swarm optimization (SCSO) algorithm. Section 4 presents the dwarf mongoose optimization (DMO) algorithm. Section 5 presents the proposed algorithms (CG-DMO) and (SCSO-DMO). Section 6 provides conclusions drawn from the study.
2. Developed Conjugate Gradient Algorithm
The conjugate gradient method is one of the mathematical methods used to find the minimum or maximum of a function. It is commonly applied to solving systems of linear equations and to improving performance in numerical optimization problems. The method is based on the use of conjugate directions to optimize the objective function quickly. Instead of relying only on the traditional steepest-descent directions, the conjugate gradient method uses search directions that are conjugate to one another: the search starts along the initial descent direction, and each new conjugate direction is then determined from the previous step and the earlier directions. This process is repeated until the solution has been improved along all of the search directions.
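To make this procedure concrete, the following is a minimal Python sketch of a generic nonlinear conjugate gradient iteration (not the authors' implementation); the Rosenbrock objective, the backtracking line search, and the Fletcher-Reeves coefficient used here are illustrative stand-ins for the parameters developed below.

```python
import numpy as np

def f(x):
    """Illustrative objective: the two-dimensional Rosenbrock function."""
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    """Analytic gradient of f."""
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def conjugate_gradient(x0, tol=1e-6, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                         # first step: steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                          # safeguard: restart if d is not a descent direction
            d = -g
        lam = 1.0                                  # simple backtracking (Armijo) line search
        while f(x + lam * d) > f(x) + 1e-4 * lam * g.dot(d) and lam > 1e-12:
            lam *= 0.5
        x_new = x + lam * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)         # Fletcher-Reeves coefficient (placeholder for beta_k)
        d = -g_new + beta * d                      # conjugate direction, not plain steepest descent
        x, g = x_new, g_new
    return x

print(conjugate_gradient([-1.2, 1.0]))             # should approach the minimizer (1, 1)
```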
Definition 1: The optimization [14]
It means finding the best solution to a given problem, i.e., finding the minimum or maximum value of a function of $n$ variables, where $n$ can be any integer greater than zero.
Definition 2: Global and local minimum [14]
A- The global minimum value: represents the lowest value of the function over the entire search domain.
B- The local minimum value: represents the lowest value of the function within a specific region of the search domain. Algorithms that converge toward a global minimum are known as globally convergent algorithms, while algorithms that converge toward a local minimum are known as locally convergent algorithms.
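As a simple numerical illustration of Definition 2 (the one-dimensional function, step size, and starting point below are chosen only for demonstration), a grid scan finds the global minimizer, while a descent run started in the wrong basin stops at the local one:

```python
import numpy as np

# f has a local minimum near x = 0.96 and a deeper (global) minimum near x = -1.04
f = lambda x: (x**2 - 1)**2 + 0.3 * x

xs = np.linspace(-2, 2, 4001)
x_global = xs[np.argmin(f(xs))]                    # grid scan over the whole search interval
print(f"global minimum: f({x_global:.3f}) = {f(x_global):.4f}")

# a plain gradient descent started in the right-hand basin only reaches the local minimum
x = 1.5
for _ in range(2000):
    x -= 0.01 * (4 * x**3 - 4 * x + 0.3)           # f'(x)
print(f"local minimum : f({x:.3f}) = {f(x):.4f}")
```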
For the three-term conjugate gradient direction, Babaie-Kafaki and Ghanbari [15] proposed a new value of $t^*$ as follows:
$t^*=\frac{\left\|y_k\right\|}{\left\|s_k\right\|}+\frac{s_k^T y_k}{\left\|s_k\right\|^2}$
Ibrahim and Shareef [16] proposed another value of $t$ as follows:
$t_k=\gamma \frac{\left\|y_k\right\|}{\left\|s_k\right\|}+(1-\gamma) \frac{s_k^T y_k}{\left\|s_k\right\|^2}$, where $\gamma \in(0,1)$
Here, we derive a new conjugacy coefficient with a new value of $t$, as in [16], [17]:
$\alpha_k=t_k=\frac{\left(s_k^T g_k / 2\right)^2}{\left(f_{k+1}-f_k\right)^2}$
$d_{k+1}^{C G_3}=-g_{k+1}+\beta_k d_k-t_k\left(\frac{g_{k+1}^T d_k}{d_k^T y_k}\right) y_k$
Let $d_{k+1}^{C G_2}=d_{k+1}^{C G_3}$
$-g_{k+1}+\beta_k^{\mathrm{New}} d_k=-g_{k+1}+\beta_k d_k-t_k\left(\frac{g_{k+1}^T d_k}{d_k^T y_k}\right) y_k$
Multiplying both sides of the above equation by $y_k^T$, we get:
$\begin{aligned}-y_k^T g_{k+1}+\beta_k^{N e w} y_k^T d_k & =-y_k^T g_{k+1} \\ & +\beta_k y_k^T d_k-t_k\left(\frac{g_{k+1}^T d_k}{d_k^T y_k}\right)\left\|y_k\right\|^2\end{aligned}$
where, $t_k=\frac{\left(s_k^T g_k / 2\right)^2}{\left(f_{k+1}-f_k\right)^2}>0$
From Ibrahim and Shareef [16],
The first case: If $\beta_k=\beta^{H S}=\frac{g_{k+1}^T y_k}{d_k^T y_k}$ then Eq. (1) becomes:
$\beta_k^{\text {New } 1}=\frac{g_{k+1}^T y_k}{d_k^T y_k}-t_k\left(\frac{g_{k+1}^T d_k}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2$
The second case: If $\beta_k=\beta^{F R}=\frac{\left\|g_{k+1}\right\|^2}{\left\|g_k\right\|^2}$ then Eq. (1) becomes:
$\beta_k^{\mathrm{New} 2}=\frac{\left\|g_{k+1}\right\|^2}{\left\|g_k\right\|^2}-t_k\left(\frac{g_{k+1}^T d_k}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2$
The third case: If $\beta_k=\beta^{P R}=\frac{g_{k+1}^T y_k}{g_k^T g_k}$ then Eq. (1) becomes:
$\beta_k^{N e w 3}=\frac{g_{k+1}^T y_k}{g_k^T g_k}-t_k\left(\frac{g_{k+1}^T d_k}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2$
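For reference, the three proposed coefficients and the resulting search direction can be evaluated directly from the quantities available at iteration $k$. The helper below is an illustrative sketch (not the authors' code), assuming the usual CG bookkeeping $s_k=x_{k+1}-x_k$ and $y_k=g_{k+1}-g_k$:

```python
import numpy as np

def new_beta_direction(g_new, g_old, d, s, y, f_new, f_old, variant=1):
    """Return d_{k+1} = -g_{k+1} + beta_k^{New*} d_k for the chosen variant.

    g_new, g_old : gradients g_{k+1}, g_k
    d            : current direction d_k
    s, y         : s_k = x_{k+1} - x_k,  y_k = g_{k+1} - g_k
    f_new, f_old : objective values f_{k+1}, f_k
    """
    dty = d.dot(y)
    # proposed parameter t_k = (s_k^T g_k / 2)^2 / (f_{k+1} - f_k)^2
    t = (s.dot(g_old) / 2.0) ** 2 / (f_new - f_old) ** 2
    correction = t * (g_new.dot(d) / dty ** 2) * y.dot(y)   # term shared by all three cases

    if variant == 1:      # beta^{New1}: Hestenes-Stiefel base
        beta = g_new.dot(y) / dty - correction
    elif variant == 2:    # beta^{New2}: Fletcher-Reeves base
        beta = g_new.dot(g_new) / g_old.dot(g_old) - correction
    else:                 # beta^{New3}: Polak-Ribiere base
        beta = g_new.dot(y) / g_old.dot(g_old) - correction

    return -g_new + beta * d
```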
In this section, we show that the proposed developed CG algorithm achieves the sufficient descent property, which is needed to establish convergence.
Assumption (1): If $f$ is bounded on the set $s=\left\{x \in R^n: f(x) \leq f\left(x_0\right)\right\}$ and is differentiable with gradient $\nabla f$, and there is a Lipschitz constant $L>0$, then
$\|\nabla f(x)-\nabla f(y)\| \leq L\|x-y\|$ for all $x, y \in s$
Theorem (1): The search direction $d_k$ generated by the proposed developed CG algorithm satisfies the sufficient descent property for each $k$, when the step size $\lambda_k$ satisfies the Wolfe conditions.
Proof: Using the principle of mathematical induction.
When $k=0, d_0=-g_0 \Rightarrow d_0^T g_0=-\left\|g_0\right\|^2<0$, so the theorem is true.
Now we assume that the theorem is true for all values of $k \geq 0$, i.e.,
$g_k^T d_k<0, g_k^T d_k \leq-c\left\|g_k\right\|^2, c>0$
We now show that the theorem is true for $k+1$.
By multiplying both sides of Eq. (2) above by $g_{k+1}^T$ we get:
The first case: If $\beta_k=\beta_k^{\text {New } 1}$
$g_{k+1}^T d_{k+1}=-\left\|g_{k+1}\right\|^2+\beta_k^{\mathrm{New} 1} g_{k+1}^T d_k$
From the second strong Wolfe condition, as shown in Eq. (5) below:
It is known that
$g_{k+1}^T y_k=g_{k+1}^T\left(g_{k+1}-g_k\right)=\left\|g_{k+1}\right\|^2-g_{k+1}^T g_k$
Taking advantage of one direction of Powell's restart condition, as shown in Eq. (6):
From the strong Wolfe condition we obtain the following Eq. (7):
Substituting Eqs. (5), (6) and (7) into Eq. (4) results in:
$\begin{gathered}g_{k+1}^T d_{k+1} \leq-\left\|g_{k+1}\right\|^2 \\ +\frac{c \sigma\left\|g_k\right\|^2}{d_k^T y_k}\left[1.2\left\|g_{k+1}\right\|^2-t_k \frac{c \sigma\left\|g_k\right\|^2}{d_k^T y_k} \cdot\left\|y_k\right\|^2\right]\end{gathered}$
Since $d_k^T y_k>0$, $c>0$, and $0.5<\sigma<1$, the quantity $W=\frac{c \sigma\left\|g_k\right\|^2}{d_k^T y_k}>0$. This leads to
$g_{k+1}^T d_{k+1} \leq-\left\|g_{k+1}\right\|^2+1.2 W\left\|g_{k+1}\right\|^2-t_k W^2 \cdot\left\|y_k\right\|^2$
Since $t_k>0, t_k W^2 \cdot\left\|y_k\right\|^2>0$, this leads to
$g_{k+1}^T d_{k+1} \leq-(1-1.2 W)\left\|g_{k+1}\right\|^2$
$g_{k+1}^T d_{k+1} \leq-\Omega_1\left\|g_{k+1}\right\|^2$
where, $\Omega_1=1-1.2 W$, when $1-1.2 W>0$.
The second case: If $\beta_k=\beta_k^{\mathrm{New} 2}$ in Eq. (3), we get:
$g_{k+1}^T d_{k+1}=-\left\|g_{k+1}\right\|^2+\beta_k^{\mathrm{New} 2} g_{k+1}^T d_k$
Substituting Eq. (5) into Eq. (8) we get:
$\begin{gathered}g_{k+1}^T d_{k+1} \leq-\left\|g_{k+1}\right\|^2 \\ +\left[\frac{\left\|g_{k+1}\right\|^2}{\left\|g_k\right\|^2}-t_k\left(\frac{c \sigma\left\|g_k\right\|^2}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2\right]\left(c \sigma\left\|g_k\right\|^2\right)\end{gathered}$
$\begin{gathered}g_{k+1}^T d_{k+1}=-\left\|g_{k+1}\right\|^2 \\ +c \sigma\left\|g_{k+1}\right\|^2-t_k\left(\frac{c^2 \sigma^2\left\|g_k\right\|^4}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2\end{gathered}$
Since $t_k>0, t_k\left(\frac{c^2 \sigma^2\left\|g_k\right\|^4}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2>0$. This leads to
$g_{k+1}^T d_{k+1} \leq-(1-c \sigma)\left\|g_{k+1}\right\|^2$
$g_{k+1}^T d_{k+1} \leq-\Omega_2\left\|g_{k+1}\right\|^2$
where, $\Omega_2=1-c \sigma$, when $1-c \sigma>0, c>0,0.5<\sigma<1$.
The third case: If $\beta_k=\beta_k^{\mathrm{New} 3}$ in Eq. (3) we get:
$g_{k+1}^T d_{k+1}=-\left\|g_{k+1}\right\|^2+\beta_k^{N e w 3} g_{k+1}^T d_k$
Substituting Eqs. (5) and (6) into Eq. (9) we get:
$\begin{gathered}g_{k+1}^T d_{k+1} \leq-\left\|g_{k+1}\right\|^2 \\ +\left[\frac{1.2\left\|g_{k+1}\right\|^2}{\left\|g_k\right\|^2}-t_k\left(\frac{c \sigma\left\|g_k\right\|^2}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2\right]\left(c \sigma\left\|g_k\right\|^2\right)\end{gathered}$
$\begin{gathered}g_{k+1}^T d_{k+1}=-\left\|g_{k+1}\right\|^2+1.2 c \sigma\left\|g_{k+1}\right\|^2 \\ -t_k\left(\frac{c^2 \sigma^2\left\|g_k\right\|^4}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2\end{gathered}$
Since $t_k>0, t_k\left(\frac{c^2 \sigma^2\left\|g_k\right\|^4}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2>0$. This leads to
$\begin{gathered}g_{k+1}^T d_{k+1} \leq-(1-1.2 c \sigma)\left\|g_{k+1}\right\|^2 \\ g_{k+1}^T d_{k+1} \leq-\Omega_3\left\|g_{k+1}\right\|^2\end{gathered}$
where, $\Omega_3=1-1.2 c \sigma$, when $1-1.2 c \sigma>0, c>0,0.5<\sigma<1$
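As a quick numerical sanity check of Theorem (1) (purely illustrative; the random convex quadratic, the exact line search, and the choice of $\beta_k^{\mathrm{New}1}$ are assumptions made here, not part of the paper's experiments), the inner product $g_{k+1}^T d_{k+1}$ can be monitored over a few iterations and should remain negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # random symmetric positive definite matrix
b = rng.standard_normal(n)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = rng.standard_normal(n)
g = grad(x)
d = -g
for k in range(4):                          # fewer than n steps, so the gradient stays nonzero
    lam = -(g @ d) / (d @ A @ d)            # exact line search on the quadratic
    x_new = x + lam * d
    g_new = grad(x_new)
    s, y = lam * d, g_new - g
    t = (s @ g / 2.0) ** 2 / (f(x_new) - f(x)) ** 2                              # proposed t_k
    beta = (g_new @ y) / (d @ y) - t * ((g_new @ d) / (d @ y) ** 2) * (y @ y)    # beta_k^{New1}
    d_new = -g_new + beta * d
    print(k, float(g_new @ d_new))          # expected to stay negative (sufficient descent)
    x, g, d = x_new, g_new, d_new
```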
Lemma (1): Suppose that Assumption (1) holds and that the conjugate gradient method generates a descent search direction $d_k$, with the step length $\lambda_k$ obtained from the strong Wolfe conditions (SWP). If $\sum_{k=1}^{\infty} \frac{1}{\left\|d_{k+1}\right\|^2}=\infty$, then $\lim _{k \rightarrow \infty} \inf \left\|g_k\right\|=0$.
Theorem (2): Suppose that Assumption (1) holds and that the proposed conjugate gradient method generates a descent search direction $d_k$, with the step length $\lambda_k$ obtained from the strong Wolfe conditions (SWP). Then $\lim _{k \rightarrow \infty} \inf \left\|g_k\right\|=0$.
Proof: Using Lemma (1), and since the algorithm satisfies Theorem (1), if $g_{k+1} \neq 0$ we must prove that $\left\|d_{k+1}\right\|$ is bounded from above. Taking $\|\cdot\|$ of both sides of Eq. (2), we get
$\left\|d_{k+1}\right\|=\left\|-g_{k+1}+\beta_k d_k\right\|$
The first case: If $\beta_k=\beta_k^{\text {New } 1}$
$\begin{gathered}\Rightarrow\left\|d_{k+1}^{\text {New } 1}\right\| \leq\left\|g_{k+1}\right\|+\left|\beta_k^{\text {New } 1}\right|\left\|d_k\right\| \\ \left|\beta_k^{\text {New } 1}\right|=\left|\frac{1}{d_k^T y_k}\left[g_{k+1}^T y_k-t_k\left(\frac{g_{k+1}^T d_k}{d_k^T y_k}\right)\left\|y_k\right\|^2\right]\right|\end{gathered}$
Using Eqs. (5) and (6) we get the following
$\left|\beta_k^{\mathrm{New} 1}\right| \leq\left|\frac{1}{d_k^T y_k}\left[1.2\left\|g_{k+1}\right\|^2-t_k \frac{-\sigma g_k^T d_k}{d_k^T y_k} \cdot\left\|y_k\right\|^2\right]\right|$
$\Rightarrow\left|\beta_k^{\text {New } 1}\right| \leq \mathrm{A}_1$
$\Rightarrow\left\|d_{k+1}^{N e w 1}\right\| \leq\left\|g_{k+1}\right\|+\mathrm{A}_1\left\|d_k\right\|$
$\Rightarrow\left\|d_{k+1}^{N e w 1}\right\| \leq T_1+\mathrm{A}_1 \mathrm{U}_1 \leq \mathrm{M}_1$
$\Rightarrow\left\|d_{k+1}^{\text {New } 1}\right\| \leq \mathrm{M}_1 \Rightarrow \frac{1}{\left\|d_{k+1}^{\text {New } 1}\right\|} \geq \frac{1}{\mathrm{M}_1}$
$\Rightarrow \sum_{k=1}^{\infty} \frac{1}{\left\|d_{k+1}^{\text {New } 1}\right\|^2} \geq \sum_{k=1}^{\infty} \frac{1}{\mathrm{M}_1^2}=\frac{1}{\mathrm{M}_1^2} \sum_{k=1}^{\infty} 1=+\infty$
According to Lemma (1), this leads to
$\lim _{k \rightarrow \infty} \inf \left\|g_k\right\|=0$
The second case: If $\beta_k=\beta_k^{\mathrm{New} 2}$ in Eq. (10) we get:
$\Rightarrow\left\|d_{k+1}^{\text {New } 2}\right\| \leq\left\|g_{k+1}\right\|+\left|\beta_k^{\text {New } 2}\right|\left\|d_k\right\|$
$\left|\beta_k^{\mathrm{New} 2}\right|=\left|\frac{\left\|g_{k+1}\right\|^2}{\left\|g_k\right\|^2}-t_k\left(\frac{g_{k+1}^T d_k}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2\right|$
Using Eq. (5) we get the following
$\left|\beta_k^{\mathrm{New} 2}\right| \leq\left|\frac{\left\|g_{k+1}\right\|^2}{\left\|g_k\right\|^2}-t_k\left(\frac{c \sigma\left\|g_k\right\|^2}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2\right|$
$\Rightarrow\left|\beta_k^{\text {New } 2}\right| \leq \mathrm{A}_2$
$\Rightarrow\left\|d_{k+1}^{\text {New } 2}\right\| \leq\left\|g_{k+1}\right\|+\mathrm{A}_2\left\|d_k\right\|$
$\Rightarrow\left\|d_{k+1}^{\text {New } 2}\right\| \leq T_2+\mathrm{A}_2 \mathrm{U}_2 \leq \mathrm{M}_2$
$\Rightarrow\left\|d_{k+1}^{\text {New } 2}\right\| \leq \mathrm{M}_2 \Rightarrow \frac{1}{\left\|d_{k+1}^{\text {New } 2}\right\|} \geq \frac{1}{\mathrm{M}_2}$
$\Rightarrow \sum_{k=1}^{\infty} \frac{1}{\left\|d_{k+1}^{\text {New } 2}\right\|^2} \geq \sum_{k=1}^{\infty} \frac{1}{\mathrm{M}_2^2}=\frac{1}{\mathrm{M}_2^2} \sum_{k=1}^{\infty} 1=+\infty$
According to Lemma (1), this leads to
$\lim _{k \rightarrow \infty} \inf \left\|g_k\right\|=0$
The third case: If $\beta_k=\beta_k^{\text {New } 3}$ in Eq. (10) we get:
$\begin{gathered}\Rightarrow\left\|d_{k+1}^{\text {New } 3}\right\| \leq\left\|g_{k+1}\right\|+\left|\beta_k^{\text {New } 3}\right|\left\|d_k\right\| \\ \left|\beta_k^{\text {New } 3}\right|=\left|\frac{g_{k+1}^T y_k}{g_k^T g_k}-t_k\left(\frac{g_{k+1}^T d_k}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2\right|\end{gathered}$
Using Eqs. (5) and (6) we get the following
$\left|\beta_k^{\mathrm{New} 3}\right| \leq\left|\frac{1.2\left\|g_{k+1}\right\|^2}{\left\|g_k\right\|^2}-t_k\left(\frac{c \sigma\left\|g_k\right\|^2}{\left(d_k^T y_k\right)^2}\right)\left\|y_k\right\|^2\right|$
$\Rightarrow\left|\beta_k^{\text {New } 3}\right| \leq \mathrm{A}_3$
$\Rightarrow\left\|d_{k+1}^{\mathrm{New} 3}\right\| \leq\left\|g_{k+1}\right\|+\mathrm{A}_3\left\|d_k\right\|$
$\Rightarrow\left\|d_{k+1}^{\text {New } 3}\right\| \leq T_3+\mathrm{A}_3 \mathrm{U}_3 \leq \mathrm{M}_3$
$\Rightarrow\left\|d_{k+1}^{\text {New } 3}\right\| \leq \mathrm{M}_3 \Rightarrow \frac{1}{\left\|d_{k+1}^{\text {New } 3}\right\|} \geq \frac{1}{\mathrm{M}_3}$
$\Rightarrow \sum_{k=1}^{\infty} \frac{1}{\left\|d_{k+1}^{N e w 3}\right\|^2} \geq \sum_{k=1}^{\infty} \frac{1}{\mathrm{M}_3{ }^2}=\frac{1}{\mathrm{M}_3{ }^2} \sum_{k=1}^{\infty} 1=+\infty$
According to Lemma (1), this leads to
$\lim _{k \rightarrow \infty} \inf \left\|g_k\right\|=0$
3. Sand Cat Swarm Optimization (SCSO) Algorithm
Sand cats, belonging to the family of mammals, thrive in harsh desert environments such as the Arabian Peninsula, Central Asia, and the African Sahara. Their ability to endure high temperatures is facilitated by dense fur covering the soles of their feet, providing insulation against extreme desert conditions. Moreover, the unique properties of their fur make detection and tracking processes challenging.
Sand cats typically have a body length ranging from 45 to 57 cm, a tail length of approximately 28 to 35 cm, and an adult weight ranging from 1 to 3.5 kg. The ears of the sand cat play a crucial role in detecting and tracking prey. The nocturnal, burrowing, and fast-moving nature of these cats makes them distinctive and enables them to detect prey (insects and rodents) moving underground. Figure 1 shows sand cats in their natural habitat [18], [19].

The Sand Cat Swarm Optimization (SCSO) algorithm draws its inspiration from the unique hunting technique of sand cats, which relies on detecting low-frequency sounds. This aspect is integrated into the SCSO algorithm by assigning a sensitivity range to each virtual 'cat', enabling them to detect frequencies below 2 kilohertz. The algorithm is designed to decrease this detection threshold, $r_{\vec{G}}$, linearly from 2 kilohertz to zero as the algorithm progresses through its iterations.
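A minimal sketch of this linearly decreasing sensitivity range, assuming the commonly cited SCSO formulation $r_{\vec{G}}=s_M-\left(s_M \times iter_c / iter_{max}\right)$ with $s_M=2$ [12], [13] (the function and variable names below are illustrative):

```python
def sensitivity_range(iter_c, iter_max, s_M=2.0):
    """General sensitivity range r_G, shrinking linearly from s_M (2 kHz) to 0."""
    return s_M - (s_M * iter_c) / iter_max

# example: the detection threshold decays over a 1000-iteration run
for it in (0, 250, 500, 750, 1000):
    print(it, sensitivity_range(it, iter_max=1000))
```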
To simulate the exploratory behavior of sand cats, the SCSO algorithm begins with a randomly initialized search space. Each virtual cat in the algorithm is assigned a random initial position, emulating the way sand cats explore new territories in their natural habitat. This approach allows the algorithm to cover a wide and varied search area, enhancing its ability to discover optimal solutions [10]. In this way, cats can explore new areas as described in the equations below [13]:
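As a hedged illustration only (the exact equations referenced above are those given in [13]; the vectorized form and variable names below are assumptions), the exploration step can be sketched as each cat moving with its own sensitivity $r$, drawn within $r_{\vec{G}}$, toward a random region around the best candidate position:

```python
import numpy as np

def scso_explore(pos, best_pos, r_G, rng=np.random.default_rng(0)):
    """One exploration step in the spirit of SCSO (sketch; see [13] for the exact equations).

    pos      : (n_cats, dim) current candidate positions
    best_pos : (dim,) best candidate position found so far
    r_G      : current general sensitivity range (decreases from 2 toward 0)
    """
    n_cats, dim = pos.shape
    r = r_G * rng.random((n_cats, 1))                        # each cat's own sensitivity range
    return r * (best_pos - rng.random((n_cats, dim)) * pos)  # move toward a new area around the best position
```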
