In the global economy, plastics are a versatile and ubiquitous material. They can reach marine ecosystems through diverse channels, such as road runoff, wastewater pathways, and improper waste management, so rapid mitigation and reduction are required for this ever-growing problem. Marine habitats are believed to be the largest emitters of O_{2} and absorbers of CO_{2}, respectively. As such, managing litter in the ocean effectively and efficiently grows in prominence every day. One of the most significant challenges in oceanography is creating a comprehensive meshless algorithm to handle the mathematical representation of waste plastic management in the ocean. This research is dedicated to studying the dynamics of a waste plastic management model governed by a mathematical representation with three components, viz. waste plastic (W), marine litter (M) and recycling of debris (R), i.e., the WMR model. In this regard, an unsupervised machine learning approach, namely the Mexican Hat Wavelet Neural Network (M_{h}WNN) refined by the efficient Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm, i.e., the M_{h}WNN-LBFGS model, has been implemented to handle the non-linear phenomena of the WMR model. Besides, the obtained solution is meshfree and is compared with state-of-the-art numerical results to establish the precision of the M_{h}WNN-LBFGS model. Furthermore, different global statistical measures (MAPE, TIC, RMSE, and ENSE) have been computed at twenty testing points to validate the stability of the proposed algorithm.
Plastic contamination has spread across a wide swath of the ocean owing to plastic's low density and durability. From zooplankton to cetaceans, marine megafauna suffer direct and fatal consequences of plastic contamination. Every year, thousands of seabirds, seals, turtles and other marine reptiles are killed through ingestion of, entrapment in, and entanglement with plastic. Lower trophic-level organisms and their predators in aquatic environments are affected by consuming persistent natural toxins adhering to plastic. The presence of drifting plastics, ranging from huge abandoned nets, docks and cruise ships, which carry fish, algae, and microbial communities to non-local districts, further exacerbates these consequences [1]. So, the management of waste plastic in the ocean has received increased attention in recent years from researchers around the world as well as from activists and government bodies. Ocean plastic pollution was specifically mentioned in high-level agreements like the Berlin declaration in 2013 and the resolution of the G7 Leaders in 2015 [2]. EU legislation also passed an amendment, notably the Marine Strategy Framework Directive, which helped move this issue up the international agenda [3]. Despite different oceanographic models, there is no one-size-fits-all methodology for waste management of the ocean. To reduce, recycle, and clean up waste plastic, different strategies have been used around the world. These strategies may be most effective when supplemented by science-driven mathematical models. Meanwhile, machine learning (ML), in particular neural techniques, has shown its potential by surpassing human levels of accuracy in simulating complex phenomena related to oceanography.
Artificial intelligence (AI) has been a subject of intense media hype in the 21^{st} century. Different branches of AI, i.e., machine learning, deep learning, and ANN, come up in many articles, irrespective of field. A wide range of phenomena in machine translation, non-linear pattern recognition, medical diagnosis, image processing, robotics, and speech & face detection are well described by ANN [4], [5]. As ANN has become widespread and integrated with human-centric applications and algorithms, the focus has returned to explainability. Over the last two decades, the implementation of ANN has sprung up for solving different types of differential, integral, and algebraic equations. Although a trained network is often characterized as a black box, its closed-form neural solution is available to predict the value at any testing point of the given domain.
Although the NN has universal approximation power, it still has some shortcomings: it fails to capture local features such as jumps in the objective function, discontinuities in curvature, and local minima, and it suffers from a slow learning rate [6]. As such, an alternative neural network model based on the combination of particular wavelet kernels and feed-forward neural networks, namely the wavelet neural network (WNN), has been proposed. It is an effective and strong approximation model for universal functions. Additionally, the learning rate of the WNN is relatively faster than that of a conventional NN.
Optimizers are algorithms or methods used to minimize the loss function by adjusting the attributes of a neural network, such as weights and biases. A good number of optimization algorithms have emerged during the last few years, with remarkable advances both in application areas and in research. They can be classified as derivative-based or derivative-free. The most common technique for optimizing a function is using derivative-based algorithms. In this regard, some potential derivative-based optimization algorithms are gradient descent (GD), conjugate gradient (CG), stochastic gradient descent (SGD) and Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS). Derivative-based optimization algorithms are more stable than derivative-free ones [7].
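As a concrete illustration (not taken from the present work's code), minimizing a test function with SciPy's L-BFGS implementation might look like the following; the Rosenbrock function and the starting point are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

def loss(v):
    # Rosenbrock function standing in for a network's loss surface
    return (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2

def grad(v):
    # Analytic gradient, as used by a derivative-based optimizer
    return np.array([
        -2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
        200 * (v[1] - v[0]**2),
    ])

res = minimize(loss, x0=np.array([-1.0, 1.0]), jac=grad, method="L-BFGS-B")
print(res.x)  # converges near the minimizer (1, 1)
```

Note that L-BFGS keeps only a short history of gradient differences instead of the full Hessian approximation, which is what makes it attractive for networks with many parameters.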
The objective of this article is to illustrate the Mexican Hat Wavelet Neural Network with the L-BFGS optimization algorithm for simulating the recycling procedure of waste plastic in the ocean. The non-linear waste plastic management model of the ocean is represented by three components: waste plastic (W), marine litter (M) and recycling of debris (R), i.e., the WMR model. In this regard, various instances have been discussed, and neural predictions have been made at testing points. As the proposed neural method is meshfree, after training the network we can find the solution at any point inside the given domain of the DE.
The rest of our contribution is outlined as follows: in Section 2, a literature review is presented. Section 3 presents the preliminaries of the WMR model and the architecture of M_{h}WNN for the sake of completeness. In Section 4, the formulation of the M_{h}WNN-LBFGS algorithm to solve the WMR model is discussed. In Section 5, two cases of the WMR model have been investigated to verify the effectiveness of the M_{h}WNN-LBFGS model. Finally, the conclusions are drawn in Section 6.
In 1943, Warren S. McCulloch and Walter Pitts proposed a model of neuron activity that merged the studies of neurophysiology and mathematical logic. In their historic paper [8], they developed the first elementary model of ANN, in which neurons obeyed the "all-or-none" process. It brought a revolution in the field of AI and attracted researchers to work on it.
In a pioneering work, Alizadeh and Kavianpour [9] developed a wavelet-ANN model for accurate predictions of dissolved oxygen, temperature and salinity in the Pacific Ocean. Chen et al. deployed a pre-clustering ANN model, using different components of sea-surface wind speed to estimate the mixed-layer depth in the Indian Ocean [10]. For fast and accurate detection of deep-sea debris, Xue et al. proposed a deep NN model [11]. Nuwairan et al. [12] developed a supervised neural network to simulate the waste plastic management model of the ocean. Motivated by the above considerations, it is natural to propose a new and efficient ANN algorithm to understand the dynamical behaviour of waste plastic management in the ocean. In this work, we have designed an advanced neural model by combining the Mexican Hat wavelet basis with the L-BFGS training algorithm, viz. M_{h}WNN-LBFGS, to study the non-linear phenomena of the WMR model.
ANN has been convincingly used in the field of differential equations (DE) in the past couple of decades, after Lee and Kang [13] developed a novel Hopfield neural network model to find the solution of first-order DEs. In 1998, Lagaris et al. [14] employed the concept of an unconstrained optimization problem and proposed a trial solution for DEs with regular boundaries that satisfies the given boundary and initial conditions. Moreover, some researchers have developed ODE and PDE solvers for DEs with specific properties, for instance, Lie symmetry differential equations [15], fractional differential equations [16], fuzzy differential equations [17], and singularly perturbed differential equations [18]. In another approach, a symplectic artificial neural network model using curriculum learning has been investigated by Sahoo and Chakraverty [19]. In this regard, a few more potential models are the spherical gas cloud model by Ahmad et al. [20], the Legendre artificial neural network method by Verma and Kumar [16], etc. Nonetheless, these methods are associated with problems with well-defined boundaries.
As per the literature, the advantages of NN are further strengthened by the addition of particular types of wavelets, which overcome the drawbacks of NN and turn it into an efficient technique for universal function approximation. WNN has been extensively used by researchers and scientists in different fields such as fault diagnosis [21], daily pan evaporation [22], energy price forecasting [23], industrial robot manipulators [24], adaptive PID controller design [25], etc.
In order to explore the solution of fractional differential equations, Sabir et al. introduced a fractional Meyer wavelet neural network model to solve nonlinear singular fractional Lane–Emden systems [26]. Tan et al. [6] investigated solutions of PDEs using an unsupervised WNN model with a meta-heuristic algorithm. Wu et al. [27] proposed a WNN with a 1×N×1 structure to find numerical solutions of fractional differential equations. These studies stimulated the authors to investigate different wavelet kernels as an alternative, reliable, efficient, and robust computing paradigm for solving the non-linear phenomena of the oceanographic WMR model.
The innovative insights of the M_{h}WNN-LBFGS model are summarized as below:
A novel multilayer framework, namely the Mexican Hat Wavelet Neural Network, has been designed in the Jupyter notebook environment.
Neural simulation of the non-linear waste plastic management model of the ocean, i.e., the WMR model, has been demonstrated using the proposed algorithm.
The resilience of the M_{h}WNN-LBFGS model is demonstrated by comparing the obtained simulation results with the RK4 method.
In addition, different global statistical measures (MAPE, TIC, RMSE, and ENSE) have been calculated at testing points to validate the stability of the proposed algorithm.
The unsupervised training algorithm ensures that M_{h}WNN-LBFGS is a powerful tool for predicting the solution of other non-linear systems of equations.
In this section, an overview of the waste plastic management (WMR) model and the architecture of the multilayer Mexican Hat Wavelet Neural Network is presented.
The WMR model is represented via three components, i.e., waste plastic W(γ), marine litter M(γ) and recycling of debris R(γ), which constitute the non-linear WMR system shown below [12]:

$\begin{cases}W^{\prime}(\gamma)=\alpha R(\gamma)-\beta W(\gamma)-\eta M(\gamma) W(\gamma)+\bar{b}, & W(0)=\lambda_1, \\ M^{\prime}(\gamma)=\eta M(\gamma) W(\gamma)-\delta M(\gamma), & M(0)=\lambda_2, \\ R^{\prime}(\gamma)=\beta W(\gamma)+\delta M(\gamma)-(\alpha+\theta) R(\gamma), & R(0)=\lambda_3,\end{cases}$
where α is the rate at which recycled waste regenerates new waste, β is the rate at which waste is recycled directly, η denotes the rate at which waste enters the marine environment, $\bar{b}$ is the rate at which new waste is reproduced, δ is the rate at which marine litter is recycled, and θ stands for the rate at which recycled waste is lost.
ANN is a branch of artificial intelligence (AI) that mimics the training process of the human brain to predict patterns from given historical data. Neural networks are processing devices built from mathematical algorithms that can be implemented in computer languages.
A wavelet is a 'small wave' function $\psi(\gamma) \in L^2(R)$ with the zero-mean property

$\int_{-\infty}^{\infty} \psi(\gamma)\, d\gamma=0,$
and centred in the neighbourhood of 0. The wavelet transform has a time-frequency localization property, whereas NN has self-adaptivity, fault tolerance, robustness, and strong inference ability. The network topology of a wavelet neural network is very similar to a feed-forward multi-layer neural network. The hidden layer consists of wavelet neurons, commonly known as wavelons, whose activation functions are drawn from a wavelet basis. In accordance with a learning algorithm, the translations and dilations of the wavelets, along with the weights, are updated.
The i^{th} wavelet is generated from the mother wavelet by two factors, viz. the translation d_{i} and the dilation c_{i} [28], and is written as:

$\psi_i(\gamma)=\psi\left(\frac{\gamma-d_i}{c_i}\right)$
As such, the following theorem is stated in the literature regarding the properties and rate of convergence of WNN.
Theorem 1. The WNN has L^{2} and universal function approximation properties [29].
Proof of Theorem 1. For the proof of Theorem 1 see Ref. [29] by Zhang et al.
In this work, the Mexican Hat mother wavelet is used as the wavelet basis, which can be written as

$\psi(\gamma)=\left(1-\gamma^2\right) e^{-\gamma^2 / 2}$
The Mexican Hat mother wavelet is obtained from the Gaussian function by applying the Laplacian operator, and it is a continuously differentiable wavelet. Figure 1 shows a graphical representation of the Mexican Hat wavelet.
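In its common unnormalized form (scalings in the literature differ by a constant factor), the Mexican Hat wavelet and its zero-mean property can be checked numerically with a short sketch:

```python
import numpy as np

def mexican_hat(g):
    """Mexican Hat mother wavelet: a Gaussian acted on by the
    Laplacian operator (up to sign and normalization)."""
    return (1.0 - g**2) * np.exp(-g**2 / 2.0)

# The wavelet peaks at 0 and integrates to (numerically) zero,
# as required of a mother wavelet.
g = np.linspace(-10.0, 10.0, 20001)
area = np.sum(mexican_hat(g)) * (g[1] - g[0])
print(area)  # ~ 0
```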
The output of the M_{h}WNN is defined as

$w n n(\gamma, \vec{v})=\sum_{j=1}^k \omega_j\, \psi\left(\sum_{i=1}^n \mu_{i, j} \gamma_i+b_j\right)$

where μ_{i,j} is the weight from input unit i to hidden unit j, ω_{j} is the output weight of the j^{th} wavelon, b_{j} is the bias and $\vec{v}$ collects the trainable parameters.
This section explains the formation of the proposed M_{h}WNN-LBFGS technique to solve the WMR model as an unconstrained problem.
An autonomous system of WMR model can be written in matrix form as:
First, the governing equation (Eq. (1)) is transformed into an approximate solution, built as a combination of the initial/boundary conditions, a user-defined mathematical expression, and the M_{h}WNN-LBFGS output. Therefore, we have
where the first term λ_{i}, i=1,2,3 satisfies the initial condition of the given WMR model without adjustable parameters and the second term is the neural output. For the given input $\gamma \in R^n$, the unknown function $w n n\left(\gamma, \vec{v}_t\right)$ in the second part of the approximate solution is denoted by
where $z_j=\sum_{i=1}^n \mu_{i, j} \gamma_i+b_j$; here $\mu_{i, j}$ and $w_j^x, x=w, m, r$, represent the weights from input unit i to hidden unit j and from hidden unit j to the output unit, respectively; b_{j} is the bias; and k denotes the number of neurons.
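For a single input (n = 1) and one hidden layer, this output computation can be sketched as follows; the weight values here are random placeholders for illustration, not trained parameters:

```python
import numpy as np

def psi(z):
    # Mexican Hat wavelet activation
    return (1.0 - z**2) * np.exp(-z**2 / 2.0)

def wnn(gamma, mu, b, w):
    """wnn(γ) = Σ_j w_j ψ(z_j) with z_j = μ_j γ + b_j (single input, k wavelons)."""
    z = mu * gamma + b
    return np.dot(w, psi(z))

rng = np.random.default_rng(0)
k = 16                                   # hidden width used in this work
mu, b, w = rng.uniform(-1.0, 1.0, (3, k))
print(wnn(0.5, mu, b, w))                # scalar network output at γ = 0.5
```

Note that the output is linear in the weights w, which is what allows the compact matrix form A·ω used below.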
Let us denote the output of the third hidden layer as
where $O\left(\gamma_i\right)=\left[o_1\left(\gamma_i\right), o_2\left(\gamma_i\right), \ldots, o_k\left(\gamma_i\right)\right]^T \in R^{k \times 1}$. Then $O(\gamma_i)$ can be obtained from
Let us take the output matrix of the last hidden layer A and the weight vector ω as follows:
$A=\left[\begin{array}{cccc}\psi\left(\mu_{1,1} \gamma_1+b_1\right) & \psi\left(\mu_{1,2} \gamma_1+b_2\right) & \ldots & \psi\left(\mu_{1, n} \gamma_1+b_n\right) \\ \psi\left(\mu_{2,1} \gamma_2+b_1\right) & \psi\left(\mu_{2,2} \gamma_2+b_2\right) & \ldots & \psi\left(\mu_{2, n} \gamma_2+b_n\right) \\ \vdots & \vdots & \ddots & \vdots \\ \psi\left(\mu_{k, 1} \gamma_k+b_1\right) & \psi\left(\mu_{k, 2} \gamma_k+b_2\right) & \ldots & \psi\left(\mu_{k, n} \gamma_k+b_n\right)\end{array}\right] \quad \text{and} \quad \omega=\left[\begin{array}{c}\omega_1 \\ \omega_2 \\ \vdots \\ \omega_n\end{array}\right] \in R^{n \times 1}$
Now define the block matrix E of the following form
$E(\gamma)=\left[\begin{array}{ccc}A_w(\gamma) & 0 & 0 \\ 0 & A_m(\gamma) & 0 \\ 0 & 0 & A_r(\gamma)\end{array}\right]$
$W=\left[\begin{array}{lll}\omega_j^w & \omega_j^m & \omega_j^r\end{array}\right]^T, j=1,2 \ldots n$
$\mathrm{O}=\left[\begin{array}{lll}w n n\left(\gamma, \vec{v}_w\right) & w n n\left(\gamma, \vec{v}_m\right) & w n n\left(\gamma, \vec{v}_r\right)\end{array}\right]^T$
Then the above matrix can be compactly written as
It may be noted that the input-layer weights and biases are fixed at arbitrarily generated values, so E(γ) depends only on the training points γ.
In the next step, a training algorithm is employed to tune the adjustable parameters of M_{h}WNN-LBFGS, which are embedded in the approximate solution. The M_{h}WNN-LBFGS is trained to predict the solutions of the WMR model at any testing point inside the given domain by unsupervised learning, where the parameters are updated to minimise the objective function. In order to form the objective function, we need the gradient of the network $\operatorname{wnn}(\gamma, \vec{v})$, which can be computed as follows:
where δ is the order of the derivative. Differentiating Eq. (7) gives
By using the gradient of the approximate solutions and for the given problem, the objective function can be formulated as follows:
On the other hand, we uniformly adopt L-BFGS as the optimizer, with a learning rate of 0.01, to find the optimal parameters. L-BFGS is a potential optimization technique from the quasi-Newton family that is widely employed in the field of deep learning. L-BFGS is the same as BFGS except that, instead of storing the full approximate Hessian, it keeps only a limited history of updates. As the training methodologies of neural networks are iterative, we need to designate a starting point for the iterations. Therefore, in our investigation, the initial weights have been generated randomly as small numbers in [-1,1]\{0}. The graphical abstract of M_{h}WNN-LBFGS for solving the WMR model is presented in Figure 2.
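The gradient computation above relies on the derivative of the Mexican Hat wavelet. Differentiating ψ(z) = (1 − z²)e^{−z²/2} gives ψ′(z) = (z³ − 3z)e^{−z²/2}, which the following finite-difference check (our own sketch, not part of the original implementation) confirms:

```python
import numpy as np

def psi(z):
    return (1.0 - z**2) * np.exp(-z**2 / 2.0)

def dpsi(z):
    # Closed-form derivative of the Mexican Hat wavelet
    return (z**3 - 3.0 * z) * np.exp(-z**2 / 2.0)

# Central finite-difference comparison on a grid of test points
z = np.linspace(-3.0, 3.0, 13)
h = 1e-6
fd = (psi(z + h) - psi(z - h)) / (2.0 * h)
print(np.max(np.abs(fd - dpsi(z))))  # agreement to roughly 1e-9
```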
In order to demonstrate that the presented M_{h}WNN-LBFGS algorithm is promising, we address two problems for simulation in this section. The performance has been studied in terms of statistical measures between the present results and traditional numerical results. All of the neural results in the following examples are implemented in the Jupyter notebook environment using Python 3. For both cases, the authors trained the network for 1,000 epochs. After selecting the basic framework, hyperparameter tuning was carried out to select the optimal number of hidden layers and the number of nodes in each hidden layer. The accuracy of the proposed M_{h}WNN-LBFGS algorithm is shown in the tables and graphs. Different global statistical measures (NSE, MAPE, TIC, and RMSE) are evaluated for the convergence analysis of the M_{h}WNN-LBFGS model, and are defined as below [30]:
$\begin{aligned} \mathrm{NSE} &= 1-\frac{\sum_{i=1}^N\left(\gamma_i-\hat{\gamma}_i\right)^2}{\sum_{i=1}^N\left(\gamma_i-\bar{\gamma}\right)^2}, \qquad \bar{\gamma}=\frac{1}{N} \sum_{i=1}^N \gamma_i, \\ \mathrm{ENSE} &= 1-\mathrm{NSE}, \\ \mathrm{MAPE} &= \frac{1}{N} \sum_{i=1}^N\left|\frac{\gamma_i-\hat{\gamma}_i}{\gamma_i}\right|, \\ \mathrm{RMSE} &= \sqrt{\frac{1}{N} \sum_{i=1}^N\left(\gamma_i-\hat{\gamma}_i\right)^2}, \\ \mathrm{TIC} &= \frac{\sqrt{\frac{1}{N} \sum_{i=1}^N\left(\gamma_i-\hat{\gamma}_i\right)^2}}{\sqrt{\frac{1}{N} \sum_{i=1}^N \gamma_i^2}+\sqrt{\frac{1}{N} \sum_{i=1}^N \hat{\gamma}_i^2}} \end{aligned}$
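These measures translate directly into code. The following sketch implements them as defined above, evaluated on a synthetic prediction (a small uniform offset, chosen only for illustration):

```python
import numpy as np

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat)**2))

def mape(y, yhat):
    return np.mean(np.abs((y - yhat) / y))

def tic(y, yhat):
    # Theil inequality coefficient
    return rmse(y, yhat) / (np.sqrt(np.mean(y**2)) + np.sqrt(np.mean(yhat**2)))

def ense(y, yhat):
    # Error in the Nash-Sutcliffe efficiency: ENSE = 1 - NSE
    nse = 1.0 - np.sum((y - yhat)**2) / np.sum((y - np.mean(y))**2)
    return 1.0 - nse

# Synthetic reference values and a prediction with a small uniform error
y = np.array([2.0, 1.8, 1.6, 1.4])
yhat = y + 1e-4
print(rmse(y, yhat), mape(y, yhat), tic(y, yhat), ense(y, yhat))
```

All four measures approach 0 as the prediction approaches the reference, which is the behaviour the convergence analysis below relies on.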
Problem 1. Here we have considered a non-linear WMR model by substituting the values α=0.4, β=0.21, η=0.75, $\bar{b}$=0.36, δ=0.5, θ=0.05, a_{i}=0, λ_{1}=2, λ_{2}=1.5, and λ_{3}=1 in Eq. (1) [12]:
$\begin{cases}W^{\prime}(\gamma)=0.4 R(\gamma)-0.21 W(\gamma)-0.75 M(\gamma) W(\gamma)+0.36, & W(0)=2, \\ M^{\prime}(\gamma)=0.75 M(\gamma) W(\gamma)-0.5 M(\gamma), & M(0)=1.5, \\ R^{\prime}(\gamma)=0.21 W(\gamma)+0.5 M(\gamma)-0.45 R(\gamma) . & R(0)=1 .\end{cases}$
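As a hedged sketch of the RK4 reference used later for comparison (our own re-implementation; the step size 0.001 is an arbitrary choice), Problem 1 can be integrated as follows:

```python
import numpy as np

def wmr_rhs(y):
    # Right-hand side of Problem 1 for the (W, M, R) components
    W, M, R = y
    return np.array([
        0.4 * R - 0.21 * W - 0.75 * M * W + 0.36,
        0.75 * M * W - 0.5 * M,
        0.21 * W + 0.5 * M - 0.45 * R,
    ])

def rk4(y0, h, steps):
    """Classical fourth-order Runge-Kutta for the autonomous WMR system."""
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for _ in range(steps):
        k1 = wmr_rhs(y)
        k2 = wmr_rhs(y + 0.5 * h * k1)
        k3 = wmr_rhs(y + 0.5 * h * k2)
        k4 = wmr_rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

sol = rk4([2.0, 1.5, 1.0], h=0.001, steps=2000)  # γ from 0 to 2
print(sol[100])  # W, M, R at γ = 0.1
```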
In order to apply the M_{h}WNN-LBFGS algorithm, let us reformulate the above problem into an approximate solution:
$\left\{\begin{array}{l}\widetilde{W}\left(\gamma, \vec{v}_w\right)=2.0+\left(1-e^{-\gamma}\right) w n n\left(\gamma, \vec{v}_w\right), \\ \widetilde{M}\left(\gamma, \vec{v}_m\right)=1.5+\left(1-e^{-\gamma}\right) w n n\left(\gamma, \vec{v}_m\right), \\ \widetilde{R}\left(\gamma, \vec{v}_r\right)=1.0+\left(1-e^{-\gamma}\right) w n n\left(\gamma, \vec{v}_r\right).\end{array}\right.$
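Putting the pieces together, a minimal end-to-end sketch of the unsupervised training for Problem 1 might look like the following. For brevity it uses a single hidden layer of 16 wavelons (the actual model uses three hidden layers) and SciPy's L-BFGS-B with its default numerical gradient; these simplifications are our own, not the original implementation.

```python
import numpy as np
from scipy.optimize import minimize

def psi(z):
    return (1.0 - z**2) * np.exp(-z**2 / 2.0)

def dpsi(z):
    return (z**3 - 3.0 * z) * np.exp(-z**2 / 2.0)

k = 16
gamma = np.linspace(0.0, 2.0, 100)        # 100 equidistant training points

def net(p, g):
    # One small wavelet network: its value and d/dγ on the grid
    mu, b, w = p
    Z = np.outer(g, mu) + b
    return psi(Z) @ w, (dpsi(Z) * mu) @ w

def loss(v):
    p = v.reshape(3, 3, k)                # (W, M, R) nets, each with mu, b, w
    f, df = 1.0 - np.exp(-gamma), np.exp(-gamma)
    (nw, dnw), (nm, dnm), (nr, dnr) = (net(p[i], gamma) for i in range(3))
    # Trial solutions satisfy the initial conditions by construction
    W, dW = 2.0 + f * nw, df * nw + f * dnw
    M, dM = 1.5 + f * nm, df * nm + f * dnm
    R, dR = 1.0 + f * nr, df * nr + f * dnr
    r1 = dW - (0.4 * R - 0.21 * W - 0.75 * M * W + 0.36)
    r2 = dM - (0.75 * M * W - 0.5 * M)
    r3 = dR - (0.21 * W + 0.5 * M - 0.45 * R)
    return np.mean(r1**2 + r2**2 + r3**2)  # mean squared ODE residual

rng = np.random.default_rng(1)
v0 = rng.uniform(-1.0, 1.0, 9 * k)        # random start in [-1, 1]
res = minimize(loss, v0, method="L-BFGS-B", options={"maxiter": 300})
print(loss(v0), "->", res.fun)            # residual loss before and after training
```

The learning is unsupervised in the sense that no reference solution appears in the loss; only the ODE residual at the collocation points is minimized.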
Problem 2. In the second case we have considered another non-linear WMR model by putting α=0.4, β=0.21, η=0.75, $\bar{b}$=0.96, δ=0.5, θ=0.05, a_{i}=0, λ_{1}=2, λ_{2}=1.5, and λ_{3}=1. Then Eq. (1) becomes [12]:
$\begin{cases}W^{\prime}(\gamma)=0.4 R(\gamma)-0.21 W(\gamma)-0.75 M(\gamma) W(\gamma)+0.96, & W(0)=2, \\ M^{\prime}(\gamma)=0.75 M(\gamma) W(\gamma)-0.5 M(\gamma), & M(0)=1.5, \\ R^{\prime}(\gamma)=0.21 W(\gamma)+0.5 M(\gamma)-0.45 R(\gamma) . & R(0)=1 .\end{cases}$
Accordingly, the neural approximation solution is written as
$\left\{\begin{array}{l}\widetilde{W}\left(\gamma, \vec{v}_w\right)=2.0+\left(1-e^{-\gamma}\right) w n n\left(\gamma, \vec{v}_w\right), \\ \widetilde{M}\left(\gamma, \vec{v}_m\right)=1.5+\left(1-e^{-\gamma}\right) w n n\left(\gamma, \vec{v}_m\right), \\ \widetilde{R}\left(\gamma, \vec{v}_r\right)=1.0+\left(1-e^{-\gamma}\right) w n n\left(\gamma, \vec{v}_r\right).\end{array}\right.$
A multi-layer neural network has been constructed with a single input, a single output, and three hidden layers, each containing 16 neurons. In addition, the L-BFGS optimizer has been used to update the parameters in these test problems with a learning rate of 0.01. The network has then been trained on 100 equidistant points from γ=0 to γ=2. During training, we track the learning performance at each epoch through the training loss and the validation loss. The training and validation losses for the two problems over 1000 epochs (the last 100 epochs are depicted in subfigures) are presented in Figure 3 and Figure 4. In other words, the validation loss in these figures represents the evolution of the network's capability for solving the WMR model. These are the values output by the loss function, Eq. (14), on the training and validation data sets at each epoch. Both losses continue to decrease throughout the learning process, showing the robustness of the model.
In order to show the effectiveness of the neural algorithm, it is vital to compare the neural results obtained by the proposed algorithm with existing results at different testing points. Moreover, to compare the proposed M_{h}WNN-LBFGS algorithm with conventional methods, we also ran the RK4 algorithm and obtained results at 20 different testing points. The forecasted data are portrayed graphically in plots that illustrate the reliability and consistency of M_{h}WNN-LBFGS. The experimental studies are reported in two parts: first, the results of the proposed M_{h}WNN-LBFGS algorithm are compared with classical results; then, statistical measures are reported for convergence analysis. Figure 5 and Figure 6 compare the neural solutions obtained using the proposed algorithm with the existing numerical solutions of the WMR system for both problems. From the figures, it can be observed that the neural results match the numerical results closely. Table 1 and Table 2 present the neural results of the WMR models for the two problems at testing points $\gamma \in[0,2], \Delta \gamma=0.1$.
In order to assess the precision level, the absolute error (AE) values are delineated in Figure 7 and Figure 8. From the box plots, we observe that the AE of Waste, Marine and Recycle lies in the ranges 3.8E-05 to 2.8E-04, 3.2E-05 to 5.5E-04 and 3.6E-05 to 4.4E-04 for Problem 1, and 3.4E-05 to 3.5E-04, 4.5E-06 to 4.2E-04 and 2.7E-05 to 2.6E-04 for Problem 2, respectively. One may observe that the modes of the AE values for the W(γ), M(γ), and R(γ) classes lie in the neighbourhoods of 2E-04 and 1E-04 for Problem 1 and Problem 2, respectively. These AE values demonstrate the efficacy of the designed M_{h}WNN-LBFGS algorithm for solving the proposed system.
The performance outcomes based on the statistical measures TIC, ENSE, and RMSE are plotted in Figure 9 and Figure 10 for the non-linear WMR system. Bar graphs are used to visualize the trend of the errors; for clearer visualization, the errors are presented on a (-log) scale. It may be noted that the TIC magnitude for W(γ), M(γ), and R(γ) lies in 6E-05 to 9E-05 and 2E-05 to 5E-05 for Problem 1 and Problem 2, respectively. Similarly, the ENSE value lies in 9E-08 to 1E-06 and the mean RMSE lies in the neighbourhood of 1E-04. Moreover, the comparison of the MAE and MAPE operators is shown in Table 3 and Table 4. From the tables, it is clearly seen that the obtained MAE errors lie in the close vicinity of 1E-04 for both cases, whereas the MAPE varies from 1E-05 to 1E-04.
Since all these global statistical measures (MAPE, TIC, RMSE, and ENSE) are close to 0, they indicate the correctness, precision and efficacy of the M_{h}WNN-LBFGS model.
It is well known that, after training, the neural model can be utilized as a black box to obtain numerical results at any arbitrary points in the given domain. In this experiment, we have considered three hidden layers with 16 neurons each for modeling the network. One may consider more hidden layers when constructing a neural model; however, by increasing the number of hidden layers and training the network for a long time, it may lose its capacity to generalize.
Table 1. Neural results of the WMR model for Problem 1 at testing points γ∈[0,2], Δγ=0.1.

Testing points (γ) | Waste W(γ) | Marine M(γ) | Recycle R(γ) |
0.0 | 2.000000 | 1.500000 | 1.000000 |
0.1 | 1.812748 | 1.646334 | 1.072439 |
0.2 | 1.635189 | 1.781877 | 1.144149 |
0.3 | 1.471149 | 1.903975 | 1.215662 |
0.4 | 1.323093 | 2.010936 | 1.286773 |
0.5 | 1.192037 | 2.101978 | 1.356870 |
0.6 | 1.077988 | 2.177128 | 1.425292 |
0.7 | 0.980389 | 2.237094 | 1.491580 |
0.8 | 0.898324 | 2.283139 | 1.555530 |
0.9 | 0.830553 | 2.316925 | 1.617091 |
1.0 | 0.775548 | 2.340338 | 1.676230 |
1.1 | 0.731594 | 2.355293 | 1.732853 |
1.2 | 0.696961 | 2.363548 | 1.786828 |
1.3 | 0.670068 | 2.366582 | 1.838041 |
1.4 | 0.649599 | 2.365555 | 1.886455 |
1.5 | 0.634526 | 2.361362 | 1.932129 |
1.6 | 0.624049 | 2.354745 | 1.975207 |
1.7 | 0.617471 | 2.346405 | 2.015875 |
1.8 | 0.614074 | 2.337080 | 2.054313 |
1.9 | 0.613043 | 2.327554 | 2.090632 |
2.0 | 0.613484 | 2.318619 | 2.124825 |
Table 2. Neural results of the WMR model for Problem 2 at testing points γ∈[0,2], Δγ=0.1.

Testing points (γ) | Waste W(γ) | Marine M(γ) | Recycle R(γ) |
0.0 | 2.000000 | 1.500000 | 1.000000 |
0.1 | 1.868125 | 1.649759 | 1.072964 |
0.2 | 1.737978 | 1.796453 | 1.146895 |
0.3 | 1.611849 | 1.937415 | 1.222118 |
0.4 | 1.492405 | 2.070328 | 1.298350 |
0.5 | 1.381929 | 2.193418 | 1.375096 |
0.6 | 1.281914 | 2.305565 | 1.451913 |
0.7 | 1.193058 | 2.406309 | 1.528507 |
0.8 | 1.115460 | 2.495781 | 1.604685 |
0.9 | 1.048818 | 2.574578 | 1.680259 |
1.0 | 0.992523 | 2.643625 | 1.754987 |
1.1 | 0.945709 | 2.704036 | 1.828586 |
1.2 | 0.907320 | 2.757009 | 1.900787 |
1.3 | 0.876230 | 2.803739 | 1.971407 |
1.4 | 0.851378 | 2.845357 | 2.040378 |
1.5 | 0.831843 | 2.882893 | 2.107744 |
1.6 | 0.816849 | 2.917246 | 2.173612 |
1.7 | 0.805692 | 2.949172 | 2.238081 |
1.8 | 0.797651 | 2.979268 | 2.301173 |
1.9 | 0.791944 | 3.007966 | 2.362765 |
2.0 | 0.787732 | 3.035539 | 2.422549 |
Table 3. MAE and MAPE of the M_{h}WNN-LBFGS results for Problem 1.

Measure | Waste W(γ) | Marine M(γ) | Recycle R(γ) |
MAE | 1.6562380952381668 E-04 | 2.2793714285719777 E-04 | 1.9516619047618304 E-04 |
MAPE | 2.0698206226719160 E-04 | 1.0256963610789712 E-04 | 1.2201287228477674 E-04 |
Table 4. MAE and MAPE of the M_{h}WNN-LBFGS results for Problem 2.

Measure | Waste W(γ) | Marine M(γ) | Recycle R(γ) |
MAE | 1.1274571428569284 E-04 | 1.1430809523808409 E-04 | 1.2962142857148295 E-04 |
MAPE | 1.0470961564736848 E-04 | 4.3931601159521410 E-05 | 7.8345688953517170 E-05 |
The present work shows the application of neural techniques to simulate waste plastic management of the ocean in order to conserve marine ecosystems. The advantages of the proposed method are examined by solving different cases that describe various phenomena in oceanography. The values of the statistical measures, which are very close to 0, confirm the reliability and correctness of M_{h}WNN-LBFGS. The excellent agreement between the neural results and traditional numerical methods shows that the newly developed M_{h}WNN-LBFGS algorithm is highly accurate for simulating the non-linear WMR model.
The ML algorithm addressed in this article is generic and can be useful for solving relevant problems emerging in various other engineering applications such as the epidemic model [31], fuzzy space-fractional telegraph model [32], fractional order coupled wave equations [33], astrophysics model [34], etc.
Conceptualization, A.K Sahoo; methodology, A.K Sahoo and S. Chakraverty; validation, A.K Sahoo and S. Chakraverty; formal analysis, A.K Sahoo; investigation, A.K Sahoo and S. Chakraverty; writing—original draft preparation, A.K Sahoo; writing—review and editing, S. Chakraverty; supervision, S. Chakraverty.
All authors have read and agreed to the published version of the manuscript.
The data used to support the findings of this study are available from the corresponding author upon request.
The first author would like to acknowledge the Council of Scientific and Industrial Research (CSIR), New Delhi, India (File no: 09/983(0042)-2019-EMR-I), for the support to pursue the present research work. The first author also acknowledges Ms. Priya Rao and Mr. Mrutyunjaya Sahoo for their help in searching related papers for the literature survey.
The authors declare no conflict of interest.