Real-world phenomena can be modelled mathematically as differential equations (DEs). Analytical solutions provide exactness, but they can be obtained only for a limited class of linear and simple problems [1]. Consequently, researchers turn to numerical methods to obtain approximate solutions.
Classical numerical methods, such as the Finite Element Method (FEM) and the Finite Difference Method (FDM), discretize the problem domain into a mesh of points or elements. These techniques are powerful and flexible, but their accuracy is local, restricted by a polynomial order of convergence. Attaining high accuracy often necessitates a prohibitively fine mesh, yielding large systems of equations and hence significant computational cost.
To overcome these limitations, spectral methods have gained prominence as a class of highly accurate numerical approaches [2]. In contrast to local methods, spectral methods construct a global approximate solution using a basis of smooth, infinitely differentiable functions, such as orthogonal or trigonometric polynomials. This global representation allows them to attain "spectral", i.e. exponential, convergence for problems with smooth solutions: as the number of basis functions increases, the error decreases exponentially, yielding highly accurate solutions with a relatively small number of degrees of freedom.
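This exponential decay of the error can be observed with a small NumPy experiment (an illustrative sketch, not part of the proposed method; the test function and degrees are arbitrary choices): fitting a smooth function with Chebyshev polynomials of increasing degree drives the maximum error down far faster than any fixed polynomial order would.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Approximate the smooth function f(x) = exp(x) * sin(5x) on [-1, 1]
# with Chebyshev expansions of increasing degree.
f = lambda x: np.exp(x) * np.sin(5 * x)
x = np.cos(np.pi * np.arange(200) / 199)  # dense evaluation grid in [-1, 1]

degrees = (4, 8, 16, 32)
errors = []
for deg in degrees:
    # Interpolate at the deg+1 Chebyshev nodes of the first kind.
    nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
    coeffs = C.chebfit(nodes, f(nodes), deg)
    errors.append(np.max(np.abs(C.chebval(x, coeffs) - f(x))))

for deg, err in zip(degrees, errors):
    print(f"degree {deg:2d}: max error = {err:.2e}")
```

Doubling the degree roughly squares the accuracy here, which is the hallmark of spectral convergence for analytic functions.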
However, the principal challenge in spectral methods is the determination of the coefficients of the basis expansion. In classical approaches such as collocation or Galerkin methods, the DE is imposed at specific points or in a weighted-integral sense. This generally yields complex, structured systems of algebraic equations, which may become difficult to solve or ill-conditioned, particularly for non-linear DEs.
An alternative paradigm reframes the computation of the coefficients as an optimization task. The aim becomes to find the set of coefficients that minimizes the residual, or "error", of the approximate solution over the entire domain. This technique transforms the DE problem into a continuous, generally high-dimensional, optimization problem. Its power lies in its adaptability and in its ability to handle non-linearities implicitly within the objective function.
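As a minimal sketch of this reformulation (the toy problem u'(x) = u(x), u(0) = 1, the sample grid, and the penalty weight are illustrative assumptions, not the formulation developed later in the paper), the unknowns are the Chebyshev coefficients and the objective sums the squared DE residual at sample points plus a boundary-condition penalty:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

xs = np.linspace(0.0, 1.0, 50)   # residual sample points on [0, 1]
t = 2.0 * xs - 1.0               # map [0, 1] onto the Chebyshev interval [-1, 1]

def objective(coeffs):
    """Squared residual of u' - u = 0 plus a penalty enforcing u(0) = 1."""
    u = C.chebval(t, coeffs)
    du = 2.0 * C.chebval(t, C.chebder(coeffs))  # chain rule for the interval mapping
    residual = du - u
    bc = C.chebval(-1.0, coeffs) - 1.0          # t = -1 corresponds to x = 0
    return np.sum(residual**2) + 100.0 * bc**2

# A candidate close to the true solution exp(x) makes the objective tiny,
# while the zero guess is penalized by the boundary term.
good = C.chebfit(t, np.exp(xs), 5)
print(f"zero guess:    {objective(np.zeros(6)):.3e}")
print(f"near-solution: {objective(good):.3e}")
```

Any gradient-free optimizer can now search the coefficient space directly, with non-linear terms of the DE entering the residual without any special treatment.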
Metaheuristic algorithms are powerful gradient-free search strategies for solving such optimization tasks [3-5]. These nature-inspired algorithms employ a population of candidate solutions to explore the search space and converge towards a global optimum. Notable examples include:
$\bullet$ Genetic Algorithm (GA): Mimicking Darwinian evolution, GA uses selection, crossover, and mutation operators to evolve a population of solutions over generations [6]. It is considered highly effective at global exploration.
$\bullet$ Particle Swarm Optimization (PSO): Introduced by Kennedy and Eberhart [7], PSO is inspired by the swarm intelligence of flocking birds. Each solution adjusts its trajectory based on its own best-found position and the best-found position of the entire swarm, striking an effective balance between individual and social knowledge.
$\bullet$ Artificial Bee Colony (ABC): Developed by Karaboga [8], ABC mimics the foraging behaviour of honey bees.
$\bullet$ Firefly Algorithm (FA): Proposed by Yang [9], FA is inspired by the flashing behaviour of fireflies.
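The personal-best/global-best balance described for PSO can be sketched in a few lines (a generic textbook implementation with common default parameters, not the variant or settings used in this paper), here minimizing the sphere function as a toy objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: each particle is pulled toward its own best position
    (cognitive term) and the swarm's best position (social term)."""
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda x: np.sum(x**2), dim=5)
print(f"best value found: {val:.3e}")
```

The inertia weight w damps the velocity, while c1 and c2 weight the individual and social attraction terms respectively.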
The Flower Pollination Algorithm (FPA), developed by Yang [10], is a more recent metaheuristic that imitates the pollination process of flowering plants. It balances global exploration, modelled as cross-pollination via Lévy flights, with local exploitation, modelled as self-pollination, and has achieved excellent results on a wide range of complex optimization problems.
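A hedged sketch of these two update rules follows (the switch probability p = 0.8, the Lévy exponent 1.5, and the population settings are common choices from the literature, not necessarily the ones used later in this paper); global pollination takes a Lévy-distributed step toward the current best flower, while local pollination mixes two randomly chosen flowers:

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(1)

def levy(dim, beta=1.5):
    """Lévy-distributed step generated with Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def fpa(objective, dim, n_flowers=25, iters=300, p=0.8):
    pop = rng.uniform(-5, 5, (n_flowers, dim))
    fit = np.array([objective(x) for x in pop])
    best = pop[np.argmin(fit)].copy()
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p:
                # Global (cross-)pollination: Lévy flight toward the best flower.
                cand = pop[i] + levy(dim) * (best - pop[i])
            else:
                # Local (self-)pollination: mix two random flowers.
                j, k = rng.choice(n_flowers, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            f = objective(cand)
            if f < fit[i]:          # greedy replacement
                pop[i], fit[i] = cand, f
        best = pop[np.argmin(fit)].copy()
    return best, fit.min()

best, val = fpa(lambda x: np.sum(x**2), dim=5)
print(f"best value found: {val:.3e}")
```

The heavy-tailed Lévy steps occasionally produce long jumps, which is what gives the global phase its exploratory character.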
Many other metaheuristic algorithms have proved their efficiency on a variety of problems, including Cuckoo Search [11] and the Whale Optimization Algorithm [12], as well as more recent ones such as the Barnacles Mating Optimizer [13], the Dandelion Optimizer [14], and the Dwarf Mongoose Optimization Algorithm [15].
Artificial intelligence, especially deep learning and Physics-Informed Neural Networks (PINNs) [16-18], has provided another powerful paradigm for solving DEs. PINNs use the residual of the DE as part of the loss function for training a neural network that directly represents the solution. While extremely powerful, PINNs often require tuning a large number of hyperparameters, and their theoretical convergence properties are still an active field of study.
This work deliberately deviates from that direction by combining the well-understood, high-accuracy approach of spectral methods with the robust global search of metaheuristics. This framework hybridizes the "best of both worlds" while avoiding the complexities of deep neural network training.
This paper presents the Chebyshev Metaheuristic Solver Approach (CMSA), an approach that transforms a DE into an optimization task solved via the Flower Pollination Algorithm.
The remainder of the paper is structured as follows. Section 2 describes the proposed approach, outlining how the problem is formulated as an optimization task (how Chebyshev polynomials and FPA are used) and clarifying its fundamental principles and mechanisms. In Section 3, different problems are solved using the method; the results underscore the effectiveness of the proposed approach in dealing with various challenges. Finally, a conclusion and future scope of the work are given, where the proposed approach can be extended to systems of DEs and to other metaheuristic algorithms.