Convolutional Neural Network-Assisted Scattering Inversion in Diverse Noise Environments
Abstract:
In addressing the challenge of obstacle scattering inversion amidst intricate noise conditions, a model predicated on convolutional neural networks (CNN) has been proposed, demonstrating high precision. Five distinct noise scenarios, encompassing Gaussian white noise, uniform distribution noise, Poisson distribution noise, Laplace noise, and impulse noise, were evaluated. Far-field data paired with the Fourier coefficients of obstacle boundary curves were employed as network input and output, respectively. Through the convolutional processes inherent to the CNN, salient features within the far-field data related to obstacles were adeptly identified. Concurrently, the statistical characteristics of the noise were assimilated, and its perturbing effects were diminished, thus facilitating the inversion of obstacle shape parameters. The intrinsic capacity of CNNs to intuitively learn and differentiate salient features from data eradicates the necessity for external intervention or manually designed feature extractors. This adaptability confers upon CNNs a significant edge in tackling obstacle scattering inversion challenges, particularly in light of fluctuating data distributions and feature variability. Numerical experiments have substantiated that the aforementioned CNN model excels in addressing scattering inversion complications within multifaceted noise conditions, consistently delivering solutions with remarkable precision.
1. Introduction
The problem of obstacle scattering inversion has been recognized as a pivotal topic within mathematical-physical inverse problems, finding relevance in diverse fields such as geological exploration, non-destructive testing, and medical imaging [1], [2], [3], [4]. The necessity of acquiring precise and dependable data for this challenge is underscored, yet real-world situations present numerous complexities. It has been observed that measurement environments and instrument conditions often introduce significant errors and unwanted data. The inherent non-linearity and ill-posed nature of the obstacle scattering inversion problem are further exacerbated in real-world conditions where varied forms of noise interfere with data collection and transmission, making the accurate deduction and reconstruction of obstacle geometric features problematic [5], [6], [7].
Machine learning's profound self-learning capabilities have been extensively highlighted in recent literature, pointing towards its potential efficacy in diminishing noise impacts and aptly managing the scattering inversion challenge [8], [9], [10]. In an attempt to tackle the scattering inversion problem, a synthesis of neural networks with the Tikhonov regularization strategy, termed NEET, was introduced by Li et al., displaying promising outcomes in sparse data environments [11]. A distinct model, the fully connected neural network (FCNN) elucidated by Gao et al., demonstrated proficiency in handling obstacle scattering inversion amidst data tainted by Gaussian white noise [12]. Relying on a blend of neural networks and gating concepts, an acoustic scattering model for obstacle shape elucidation named SPIMNNG was proposed by Meng et al., which accommodated a plethora of conditions, notably data affected by Gaussian white noise [13]. A comprehensive review of related studies can be found in references [14], [15], [16], [17]. Despite the concentration of these studies on Gaussian white noise, real-world applications often encounter noise of a multifaceted nature, distinguished by diverse statistical attributes and spectral distributions. This includes Gaussian white noise as well as other noise forms such as uniform distribution noise, Poisson distribution noise, Laplace noise, and impulse noise [18], [19], [20]. In such complex noise settings, noise possessing varied statistical and spectral characteristics intermingles with pivotal information, thereby obstructing the precise extraction of obstacle data from observed samples. This confluence can result in inversion outcomes marked by instability and inaccuracy. Furthermore, the inherent uncertainties present in these intricate noise conditions can make the estimation of model parameters daunting.
To adeptly address the intricate challenges posed by diverse noise scenarios, a model rooted in CNN for the obstacle scattering inversion problem has been proposed. Leveraging the intrinsic adaptive learning and noise attenuation capacities of CNNs, an effective approach to the obstacle scattering inversion problem, even under adverse noise conditions, is presented. In scenarios contaminated with noise, CNNs, utilizing multi-scale feature extraction, are found to discern the statistical traits of the noise, suppressing its effects and consequently refining the precision of obstacle imaging.
2. The Obstacle Scattering Inversion Problem
The direct scattering problem for acoustically soft obstacles under incident plane waves is initially introduced. It is assumed that $D \subset \mathbb{R}^n(n=2,3)$ represents an impenetrable obstacle within a uniform background medium, while $k = w / c \in \mathbb{R}^{+}$ stands for the wavenumber of the plane incident wave, determined jointly by the wave frequency $w \in \mathbb{R}^{+}$ and wave speed $c \in \mathbb{R}^{+}$ of the background field. Consequently, the incident field $u^i$ is defined as a plane wave and takes the form:

$$u^i(x)=e^{i k x \cdot d_{in}}, \quad x \in \mathbb{R}^n, \tag{1}$$
where, $d_{i n} \in S^{n-1}$ denotes the direction of the incident wave and $i=\sqrt{-1}$ represents the imaginary unit. The Helmholtz equation, a partial differential equation describing wave phenomena, is commonly utilized to characterize acoustic and electromagnetic wave problems [21]. In the context of the obstacle scattering inversion problem, the behavior of waves propagating within intricate media is described by the Helmholtz equation. Interactions and subsequent generation of the scattering field $u^s$ occur when the incident wave contacts the obstacle, with the direct scattering problem represented by the following Helmholtz equation:

$$\begin{cases}\Delta u+k^2 u=0 & \text{in } \mathbb{R}^n \setminus \overline{D}, \\ u=0 & \text{on } \partial D, \\ \lim _{r \rightarrow \infty} r^{\frac{n-1}{2}}\left(\dfrac{\partial u^s}{\partial r}-i k u^s\right)=0, & r=|x|, \end{cases} \tag{2}$$
where, $u:=u^i+u^s$ is the total wave field. The homogeneous Dirichlet boundary condition on $\partial D$ serves as a mathematical condition employed to depict boundary constraints in wave problems, signifying that $D$ is an acoustically soft obstacle. Such a condition aids in delimiting the numerical solution domain and constraining the behavior of the wave field, facilitating information retrieval about the obstacle. The last equation in (2) embodies the Sommerfeld radiation condition, traditionally used in the obstacle scattering inversion problem to emulate wave attenuation behaviors at infinity, ensuring the well-posedness of the boundary problem. Specifically, the Sommerfeld radiation condition mandates that the wave function or its derivative approaches zero at infinity, indicating the gradual attenuation of waves in areas distant from the obstacle. The application of the Sommerfeld radiation condition affirms the validity of numerical simulations, especially when resolving wave problems within finite computational domains [22]. Through these conditions, wave behavior at distances far from the obstacle can be effectively simulated without introducing spurious numerical oscillations or reflections.
A relationship of asymptotic nature exists between the scattering field $u^s$ and the far field $u^{\infty}$, with the asymptotic expansion of $u^s$ described as:

$$u^s(x)=\frac{e^{i k r}}{r^{\frac{n-1}{2}}}\left(u^{\infty}(\hat{x})+O\!\left(\frac{1}{r}\right)\right), \quad r=|x| \rightarrow \infty, \tag{3}$$
where, $\hat{x}=\frac{x}{r} \in S^{n-1}$ represents the unit direction vector, and Eq. (3) is valid for all directions with $d_{o b} \in S^{n-1}$ indicating the observation direction.
The obstacle scattering inversion problem of interest involves the reconstruction of the obstacle's shape based on received far-field data:

$$u^{\infty}\left(d_{o b}, d_{in}\right),\; d_{o b}, d_{in} \in S^{n-1} \;\longmapsto\; \partial D, \tag{4}$$
where, $u^{\infty}$ pertains to the far-field data associated with Eq. (3).
3. Convolutional Neural Network Model and Noise Types
Initially, based on the inverse problem (4), the following assumptions were presented for future reference:
Assumption 2.1: The incident field $u^i(x)$ refers to the initial wave field or wavefront in wave problems. This wave propagates from external space into the scattering region containing the obstacle. Interactions between its characteristics and the obstacle in the scattering region lead to wave phenomena. It was assumed that the incident field is a plane wave, with the unit incident direction being $d_{i n}=(\sin (\alpha), \cos (\alpha))$. The incident angles $\alpha$ are uniformly distributed over $[0,2 \pi)$:

$$\alpha_l=\frac{2 \pi l}{s_{in}}, \quad l=0,1, \cdots, s_{in}-1,$$
where, $s_{i n}$ denotes the quantity of incident directions.
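The uniform angular grid of Assumption 2.1 can be sketched as follows (a minimal illustration, not the paper's code; the function name is hypothetical):

```python
import numpy as np

# Sketch: uniformly spaced incident angles over [0, 2*pi) and the
# corresponding unit directions d_in = (sin(alpha), cos(alpha)).
def incident_directions(s_in):
    """Return an (s_in, 2) array of unit incident directions."""
    alphas = 2.0 * np.pi * np.arange(s_in) / s_in  # uniform grid on [0, 2*pi)
    return np.stack([np.sin(alphas), np.cos(alphas)], axis=1)

d = incident_directions(16)
```

Every row is a unit vector, and the first direction ($\alpha = 0$) is $(0, 1)$.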
Assumption 2.2: In the scenario of $s_{i n}$ incident directions and $s_{o b}$ observation directions, the received far-field data is represented as:

$$X=\left(x_{i j}\right)_{s_{in} \times s_{o b}}, \quad x_{i j}=u^{\infty}\left(d_{o b}^{(j)}, d_{in}^{(i)}\right),$$
Herein, $x_{ij}$ corresponds to the $i$-th incident direction and the $j$-th observation direction. In practical computation, each complex far-field value is represented as a two-dimensional real array $x_{i j}=\left(a_{i j}, b_{i j}\right)^T$, with real part $a_{i j}$ and imaginary part $b_{i j}$.
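The conversion from complex far-field samples to the two-channel real representation of Assumption 2.2 can be sketched as follows (an illustrative helper, not the paper's code):

```python
import numpy as np

# Sketch: pack complex far-field samples x_ij into the real/imaginary pair
# (a_ij, b_ij), yielding a 2-channel real array suitable as CNN input.
def to_real_channels(far_field):
    """far_field: complex array of shape (s_in, s_ob) -> (2, s_in, s_ob)."""
    return np.stack([far_field.real, far_field.imag], axis=0)

X = to_real_channels(np.array([[1 + 2j, 3 - 1j]]))
```

Channel 0 holds the real parts $a_{ij}$ and channel 1 the imaginary parts $b_{ij}$.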
Assumption 2.3: The boundary curve $\partial D_2$ of the two-dimensional obstacle was parameterized as a truncated Fourier series:

$$\partial D_2: \; z(t)=\left(a_0+\sum_{i=1}^{I}\left(a_i \cos i t+b_i \sin i t\right),\; b_0+\sum_{i=1}^{I}\left(c_i \cos i t+d_i \sin i t\right)\right), \quad t \in[0,2 \pi),$$
where, $a_0, b_0, a_i, b_i, c_i$, and $d_i$ are the Fourier coefficients of the truncated expansion, $I \in \mathbb{N}_{+}$ is the truncation order, and $Y_2=\left(y_1, y_2, \cdots, y_m\right)$ with $m=4 I+2$ signifies the parameter vector of the two-dimensional curve.
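Evaluating the truncated Fourier parameterization of Assumption 2.3 can be sketched as follows (a minimal illustration; the coefficient ordering inside $Y_2$ is an assumption, since the paper only fixes $m = 4I + 2$):

```python
import numpy as np

# Sketch: evaluate the 2-D boundary curve for Fourier coefficients of
# truncation order I at parameter angles t.
def boundary_curve(a0, b0, a, b, c, d, t):
    """a, b, c, d: length-I coefficient arrays; t: array of angles."""
    i = np.arange(1, len(a) + 1)[:, None]  # harmonic indices 1..I
    x1 = a0 + (a[:, None] * np.cos(i * t) + b[:, None] * np.sin(i * t)).sum(axis=0)
    x2 = b0 + (c[:, None] * np.cos(i * t) + d[:, None] * np.sin(i * t)).sum(axis=0)
    return np.stack([x1, x2], axis=1)

t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
# Sanity check: a_1 = 1 in x, d_1 = 1 in y recovers the unit circle.
circle = boundary_curve(0.0, 0.0, np.array([1.0]), np.array([0.0]),
                        np.array([0.0]), np.array([1.0]), t)
```

With only first-order coefficients set, the curve reduces to $(\cos t, \sin t)$, which is a convenient sanity check for any ordering convention.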
At the heart of the obstacle inverse scattering problem based on (4) lies the pursuit of the relationship between far-field data and obstacle shape parameters. A CNN network efficient in addressing the inverse scattering issue was constructed. This model encompasses several convolutional layers, pooling layers, and fully connected layers. Features are extracted from the input data by the convolutional layers. The pooling layers serve to reduce the spatial dimensions of the feature maps, and the fully connected layers are responsible for generating the final inversion results. Techniques such as batch normalization and activation functions have been employed to enhance model performance. Specifically, the network model is composed of a preprocessor, a stager, and a postprocessor, as illustrated in Figure 1.
Given that data in complex noise environments often contain substantial noise, a preprocessor is utilized to handle raw far-field data, which then gets relayed to the first stage. In the preprocessor stage, the input data undergoes normalization and preliminary feature extraction. This step ensures large magnitude disparities in input variables do not adversely affect the efficacy of the solution algorithm and that noise can be better countered during model training. Additionally, a vast array of synthetic data was generated, covering various noise types and intensities, to enrich the training set and boost the model's generalization capabilities. The postprocessor treats the output from the network's last stage and then forwards the results. It comprises a pooling layer and a fully connected layer. In this setup, each feature's spatial size can be independently reduced by the pooling layer, retaining its depth dimension, to minimize computational requirements. The fully connected layer maps the previously learned feature space to the sample label space. Furthermore, the backpropagation algorithm and the stochastic gradient descent optimizer have been employed to minimize the loss function. This loss function encapsulates the discrepancies between the inversion results and true data, along with regularization terms, optimizing model stability and generalization performance.
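The normalization step of the preprocessor can be sketched as follows (the paper does not specify the exact scheme, so per-channel standardization is assumed here purely for illustration; in practice the statistics would come from the training set):

```python
import numpy as np

# Sketch: per-channel standardization of the 2-channel far-field input,
# so large magnitude disparities do not dominate training.
def normalize(X, eps=1e-8):
    """X: array of shape (channels, s_in, s_ob)."""
    mu = X.mean(axis=(-2, -1), keepdims=True)
    sd = X.std(axis=(-2, -1), keepdims=True)
    return (X - mu) / (sd + eps)  # eps guards against constant channels

Xn = normalize(np.random.default_rng(0).normal(2.0, 5.0, size=(2, 16, 16)))
```

After standardization each channel has zero mean and approximately unit variance, regardless of the original scale.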
Beyond this, the remaining parts of the CNN are divided into two stages. Each stage consists of two CNNBlocks linked in succession, and within a stage, the feature size remains constant. A transition layer has been incorporated between the two stages to adjust dimensions, simplifying network learning issues and suppressing lower-response information. The CNNBlock contains two mappings: An identity mapping and a mapping $F$ composed of two convolutional layers with the same weights, two activation functions, and two BN layers. When the CNNBlock input is denoted by $y_i$ and the output by $y_{i+1}$, the CNNBlock computational expression is represented as:

$$y_{i+1}=\sigma\left(B N\left(F\left(y_i, W_i\right)\right)+y_i\right),$$
where, $F\left({y}_i,{W}_i\right)$ indicates the convolution operation, ${W}_i$ denotes the weight parameters of the convolution kernel in the CNNBlock, $\sigma(\bullet)$ represents the activation function, and $B N(\cdot)$ signifies the batch normalization operation. Ultimately, a correspondence between far-field data and obstacle shape parameters has been established by the CNN model, allowing for the inversion of obstacle shape parameters $Y_2=\left(y_1, y_2, \cdots y_m\right)$ from far-field data $X$.
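A minimal numpy sketch of the CNNBlock computation $y_{i+1} = \sigma(BN(F(y_i, W_i)) + y_i)$ is given below. A single 1-D convolution stands in for $F$, batch-style standardization for $BN$, and LeakyReLU for $\sigma$; this mirrors only the residual structure, whereas the paper's block uses two convolutional layers, two BN layers, and two activations.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    """LeakyReLU activation, the function used in Section 4's experiments."""
    return np.where(x > 0, x, slope * x)

def cnn_block(y, w, eps=1e-8):
    """Residual block sketch: convolution, normalization, shortcut, activation."""
    f = np.convolve(y, w, mode="same")      # convolution F(y, W)
    bn = (f - f.mean()) / (f.std() + eps)   # batch-normalization stand-in
    return leaky_relu(bn + y)               # identity shortcut, then sigma

out = cnn_block(np.linspace(-1.0, 1.0, 32), np.array([0.25, 0.5, 0.25]))
```

The identity shortcut preserves the feature size within a stage, which is why dimension changes are delegated to the transition layer.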
Remarkable advantages of this CNN model in addressing the obstacle inverse scattering problem over traditional methods or general neural network approaches have been observed. Firstly, CNNs inherently possess potent feature extraction capabilities. Through the combination of convolutional and pooling layers, multi-scale features within the data can be autonomously discerned by CNNs. This capability is crucial for intricate obstacle inverse scattering challenges. While traditional methods typically require manual feature extractor design, CNNs can adaptively extract features, capturing shape and positional details of obstacles more effectively. Secondly, when grappling with large-scale data, CNNs demonstrate remarkable computational efficiency. Traditional approaches might consume vast computational resources when navigating complex inverse scattering issues, but the parallel computing capability and parameter sharing mechanism of CNNs allow them to efficiently handle extensive data sets. Additionally, the noise suppression and generalization capabilities of CNNs stand out. In multifaceted noise scenarios, multi-scale feature extraction of data under various noise conditions allows CNNs to learn statistical characteristics of noise, enhancing obstacle imaging clarity and accuracy. Coupled with vast data training, the robust generalization ability of CNNs is evident, applicable across diverse obstacle types and noise distributions without frequent model readjustments. Lastly, the end-to-end learning approach of CNNs streamlines problem modeling and solving processes. While traditional methods necessitate intricate physical model and solver design, CNNs, by learning directly from data, aptly navigate the complexity of obstacle inverse scattering challenges, making problem modeling less arduous.
In summation, by virtue of their feature extraction capabilities, computational efficiency, noise suppression, generalization ability, and end-to-end learning approach, CNNs manifest evident advantages in handling obstacle inverse scattering challenges when compared to traditional or general neural network methods. Such tools offer powerful solutions for real-world complex noise scenarios in obstacle imaging.
Experimental simulations were used to collect far-field data under five different noise scenarios, including Gaussian white noise, uniform distribution noise, Poisson distribution noise, Laplace noise, and impulse noise. This data encompasses various situations in complex noise environments. Normalization and preliminary feature extraction were applied to the data, ensuring consistent magnitudes and thereby mitigating the impact on the solution algorithm (as shown in Figure 2).
Gaussian white noise serves as a prevalent random signal model and finds extensive application in fields such as communication systems, signal processing, and image processing. This noise is characterized by a zero mean, indicating an equilibrium between its positive and negative components over extended periods; a constant variance, signifying a consistent noise intensity over time; and adherence to the Gaussian or normal distribution, evident from its bell-shaped curve. It can be mathematically expressed as:

$$X(t)=A \sin (2 \pi f t+\phi),$$
where, $X(t)$ represents the noise value at time $t$, $A$ is the amplitude, $f$ is the frequency, and $\phi$ is the phase. In Gaussian white noise, each noise value $X(t)$ at a given time is a random variable following a Gaussian distribution with mean $0$ and variance $\sigma^2$. Introducing Gaussian white noise in backscattering problems contributes randomness, resulting in an amalgamation of obstacle signals with noise signals in the observed data. This amalgamation can lead to blurred imaging of obstacles, as noise obscures the genuine signals.
Uniform distribution noise is distinguished by equal probabilities of its values occurring within a given interval. It lacks any specific trends or biases and is sometimes referred to as rectangular or average distribution in statistics. It possesses a uniform probability density function, indicating an equal chance of all values within a specified range. It can be mathematically expressed as:

$$f(x)= \begin{cases}\dfrac{1}{b-a}, & a \leq x \leq b, \\ 0, & \text{otherwise,}\end{cases}$$
where, $f(x)$ is the probability density function for random variable $x$. Within the interval $[a,b]$, the probability density function remains constant at $\frac{1}{b-a}$, denoting an equal chance of all values in this range. Outside this interval, the probability density is zero. This uniform distribution means that every value within the defined interval has an equal chance of occurrence, introducing broadband disturbances into the data. Hence, such noise can blur the signal boundaries, affecting the precise imaging of obstacles.
Poisson distribution noise serves as another commonly employed random signal model, representing the number of discrete events occurring within a certain time or spatial frame. Considering it from a noise perspective, it signifies the number of times a random event arrives or occurs within a certain duration or space. Specifically, when the amplitude of the noise signal is a non-negative integer, it can be expressed mathematically as:

$$P(k ; \lambda)=\frac{\lambda^k e^{-\lambda}}{k !}, \quad k=0,1,2, \cdots,$$
where, $P(k ; \lambda)$ denotes the probability that the amplitude of the noise signal is $k$ given parameter $\lambda$. $\lambda$ is a parameter of the Poisson distribution, representing the average number of events occurring per unit time or unit space. In backscattering problems, Poisson noise may elevate the data's volatility, especially in low signal-to-noise ratio scenarios, causing unstable imaging of obstacles.
Laplace noise is often used in real-world applications to simulate random noises exhibiting peak and heavy-tail characteristics, such as edge detection in image processing or channel noise modeling in communication systems. This noise, given its mean and scale parameters, displays amplitude variations with sharp peaks and heavy tails, making it suitable for characterizing some non-Gaussian random signals. Its mathematical expression is:

$$f(x ; \mu, b)=\frac{1}{2 b} \exp \left(-\frac{|x-\mu|}{b}\right),$$
where, $f(x ; \mu, b)$ represents the probability density function of the noise signal taking a value $x$ under the given parameters $\mu$ and $b$. $\mu$ is the mean parameter of the Laplace distribution, indicating the central position of the noise signal, while $b$ is the scale parameter, controlling the amplitude variation of the noise signal. Noise of the Laplace distribution, characterized by sharp peaks, might introduce pronounced spikes or steep noise components in the data concerning backscatter problems. Such noise disrupts the smoothness of the signal, complicating the detection and imaging of obstacle boundaries.
Impulse noise, a discrete random signal, is characterized by its abrupt and transient amplitude perturbations. The timing and amplitude of these pulses can be adjusted based on specific application contexts. Widely used in communication systems and sensor signals, impulse noise can be described using the probability distribution of pulse sequences, mathematically expressed as:

$$P(x)=\sum_i p_i\, \delta\left(x-t_i\right),$$
where, $P(x)$ denotes the probability density function of the impulse noise when taking a value of $x$. $\delta(x)$ represents the Dirac $\delta$-function, which becomes infinitely large at $x=0$ and is zero elsewhere. $t_i$ indicates the time point of the $i$th impulse, while $p_i$ designates the amplitude of the $i$th impulse. Impulse noise might introduce prominently anomalous values in backscatter problems. These anomalies could mislead backscattering algorithms, resulting in incorrect imaging of obstacles.
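The five noise models above can be sketched with numpy's random generators as follows (an illustration only; the parameter names mirror Section 4, but the exact settings and the clean-signal stand-in are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=256)                        # stand-in for clean far-field data

gauss   = x + rng.normal(0.0, 0.1, x.shape)     # Gaussian white noise, std 0.1
uniform = x + rng.uniform(-0.2, 0.2, x.shape)   # uniform noise on (-n, n), n = 0.2
poisson = x + 0.1 * rng.poisson(1.0, x.shape)   # Poisson counts scaled by n = 0.1
laplace = x + rng.laplace(0.0, 0.05, x.shape)   # Laplace noise, scale b = 0.05

impulse = x.copy()                              # impulse: replace n random points
idx = rng.choice(x.size, size=4, replace=False)
impulse[idx] = rng.normal(0.0, 0.1, size=4)
```

Note the qualitative differences the text describes: the uniform perturbation is strictly bounded, the Poisson term is non-negative (a systematic shift at fixed $\lambda$), and the impulse model leaves all but $n$ points untouched.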
4. Numerical Experiments
Consideration was given to solving the obstacle backscattering problem of the Helmholtz equation in a two-dimensional context, with the kite-shaped obstacle defined by the following function:
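The paper's exact kite function is not reproduced here; the sketch below uses the classical kite-shaped parameterization common in the obstacle scattering literature, purely as an assumed illustration of such a boundary curve:

```python
import numpy as np

# Classical kite curve (assumed coefficients, not necessarily the paper's):
# x(t) = (cos t + 0.65 cos 2t - 0.65, 1.5 sin t), t in [0, 2*pi).
def kite(t):
    return np.stack([np.cos(t) + 0.65 * np.cos(2.0 * t) - 0.65,
                     1.5 * np.sin(t)], axis=1)

pts = kite(np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False))
```

The second harmonic term produces the concave "kite" indentation, which makes this shape a standard non-convex test case for inversion methods.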
An initial classification of the received far-field data was conducted to gain prior information about the category to which the obstacle belongs. Relevant experiments can be referred to in the study [13]. For the training of the network model and the inversion of obstacle shape parameters, a dataset (X, Y) containing both the far-field data and the Fourier coefficients of the truncated obstacle boundary curve equation was utilized. The dataset was divided into training and testing sets at a ratio of 8:2. Table 1 presents the hyperparameter settings for the network model, chosen based on extensive numerical experiments and literature references.
| Parameter | $J$ | $v$ | $m$ | $\eta$ | $e$ |
|---|---|---|---|---|---|
| Value | 5000 | 50 | 16 | 0.001 | 500 |
The reliability of the predicted obstacle shape parameters was further evaluated quantitatively over the entire test set. Three commonly used performance metrics were adopted, namely the relative error (RE), the root mean square error (RMSE), and the correlation coefficient R, defined as follows:

$$\mathrm{RE}=\frac{\left\|\tilde{Y}-Y\right\|_2}{\|Y\|_2} \times 100 \%, \quad \mathrm{RMSE}=\sqrt{\frac{1}{N} \sum_{i=1}^N\left(\tilde{Y}_i-Y_i\right)^2}, \quad R=\frac{\sum_{i=1}^N\left(Y_i-\bar{Y}\right)\left(\tilde{Y}_i-\bar{\tilde{Y}}\right)}{\sqrt{\sum_{i=1}^N\left(Y_i-\bar{Y}\right)^2} \sqrt{\sum_{i=1}^N\left(\tilde{Y}_i-\bar{\tilde{Y}}\right)^2}},$$
where, $\bar{Y}$ and $\bar{\tilde{Y}}$ represent the mean values of the actual and predicted values, respectively, and $N$ denotes the number of test data.
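The three metrics can be sketched as follows (the Euclidean norm in RE is an assumption, consistent with the percentage values reported below):

```python
import numpy as np

# Sketch: RE, RMSE, and Pearson correlation coefficient R for a pair of
# actual (Y) and predicted (Y_hat) parameter vectors.
def metrics(Y, Y_hat):
    re = np.linalg.norm(Y_hat - Y) / np.linalg.norm(Y)      # relative error
    rmse = np.sqrt(np.mean((Y_hat - Y) ** 2))               # root mean square error
    yc, pc = Y - Y.mean(), Y_hat - Y_hat.mean()
    r = (yc * pc).sum() / np.sqrt((yc ** 2).sum() * (pc ** 2).sum())
    return re, rmse, r

re, rmse, r = metrics(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
```

A perfect prediction yields RE = 0, RMSE = 0, and R = 1, which is the limiting case the tables below are measured against.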
In this experiment, consideration was given to a single incident direction with settings of 16 observation directions. Gaussian white noise, with a mean of 0 and standard deviations of 0.05, 0.1, and 0.2, was added to the far-field data, respectively. The reconstruction effects of the obstacle by the network model under different levels of Gaussian white noise were subsequently analyzed.
| Noise std. | 0.05 | 0.1 | 0.2 |
|---|---|---|---|
| R | 0.9089 | 0.7571 | 0.4940 |
| Loss | 0.0064 | 0.0157 | 0.0296 |
| RE | 3.3572% | 5.5418% | 8.3111% |
| RMSE | 0.0837 | 0.1369 | 0.2010 |
The experimental data displayed in Table 2 represent average results from 100 experiments conducted under different random seed scenarios. From the results in Table 2, it can be observed that as the standard deviation of the Gaussian white noise increases, the inversion effects deteriorate progressively. This deterioration is attributed to the randomness introduced by the Gaussian white noise, which reduces the signal-to-noise ratio of the original data, making it challenging for the model to differentiate between useful information and noise. This is further illustrated in Figure 3.
For this experiment, a single incident direction was considered, set with 16 observation directions. Uniform noise distributed within the range (-n, n), where n took values of 0.1, 0.2, and 0.3, was introduced to the far-field data. The reconstruction effects of obstacles by the network model under various levels of uniformly distributed noise were then analyzed.
Based on the experimental data, Table 3 presents the average outcomes from 100 trials conducted under diverse random seed scenarios. The findings from Table 3 indicate that as the noise bound n increased from 0.1 to 0.3, the inversion outcomes progressively worsened. This trend is explicitly portrayed in Figure 4. Such degradation arises because the uniform distribution noise introduces a heightened degree of uncertainty, increasing the intricacy of the data. This complexity challenges the model's capacity to distinguish between noise and pertinent information. To address the scenario with uniformly distributed noise, the consideration of additional data augmentation techniques, such as data smoothing and feature selection, might be beneficial. Such techniques could mitigate the disturbances introduced by uniform distribution noise, enhancing the model's sensitivity to valuable information.
| Noise bound $n$ | 0.1 | 0.2 | 0.3 |
|---|---|---|---|
| R | 0.8873 | 0.7051 | 0.5454 |
| RE | 3.7720% | 6.2518% | 7.9077% |
| RMSE | 0.0935 | 0.1536 | 0.1920 |
In this study, a single incident direction, along with 16 observation directions, was considered. Poisson-distributed noise, taking non-negative integer values and multiplied by a factor of n, was introduced into the far-field data, with n having values of 0.05, 0.1, and 0.2. The reconstruction effects of obstacles by the network model under varying levels of Poisson-distributed noise were subsequently examined.
Based on the compiled data, Table 4 presents the average results of 100 trials conducted under distinct random seed situations. A noticeable trend, evident from Table 4, is that as the scale factor n of the Poisson-distributed noise increased, the inversion outcomes progressively deteriorated. This trend is elucidated in Figure 5. Such decline can be attributed to the intrinsic instability exhibited by Poisson-distributed noise, especially under conditions of low signal-to-noise ratio, leading to heightened sensitivity of the model to noise.
| Scale factor $n$ | 0.05 | 0.1 | 0.2 |
|---|---|---|---|
| R | 0.9130 | 0.7570 | 0.4846 |
| RE | 3.3153% | 5.5655% | 8.3057% |
| RMSE | 0.0825 | 0.1362 | 0.2007 |
In this study, the context was a single incident direction accompanied by 16 observation directions. Laplace-distributed noise with a mean of 0 and standard deviations of 0.02, 0.05, and 0.1 was introduced into the far-field data. The impact of varying levels of Laplace-distributed noise on the reconstruction effects of obstacles by the network model was subsequently assessed.
From the data presented in Table 5, it can be discerned that, under different random seed situations, after conducting 100 trials, the inversion outcomes deteriorated progressively as the level of the Laplace-distributed noise increased. This observable trend is further elucidated in Figure 6. Such decline can likely be attributed to the peak and heavy-tail characteristics of the Laplace distribution noise, where high amplitude noise might have introduced significant disturbances to the model, leading to the degradation of inversion results.
| Noise level | 0.02 | 0.05 | 0.1 |
|---|---|---|---|
| R | 0.9591 | 0.8517 | 0.6573 |
| RE | 2.0956% | 4.2427% | 6.6307% |
| RMSE | 0.0527 | 0.1053 | 0.1625 |
In this experiment, a single incident direction was considered, along with 16 observation directions. Impulse noise was introduced into the far-field data by randomly disrupting n signal information points and replacing them with Gaussian white noise with a standard deviation of 0.1. The effects of varying levels of impulse noise on the network model's reconstruction of obstacles were then assessed.
The data presented in Table 6, obtained from 100 trials under different random seed scenarios, indicates that as the level of impulse noise increases, the far-field data tends to lose characteristic information regarding obstacles, thereby affecting the reconstruction results of the obstacle shapes. The spontaneous and irregular nature of impulse noise renders it challenging to handle. To address impulse noise, the development of more intricate outlier processing techniques to refine the network model can be considered, aiming to mitigate the effects of impulse noise. Detailed outcomes are illustrated in Figure 7.
| Corrupted points $n$ | 2 | 4 | 8 |
|---|---|---|---|
| R | 0.7973 | 0.6527 | 0.4138 |
| RE | 3.7906% | 5.5535% | 7.8988% |
| RMSE | 0.0950 | 0.1378 | 0.1955 |
In the exploration of backscatter tasks from obstacles in complex noise environments, the proposed CNN model was first studied. A detailed comparison was then conducted between CNN and two other machine learning models: Fully Connected Neural Network (FCNN) and Long Short-Term Memory Network (LSTM). For the comparative experiments, settings for the different network models were kept consistent, and the same dataset was utilized. Furthermore, identical hyperparameters were employed, including a learning rate of 1e-4, training cycles set at 500 rounds, batch sizes of 50, and activation functions all being LeakyReLU. The same evaluation metrics were also used to quantify the inversion performance of different models under complex noise conditions. Detailed outcomes have been summarized in Table 7. It is noteworthy that the experimental results, being based on various random seeds, may have slight variations.
| Noise Type | Level | R (CNN) | RE (CNN) | RMSE (CNN) | R (FCNN) | RE (FCNN) | RMSE (FCNN) | R (LSTM) | RE (LSTM) | RMSE (LSTM) |
|---|---|---|---|---|---|---|---|---|---|---|
| Gaussian Distribution | 0.05 | 0.9089 | 3.36% | 0.0837 | 0.9036 | 3.79% | 0.0801 | 0.7002 | 5.12% | 0.1303 |
| Gaussian Distribution | 0.1 | 0.7571 | 5.54% | 0.1369 | 0.6228 | 6.40% | 0.1413 | 0.6518 | 5.84% | 0.1462 |
| Gaussian Distribution | 0.2 | 0.4940 | 8.31% | 0.2010 | 0.2991 | 7.65% | 0.2646 | 0.5092 | 7.19% | 0.2075 |
| Uniform Distribution | 0.1 | 0.8873 | 3.77% | 0.0935 | 0.9082 | 3.19% | 0.0883 | 0.5862 | 7.21% | 0.2011 |
| Uniform Distribution | 0.2 | 0.7051 | 6.25% | 0.1536 | 0.6815 | 6.46% | 0.1455 | 0.3909 | 9.35% | 0.2181 |
| Uniform Distribution | 0.3 | 0.5454 | 7.91% | 0.1920 | 0.3908 | 9.22% | 0.1902 | 0.2947 | 10.45% | 0.2394 |
| Poisson Distribution | 0.05 | 0.9130 | 3.32% | 0.0825 | 0.9305 | 3.17% | 0.0605 | 0.6316 | 6.45% | 0.1585 |
| Poisson Distribution | 0.1 | 0.7570 | 5.57% | 0.1362 | 0.7660 | 5.43% | 0.1095 | 0.4638 | 8.29% | 0.1959 |
| Poisson Distribution | 0.2 | 0.4846 | 8.31% | 0.2007 | 0.1351 | 9.91% | 0.2546 | 0.3476 | 9.77% | 0.2237 |
| Laplace Distribution | 0.02 | 0.9591 | 2.10% | 0.0527 | 0.9738 | 1.61% | 0.0389 | 0.6866 | 5.38% | 0.1367 |
| Laplace Distribution | 0.05 | 0.8517 | 4.24% | 0.1053 | 0.8813 | 4.10% | 0.0785 | 0.5834 | 6.89% | 0.1687 |
| Laplace Distribution | 0.1 | 0.6573 | 6.63% | 0.1625 | 0.5806 | 5.77% | 0.1439 | 0.4468 | 8.46% | 0.2003 |
| Impulse | 2 | 0.7973 | 3.79% | 0.0950 | 0.3866 | 9.37% | 0.1312 | 0.6169 | 5.82% | 0.1461 |
| Impulse | 4 | 0.6527 | 5.55% | 0.1378 | -0.2552 | 11.90% | 0.1958 | 0.5221 | 6.92% | 0.1691 |
| Impulse | 8 | 0.4138 | 7.90% | 0.1955 | -1.6825 | 14.79% | 0.2939 | 0.3828 | 9.39% | 0.2019 |
Comparing the best experimental results (highest R, lowest RE and RMSE) under each noise condition in Table 7, it can be observed that the performance of CNN, when handling backscatter problems under complex noise conditions, surpassed both FCNN and LSTM overall. Specifically, in Gaussian, Uniform, Poisson, and Laplace noise scenarios, CNN achieved either the best or the second-best results. Under impulse noise conditions, CNN demonstrated a significant performance improvement compared to the other models. Notably, even when CNN's performance was not optimal, the gap between it and the best result remained minimal.
On the other hand, FCNN typically excelled in low-noise Uniform, Poisson, and Laplace scenarios but faltered under high-noise conditions, often underperforming LSTM. Especially under impulse noise conditions, its inversion outcomes were less than satisfactory. Compared to FCNN and LSTM, CNN possesses the ability to autonomously extract features to enhance network robustness and prevent overfitting. This renders CNN less sensitive to irrelevant variations in input data, achieving superior inversion performance.
5. Conclusion
In the research presented, the challenge of obstacle backscattering in complex noise environments was scrutinized. A convolutional neural network-based model, designed for high-precision obstacle shape inversion, was introduced. Numerical experiments, conducted across varied data noise types such as Gaussian white noise and uniform noise, evidenced the distinct impact of each noise type on the inversion task. The collective influence of these noise forms posed significant tests to the model's stability and performance. Results from these experiments indicate commendable resilience of the model to diverse noise disturbances. Under modest interference, R-values were observed to approximate 0.7, RMSE values hovered around 0.09, and RE values remained below 3.8%. In scenarios of intensified interference, R-values approached 0.5, RMSE registered near 0.2, and RE values were confined below 9%. The efficacy of the proposed algorithm in intricate noise contexts became manifest through these findings.
Contrasted with conventional methodologies, the devised algorithm not only demonstrated proficiency in navigating the complexities of the obstacle backscattering issue amid intricate noise situations but also showcased innate adaptability in feature learning. This inherent adaptability rendered manual feature extractor designs obsolete, thereby amplifying the algorithm's precision and adaptability.
The model's applicability extends across sectors, encompassing geological exploration, non-destructive testing, medical imaging, and material science. Within the realm of geological exploration, when seismic wave data are amassed, noise interference with geological information can be discerned and nullified, thereby bolstering exploration efficacy and precision. Such advancements lead to tangible benefits, including diminished costs and risks alongside optimized resource allocation.
However, it must be emphasized that despite the valuable insights yielded by the experimental results, inherent limitations persist. The noise models employed within the experiments operated on distinct distribution assumptions, such as Gaussian white noise and uniform distribution noise. Yet, in real-world settings, signals may be compromised by a confluence of noise types. Hence, the integration of apt noise suppression and data augmentation methodologies may prove indispensable for confronting the backscattering challenge shaped by multifaceted noise distributions. While this study furnishes pivotal insights into the obstacle backscattering dilemma, the nature of obstacles and specific noise distribution must be judiciously weighed prior to model application, ensuring performance and robustness remain unhampered. Future endeavors might venture into broader and more intricate scenarios, with an aspiration to augment the model's generalizability and dependability.
The data used to support the findings of this study are available from the corresponding author upon request.
The authors declare that they have no conflicts of interest.