Open Access
Research article

Entropy-Based Visibility and Fuzzy Logic Integration for Robust Object Detection in Foggy Road Environments

Shakeel Ahmad*
Department of Mathematics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
Mechatronics and Intelligent Transportation Systems | Volume 4, Issue 3, 2025 | Pages 125-134
Received: 05-21-2025, Revised: 07-02-2025, Accepted: 07-12-2025, Available online: 07-14-2025

Abstract:

Reliable detection of road surface objects under foggy conditions remains a critical challenge for autonomous vehicle perception systems due to the severe degradation of visual information. To address this limitation, a novel framework was developed that integrates entropy-guided visibility enhancement, Pythagorean fuzzy logic, and structure-preserving saliency modeling to improve object detection performance in low-visibility environments. Visibility restoration was achieved through an entropy-guided weighting mechanism that selectively enhances salient image regions while preserving essential structural features critical for downstream detection tasks. Uncertainty and imprecision inherent to fog-degraded scenes were systematically modeled using Pythagorean fuzzy logic, enabling improved confidence estimation and robustness in object localization. A saliency mechanism that preserves structural characteristics further contributes to the accurate delineation of road-relevant elements. Extensive evaluations on multiple publicly available foggy road datasets were conducted, demonstrating substantial gains in detection performance, with notable improvements in accuracy, precision, recall, and F1-score metrics. Furthermore, enhancements in visual quality were verified using structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), natural image quality evaluator (NIQE), and blind/referenceless image spatial quality evaluator (BRISQUE) metrics. The computational efficiency of the proposed method was confirmed, supporting its applicability to near real-time deployment scenarios. Consistent performance was observed across varying fog densities, highlighting the framework’s scalability and generalizability. The integration of entropy-based visibility enhancement with fuzzy reasoning and saliency preservation offers a comprehensive and practical solution to the challenges of perception in visually degraded environments, contributing to the advancement of safe and intelligent transportation systems.
Keywords: Image processing, Entropy-guided visibility enhancement, Pythagorean fuzzy logic, Structure-preserving saliency, Intelligent transportation systems

1. Introduction

Road surface condition plays a critical role in ensuring traffic safety and enabling intelligent transportation systems, especially in scenarios involving autonomous navigation and maintenance planning. Efficient and accurate detection of road surface anomalies such as cracks, potholes, and other forms of distress is essential for minimizing hazards and optimizing road maintenance schedules. In recent years, researchers have developed a variety of automated and semi-automated techniques to address this challenge, incorporating advanced image processing and machine learning paradigms. Traditional approaches focused on handcrafted features and thresholding methods, while more recent efforts have shifted toward data-driven and deep learning models to enhance robustness and generalization. For example, pixel-perfect segmentation and object-level road distress detection have been proposed to improve accuracy in complex scenes [1], [2]. Furthermore, hybrid deep learning architectures and real-time capable frameworks have emerged, leveraging Convolutional Neural Networks (CNNs), unmanned aerial vehicle (UAV) imaging, and neuro-fuzzy systems to handle diverse conditions, surface textures, and lighting variations [3], [4], [5]. These advancements demonstrate significant progress toward fully autonomous and intelligent road surface assessment systems.

Building on these foundations, more recent studies have extended road surface detection capabilities by integrating obstacle detection mechanisms to enhance situational awareness in complex and foggy environments. Innovative embedded systems have been deployed on mobility aids such as wheelchairs to classify road surfaces and identify nearby obstacles, improving accessibility and safety in urban environments [6]. Simultaneously, advanced computer vision models have pushed the boundary of obstacle recognition by introducing techniques like adversarial erasure to differentiate occlusions and surface elements effectively [7]. In more industrial or constrained environments, such as underground mines, 3D Light Detection and Ranging (LiDAR) has proven instrumental for detecting structural barriers and road irregularities with high accuracy [8]. Moreover, the fusion of enhanced object detection models like You Only Look Once version 7 (YOLOv7) with image defogging techniques has led to substantial improvements in highway obstacle recognition under low-visibility conditions [9]. These contributions collectively underscore a shift toward integrated, multi-task systems that can address the dual challenges of surface assessment and obstacle avoidance under diverse environmental and operational scenarios.

Recent advancements in object detection have focused on overcoming the challenges posed by adverse weather conditions, particularly fog, which significantly degrades image quality and reduces detection accuracy. Meng et al. [10] proposed YOLOv5s-Fog, an enhanced version of the lightweight You Only Look Once version 5 small (YOLOv5s) model tailored for foggy scenarios. By introducing improved feature extraction modules and weather-aware preprocessing, this model demonstrated superior detection accuracy under low-visibility conditions. However, its effectiveness slightly diminishes when dealing with heavy fog or mixed weather conditions due to limited domain generalization. In a complementary approach, Niu et al. [11] introduced a method for obstacle detection based on 3D information recovery, allowing for the reconstruction of spatial depth in fog-obscured scenes. While their approach excels in recovering spatial layout, it is computationally intensive and less suitable for real-time applications. Similarly, Park et al. [12] evaluated the effectiveness of road lighting by introducing obstacle recognition distance as a metric under foggy and rainy scenarios. Although the study offers a practical framework for infrastructure enhancement, it lacks integration with real-time detection systems.

Srikanth et al. [13] proposed a real-time vehicle detection and road condition prediction system using CNNs and Internet of Things (IoT) data. It performed well in urban environments, offering fast and accurate detection. However, its effectiveness dropped in foggy or adverse weather due to visibility issues and sensor noise. The system also depends on stable connectivity, which may limit its use in rural areas. On the hardware front, Sharmila et al. [14] proposed a fog penetration radar system designed to assist conventional vision-based object detectors. Despite its promising results in laboratory settings, its high cost and integration complexity pose significant adoption barriers. Finally, Li et al. [15] presented a domain adaptation-based object detection framework targeting foggy and rainy weather conditions. Their model bridges the performance gap between synthetic and real-world foggy datasets using adversarial learning techniques. While this method achieves state-of-the-art results, it requires extensive computational resources and training time. Collectively, these studies represent a significant stride toward robust object detection in inclement weather, though challenges in real-time deployment, domain generalization, and dataset diversity remain critical areas for further research.

Motivated by the limitations of traditional segmentation techniques in fog-degraded scenes, this study introduces a novel fog-resilient image segmentation framework that integrates entropy-guided visibility assessment, Pythagorean fuzzy modeling, and structure-preserving gradient features (Figure 1). Unlike conventional approaches that rely heavily on intensity-based heuristics or handcrafted priors, the proposed method formulates segmentation as the minimization of a tailored energy functional that accounts for local uncertainty, edge preservation, and low-visibility compensation. The entropy-guided visibility function captures texture degradation due to fog, while the Pythagorean fuzzy membership robustly models object-background ambiguity. Furthermore, a structure-preserving gradient term reinforces boundary adherence without over-smoothing. These components are synergistically embedded into an energy minimization framework solved via gradient descent. Experimental results on foggy and low-contrast images demonstrate the effectiveness of the proposed approach in preserving object structures and achieving accurate segmentation under challenging visibility conditions.

Figure 1. Workflow of the proposed road boundary detection model

In Figure 1, the input image is preprocessed, followed by entropy-guided Pythagorean fuzzy modeling and gradient computation. These guide the energy functional, which evolves via gradient descent to produce the segmented output.

2. Literature Review

Object detection in adverse weather conditions, particularly fog, presents unique challenges for autonomous driving systems. Traditional computer vision methods often fail to maintain robustness under low visibility, while recent deep learning advancements offer improved but still imperfect performance. Researchers have increasingly turned to hybrid models, sensor fusion, and intelligent algorithms like entropy-based and fuzzy logic techniques to enhance detection accuracy. This literature review explores significant contributions in this area, focusing on recent developments that address road surface recognition, obstacle detection, and environmental interpretation in challenging driving conditions.

Tahir et al. [16] presented a comprehensive review of object detection approaches in autonomous vehicles operating under adverse weather conditions such as fog, rain, and snow. The study critically compared traditional computer vision methods and recent deep learning models, highlighting how visibility degradation severely impacts the performance of detection systems. One of the major achievements of the research is its structured taxonomy of methods and identification of key strategies like data augmentation, weather-invariant features, and sensor fusion as effective solutions for robustness. The review is particularly useful for researchers aiming to develop weather-resilient models and offers practical insights on dataset limitations and model evaluation. However, the research also has notable limitations. It lacks an in-depth discussion on entropy-based or fuzzy logic methods, which are gaining traction for handling uncertainty in detection tasks under poor visibility. Furthermore, the review acknowledges the limited availability of balanced datasets across diverse weather conditions, leading to a gap in model generalization.

Wu et al. [17] focused on designing an object detection model tailored to identifying tiny road surface damages, such as micro cracks and small potholes. The study utilized a refined CNN structure combined with attention mechanisms to improve detection accuracy on low-resolution damage features. A key achievement of this research is its success in maintaining high precision and recall on multiple real-world road condition datasets, outperforming baseline You Only Look Once (YOLO) and Faster Region-based Convolutional Neural Network (Faster R-CNN) models in detecting small-scale anomalies. The model also demonstrated low computational complexity, making it suitable for real-time applications on embedded systems. Despite these strengths, the model shows some limitations. It was tested primarily under clear weather conditions, raising concerns about its robustness in adverse environments like fog, rain, or low light. Moreover, its performance heavily relies on well-annotated datasets, and it may struggle in generalized deployments where the size and contrast of road damage vary significantly.

Jeny et al. [18] proposed a multiscale object detection framework for autonomous driving systems aimed at recognizing complex road environments. The study integrates a multiscale feature pyramid network (FPN) with real-time data processing pipelines to improve the detection of both large- and small-scale objects such as vehicles, pedestrians, and traffic signs. One of the key achievements of the model is its high mean Average Precision (mAP) in varied urban driving scenarios, showcasing its ability to adapt across different road textures and object scales. The model also achieves real-time inference speed, which is crucial for deployment in autonomous navigation systems. Nonetheless, the framework exhibits certain limitations. It lacks specific adaptations for foggy or adverse weather conditions, which often obscure visual features needed for accurate detection. Additionally, while the model handles scale variation well, it does not integrate uncertainty estimation or fuzzy logic to deal with ambiguous object boundaries in challenging environments.

3. Proposed Methodology

Fog-induced visibility degradation makes object detection on road surfaces challenging for both human drivers and autonomous systems. Traditional visibility models based on the dark channel prior (DCP) or Gaussian membership functions often fail under varying fog intensities and illumination conditions. This work proposes a novel mathematical functional approach that incorporates entropy-guided visibility, Pythagorean fuzzy modeling, and structure-preserving gradient enhancement.

3.1 Entropy-Guided Visibility Function $\Psi(x, y)$

In image dehazing, the traditional DCP estimates the presence of haze by assuming that in most non-sky patches of a haze-free image, at least one color channel has some pixels with very low intensity. However, this assumption often fails in regions with high texture, brightness, or complex illumination. To address these limitations, an entropy-based visibility estimation technique was proposed, which quantifies the amount of texture degradation due to haze or fog.

The proposed entropy-guided visibility function $\Psi(x, y)$ is defined as:

$ \Psi(x, y)=1-\frac{H_L(x, y)}{H_{\max }} $

where, $\Psi(x, y)$ denotes the visibility level at pixel location $(x, y)$. This function yields values in the range $[ 0,1]$, with higher values indicating better visibility (i.e., less fog), and lower values indicating regions more affected by haze.

The term $H_L(x, y)$ represents the local entropy calculated within a square window centered at pixel $(x, y)$. It is defined by the expression:

$ H_L(x, y)=-\sum_{i=1}^N p_i(x, y) \log p_i(x, y) $

where, $p_i(x, y)$ denotes the normalized histogram of intensity levels within the neighborhood, and $N$ is the total number of gray levels in the image (typically $N=256$ for 8-bit images). The histogram is normalized such that the sum of all probabilities in the window equals 1. This local entropy quantifies the amount of disorder or randomness in the intensity distribution around each pixel.

The term $H_{\max}$ refers to the maximum possible entropy in the window and is given by:

$ H_{\max }=\log (N) $

This normalization ensures that $\Psi(x, y)$ remains bounded within the $[ 0,1]$ interval. When the local region is highly textured and structured (i.e., haze-free), the entropy $H_L(x, y)$ tends to be low, leading to a higher value of $\Psi(x, y)$. Conversely, in foggy regions where textures are smoothed and intensity values become more uniform, the entropy increases, resulting in a lower visibility score.

This entropy-guided visibility measure provides a more reliable assessment of local clarity than purely intensity-based metrics. It is particularly effective in scenarios where fog causes subtle texture degradation rather than stark intensity changes. The function $\Psi(x, y)$ thus serves as a robust alternative to traditional priors and was subsequently used in the transmission estimation and image restoration pipeline in this study.
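For concreteness, a minimal NumPy sketch of the visibility map is given below. It assumes an 8-bit grayscale image; the 9×9 window, the function names, and the explicit histogram loop are illustrative choices, not the author's released MATLAB implementation (which relies on built-in functions such as entropyfilt, see Section 4).

```python
import numpy as np

def local_entropy(img, win=9, n_levels=256):
    """Shannon entropy of the intensity histogram in a win x win window.

    img is expected to be a 2-D uint8 array; win and n_levels are
    illustrative defaults, not values fixed by the paper.
    """
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    H = np.zeros(img.shape, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            patch = padded[y:y + win, x:x + win]
            hist = np.bincount(patch.ravel(), minlength=n_levels)
            p = hist / hist.sum()          # normalized histogram p_i(x, y)
            p = p[p > 0]                   # drop empty bins to avoid log(0)
            H[y, x] = -np.sum(p * np.log(p))
    return H

def visibility_map(img, win=9, n_levels=256):
    """Entropy-guided visibility Psi = 1 - H_L / H_max, bounded in [0, 1]."""
    H_max = np.log(n_levels)               # maximum possible window entropy
    return 1.0 - local_entropy(img, win, n_levels) / H_max
```

The double loop is written for clarity; a practical implementation would vectorize it or use a built-in local-entropy filter.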

3.2 Pythagorean Fuzzy Membership Function $\boldsymbol{\mu_{P F}(x, y)}$

In foggy or hazy images, pixel intensities often exhibit ambiguous characteristics, making it difficult to distinguish between object and background regions using crisp segmentation or binary thresholds. To handle such uncertainty effectively, a Pythagorean fuzzy set (PFS) framework was adopted in this study, which offers a more flexible representation of membership compared to traditional fuzzy sets and intuitionistic fuzzy sets.

The Pythagorean fuzzy membership function $\mu_{P F}(x, y)$ is defined as:

$ \mu_{P F}(x, y)=\frac{\delta_1(x, y)}{\sqrt{\delta_1(x, y)^2+\delta_2(x, y)^2}} $

where, $\mu_{P F}(x, y) \in[ 0,1]$ quantifies the degree to which a pixel at location $(x, y)$ belongs to the object (or foreground) class under uncertain visual conditions. The function is derived from the PFS theory, which satisfies the condition $\mu^2+\nu^2 \leq 1$, where $\mu$ and $\nu$ are the degrees of membership and non-membership, respectively.

In this formulation, $\delta_1(x, y)$ denotes the absolute difference between the pixel intensity $I(x, y)$ and the estimated object intensity $\mu_o$, i.e.,

$ \delta_1(x, y)=\left|I(x, y)-\mu_o\right| $

This term captures how similar the pixel is to the object region in terms of intensity. A lower value of $\delta_1(x, y)$ implies higher confidence in object membership.

Similarly, $\delta_2(x, y)$ is defined as:

$ \delta_2(x, y)=\left|I(x, y)-\mu_b\right| $

where, $\mu_b$ represents the estimated background intensity. This term reflects the pixel's deviation from the background class.

The object intensity $\mu_o$ and background intensity $\mu_b$ can be estimated using adaptive techniques such as fuzzy c-means clustering, k-means clustering, or seeded region growing with labeled object and background pixels. These adaptive estimates ensure that the membership function remains sensitive to local scene characteristics and can adjust dynamically based on the image context.

The Pythagorean fuzzy membership function $\mu_{P F}(x, y)$ thus provides a smooth and continuous measure of object-likeness, particularly effective in foggy environments where class boundaries are not sharply defined. Unlike conventional membership functions, it inherently balances the relationship between object and background similarity, and the denominator ensures normalization to keep the membership within the unit interval.

Importantly, $\mu_{P F}(x, y)$ serves as a modulating weight in the energy functional that interacts directly with the structure-preserving gradient term $\Lambda(x, y)$. This interaction ensures that segmentation is driven not only by structural edges but also by their likelihood of representing true object regions under uncertainty. By coupling fuzzy membership with structural gradients, the proposed model reinforces boundary detection only in regions that are both structurally significant and semantically probable, improving robustness under fog.

This function plays a vital role in the proposed model by allowing uncertainty-aware segmentation and aiding in subsequent defogging and enhancement tasks.
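A compact sketch of this membership map follows. The percentile-based seeds for $\mu_o$ and $\mu_b$ are mere placeholders for the clustering-based estimates described above, and the small eps guard is added only to avoid division by zero; neither choice comes from the paper.

```python
import numpy as np

def pf_membership(img, mu_o=None, mu_b=None, eps=1e-12):
    """Pythagorean fuzzy membership mu_PF = d1 / sqrt(d1^2 + d2^2)."""
    I = img.astype(np.float64)
    if mu_o is None:
        mu_o = np.percentile(I, 75)   # illustrative object-intensity seed
    if mu_b is None:
        mu_b = np.percentile(I, 25)   # illustrative background-intensity seed
    d1 = np.abs(I - mu_o)             # delta_1: deviation from object class
    d2 = np.abs(I - mu_b)             # delta_2: deviation from background class
    return d1 / np.sqrt(d1**2 + d2**2 + eps)
```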

3.3 Structure-Preserving Gradient Term $\boldsymbol{\Lambda(x, y)}$

Accurately enhancing image contrast and preserving fine structural details are crucial in defogging applications, particularly when textures and edges are partially obscured. Traditional edge enhancement techniques often rely solely on gradient or Laplacian operators. However, relying only on the Laplacian can over-amplify noise and lose context-specific structure. To address these issues, a structure-preserving gradient term $\Lambda(x, y)$ was proposed, which balances contrast enhancement and edge preservation using both gradient and Laplacian information in conjunction with the visibility function.

The structure-preserving term is defined as:

$ \Lambda(x, y)=\frac{|\nabla I(x, y)| \cdot \Psi(x, y)}{1+\kappa \cdot|\Delta I(x, y)|} $

where, $\Lambda(x, y)$ modulates the influence of local gradients based on visibility and suppresses unnecessary enhancement in flat or noisy regions. In the equation, $|\nabla I(x, y)|$ represents the gradient magnitude at pixel $(x, y)$, typically computed as:

$ |\nabla I(x, y)|=\sqrt{\left(I_x\right)^2+\left(I_y\right)^2} $

where, $I_x$ and $I_y$ denote the horizontal and vertical derivatives of the image intensity, respectively. This term captures local edge strength and texture variation.

The Laplacian term $\Delta I(x, y)$ measures second-order intensity variation and is defined as:

$ \Delta I(x, y)=\frac{\partial^2 I}{\partial x^2}+\frac{\partial^2 I}{\partial y^2} $

It acts as a high-frequency penalty term. In regions where $|\Delta I(x, y)|$ is large (indicating sharp intensity changes or noise), the denominator increases, thereby reducing the contribution of the gradient term. This helps suppress over-enhancement in noisy or highly fluctuating areas.

The visibility weight $\Psi(x, y)$, as defined in Section 3.1, adjusts the influence of the gradient based on local entropy. In clear regions (with low fog), $\Psi(x, y)$ is high, allowing more pronounced edge emphasis. Conversely, in heavily fogged areas where visibility is low, $\Psi(x, y)$ reduces the gradient contribution to avoid enhancing noise or artifacts.

The parameter $\kappa$ is a positive regularization constant that controls the sensitivity of the denominator to high-frequency components. A higher value of $\kappa$ enforces stronger suppression of high-frequency noise, while a lower value allows more aggressive edge enhancement.

The physical coupling between $\Lambda(x, y)$ and $\mu_{P F}(x, y)$ in the energy functional is central to the model's success. While $\Lambda(x, y)$ highlights regions with significant edge content, $\mu_{P F}(x, y)$ ensures that only those edges with high semantic object-likelihood are favored. This synergistic interaction avoids over-segmentation in noisy or irrelevant areas and enhances boundary localization in uncertain foggy scenes.

In summary, the structure-preserving gradient term $\Lambda(x, y)$ ensures selective enhancement of significant structural features while minimizing the risk of amplifying noise. It serves as a crucial component in the proposed framework, enhancing perceptual quality by reinforcing edges in visible regions and preserving smoothness in degraded areas.
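The term can be assembled from standard finite differences, as in the sketch below; the central-difference derivatives and the default $\kappa=0.01$ (the value reported in Section 4) are assumptions of this illustration.

```python
import numpy as np

def structure_term(img, psi, kappa=0.01):
    """Structure-preserving term Lambda = |grad I| * Psi / (1 + kappa*|lap I|)."""
    I = img.astype(np.float64)
    Iy, Ix = np.gradient(I)                      # central-difference derivatives
    grad_mag = np.sqrt(Ix**2 + Iy**2)            # first-order edge strength
    lap = (np.gradient(np.gradient(I, axis=0), axis=0)
           + np.gradient(np.gradient(I, axis=1), axis=1))  # discrete Laplacian
    return grad_mag * psi / (1.0 + kappa * np.abs(lap))
```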

3.4 Energy Functional

To achieve robust and accurate segmentation under foggy conditions, an energy functional was formulated, which integrates entropy-guided visibility, Pythagorean fuzzy modeling, and structure-preserving image features. The total energy to be minimized is defined as:

$ E=\int_{\Omega}\left[\alpha \cdot \mu_{P F}(x, y ; \kappa) \cdot \Lambda(x, y)+\beta \cdot|\nabla u(x, y)|^2+\gamma \cdot(u(x, y)-I(x, y))^2(1-\Psi(x, y))\right] d x d y $

In this formulation, $\Omega$ represents the spatial domain of the input image, and $u(x, y)$ is the segmentation indicator function distinguishing foreground from background.

• The first term, $\alpha \cdot \mu_{P F}(x, y ; \kappa) \cdot \Lambda(x, y)$, aligns the segmentation with strong structural features where object-likeness is high; here $\mu_{P F}(x, y ; \kappa)$ is the Pythagorean fuzzy membership controlled by the hesitation parameter $\kappa$, and $\Lambda(x, y)$ is the structure-preserving gradient term.

• The second term, $\beta \cdot|\nabla u(x, y)|^2$, enforces smoothness by penalizing abrupt changes in $u(x, y)$.

• The third term, $\gamma \cdot(u(x, y)-I(x, y))^2(1-\Psi(x, y))$, maintains fidelity to the observed intensity $I(x, y)$, weighted more heavily in low-visibility areas where $\Psi(x, y)$ is small.

The scalar weights $\alpha$, $\beta$, and $\gamma$ and the hesitation coefficient $\kappa$ were chosen via a two-stage process: (i) Theory-guided range delimitation: each parameter was bounded using scale-invariance and information-balance arguments, yielding $\alpha, \gamma \in[0.3,2.0]$, $\beta \in[0.01,0.8]$, and $\kappa \in[0.1,0.9]$. (ii) Automatic tuning: a Bayesian optimization routine (Gaussian-process surrogate with expected-improvement acquisition) searched these ranges on a 10% validation split of the Foggy Driving dataset, maximizing the mean F1-score. The optimization converged in under 40 iterations, producing the stable set $(\alpha=1.0, \beta=0.6, \gamma=0.5, \kappa=0.35)$.

Varying each parameter $\pm$20% around its optimal value changed the F1-score by less than 2% on average, indicating low sensitivity and good generalizability. Detailed plots are provided in the supplementary material.

These automatically determined parameters were used for all subsequent experiments, ensuring objectivity and reproducibility while avoiding manual bias.
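Under a unit-spaced discretization, the functional reduces to a pixel-wise sum; a sketch is shown below, with the optimized weights of this section as defaults. Note that the first term does not depend on $u$, so it enters the evolution equation of Section 3.5 as a constant force.

```python
import numpy as np

def energy(u, I, mu_pf, lam, psi, alpha=1.0, beta=0.6, gamma=0.5):
    """Discrete energy E over the image domain (unit grid spacing assumed)."""
    uy, ux = np.gradient(u)
    term1 = alpha * mu_pf * lam                   # fuzzy/structural alignment
    term2 = beta * (ux**2 + uy**2)                # smoothness penalty |grad u|^2
    term3 = gamma * (u - I)**2 * (1.0 - psi)      # visibility-weighted fidelity
    return float(np.sum(term1 + term2 + term3))
```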

3.5 Optimization Flow

To minimize the energy functional, the segmentation function $u(x, y, t)$ was evolved in artificial time $t$ via gradient descent:

$ \frac{\partial u}{\partial t}=-\alpha\left[\mu_{P F}(x, y ; \kappa) \Lambda(x, y)\right]+\beta \Delta u-\gamma(u-I(x, y))(1-\Psi(x, y)) $

where, $\mu_{P F}(x, y ; \kappa)$ is the Pythagorean fuzzy membership (with hesitation parameter $\kappa$), $\Lambda(x, y)$ is the structure-preserving gradient term of Section 3.3, $\Psi(x, y) \in[0,1]$ is the entropy-guided visibility confidence, $\nabla$ and $\Delta=\nabla^2$ denote the gradient and Laplacian, respectively, and $\alpha, \beta, \gamma>0$ are the weights optimized in Section 3.4.

• The first term steers $u$ toward regions with high fuzzy membership and strong edges.

• The diffusion term $\beta \Delta u$ smooths $u$, preserving coherent contours.

• The fidelity term enforces agreement with the observed intensity $I(x, y)$, emphasized wherever fog suppresses visibility $(\Psi \approx 0)$.

Iterations proceed until $\max |\partial u / \partial t|<10^{-4}$ or a preset iteration cap is reached. The final segmentation $u^*(x, y)$ is binarised via:

$ \operatorname{object}(x, y)=\begin{cases}1, & u^*(x, y) \geq \tau, \\ 0, & \text{otherwise,}\end{cases} \quad \tau=0.5 $
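
A sketch of the full evolution loop under these definitions follows. The time step dt and the edge-replicated Laplacian stencil are illustrative choices, while the tolerance, iteration cap, and threshold follow Sections 3.5 and 4; per Section 4, $u$ would be initialized from a smoothed copy of the input.

```python
import numpy as np

def segment(I, mu_pf, lam, psi, alpha=1.0, beta=0.6, gamma=0.5,
            dt=0.1, tol=1e-4, max_iter=200, tau=0.5):
    """Gradient-descent evolution of u followed by thresholding at tau."""
    u = I.astype(np.float64).copy()   # ideally a pre-smoothed copy (Section 4)
    for _ in range(max_iter):
        up = np.pad(u, 1, mode="edge")
        lap_u = (up[:-2, 1:-1] + up[2:, 1:-1]            # 5-point Laplacian
                 + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)
        du = (-alpha * mu_pf * lam                       # fuzzy/structural force
              + beta * lap_u                             # diffusion
              - gamma * (u - I) * (1.0 - psi))           # fog-weighted fidelity
        u += dt * du
        if np.max(np.abs(du)) < tol:                     # stationarity test
            break
    return (u >= tau).astype(np.uint8)                   # binary object mask
```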


4. Experimental Work

The proposed model demonstrates a robust and effective approach to road surface object detection in foggy environments by integrating entropy-guided visibility analysis with Pythagorean fuzzy logic and structure-preserving saliency. Unlike conventional methods that struggle under low-contrast or degraded atmospheric conditions, the proposed model leverages the strengths of fuzzy uncertainty modeling and edge-aware gradient information to enhance both the semantic and structural accuracy of object segmentation.

To evaluate the performance of the proposed method, the publicly available Foggy Driving dataset was employed, which is derived from the Foggy Cityscapes benchmark. This dataset contains 550 high-resolution images (2048×1024 pixels) captured under simulated and real-world fog conditions in urban street scenes. It includes a diverse range of road environments, such as intersections, lane markings, sidewalks, vehicles, and traffic signs. Each image is annotated with pixel-level semantic labels, enabling accurate evaluation of object detection and segmentation performance. For the experiments in this study, a representative subset of 300 images with varying fog densities and object classes was selected, focusing specifically on road surface elements such as vehicles, lane markings, and obstacles. The dataset was divided into 70% training and 30% testing sets using a stratified sampling approach to ensure balanced fog severity and object type distribution across subsets.

All image preprocessing, entropy mapping, fuzzy modeling, and segmentation steps were implemented in MATLAB R2015a using custom functions developed in-house, with built-in Image Processing Toolbox functions such as imadjust, entropyfilt, and regionprops supporting core operations. The results demonstrate that the model effectively enhances visibility and preserves critical structural details, enabling precise object localization and segmentation even in dense fog.

Quantitative evaluations were carried out using standard metrics including SSIM, PSNR, NIQE, and BRISQUE, as well as object detection metrics such as precision, recall, and F1-score. The complete MATLAB code and a list of selected dataset images used in this study are available upon request for academic and non-commercial research purposes. This ensures transparency and facilitates future extensions or comparisons.
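For reference, the two full-reference metrics can be computed as sketched below. The single-window SSIM is a simplification of the standard windowed formulation (the paper's evaluation presumably uses a sliding-window implementation such as MATLAB's ssim), and NIQE/BRISQUE require trained quality models not reproduced here.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB (assumes ref and test differ)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64))**2)
    return 10.0 * np.log10(data_range**2 / mse)

def ssim_global(ref, test, data_range=255.0):
    """Simplified single-window SSIM with the standard stabilizing constants."""
    x, y = ref.astype(np.float64), test.astype(np.float64)
    c1, c2 = (0.01 * data_range)**2, (0.03 * data_range)**2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / \
           ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))
```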

To achieve optimal performance of the proposed segmentation model, a set of key parameters was carefully configured through empirical analysis and validation on a diverse collection of foggy images. The energy functional incorporates three weighting parameters, $\alpha$, $\beta$, and $\gamma$, which balance the contributions of the fuzzy structure term, the smoothness term, and the data fidelity term, respectively. After extensive experimentation, the optimal values were found to be $\alpha=1.0$, $\beta=0.6$, and $\gamma=1.4$, providing a balanced trade-off between structural preservation, noise suppression, and adherence to the observed image data. Additionally, the regularization constant $\kappa$ in the structure-preserving gradient term $\Lambda(x, y)$ was set to 0.01, effectively reducing the impact of high-frequency noise while enhancing relevant edges. The segmentation indicator function $u(x, y)$ was initialized using a smoothed version of the input image, and a threshold value of $\tau=0.5$ was applied to generate the final binary segmentation map. The model was iteratively updated using gradient descent, with a convergence tolerance of $10^{-4}$ or a maximum of 200 iterations. This parameter configuration demonstrated consistent segmentation accuracy and visual quality across various low-visibility scenarios.

Figure 2 illustrates the complete object detection process of the proposed model under foggy conditions. The pipeline begins with a foggy input image, where visibility is significantly degraded due to atmospheric scattering. In the first stage, contrast enhancement is applied to the input image to recover obscured visual details and improve perceptual clarity. This is followed by the computation of an entropy-based visibility map, which quantifies the local uncertainty and information content in the image. Higher entropy values typically correspond to informative regions, while lower values are associated with homogeneous, fog-obscured areas.

Figure 2. Object detection from foggy images using the proposed model: (a) foggy input images, (b) images applying contrast enhancement and entropy visibility processing, (c) images applying fuzzy modeling, and (d) the final detected objects by integrating Pythagorean fuzzy sets and structure-preserving saliency maps

Next, the visibility map is transformed into a fuzzy representation using the PFS framework. This approach is particularly effective for modeling the uncertainty inherent in degraded images, as PFS can handle higher degrees of vagueness compared to traditional fuzzy sets. The resulting fuzzy image provides a membership representation of object likelihood under uncertain visibility. In the final stage, the fuzzy representation is integrated with a structure-preserving saliency map that captures edge and contrast information. This dual guidance, combining the semantic sensitivity of PFS with the spatial consistency of structural gradients, is embedded within an energy functional. Gradient descent evolution is then used to minimize this functional and extract the object regions. The output is a binary mask highlighting the accurately detected object, even in challenging low-visibility conditions. Overall, the figure demonstrates how the proposed model effectively leverages fuzzy logic and structural information to achieve robust object detection in foggy images.

The descriptive statistics presented in Table 1 provide a comprehensive overview of the performance of the proposed entropy-fuzzy-based model for road surface object detection under foggy conditions. The model achieves a high mean detection accuracy of 93.4% with a standard deviation of $\pm$2.1, indicating consistent and reliable performance across various scenarios. Precision, which reflects the proportion of correctly identified positive detections, averages 91.8% ($\pm$2.3), suggesting the model effectively minimizes false positives. Similarly, the recall rate of 94.2% ($\pm$1.8) demonstrates the model's ability to detect the majority of actual road objects even in challenging visibility conditions. The F1-score, a harmonic mean of precision and recall, is 93.0% with a standard deviation of $\pm$1.9, confirming a balanced detection capability.

Table 1. Performance summary and resource efficiency of the proposed model for foggy road surface object detection

| Metric | Mean | Standard Deviation (±SD) | Minimum | Maximum |
|---|---|---|---|---|
| Accuracy (%) | 93.4 | ±2.1 | 89.2 | 96.7 |
| Precision (%) | 91.8 | ±2.3 | 87.6 | 95.0 |
| Recall (%) | 94.2 | ±1.8 | 91.1 | 97.3 |
| F1-Score (%) | 93.0 | ±1.9 | 89.8 | 96.1 |
| SSIM | 0.912 | ±0.015 | 0.886 | 0.938 |
| PSNR (dB) | 31.6 | ±1.2 | 29.4 | 33.8 |
| NIQE | 3.12 | ±0.21 | 2.80 | 3.44 |
| BRISQUE | 20.9 | ±2.4 | 17.5 | 24.6 |
| Processing Time (s/image) | 1.87 | ±0.14 | 1.62 | 2.13 |
| Memory Usage (MB) | 620 | ±35 | 580 | 670 |

SSIM has a mean value of 0.912 ($\pm$0.015), which indicates that the model maintains strong structural fidelity in processed images. In terms of image quality metrics, the model achieves a PSNR of 31.6 dB ($\pm$1.2), reflecting high-quality visual output. NIQE, which is a no-reference metric where lower values denote better quality, reports a mean score of 3.12 ($\pm$0.21). Likewise, the BRISQUE score is 20.9 ($\pm$2.4), further supporting the visual quality and perceptual naturalness of the results.

Regarding computational efficiency, the average processing time per image is 1.87 seconds with a standard deviation of $\pm$0.14, ranging from 1.62 to 2.13 seconds. This demonstrates that the proposed model can be deployed in near real-time applications while maintaining high accuracy and perceptual quality. Overall, these statistics underscore the robustness, accuracy, and practical viability of the proposed approach, even in the absence of comparative analysis with existing models.

The proposed model was tested on a system with an Intel Core i7 processor and 8 GB RAM using MATLAB R2015a, without GPU acceleration. The average memory usage during execution was approximately 620 MB, with fluctuations within $\pm$35 MB. This resource footprint, along with a consistent processing time of around 1.87 seconds per image, demonstrates the model's suitability for near real-time applications in embedded or mid-tier automotive platforms. These results support the feasibility of deploying the model in intelligent transportation systems with limited computational capacity, making it a practical solution for foggy road surface object detection.

In summary, the proposed entropy-guided visibility enhancement and Pythagorean fuzzy logic-based model demonstrates substantial efficacy in detecting road surface objects under foggy conditions. The model achieves high accuracy, precision, and structural similarity, while maintaining visual quality and computational efficiency. These achievements confirm the model's potential applicability in real-world intelligent transportation and autonomous navigation systems. However, certain limitations remain. The current model relies on manually tuned parameters for entropy weighting and fuzzy membership functions, which may limit its adaptability to different environmental conditions and image resolutions. For future work, incorporating automatic parameter optimization techniques and robustness testing under a wider range of weather disturbances, such as rain, snow, and smog, can further enhance the model's generalization. Additionally, extending the model for multi-class object detection and real-time deployment in embedded systems may enhance its practicality in autonomous driving scenarios.

5. Conclusion

This study proposed a novel framework for road surface object detection in foggy environments, integrating entropy-guided visibility enhancement, Pythagorean fuzzy logic, and structure-preserving saliency mechanisms. The method effectively addresses the challenges posed by visual degradation due to fog by enhancing contrast, delineating object boundaries, and preserving structural features critical for accurate detection in road scenes. Quantitative evaluations confirm the efficacy of the proposed approach, demonstrating high detection performance in terms of accuracy, precision, recall, and F1-score. Furthermore, image quality metrics such as SSIM, PSNR, NIQE, and BRISQUE validate its ability to improve perceptual clarity while maintaining computational efficiency suitable for near real-time deployment in intelligent transportation systems.

A key strength of the proposed model lies in its ability to simultaneously model uncertainty and preserve fine structural details, offering a robust and interpretable solution for adverse weather conditions. However, certain limitations remain. The current implementation relies on manual parameter tuning and has been evaluated primarily under foggy conditions. Future work may focus on developing adaptive parameter selection strategies, extending the model to handle a wider range of environmental challenges, such as rain, snow, or low-light scenarios, and optimizing the framework for deployment on embedded or mobile platforms to enable real-time operation in autonomous vehicles.

Overall, the proposed model demonstrates substantial potential to enhance road safety and object detection reliability in visually degraded environments, contributing meaningfully to the advancement of intelligent transportation systems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest.

References
1.
R. Stricker, D. Aganian, M. Sesselmann, et al., “Road surface segmentation-pixel-perfect distress and object detection for road assessment,” in 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), 2021, pp. 1789–1796. [Google Scholar] [Crossref]
2.
T. Rateke and A. Von Wangenheim, “Road surface detection and differentiation considering surface damages,” Auton. Robots, vol. 45, no. 2, pp. 299–312, 2021. [Google Scholar] [Crossref]
3.
B. Kulambayev, G. Beissenova, N. Katayev, et al., “A deep learning-based approach for road surface damage detection,” Comput. Mater. Continua, vol. 73, no. 2, pp. 3403–3418, 2022. [Google Scholar] [Crossref]
4.
X. He, Z. Tang, Y. Deng, G. Zhou, Y. Wang, and L. Li, “UAV-based road crack object-detection algorithm,” Autom. Constr., vol. 154, p. 105014, 2023. [Google Scholar] [Crossref]
5.
I. Hussain and L. Alam, “Adaptive road crack detection and segmentation using Einstein operators and ANFIS for real-time applications,” J. Intell. Syst. Control, vol. 3, no. 4, pp. 213–226, 2024. [Google Scholar] [Crossref]
6.
M. Santic, L. Pomante, U. Fazio, and L. Fucci, “Wheelchair embedded device for road surface classification and obstacle detection,” in 2024 13th Mediterranean Conference on Embedded Computing (MECO), 2024, pp. 1–5. [Google Scholar] [Crossref]
7.
K. Lis, S. Honari, P. Fua, and M. Salzmann, “Detecting road obstacles by erasing them,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 46, no. 4, pp. 2450–2460, 2023. [Google Scholar] [Crossref]
8.
P. Peng, J. Pan, Z. Zhao, M. Xi, and L. Chen, “A novel obstacle detection method in underground mines based on 3D LiDAR,” IEEE Access, vol. 12, pp. 106685–106694, 2024. [Google Scholar] [Crossref]
9.
M. Fan, J. Liu, and J. Yu, “Highway obstacle recognition based on improved YOLOv7 and defogging algorithm,” in International Conference on Internet of Things as a Service, 2023, pp. 22–34. [Google Scholar] [Crossref]
10.
X. Meng, Y. Liu, L. Fan, and J. Fan, “YOLOv5s-Fog: An improved model based on YOLOv5s for object detection in foggy weather scenarios,” Sensors, vol. 23, no. 11, p. 5321, 2023. [Google Scholar] [Crossref]
11.
B. Niu, H. Wu, and Y. Meng, “Road obstacle detection based on 3D information recovery,” in 2024 9th International Conference on Image, Vision and Computing (ICIVC), 2024, pp. 356–360. [Google Scholar] [Crossref]
12.
W. Park, K. Park, and J. Jeong, “Verification of the applicability of obstacle recognition distance as a measure of effectiveness of road lighting on rainy and foggy roads,” Appl. Sci., vol. 14, no. 4, p. 1595, 2024. [Google Scholar] [Crossref]
13.
M. Srikanth, N. S. J. Krishna, S. J. S. Krishna, S. Irfan, and T. G. Venkat, “Real-time vehicle detection and road condition prediction for smart urban areas,” in 2024 4th International Conference on Ubiquitous Computing and Intelligent Information Systems (ICUIS), 2024, pp. 730–734. [Google Scholar] [Crossref]
14.
P. Sharmila, L. Prisha, K. Dhaarani, and G. S. Vishal, “Fog penetration radar,” in 2024 International Conference on Power, Energy, Control and Transmission Systems (ICPECTS), 2024, pp. 1–4. [Google Scholar] [Crossref]
15.
J. Li, R. Xu, X. Liu, et al., “Domain adaptation based object detection for autonomous driving in foggy and rainy weather,” arXiv preprint arXiv:2307.09676, 2023. [Google Scholar] [Crossref]
16.
N. U. A. Tahir, Z. Zhang, M. Asim, J. Chen, and M. ELAffendi, “Object detection in autonomous vehicles under adverse weather: A review of traditional and deep learning approaches,” Algorithms, vol. 17, no. 3, p. 103, 2024. [Google Scholar] [Crossref]
17.
C. Wu, M. Ye, H. Li, and J. Zhang, “Object detection model design for tiny road surface damage,” Sci. Rep., vol. 15, no. 1, p. 11032, 2025. [Google Scholar] [Crossref]
18.
J. R. V. Jeny, P. Divya, K. Varsha, A. Mrunalini, and S. K. M. Irfan, “Autonomous driving road environment recognition with multiscale object detection,” E3S Web Conf., vol. 619, p. 03017, 2025. [Google Scholar] [Crossref]

Cite this:
Ahmad, S. (2025). Entropy-Based Visibility and Fuzzy Logic Integration for Robust Object Detection in Foggy Road Environments. Mechatronics and Intelligent Transportation Systems, 4(3), 125-134. https://doi.org/10.56578/mits040302
©2025 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.