Volume 14, Issue 2, 2026

Abstract

Adaptive multi-scale representation learning has become a fundamental component of modern image processing systems. However, existing fusion strategies often treat features extracted from different scales equally, resulting in suboptimal performance under degraded conditions such as noise, blur, and low contrast. To address this limitation, this paper proposes an uncertainty-aware deep feature fusion framework for adaptive multi-scale image processing. The proposed framework decomposes input images into multiple scales using wavelet-based or Laplacian pyramid representations to capture complementary spatial-frequency information. Discriminative features are extracted at each scale using lightweight Convolutional Neural Networks (CNNs) or Vision Transformer (ViT) encoders. To estimate feature reliability, Bayesian deep learning with Monte Carlo (MC) dropout is employed to model uncertainty at the feature level. A principled uncertainty-aware fusion mechanism is then introduced to dynamically combine multi-scale features according to their estimated reliability. As a result, reliable features contribute more strongly to the fused representation, while uncertain features are suppressed. The fused representation is subsequently passed to task-specific heads for image restoration, classification, and segmentation. Extensive experiments conducted under multiple degradation conditions demonstrate that the proposed framework consistently outperforms traditional fusion and attention-based methods in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Fréchet Inception Distance (FID). The results further confirm the robustness and generalization capability of the proposed uncertainty-aware multi-scale fusion strategy in adverse imaging environments.
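
To make the fusion idea concrete, the following is a minimal sketch (not the authors' code) of how MC-dropout variance can serve as a per-scale reliability signal: each pyramid scale is encoded several times with dropout kept active, and the resulting feature means are combined with inverse-variance weights so that low-uncertainty scales dominate. The module and function names (ScaleEncoder, mc_dropout_stats, fuse_multiscale), the encoder architecture, and the weighting rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of uncertainty-aware multi-scale feature fusion via MC dropout.
# All names and hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleEncoder(nn.Module):
    """Lightweight CNN encoder for one pyramid scale; dropout stays active
    at inference so repeated forward passes sample a feature distribution."""
    def __init__(self, in_ch=1, feat_ch=32, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_stats(encoder, x, n_samples=8):
    """Run the encoder n_samples times with dropout enabled and return the
    per-pixel feature mean and variance (an epistemic-uncertainty proxy)."""
    encoder.train()  # keep dropout stochastic during sampling
    samples = torch.stack([encoder(x) for _ in range(n_samples)], dim=0)
    return samples.mean(dim=0), samples.var(dim=0)

def fuse_multiscale(scale_inputs, encoders, n_samples=8, eps=1e-6):
    """Fuse per-scale mean features with inverse-variance weights, so scales
    with lower estimated uncertainty contribute more to the fused map."""
    means, uncertainties = [], []
    target_hw = scale_inputs[0].shape[-2:]  # fuse at the finest resolution
    for x, enc in zip(scale_inputs, encoders):
        mu, var = mc_dropout_stats(enc, x, n_samples)
        mu = F.interpolate(mu, size=target_hw, mode="bilinear", align_corners=False)
        var = F.interpolate(var, size=target_hw, mode="bilinear", align_corners=False)
        means.append(mu)
        uncertainties.append(var.mean(dim=1, keepdim=True))  # per-pixel scalar
    u = torch.cat(uncertainties, dim=1)            # [B, n_scales, H, W]
    w = 1.0 / (u + eps)
    w = w / w.sum(dim=1, keepdim=True)             # normalize weights across scales
    return sum(w[:, i:i + 1] * means[i] for i in range(len(means)))

# Example: a 3-level Laplacian-pyramid-style input (full, half, quarter resolution).
if __name__ == "__main__":
    img = torch.randn(2, 1, 64, 64)
    pyramid = [img, F.avg_pool2d(img, 2), F.avg_pool2d(img, 4)]
    encoders = [ScaleEncoder() for _ in pyramid]
    print(fuse_multiscale(pyramid, encoders).shape)  # torch.Size([2, 32, 64, 64])
```

The fused map produced this way can then feed the task-specific heads (restoration, classification, segmentation) described above; the inverse-variance weighting is only one plausible instantiation of reliability-driven fusion, chosen here for its simplicity.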