Volume 5, Issue 1, 2026

Abstract

Atmospheric turbulence induces severe blurring and geometric distortions in facial imagery, critically compromising the performance of downstream tasks. To overcome this challenge, a lightweight conditional diffusion model was proposed for the restoration of single-frame turbulence-degraded facial images. Super-resolution techniques were integrated with the diffusion model, and high-frequency information was incorporated as a conditional constraint to enhance structural recovery and achieve high-fidelity generation. A simplified U-Net architecture was employed within the diffusion model to reduce computational complexity while maintaining high restoration quality. Comprehensive comparative evaluations and restoration experiments across multiple scenarios demonstrate that the proposed method produces results with reduced perceptual and distributional discrepancies from ground-truth images, while also exhibiting superior inference efficiency compared to existing approaches. The presented approach not only offers a practical solution for enhancing facial imagery in turbulent environments but also establishes a promising paradigm for applying efficient diffusion models to ill-posed image restoration problems, with potential applicability to other domains such as medical and astronomical imaging.
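The abstract does not specify how the high-frequency conditioning signal is extracted. As an illustrative sketch only (not the authors' implementation), a discrete Laplacian high-pass filter is one minimal way to obtain such a structural map, assuming NumPy:

```python
import numpy as np

def high_frequency_map(image: np.ndarray) -> np.ndarray:
    """Extract a high-frequency map from a grayscale image with a
    discrete Laplacian (a simple high-pass filter). This is only a
    plausible stand-in for the conditioning signal described in the
    abstract; the authors' exact extraction method is not stated."""
    padded = np.pad(image, 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] -
           4.0 * padded[1:-1, 1:-1])
    return lap

# A flat image carries no high-frequency content; an edge does.
flat = np.ones((8, 8))
print(np.abs(high_frequency_map(flat)).max())  # 0.0
```

Such a map would then be concatenated with the noisy input as an extra conditioning channel of the diffusion model's denoiser.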
Open Access
Research article
Decision-Level Multimodal Fusion for Non-Invasive Diagnosis of Endometriosis: Strategies, Calibration, and Net Clinical Benefit
Oluwayemisi B. Fatade,
Oyebimpe F. Ajiboye,
Funmilayo A. Sanusi,
Kikelomo I. Okesola,
Grace C. Okorie,
Goodness O. Opateye,
Oluwasefunmi B. Famodimu
Available online: 01-18-2026

Abstract


Endometriosis remains underdiagnosed due to reliance on invasive laparoscopy. Artificial intelligence (AI) systems using either imaging or structured clinical data have shown promise, but single-modality approaches face limitations in sensitivity, calibration, and clinical reliability. This work evaluates whether decision-level multimodal fusion of Magnetic Resonance Imaging (MRI)-based and clinical data-based AI systems improves diagnostic performance, calibration, and net clinical benefit compared with single-modality models. Two previously validated models were combined using retrospective data from 1,208 patients with suspected endometriosis: a Dual U-Net trained on pelvic MRI with Gradient-weighted Class Activation Mapping (Grad-CAM) interpretability, and a dense neural network trained on structured clinical features with SHapley Additive exPlanations (SHAP). This study tested three fusion strategies: weighted averaging, stacking via logistic regression, and confidence-gating. Performance was assessed using accuracy, precision, recall, F1-score, and area under the curve (AUC). Calibration was evaluated using the Brier score, expected calibration error (ECE), and reliability diagrams. Clinical utility was quantified with decision curve analysis (DCA). Statistical significance was tested with McNemar's test for accuracy and DeLong's test for AUC. Multimodal fusion outperformed both single-modality models. Weighted averaging achieved an accuracy of 0.89, precision of 0.89, recall of 0.87, and F1-score of 0.86, improving on either modality alone. Stacking further enhanced calibration (ECE reduction from 0.8 to 0.04) and yielded higher net benefit across clinically relevant probability thresholds (20% to 60%). DCA indicated that fusion would avoid 12 to 18 unnecessary surgical investigations per 100 patients compared with single-modality strategies. Confidence-gating maintained performance under simulated distribution shifts, supporting robustness.
Decision-level multimodal fusion enhanced non-invasive diagnosis of endometriosis by improving accuracy, calibration, and clinical utility. These results demonstrate the value of integrative AI in gynecological care and justify prospective validation in real-world clinical settings.
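As a rough sketch of two of the decision-level fusion strategies named above, weighted averaging and confidence-gating (the weight and gating threshold here are illustrative placeholders, not the values tuned in the study):

```python
def weighted_average(p_mri: float, p_clinical: float, w_mri: float = 0.5) -> float:
    """Decision-level fusion by weighted averaging of the two models'
    predicted probabilities. The weight is an illustrative default,
    not the study's tuned value."""
    return w_mri * p_mri + (1.0 - w_mri) * p_clinical

def confidence_gated(p_mri: float, p_clinical: float, threshold: float = 0.8) -> float:
    """Confidence-gating: trust a single model when its prediction is
    confident (far from 0.5), otherwise fall back to the plain average.
    The gating rule is a simplification for illustration."""
    conf_mri = abs(p_mri - 0.5) * 2.0
    conf_clin = abs(p_clinical - 0.5) * 2.0
    if conf_mri >= threshold and conf_mri >= conf_clin:
        return p_mri
    if conf_clin >= threshold:
        return p_clinical
    return 0.5 * (p_mri + p_clinical)

print(round(weighted_average(0.9, 0.7, w_mri=0.6), 2))  # 0.82
```

The third strategy, stacking, would instead fit a logistic regression on the two probabilities as features, which is what allows it to recalibrate the combined output.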

Abstract

Climate change, which has intensified into a global governance crisis, demands adaptation strategies that are faster, more precise, and more inclusive than ever before. Artificial intelligence (AI), increasingly positioned at the core of this transformation, offers powerful tools for climate risk forecasting, disaster preparedness, energy optimization, agricultural efficiency, and business resilience. Yet the growing adoption of AI exposes a fundamental paradox: while it promises unprecedented analytical capacity, its benefits remain unevenly distributed across communities. The current study addressed this tension by presenting a comprehensive, governance-oriented analysis of AI-driven climate adaptation. Drawing on an extensive review of academic research and major institutional reports, this paper identified three interlinked challenges, namely methodological limitations, ethical and equity risks, and governance gaps, which continually undermine the effectiveness of AI-enabled adaptation. Predictive models struggled to incorporate complex social vulnerabilities; algorithmic opacity limited trust and accountability; and persistent data inequality prevented low-income regions from leveraging advanced digital tools. In response, the study introduced a multi-layered governance framework encompassing technical capacity, regulatory and ethical infrastructure, and socially inclusive outcomes. The findings revealed that the contributions of AI to climate adaptation were fundamentally shaped by institutional quality, transparent data governance, equitable digital access, and the participation of vulnerable populations in decision-making. The paper concluded that AI held extraordinary potential to strengthen resilience, but only if deployed within governance systems that prioritize fairness, accountability, transparency, ethics, and social inclusion.
By aligning technological innovation with just and sustainable governance, AI becomes not only a predictive instrument but a transformative catalyst for equitable climate adaptation worldwide.
Open Access
Research article
Transformer-Driven Feature Fusion for Robust Diagnosis of Lung Cancer Brain Metastasis Under Missing-Modality Scenarios
Yue Ding,
Yunqi Ma,
Kuo Jing,
Zhansong Shang,
Feiyang Gao,
Zhengwei Cui,
Linyan Xue,
Shuang Liu
Available online: 02-05-2026

Abstract

Accurate diagnosis of lung cancer brain metastasis is often hindered by incomplete magnetic resonance imaging (MRI) modalities, resulting in suboptimal utilization of complementary radiological information. To address the challenge of ineffective feature integration in missing-modality scenarios, a Transformer-based multi-modal feature fusion framework, referred to as Missing Modality Transformer (MMT), was introduced. In this study, multi-modal MRI data from 279 individuals diagnosed with lung cancer brain metastasis, including both small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), were acquired and processed through a standardized radiomics pipeline encompassing feature extraction, feature selection, and controlled data augmentation. The proposed MMT framework was trained and evaluated under various single-modality and combined-modality configurations to assess its robustness to modality absence. A maximum diagnostic accuracy of 0.905 was achieved under single-modality missing conditions, exceeding the performance of the full-modality baseline by 0.017. Interpretability was further strengthened through systematic analysis of loss-function hyperparameters and quantitative assessments of modality-specific importance. The experimental findings collectively indicate that the MMT framework provides a reliable and clinically meaningful solution for diagnostic environments in which imaging acquisition is limited by patient conditions, equipment availability, or time constraints. These results highlight the potential of Transformer-based radiomics fusion to advance computational neuro-oncology by improving diagnostic performance, enhancing robustness to real-world imaging variability, and offering transparent interpretability that aligns with clinical decision-support requirements.
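The exact MMT architecture is not given in the abstract. The following simplified sketch only illustrates the masking idea behind missing-modality fusion, using a pooled dot-product score in place of full multi-head Transformer attention:

```python
import numpy as np

def masked_modality_fusion(feats, present):
    """Fuse per-modality feature vectors with masked attention so that
    absent modalities contribute nothing. `feats` is (n_modalities, d);
    `present` is a boolean availability mask. The scoring (dot product
    against the mean of available features) is a deliberate
    simplification of Transformer attention, used only to show how
    masking handles missing modalities."""
    feats = np.asarray(feats, dtype=float)
    present = np.asarray(present, dtype=bool)
    query = feats[present].mean(axis=0)      # pooled query over available modalities
    scores = feats @ query
    scores[~present] = -np.inf               # mask out missing modalities
    weights = np.exp(scores - scores[present].max())
    weights[~present] = 0.0
    weights /= weights.sum()
    return weights @ feats                   # attention-weighted fused representation
```

In a real Transformer the same effect is obtained by adding the mask to the attention logits before the softmax, so a missing modality receives exactly zero attention weight.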

Abstract


This paper explored how generative artificial intelligence (AI) could enhance digital accessibility for individuals with visual, auditory, and cognitive impairments. It aimed to develop an adaptive, context-sensitive system that dynamically customizes content in accordance with users' needs. The proposed system performs text simplification with generative AI models such as Generative Pre-trained Transformer 3 (GPT-3) and captions images with Contrastive Language–Image Pre-training (CLIP). It adapts to users' reactions through reinforcement learning, enabling the generation of real-time, personalized content. The system's performance was tested with mixed data, including texts, images, and videos. The outcomes revealed that the accessibility of the content increased significantly: the Flesch-Kincaid Grade Level was reduced by 50% through text simplification, and the bilingual evaluation understudy (BLEU) score reached 0.74 for image captioning. User satisfaction increased by 15% after feedback corrections. In addition, the system demonstrated high effectiveness in supporting auditory-impaired users, achieving a subtitle synchronization accuracy of 94.6% on video content and increasing auditory user satisfaction by 18% during accessibility evaluations. This study advances AI-based accessibility and provides a more inclusive online environment for people with disabilities, facilitating their access to online content. In conclusion, the proposed system is more convenient and could offer a broader range of individualized, time-sensitive user experiences compared with current accessibility models.
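The Flesch-Kincaid Grade Level cited above is a standard readability formula. A minimal implementation (using a crude vowel-group heuristic for syllable counting, which real readability tools refine) looks like:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level, the readability metric the study
    reports halving. Standard formula:
    0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59.
    Syllables are approximated by counting vowel groups per word."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Each contiguous run of vowels counts as one syllable (heuristic).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syll = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * n_syll / len(words) - 15.59
```

Short sentences of short words score a lower grade than long, polysyllabic prose, which is exactly the direction a text-simplification step is meant to move the score.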

Open Access
Research article
A Deep Learning and Sensor-Based Internet of Things Framework for Intelligent Waste Management: A Comparative Analysis
Rexhep Mustafovski,
Aleksandar Petrovski,
Marko Radovanovic,
Aner Behlic,
Kristijan Ilievski
Available online: 03-15-2026

Abstract

The escalating volume of municipal solid waste has intensified the need for intelligent waste management systems capable of improving operational efficiency, classification accuracy, and sustainability. In recent years, the integration of Internet of Things technologies, deep learning algorithms, and sensor-based monitoring has significantly transformed conventional waste collection and sorting practices. In this study, an intelligent waste management framework was proposed and comparatively evaluated against twelve contemporary smart waste management systems reported in the literature. The proposed architecture integrates a Raspberry Pi 3 embedded platform, You Only Look Once version 8 (YOLOv8) deep learning models for real-time waste classification, and ultrasonic bin-fill sensors for monitoring container capacity, enabling automated lid operation, and supporting optimized waste collection scheduling. A comprehensive comparative analysis was conducted across multiple performance dimensions, including classification accuracy, system responsiveness, scalability, deployment cost, and operational efficiency. Experimental evaluation demonstrates that the deep learning–driven framework achieved high real-time classification accuracy while maintaining low computational overhead on resource-constrained edge devices. In addition, the incorporation of bin-fill sensing and automated actuation enhanced system responsiveness and supported data-driven collection planning, thereby reducing unnecessary collection trips and operational costs. The findings highlight the significant potential of combining advanced deep learning algorithms with sensor-based Internet of Things infrastructures to develop sustainable, intelligent, and cost-effective waste management ecosystems. These insights provide a foundation for future research aimed at enhancing intelligent waste infrastructure and supporting environmentally sustainable urban development.
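The bin-fill sensing step described above reduces to simple geometry. As a hedged sketch (lid-mounted downward-facing sensor and speed of sound are illustrative assumptions; the paper's firmware is not shown):

```python
def bin_fill_percent(echo_time_s: float, bin_depth_m: float,
                     speed_of_sound: float = 343.0) -> float:
    """Convert an ultrasonic sensor echo time into a fill percentage.
    Assumes the sensor sits in the lid facing down: the distance to the
    waste surface is (echo_time * speed_of_sound) / 2 (round trip), and
    the fill level is the remainder of the bin depth. The result is
    clamped to [0, 100] to absorb sensor noise."""
    distance = echo_time_s * speed_of_sound / 2.0
    fill = (bin_depth_m - distance) / bin_depth_m * 100.0
    return max(0.0, min(100.0, fill))

# Half-full 1 m bin: the echo travels 0.5 m down and back.
print(bin_fill_percent(1.0 / 343.0, 1.0))  # 50.0
```

A threshold on this percentage is what would trigger the automated lid operation and flag the container for the next optimized collection route.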

Abstract


Accurate prediction of the thermal ablation zone in hepatic radiofrequency ablation (RFA) is critical for preventing local tumor recurrence, yet it is complicated by the convective heat-sink effect of blood perfusion. Traditional numerical solvers, such as the finite difference method (FDM), are inherently limited by time-step constraints that increase computational cost and impede real-time clinical application. This study proposed a mesh-free Physics-Informed Neural Network (PINN) framework to simulate the spatiotemporal dynamics of the Pennes bioheat equation. By embedding the governing partial differential equation (PDE) directly into the loss function of the neural network, the model learned the continuous temperature field without spatial discretization or labeled training data. A comparative analysis against an explicit FDM baseline yielded a relative L2 error norm of 1.9%. Although the PINN's continuous functional approximation slightly dampened the theoretical singularity at the electrode tip, it accurately resolved the critical 50 °C isotherm that defines the boundary of irreversible coagulative necrosis. Furthermore, the framework effectively decoupled computational cost from physical simulation time: while offline training required approximately 6 minutes, the optimized network executed online inference in milliseconds. This capability to provide physically consistent, near-instantaneous thermal predictions demonstrates the potential of the PINN framework for intraoperative decision-support systems.
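For context, the explicit FDM baseline that the PINN is compared against can be sketched in one dimension as follows (tissue parameters are typical textbook values chosen for illustration, not the study's settings):

```python
import numpy as np

# Illustrative soft-tissue parameters (not the study's exact values).
RHO_C = 3.6e6    # volumetric heat capacity rho*c      [J/(m^3 K)]
K = 0.5          # thermal conductivity                [W/(m K)]
W_PERF = 2.4e3   # perfusion term w_b * rho_b * c_b    [W/(m^3 K)]
T_ART = 37.0     # arterial blood temperature          [degC]

def pennes_fdm_step(T, dx, dt, q):
    """One explicit finite-difference step of the 1D Pennes bioheat
    equation: rho*c dT/dt = k d2T/dx2 + w_b rho_b c_b (T_a - T) + q.
    This is the kind of time-step-limited solver the PINN replaces:
    stability requires dt <= dx^2 * rho*c / (2k)."""
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                     # fixed (Dirichlet) boundaries
    T_new = T + dt / RHO_C * (K * lap + W_PERF * (T_ART - T) + q)
    T_new[0], T_new[-1] = T[0], T[-1]
    return T_new
```

A PINN avoids this loop entirely: the same PDE residual is evaluated at random collocation points via automatic differentiation and minimized as a loss term, so inference cost no longer scales with simulated time.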
