Volume 4, Issue 4, 2025

Abstract


Facial expression recognition (FER) remains a challenging problem in computer vision owing to subtle inter-class visual differences, substantial intra-class variability, and severe class imbalance in commonly adopted benchmark datasets. In this study, a statistically rigorous comparative evaluation of two pretrained Convolutional Neural Network (CNN) architectures, MobileNetV2 and EfficientNet-B0, was conducted using the FER2013 dataset. To ensure methodological fairness and reproducibility, both architectures were fine-tuned and evaluated under strictly identical experimental conditions. Model performance was systematically assessed using overall classification accuracy and macro-averaged precision, recall, and F1-score to account for class imbalance, complemented by confusion matrix analysis and multi-class receiver operating characteristic area under the curve (ROC–AUC) evaluation. Beyond conventional performance reporting, the reliability and robustness of the observed differences were examined through McNemar’s test and paired bootstrap confidence intervals (CIs). The experimental results demonstrate that EfficientNet-B0 consistently outperforms MobileNetV2 across all evaluation criteria. Statistical analysis confirms that the observed performance gains are significant at the 5% significance level. These findings provide empirically grounded evidence for informed model selection in FER tasks and highlight the importance of integrating statistical validation into comparative deep learning studies. The results further suggest that EfficientNet-B0 offers a favorable balance between recognition accuracy and computational efficiency, making it a compelling candidate for real-world FER applications, including human–computer interaction, affect-aware systems, and assistive computing environments.
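The statistical validation described above can be illustrated with a short sketch. This is not the authors' code: the prediction arrays are synthetic placeholders, and the accuracy rates (0.65 vs. 0.70) are illustrative assumptions, but the McNemar computation on discordant pairs and the paired bootstrap CI follow the standard formulations the abstract names.

```python
# Hedged sketch: McNemar's test and a paired bootstrap 95% CI for the
# accuracy difference between two classifiers scored on the same test set.
# The labels and predictions below are synthetic stand-ins, not FER2013 results.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 1000
y_true = rng.integers(0, 7, n)  # 7 emotion classes, as in FER2013
# Simulate two models with assumed ~65% and ~70% accuracy
pred_a = np.where(rng.random(n) < 0.65, y_true, rng.integers(0, 7, n))
pred_b = np.where(rng.random(n) < 0.70, y_true, rng.integers(0, 7, n))

correct_a = pred_a == y_true
correct_b = pred_b == y_true

# McNemar's test uses only the discordant pairs (continuity-corrected)
b = int(np.sum(correct_a & ~correct_b))  # A right, B wrong
c = int(np.sum(~correct_a & correct_b))  # A wrong, B right
stat = (abs(b - c) - 1) ** 2 / (b + c)
p_value = chi2.sf(stat, df=1)

# Paired bootstrap: resample test indices, keep the pairing intact
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    diffs.append(correct_b[idx].mean() - correct_a[idx].mean())
ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])
print(f"McNemar p = {p_value:.4f}, 95% CI for acc. diff = [{ci_low:.3f}, {ci_high:.3f}]")
```

A CI that excludes zero, together with a McNemar p-value below 0.05, is the kind of evidence the abstract describes for declaring one architecture's gains significant.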

Abstract


Skin burns represent a major clinical concern due to their association with pain, functional impairment, sensory damage, and even life-threatening complications. Early and accurate assessment is critical for first aid, clinical intervention, and the prevention of secondary complications. However, conventional burn diagnosis remains highly dependent on visual inspection and clinical expertise, which can introduce subjectivity and delay timely decision-making. To address these limitations, a hybrid automated skin burn detection framework was proposed, integrating transformer-based feature extraction with classical machine learning classification. In this framework, discriminative visual features were extracted using multiple Vision Transformer (ViT) architectures, including ViT-B/16, ViT-L/16, ViT-B/32, and DINOv2 (a self-supervised Vision Transformer model), and the extracted features were subsequently fused. Given the resulting high-dimensional feature space, dimensionality reduction was performed using the Chi-square (χ²) algorithm, through which 500 features were retained, reducing computational complexity and mitigating the risk of model overfitting. The reduced feature set was then employed for burn classification using six classifiers. Model effectiveness was assessed using accuracy, precision, sensitivity, and F1-score metrics. Experimental results demonstrated that the Support Vector Machine (SVM) classifier achieved the highest classification performance, yielding an accuracy of 82.29%. Comparable yet slightly lower accuracies were observed for the Light Gradient Boosting Machine (LGBM) (80.51%) and Extreme Gradient Boosting (XGBoost) (80.17%) classifiers. Overall, the proposed hybrid model consistently outperformed baseline models, highlighting its superior discriminative capability. These findings indicate that the proposed framework holds strong potential for integration into clinical decision support systems, offering a reliable and objective tool for automated skin burn detection.
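The Chi-square selection and SVM classification stages described above can be sketched with scikit-learn. The features below are synthetic random values standing in for the fused ViT/DINOv2 embeddings, and the dimensions and class count are illustrative assumptions; only the pipeline shape (select 500 features with χ², then classify with an SVM) mirrors the abstract.

```python
# Hedged sketch: Chi-square feature selection followed by SVM classification,
# on synthetic non-negative features in place of fused transformer embeddings.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.random((600, 2000))        # stand-in for high-dimensional fused features
y = rng.integers(0, 3, size=600)   # e.g., burn-severity classes (assumed)

# Note: chi2 requires non-negative inputs, which suits min-max-scaled embeddings
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(SelectKBest(chi2, k=500), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(f"accuracy on synthetic data: {acc:.3f}")
```

On real fused embeddings the χ² step both shrinks the 500-dimensional input to the SVM and, as the abstract notes, reduces the overfitting risk that comes with the original feature count.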

Open Access
Research article
An AI-Powered Adaptive Learning Framework for Personalized Education
Habitam Asimare Sendeku,
Ravuri Daniel,
Gaddam Venu Gopal
Available online: 12-17-2025

Abstract


An adaptive learning framework driven by artificial intelligence (AI) was proposed in which cognitive, emotional, and cultural dimensions of learner diversity were jointly modeled to address heterogeneous educational needs in a personalized and inclusive manner. Within the proposed system, learner adaptation was achieved through the coordinated deployment of multiple machine learning paradigms: models based on Decision Trees (DTs) were employed to dynamically align instructional content with learners’ cognitive profiles, Recurrent Neural Networks (RNNs) were utilized to capture temporal patterns in emotional engagement, and Collaborative Filtering (CF) techniques were applied to accommodate cultural preferences. The framework operates as a continuously adaptive system, enabling instructional content to be refined based on learner data derived from a dataset comprising 10,000 students. Experimental evaluation demonstrated that the proposed approach yielded statistically significant improvements in learning outcomes when compared with conventional instructional methods. Specifically, mean quiz and assignment scores were increased by 15.7% and 14.4%, respectively, while emotional engagement indicators exhibited an improvement of 35.8%. In addition, cultural satisfaction metrics were enhanced by 24.2%. These results suggest that the synergistic integration of cognitive, emotional, and cultural adaptation mechanisms contributes substantively to academic performance gains, heightened learner engagement, and improved educational equity. Beyond performance improvements, the proposed framework is designed with scalability and robustness, allowing for deployment across personalized educational contexts. As such, the framework offers a viable pathway for the development of next-generation personalized education systems capable of supporting diverse learners at scale while maintaining pedagogical effectiveness and inclusivity.
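Of the three paradigms named above, the collaborative-filtering component is the easiest to sketch compactly. The rating matrix, learner count, and item semantics below are illustrative assumptions, not the authors' 10,000-student dataset; the sketch only shows the similarity-weighted prediction step that user-based CF rests on.

```python
# Hedged sketch: user-based collaborative filtering of the kind the framework
# applies to cultural-preference adaptation. Rows = learners, cols = content
# items, 0 = unrated. All values are made up for illustration.
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def predict(user, item):
    """Similarity-weighted average of other learners' ratings for `item`."""
    num = den = 0.0
    for other in range(len(ratings)):
        if other == user or ratings[other, item] == 0:
            continue  # skip self and learners who never rated this item
        s = cosine_sim(ratings[user], ratings[other])
        num += s * ratings[other, item]
        den += abs(s)
    return num / den if den else 0.0

score = predict(user=0, item=2)  # learner 0's predicted preference for item 2
print(f"predicted preference: {score:.2f}")
```

In the full framework this CF signal would be combined with the Decision Tree cognitive alignment and the RNN engagement model, each handling the dimension of learner diversity it is suited to.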

Abstract

In a highly competitive telecommunications environment, customer behavior data has become an important source of organizational knowledge for service innovation and strategic decision-making. The ability to transform large-scale user data into actionable knowledge is essential for effective customer retention and sustainable business development. This study develops a knowledge discovery framework that integrates a denoising autoencoder with an enhanced stacking learning strategy to support customer retention innovation. The denoising autoencoder is employed to extract latent behavioral representations from complex and noisy user data, enabling the identification of underlying patterns that are difficult to capture through conventional statistical features. These latent representations are further combined with structured indicators and integrated through a stacking ensemble composed of decision trees, random forests, and XGBoost to achieve robust knowledge fusion. Empirical results show that the proposed framework provides more reliable identification of high-risk customers and improves decision support quality in terms of accuracy and area under the curve (AUC). The study demonstrates how artificial intelligence can serve as a mechanism for organizational knowledge creation and offers practical implications for data-driven service innovation and resource allocation in the telecommunications sector.
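The stacking stage described above can be sketched with scikit-learn. The synthetic imbalanced dataset stands in for the autoencoder's latent representations fused with structured indicators, and GradientBoostingClassifier is used here as a scikit-learn stand-in for XGBoost so the sketch stays self-contained; the base-learner/meta-learner structure matches the ensemble the abstract describes.

```python
# Hedged sketch: a stacking ensemble over tree-based base learners, evaluated
# by AUC on synthetic churn-like data. GradientBoostingClassifier substitutes
# for XGBoost to avoid a third-party dependency.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Imbalanced two-class data, mimicking a minority "churn" class
X, y = make_classification(n_samples=800, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),  # XGBoost stand-in
    ],
    final_estimator=LogisticRegression(),  # meta-learner fusing base outputs
    cv=5,  # out-of-fold base predictions prevent meta-learner leakage
)
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(f"AUC on synthetic data: {auc:.3f}")
```

The `cv=5` setting is the detail that makes stacking a genuine fusion step rather than simple averaging: the meta-learner is trained on out-of-fold predictions, so it learns how to weight each base model without seeing their in-sample fits.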