Acadlore Transactions on AI and Machine Learning (ATAIML)
ISSN (print): 2957-9562
ISSN (online): 2957-9570
Current issue: 2025, Vol. 4

Acadlore Transactions on AI and Machine Learning (ATAIML) aims to spearhead the academic exploration of artificial intelligence, machine learning, and deep learning, along with their associated disciplines. Underscoring the pivotal role of AI and machine learning innovations in shaping the modern technological landscape, ATAIML strives to decode the complexities of current methodologies and applications in the AI domain. Published quarterly by Acadlore, the journal typically releases its four issues in March, June, September, and December each year.

  • Professional Service - Every article submitted undergoes an intensive yet swift peer review and editing process, adhering to the highest publication standards.

  • Prompt Publication - Thanks to our proficiency in orchestrating the peer-review, editing, and production processes, all accepted articles see rapid publication.

  • Open Access - Every published article is instantly accessible to a global readership, allowing for uninhibited sharing across various platforms at any time.

Editors-in-Chief (2)
Andreas Pester
British University in Egypt, Egypt
andreas.pester@bue.edu.eg | website
Research interests: Differential Equations; LabVIEW; MATLAB; Educational Technology; Blended Learning; M-Learning; Deep Learning
Zhuang Wu
Capital University of Economics and Business, China
wuzhuang@cueb.edu.cn | website
Research interests: Decision Optimization and Management; Computational Intelligence; Intelligent Information Processing; Big Data; Online Public Opinion; Image Processing and Visualization

Aims & Scope

Aims

Acadlore Transactions on AI and Machine Learning (ATAIML) emerges as a pivotal platform at the intersection of artificial intelligence, machine learning, and their multifaceted applications. Recognizing the profound potential of these disciplines, the journal endeavors to unravel the complexities underpinning AI and ML theories, methodologies, and their tangible real-world implications.

In a world advancing at digital light-speed, ATAIML posits that AI and ML are reshaping industries at their core. From extended reality to the rise of synthetic data and the intricate design of graph neural networks, such advancements are at the forefront of innovation. With a mission to chronicle these paradigm shifts, ATAIML aims to serve as a beacon for researchers, professionals, and enthusiasts eager to fathom the vast horizons of AI and ML in the modern age.

Furthermore, ATAIML highlights the following features:

  • Every publication benefits from prominent indexing, ensuring widespread recognition.

  • A distinguished editorial team upholds unparalleled quality and broad appeal.

  • Seamless online discoverability of each article maximizes its global reach.

  • An author-centric and transparent publication process enhances submission experience.

Scope

ATAIML's expansive scope encompasses, but is not limited to:

  • AI-Integrated Sensory Technologies: Insights into AI's role in amplifying and harmonizing sensory data.

  • Symbiosis of AI and IoT: The collaborative dance between artificial intelligence and the Internet of Things and their cumulative impact on contemporary society.

  • Mixed Realities Shaped by AI: Probing the AI-crafted mixed-reality realms and their implications.

  • Sustainable AI Innovations: A focus on 'Green AI' and its instrumental role in shaping a sustainable future.

  • Synthetic Data in the AI Era: A deep dive into the rise and relevance of synthetic data and its AI-driven generation.

  • Graph Neural Paradigms: Exploration of the nuances of graph-centric neural networks and their evolutionary trajectory.

  • Interdisciplinary AI Applications: Delving into AI's intersections with fields such as psychology, fashion, and the arts.

  • Moral and Ethical Dimensions of AI: A comprehensive study of the ethical landscapes carved by AI's advancements and the corresponding legal challenges.

  • Diverse Learning Methodologies: Exploration of revolutionary learning techniques ranging from Bayesian paradigms to statistical approaches in ML.

  • Emergent AI Narratives: Spotlight on cutting-edge AI technologies, foundational standards, computational attributes, and their transformative use cases.

  • Holistic Integration: Emphasis on multi-disciplinary submissions that combine insights from varied fields, offering a holistic perspective on AI and ML's global resonance.

Recent Articles
Open Access
Research article
Development of a Machine Learning-Driven Web Platform for Automated Identification of Rice Insect Pests
Samuel N. John, Nasiru A. Musa, Joshua S. Mommoh, Etinosa Noma-Osaghe, Ukeme I. Udioko, James L. Obetta
Available online: 05-22-2025

Abstract

An advanced machine learning (ML)-driven web platform was developed and deployed to automate the identification of rice insect pests, addressing limitations associated with traditional pest detection methods and conventional ML algorithms. Historically, pest identification in rice cultivation has relied on expert evaluation of pest species and their associated crop damage, a process that is labor-intensive, time-consuming, and prone to inaccuracies, particularly in the misclassification of pest species. In this study, a subset of the publicly available IP102 benchmark dataset, consisting of 7,736 images across 12 rice pest categories, was curated for model training and evaluation. Two classification models—a Support Vector Machine (SVM) and a deep Convolutional Neural Network (CNN) based on the Inception_ResNetV2 architecture—were implemented and assessed using standard performance metrics. Experimental results demonstrated that the Inception_ResNetV2 model significantly outperformed SVM, achieving an accuracy of 99.97%, a precision of 99.46%, a recall of 99.81%, and an F1-score of 99.53%. Owing to its superior performance, the Inception_ResNetV2 model was integrated into a web-based application designed for real-time pest identification. The deployed system exhibited an average response time of 5.70 seconds, representing a notable improvement in operational efficiency and usability over previous implementations. The results underscore the potential of artificial intelligence in transforming agricultural practices by enabling accurate, scalable, and timely pest diagnostics, thereby enhancing pest management strategies, mitigating crop losses, and supporting global food security initiatives.
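The accuracy, precision, recall, and F1-score figures quoted above follow the standard confusion-matrix definitions; a minimal Python sketch (the counts below are illustrative, not taken from the study):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts for one pest class (hypothetical, not the paper's data)
p, r, f1 = precision_recall_f1(tp=95, fp=2, fn=3)
print(round(p, 3), round(r, 3), round(f1, 3))
```

F1 is the harmonic mean of precision and recall, so it penalizes a model that trades one off sharply against the other.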

Abstract

Image segmentation remains a foundational task in computer vision, remote sensing, medical imaging, and object detection, serving as a critical step in delineating object boundaries and extracting meaningful regions from complex visual data. However, conventional segmentation methods often exhibit limited robustness in the presence of noise, intensity inhomogeneity, and intricate region geometries. To address these challenges, a novel segmentation framework was developed, integrating fuzzy logic with geometric principles. Uncertainty and overlapping intensity distributions within regions were modeled through fuzzy membership functions, allowing for more flexible and resilient region characterization. Simultaneously, geometric principles—specifically image gradients and curvature—were incorporated to guide boundary evolution, thereby improving delineation precision. A fuzzy energy functional was constructed to jointly optimize region homogeneity, edge preservation, and boundary smoothness. This functional was minimized through an iterative level-set evolution process, allowing dynamic adaptation to varying image characteristics while maintaining computational efficiency. The proposed model demonstrated robust performance across diverse image modalities, including those with high noise levels and complex regional structures, outperforming traditional methods in terms of segmentation accuracy and stability. Its applicability to tasks demanding high-precision region-based analysis highlights its potential for widespread deployment in advanced imaging applications.
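The fuzzy membership functions described above replace a hard intensity threshold with a graded degree of belonging. A one-dimensional sketch; the sigmoid form, center, and width are illustrative choices, not the paper's actual functional:

```python
import math

def fuzzy_membership(intensity, center, width):
    """Degree to which a pixel belongs to the bright region: a smooth
    sigmoid membership instead of a hard 0/1 threshold."""
    return 1.0 / (1.0 + math.exp(-(intensity - center) / width))

# One row of pixel intensities (hypothetical)
row = [10, 40, 90, 130, 200, 250]
print([round(fuzzy_membership(v, center=128, width=20), 3) for v in row])
```

Pixels near the center intensity receive memberships near 0.5, which is exactly where overlapping intensity distributions make a hard threshold unreliable.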

Abstract

As market saturation and competitive pressure intensify within the banking sector, the mitigation of customer churn has emerged as a critical concern. Given that the cost of acquiring new clients substantially exceeds that of retaining existing ones, the development of highly accurate churn prediction models has become imperative. In this study, a hybrid customer churn prediction model was developed by integrating Sentence Transformers with a stacking ensemble learning architecture. Customer behavioral data containing textual content was transformed into dense vector representations through the use of Sentence Transformers, thereby capturing contextual and semantic nuances. These embeddings were combined with normalized structured features. To enhance predictive performance, a stacking ensemble method was employed to integrate the outputs of multiple base models, including random forest, Gradient Boosting Tree (GBT), and Support Vector Machine (SVM). Experimental evaluation was conducted on real-world banking data, and the proposed model demonstrated superior performance relative to conventional baseline approaches, achieving notable improvements in both accuracy and the area under the curve (AUC). Furthermore, the analysis of model outputs revealed several salient predictors of customer attrition, such as anomalous transaction behavior, prolonged inactivity, and indicators of dissatisfaction with customer service. These insights are expected to inform the development of targeted intervention strategies aimed at strengthening customer retention, improving satisfaction, and fostering long-term institutional growth and stability.
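The stacking step described above can be sketched in miniature: base-model probabilities become inputs to a meta-learner. The logistic weights and scores below are hypothetical, not fitted values from the study:

```python
import math

def meta_learner(base_scores, weights, bias):
    """Logistic combination of base-model outputs (the 'stacking' layer)."""
    z = sum(w * s for w, s in zip(weights, base_scores)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical churn probabilities from random forest, GBT, and SVM
base_scores = [0.82, 0.74, 0.91]
p_churn = meta_learner(base_scores, weights=[1.5, 1.2, 1.8], bias=-2.0)
print(round(p_churn, 3))
```

In practice the meta-learner's weights are themselves trained on out-of-fold predictions, so that it learns which base model to trust on which kind of customer.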

Open Access
Research article
Enhancing Non-Invasive Diagnosis of Endometriosis Through Explainable Artificial Intelligence: A Grad-CAM Approach
Afolashade Oluwakemi Kuyoro, Oluwayemisi Boye Fatade, Ernest Enyinnaya Onuiri
Available online: 04-23-2025

Abstract

Significant advancements in artificial intelligence (AI) have transformed clinical decision-making, particularly in disease detection and management. Endometriosis, a chronic and often debilitating gynecological disorder, affects a substantial proportion of reproductive-age women and is associated with pelvic pain, infertility, and a reduced quality of life. Despite its high prevalence, non-invasive and accurate diagnostic methods remain limited, frequently resulting in delayed or missed diagnoses. In this study, a novel diagnostic framework was developed by integrating deep learning (DL) with explainable artificial intelligence (XAI) to address existing limitations in the early and non-invasive detection of endometriosis. Abdominopelvic magnetic resonance imaging (MRI) data were obtained from the Crestview Radiology Center in Victoria Island, Lagos State. Preprocessing procedures, including Digital Imaging and Communications in Medicine (DICOM)-to-PNG conversion, image resizing, and intensity normalization, were applied to standardize the imaging data. A U-Net architecture enhanced with a dual attention mechanism was employed for lesion segmentation, while Gradient-weighted Class Activation Mapping (Grad-CAM) was incorporated to visualize and interpret the model’s decision-making process. Ethical considerations, including informed patient consent, fairness in algorithmic decision-making, and mitigation of data bias, were rigorously addressed throughout the model development pipeline. The proposed system demonstrated the potential to improve diagnostic accuracy, reduce diagnostic latency, and enhance clinician trust by offering transparent and interpretable predictions. Furthermore, the integration of XAI is anticipated to promote greater clinical adoption and reliability of AI-assisted diagnostic systems in gynecology. 
This work contributes to the advancement of non-invasive diagnostic tools and reinforces the role of interpretable DL in the broader context of precision medicine and women's health.
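The Grad-CAM visualization referenced above reduces, at its core, to a gradient-weighted sum of activation maps followed by a ReLU. A toy sketch with made-up 2x2 maps (not the study's MRI data):

```python
def grad_cam(activations, gradients):
    """Toy Grad-CAM: channel weights are the spatial means of the gradients,
    and the heatmap is the ReLU of the weighted activation sum."""
    heat = [[0.0] * len(activations[0][0]) for _ in activations[0]]
    for A_k, G_k in zip(activations, gradients):
        n = sum(len(row) for row in G_k)
        alpha_k = sum(sum(row) for row in G_k) / n  # global-average-pooled gradient
        for i, row in enumerate(A_k):
            for j, a in enumerate(row):
                heat[i][j] += alpha_k * a
    return [[max(0.0, v) for v in row] for row in heat]  # ReLU

# Two hypothetical 2x2 channel activation maps and their gradients
activations = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
gradients   = [[[0.4, 0.4], [0.4, 0.4]], [[-0.2, -0.2], [-0.2, -0.2]]]
print(grad_cam(activations, gradients))
```

Channels whose gradients are positive (they support the predicted class) raise the heatmap where they activate; negatively weighted channels are suppressed by the final ReLU.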

Abstract

The selection of optimal text embedding models remains a critical challenge in semantic textual similarity (STS) tasks, particularly when performance varies substantially across datasets. In this study, the comparative effectiveness of multiple state-of-the-art embedding models was systematically evaluated using a benchmarking framework based on established machine learning techniques. A range of embedding architectures was examined across diverse STS datasets, with similarity computations performed using Euclidean distance, cosine similarity, and Manhattan distance metrics. Performance evaluation was conducted through Pearson and Spearman correlation coefficients to ensure robust and interpretable assessments. The results revealed that GIST-Embedding-v0 consistently achieved the highest average correlation scores across all datasets, indicating strong generalizability. Nevertheless, MUG-B-1.6 demonstrated superior performance on datasets 2, 6, and 7, while UAE-Large-V1 outperformed other models on datasets 3 and 5, thereby underscoring the influence of dataset-specific characteristics on embedding model efficacy. These findings highlight the importance of adopting a dataset-aware approach in embedding model selection for STS tasks, rather than relying on a single universal model. Moreover, the observed performance divergence suggests that embedding architectures may encode semantic relationships differently depending on domain-specific linguistic features. By providing a detailed evaluation of model behavior across varied datasets, this study offers a methodological foundation for embedding selection in downstream NLP applications. The implications of this research extend to the development of more reliable, scalable, and context-sensitive STS systems, where model performance can be optimized based on empirical evidence rather than heuristics. 
These insights are expected to inform future investigations on embedding adaptation, hybrid model integration, and meta-learning strategies for semantic similarity tasks.
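The cosine-similarity and Pearson-correlation computations used in such STS benchmarks can be sketched with toy two-dimensional embeddings; the vectors and gold similarity labels below are invented for illustration:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def pearson(x, y):
    """Pearson correlation between model scores and gold labels."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical sentence-pair embeddings and gold similarity ratings (0-5 scale)
model_scores = [cosine_similarity([1.0, 0.0], [1.0, 0.1]),
                cosine_similarity([1.0, 0.0], [0.0, 1.0]),
                cosine_similarity([0.5, 0.5], [0.6, 0.4])]
gold = [4.8, 0.3, 4.1]
print(round(pearson(model_scores, gold), 3))
```

Spearman correlation, also used in the evaluation, applies the same formula to the ranks of the scores rather than their raw values, making it insensitive to monotone rescaling.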

Abstract

Sentiment analysis in legal documents presents significant challenges due to the intricate structure, domain-specific terminology, and strong contextual dependencies inherent in legal texts. In this study, a novel hybrid framework is proposed, integrating Graph Attention Networks (GATs) with domain-specific embeddings, i.e., Legal Bidirectional Encoder Representations from Transformers (LegalBERT) and an aspect-oriented sentiment classification approach to improve both predictive accuracy and interpretability. Unlike conventional deep learning models, the proposed method explicitly captures hierarchical relationships within legal texts through GATs while leveraging LegalBERT to enhance domain-specific semantic representation. Additionally, auxiliary features, including positional information and topic relevance, were incorporated to refine sentiment predictions. A comprehensive evaluation conducted on diverse legal datasets demonstrates that the proposed model achieves state-of-the-art performance, attaining an accuracy of 93.1% and surpassing existing benchmarks by a significant margin. Model interpretability was further enhanced through SHapley Additive exPlanations (SHAP) and Legal Context Attribution Score (LCAS) techniques, which provide transparency into decision-making processes. An ablation study confirms the critical contribution of each model component, while scalability experiments validate the model’s efficiency across datasets ranging from 10,000 to 200,000 sentences. Despite increased computational demands, strong robustness and scalability are exhibited, making this framework suitable for large-scale legal applications. Future research will focus on multilingual adaptation, computational optimization, and broader applications within the field of legal analytics.
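The attention mechanism at the heart of a GAT layer normalizes raw neighbor-compatibility scores with a softmax, so each sentence node attends most to its most relevant neighbors. A minimal single-head sketch with made-up scores:

```python
import math

def attention_coefficients(scores):
    """Numerically stable softmax over a node's neighbor scores,
    as in one head of a graph attention layer."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw compatibility scores between one sentence node and three neighbors
coeffs = attention_coefficients([2.0, 0.5, -1.0])
print([round(c, 3) for c in coeffs])
```

The coefficients sum to one, and the node's updated representation is the correspondingly weighted sum of its neighbors' features.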

Abstract

The restoration of blurred images remains a critical challenge in computational image processing, necessitating advanced methodologies capable of reconstructing fine details while mitigating structural degradation. In this study, an innovative image restoration framework was introduced, employing Complex Interval Pythagorean Fuzzy Sets (CIPFSs) integrated with mathematically structured transformations to achieve enhanced deblurring performance. The proposed methodology initiates with the geometric correction of pixel-level distortions induced by blurring. A key innovation lies in the incorporation of CIPFS-based entropy, which is synergistically combined with local statistical energy to enable robust blur estimation and adaptive correction. Unlike traditional fuzzy logic-based approaches, CIPFS facilitates a more expressive modeling of uncertainty by leveraging complex interval-valued membership functions, thereby enabling nuanced differentiation of blur intensity across image regions. A fuzzy inference mechanism was utilized to guide the refinement process, ensuring that localized corrections are adaptively applied to degraded regions while leaving undistorted areas unaffected. To preserve edge integrity, a geometric step function was applied to reinforce structural boundaries and suppress over-smoothing artifacts. In the final restoration phase, structural consistency is enforced through normalization and regularization techniques to ensure coherence with the original image context. Experimental validations demonstrate that the proposed model delivers superior image clarity, improved edge sharpness, and reduced visual artifacts compared to state-of-the-art deblurring methods. Enhanced robustness against varying blur patterns and noise intensities was also confirmed, indicating strong generalization potential. 
By unifying the expressive power of CIPFS with analytically driven restoration strategies, this approach contributes a significant advancement to the domain of image deblurring and restoration under uncertainty.

Abstract

Stance, a critical discourse marker, reflects the expression of attitudes, feelings, evaluations, or judgments by speakers or writers toward a topic or other participants in a conversation. This study investigates the manifestation of stance in the discourse of four prominent artificial intelligence (AI) chatbots—ChatGPT, Gemini, MetaAI, and Bing Copilot—focusing on three dimensions: interpersonal stance (how chatbots perceive one another), epistemic stance (their relationship to the topic of discussion), and style stance (their communicative style). Through a systematic analysis, it is revealed that these chatbots employ various stance markers, including hedging, self-mention, power dominance, alignment, and face-saving strategies. Notably, the use of face-saving framing by AI models, despite their lack of a genuine “face,” highlights the distinction between authentic interactional intent and the reproduction of linguistic conventions. This suggests that stance in AI discourse is not a product of subjective intent but rather an inherent feature of natural language. However, this study extends the discourse by examining stance as a feature of chatbot-to-chatbot communication rather than human-AI interactions, thereby bridging the gap between human linguistic behaviors and AI tendencies. It is concluded that stance is not an extraneous feature of discourse but an integral and unavoidable aspect of language use, which chatbots inevitably replicate. In other words, if chatbots must use language, then pragmatic features like stance are inevitable. Ultimately, this raises a broader question: Is it even possible for a chatbot to produce language devoid of stance? The implications of this research underscore the intrinsic connection between language use and pragmatic features, suggesting that stance is an inescapable component of any linguistic output, including that of AI systems.

Abstract

This study investigates the recognition of seven primary human emotions—contempt, anger, disgust, surprise, fear, happiness, and sadness—based on facial expressions. A transfer learning approach was employed, utilizing three pre-trained convolutional neural network (CNN) architectures: AlexNet, VGG16, and ResNet50. The system was structured to perform facial expression recognition (FER) by incorporating three key stages: face detection, feature extraction, and emotion classification using a multiclass classifier. The proposed methodology was designed to enhance pattern recognition accuracy through a carefully structured training pipeline. Furthermore, the performance of the transfer learning models was compared using a multiclass support vector machine (SVM) classifier, and extensive testing was planned on large-scale datasets to further evaluate detection accuracy. This study addresses the challenge of spontaneous FER, a critical research area in human-computer interaction, security, and healthcare. A key contribution of this study is the development of an efficient feature extraction method, which facilitates FER with minimal reliance on extensive datasets. The proposed system demonstrates notable improvements in recognition accuracy compared to traditional approaches, significantly reducing misclassification rates. It is also shown to require less computational time and resources, thereby enhancing its scalability and applicability to real-world scenarios. The approach outperforms conventional techniques, including SVMs with handcrafted features, by leveraging the robust feature extraction capabilities of transfer learning. This framework offers a scalable and reliable solution for FER tasks, with potential applications in healthcare, security, and human-computer interaction. 
Additionally, the system’s ability to function effectively in the absence of a caregiver provides significant assistance to individuals with disabilities in expressing their emotional needs. This research contributes to the growing body of work on facial emotion recognition and paves the way for future advancements in artificial intelligence-driven emotion detection systems.

Abstract

Drought, a complex natural phenomenon with profound global impacts, including the depletion of water resources, reduced agricultural productivity, and ecological disruption, has become a critical challenge in the context of climate change. Effective drought prediction models are essential for mitigating these adverse effects. This study investigates the contribution of various data preprocessing steps—specifically class imbalance handling and dimensionality reduction techniques—to the performance of machine learning models for drought prediction. Synthetic Minority Over-sampling Technique (SMOTE) and near miss sampling methods were employed to address class imbalances within the dataset. Additionally, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) were applied for dimensionality reduction, aiming to improve computational efficiency while retaining essential features. Decision tree algorithms were trained on the preprocessed data to assess the impact of these preprocessing techniques on model accuracy, precision, recall, and F1-score. The results indicate that the SMOTE-based sampling approach significantly enhances the overall performance of the drought prediction model, particularly in terms of accuracy and robustness. Furthermore, the combination of SMOTE, PCA, and LDA demonstrates a substantial improvement in model reliability and generalizability. These findings underscore the critical importance of carefully selecting and applying appropriate data preprocessing techniques to address class imbalances and reduce feature space, thus optimizing the performance of machine learning models in drought prediction. This study highlights the potential of preprocessing strategies in improving the predictive capabilities of models, providing valuable insights for future research in climate-related prediction tasks.
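SMOTE's core operation, interpolating a synthetic minority-class sample between a point and one of its nearest minority neighbors, can be sketched as follows; the feature vectors are hypothetical:

```python
import random

def smote_like_sample(x, neighbor, rng):
    """SMOTE's interpolation step: a synthetic minority point placed at a
    random position on the segment between a sample and a neighbor."""
    lam = rng.random()
    return [a + lam * (b - a) for a, b in zip(x, neighbor)]

rng = random.Random(0)
# Hypothetical drought-class feature vectors (e.g., rainfall, soil moisture)
x, neighbor = [12.0, 0.30], [18.0, 0.42]
synthetic = smote_like_sample(x, neighbor, rng)
print(synthetic)  # lies on the segment between x and neighbor
```

Because the synthetic point is a convex combination of two real minority samples, it stays inside the minority region rather than merely duplicating existing points, which is what distinguishes SMOTE from plain oversampling.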

Open Access
Research article
Advanced Tanning Detection Through Image Processing and Computer Vision
Sayak Mukhopadhyay, Janmejay Gupta, Akshay Kumar
Available online: 01-20-2025

Abstract

This study introduces an advanced approach to the automated detection of skin tanning, leveraging image processing and computer vision techniques to accurately assess tanning levels. A method was proposed in which skin tone variations were analyzed by comparing a reference image with a current image of the same subject. This approach establishes a reliable framework for estimating tanning levels through a sequence of image preprocessing, skin segmentation, dominant color extraction, and tanning assessment. The hue-saturation-value (HSV) color space was employed to quantify these variations, with particular emphasis placed on the saturation component, which is identified as a critical factor for tanning detection. This novel focus on the saturation component offers a robust and objective alternative to traditional visual assessment methods. Additionally, the potential integration of machine learning techniques to enhance skin segmentation and improve image analysis accuracy was explored. The proposed framework was positioned within an Internet of Things (IoT) ecosystem for real-time monitoring of sun safety, providing a practical application for both individual and public health contexts. Experimental results demonstrate the efficacy of the proposed method in distinguishing various tanning levels, thereby offering significant advancements in the fields of cosmetic dermatology, public health, and preventive medicine. These findings suggest that the integration of image processing, computer vision, and machine learning can provide a powerful tool for the automated assessment of skin tanning, with broad implications for real-time health monitoring and the prevention of overexposure to ultraviolet (UV) radiation.
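The saturation-based comparison described above can be sketched with Python's standard colorsys module; the pixel values below are invented, not data from the study:

```python
import colorsys

def mean_saturation(pixels):
    """Mean HSV saturation of a list of (r, g, b) pixels scaled to 0..1."""
    sats = [colorsys.rgb_to_hsv(r, g, b)[1] for r, g, b in pixels]
    return sum(sats) / len(sats)

# Hypothetical skin patches: a tanned patch is darker and more saturated
reference = [(0.95, 0.80, 0.70), (0.93, 0.78, 0.68)]
current   = [(0.80, 0.55, 0.40), (0.78, 0.53, 0.38)]
delta = mean_saturation(current) - mean_saturation(reference)
print(round(delta, 3))  # a positive delta suggests tanning
```

Working in HSV separates chromatic saturation from brightness, so the comparison is less sensitive to overall lighting changes between the reference and current photographs.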

Abstract

In recent years, representing computer vision data in tensor form has become an important method of data representation. However, due to the limitations of signal acquisition devices, the actual data obtained may be damaged, such as image loss, noise interference, or a combination of both. Using Low-Rank Tensor Completion (LRTC) techniques to recover missing or corrupted tensor data has become a hot research topic. In this paper, we adopt a tensor coupled total variation (t-CTV) norm based on t-SVD as the minimization criterion to capture the combined effects of low-rank and local piecewise smooth priors, thus eliminating the need for balance parameters in the process. At the same time, we utilize the Non-Local Means (NLM) denoiser to smooth the image and reduce noise by leveraging the nonlocal self-similarity of the image. Furthermore, an Alternating Direction Method of Multipliers (ADMM) algorithm is designed for the proposed optimization model, NLM-TCTV. Extensive numerical experiments on real tensor data (including color, medical, and satellite remote sensing images) show that the proposed method has good robustness, performs well in noisy images, and surpasses many existing methods in both quality and visual effects.
