Acadlore Transactions on AI and Machine Learning (ATAIML)
ISSN (print): 2957-9562
ISSN (online): 2957-9570
Current Issue: 2026, Vol. 5

Acadlore Transactions on AI and Machine Learning (ATAIML) is a peer-reviewed scholarly journal that publishes original research in artificial intelligence, machine learning, and related areas. The journal places particular emphasis on work that develops new theoretical approaches, algorithmic methods, or well-founded applications, and that provides clear technical or analytical contributions to the field. ATAIML welcomes submissions that address methodological advances, empirical validation, and system-level implementation, as well as the ethical and societal aspects of AI, where these are examined with appropriate technical or analytical depth. The journal is published quarterly by Acadlore, with four issues released in March, June, September, and December.

  • Professional Editorial Standards - All submissions are evaluated through a standard peer-review process involving independent reviewers and editorial assessment before acceptance.

  • Efficient Publication - The journal follows a defined review, revision, and production workflow to support regular and predictable publication of accepted manuscripts.

  • Open Access - ATAIML is an open-access journal. All published articles are made available online without subscription or access fees.

Editor(s)-in-Chief (1)
Zhuang Wu
School of Artificial Intelligence, Capital University of Economics and Business, China
wuzhuang@cueb.edu.cn | website
Research interests: Computational intelligence and machine learning; Data-driven optimization and decision models; Intelligent information processing; Big data analytics for intelligent systems; Multi-modal data analysis and visualization

Aims & Scope

Aims

Acadlore Transactions on AI and Machine Learning (ATAIML) is a peer-reviewed open-access journal that publishes original research in artificial intelligence and machine learning, with an emphasis on theoretical analysis, algorithmic development, and carefully designed empirical studies.

The journal is primarily interested in work that offers clear methodological contributions, theoretical insights, or well-supported experimental findings, rather than papers that report only incremental applications of existing techniques.

ATAIML aims to provide a venue for research that connects foundational ideas in AI and ML with engineering practice and real-world systems, while maintaining a strong focus on scientific rigor, reproducibility, and transparency in reporting.

The journal also welcomes critical discussions on the reliability, interpretability, and broader implications of AI technologies, including ethical and social dimensions, provided that these issues are examined with appropriate technical or analytical depth.

A distinctive focus of ATAIML is the integration of algorithmic innovation with deployable engineering solutions and transparent evaluation practices, aiming to bridge foundational research and practical implementation in a rigorous and reproducible manner.

Published quarterly by Acadlore, ATAIML follows a structured peer-review process and standard editorial procedures to ensure consistency and transparency in its publication practices.

ATAIML accepts research articles, review papers, reproducibility studies, benchmark papers, and well-documented negative or neutral results when they provide meaningful methodological insights and contribute to scientific understanding.

Key features of ATAIML include:

  • An emphasis on research that contributes to theoretical understanding and methodological development in artificial intelligence and machine learning;

  • A commitment to reproducibility and transparent reporting, encouraging authors to provide access to code, datasets, and detailed experimental procedures to enable independent verification and reuse of results;

  • A particular interest in work addressing model interpretability, robustness, reliability, and security in learning systems;

  • Contributions that connect AI methods with engineering practice, scientific domains, or socioeconomic contexts in a technically grounded way;

  • Studies that examine ethical, legal, or societal aspects of AI, where these are supported by clear analytical or technical frameworks;

  • A standard peer-review and editorial process intended to support consistency, transparency, and fairness in the evaluation of submissions.

Scope

ATAIML welcomes submissions across a broad range of topics in artificial intelligence and machine learning, including, but not limited to, the areas outlined below:

Foundations and Models

  • Deep learning architectures and related optimisation methods

  • Graph neural networks and representation learning

  • Probabilistic and Bayesian approaches to learning

  • Computational learning theory and statistical learning methods

  • Reinforcement learning and sequential decision models

  • Transfer, domain adaptation, federated, and meta-learning

  • Evolutionary computation and swarm-based methods

Systems, Infrastructure, and Engineering

  • Scalable, distributed, and edge-based learning systems

  • AI for Internet of Things and cyber-physical systems

  • Training infrastructure, deployment, and lifecycle management (MLOps)

  • High-performance and neuromorphic computing for AI workloads

Data-Centric and Multimodal AI

  • Data governance, quality assessment, and uncertainty modelling

  • Synthetic data, self-supervised, and weakly supervised learning

  • Multimodal learning and data fusion techniques

  • Knowledge graphs and symbolic–neural hybrid approaches

Trustworthy and Responsible AI

  • Explainability, interpretability, and reliability of learning systems

  • Robustness, safety, fairness, privacy, and security in AI and ML

  • Ethical, legal, and societal aspects of AI use and deployment

Applied AI Across Domains

  • Robotics, autonomous systems, and intelligent manufacturing

  • Healthcare analytics, medical imaging, and bioinformatics

  • Smart cities, climate-related modelling, and sustainability applications

  • Computer vision, natural language processing, and speech technologies

  • AI applications in finance, education, agriculture, and public services

  • Human–AI interaction and computational support for creativity and culture

Emerging and Future Paradigms

  • Generative models and foundation architectures

  • Quantum approaches to learning and optimisation

  • Bio-inspired and cognitive computing

  • Intelligent systems for mixed, augmented, and extended reality

Recent Articles
Open Access
Research article
Transformer-Driven Feature Fusion for Robust Diagnosis of Lung Cancer Brain Metastasis Under Missing-Modality Scenarios
Yue Ding ,
Yunqi Ma ,
Kuo Jing ,
Zhansong Shang ,
Feiyang Gao ,
Zhengwei Cui ,
Linyan Xue ,
Shuang Liu
|
Available online: 02-05-2026

Abstract

Accurate diagnosis of lung cancer brain metastasis is often hindered by incomplete magnetic resonance imaging (MRI) modalities, resulting in suboptimal utilization of complementary radiological information. To address the challenge of ineffective feature integration in missing-modality scenarios, a Transformer-based multi-modal feature fusion framework, referred to as Missing Modality Transformer (MMT), was introduced. In this study, multi-modal MRI data from 279 individuals diagnosed with lung cancer brain metastasis, including both small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), were acquired and processed through a standardized radiomics pipeline encompassing feature extraction, feature selection, and controlled data augmentation. The proposed MMT framework was trained and evaluated under various single-modality and combined-modality configurations to assess its robustness to modality absence. A maximum diagnostic accuracy of 0.905 was achieved under single-modality missing conditions, exceeding the performance of the full-modality baseline by 0.017. Interpretability was further strengthened through systematic analysis of loss-function hyperparameters and quantitative assessments of modality-specific importance. The experimental findings collectively indicate that the MMT framework provides a reliable and clinically meaningful solution for diagnostic environments in which imaging acquisition is limited by patient conditions, equipment availability, or time constraints. These results highlight the potential of Transformer-based radiomics fusion to advance computational neuro-oncology by improving diagnostic performance, enhancing robustness to real-world imaging variability, and offering transparent interpretability that aligns with clinical decision-support requirements.
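As a rough, non-authoritative illustration of the kind of fusion described above (not the authors' MMT implementation), the sketch below projects per-modality radiomics feature vectors into a shared embedding space, hides missing modalities through the attention padding mask, and pools the remaining tokens for classification; all module names, dimensions, and the two-class head are assumptions.

# Hypothetical sketch of Transformer-based multi-modal fusion with missing-modality
# masking, loosely following the idea summarized in the abstract (not the MMT code).
import torch
import torch.nn as nn

class MultiModalFusionSketch(nn.Module):
    def __init__(self, feat_dims, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        # One linear projection per MRI modality's radiomics feature vector
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in feat_dims])
        # Learnable embedding identifying each modality token
        self.mod_embed = nn.Parameter(torch.zeros(len(feat_dims), d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, feats, present):
        # feats: list of (B, feat_dims[i]) tensors; present: (B, n_modalities) bool mask
        tokens = torch.stack([p(x) for p, x in zip(self.proj, feats)], dim=1)
        tokens = tokens + self.mod_embed                    # add modality identity
        # Missing modalities are ignored by attention via the key padding mask
        out = self.encoder(tokens, src_key_padding_mask=~present)
        pooled = (out * present.unsqueeze(-1)).sum(1) / present.sum(1, keepdim=True)
        return self.head(pooled)                            # e.g., SCLC vs. NSCLC logits

# Toy usage: two modalities, with one modality missing for some samples
model = MultiModalFusionSketch(feat_dims=[30, 25])
feats = [torch.randn(4, 30), torch.randn(4, 25)]
present = torch.tensor([[True, False], [True, True], [True, True], [False, True]])
logits = model(feats, present)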

Abstract

Climate change, which has intensified into a global governance crisis, demands adaptation strategies that are faster, more precise, and more inclusive than ever before. Artificial intelligence (AI), increasingly positioned at the core of this transformation, is offering powerful tools for climate risk forecasting, disaster preparedness, energy optimization, agricultural efficiency, and business resilience. Yet the growing adoption of AI exposes a fundamental paradox: while it promises unprecedented analytical capacity, its benefits remain unevenly distributed across communities. The current study addressed this tension by presenting a comprehensive and governance-oriented analysis of AI-driven climate adaptation. Drawing on an extensive review of academic research and major institutional reports, this paper identified three interlinked challenges: methodological limitations, ethical and equity risks, and governance gaps, which continuously undermine the effectiveness of AI-enabled adaptation. Predictive models struggled to incorporate complex social vulnerabilities; algorithmic opacity limited trust and accountability; and persistent data inequality prevented low-income regions from leveraging advanced digital tools. In response, the study introduced a multi-layered governance framework encompassing technical capacity, regulatory and ethical infrastructure, and socially inclusive outcomes. The findings revealed that the contributions of AI to climate adaptation were fundamentally shaped by institutional quality, transparent data governance, equitable digital access, and participation of vulnerable populations in decision making. The paper concluded that AI holds extraordinary potential to strengthen resilience, but only if deployed within governance systems that prioritize fairness, accountability, transparency, ethics, and social inclusion. By aligning technological innovation with just and sustainable governance, AI becomes not only a predictive instrument but a transformative catalyst for equitable climate adaptation worldwide.
Open Access
Research article
Decision-Level Multimodal Fusion for Non-Invasive Diagnosis of Endometriosis: Strategies, Calibration, and Net Clinical Benefit
Oluwayemisi B. Fatade ,
Oyebimpe F. Ajiboye ,
Funmilayo A. Sanusi ,
Kikelomo I. Okesola ,
Grace C. Okorie ,
Goodness O. Opateye ,
Oluwasefunmi B. Famodimu
|
Available online: 01-18-2026

Abstract


Endometriosis remains underdiagnosed due to reliance on invasive laparoscopy. Artificial Intelligence (AI) models using either imaging or structured clinical data have shown promise, but single-modality approaches face limitations in sensitivity, calibration, and clinical reliability. This work seeks to evaluate whether decision-level multimodal fusion of Magnetic Resonance Imaging (MRI)-based and clinical data-based AI systems improves diagnostic performance, calibration, and net clinical benefit, compared with single-modality models. Two previously validated models were combined with retrospective data from 1,208 patients with suspected endometriosis: a Dual U-Net trained on pelvic MRI with Gradient-weighted Class Activation Mapping (Grad-CAM) interpretability and a dense neural network trained on structured clinical features with SHapley Additive exPlanations (SHAP). This study tested weighted averaging, stacking via logistic regression, and confidence-gating. Performance was assessed using accuracy, precision, recall, F1-score, and area under the curve (AUC). Calibration was evaluated using the Brier score, expected calibration error (ECE), and reliability diagrams. Clinical utility was quantified with decision curve analysis (DCA). Statistical significance was tested with McNemar’s test for accuracy and DeLong’s test for AUC. Multimodal fusion outperformed both single-modality models. Weighted averaging accuracy was 0.89, precision was 0.89, recall was 0.87, and F1-score was 0.86, thus improving on either modality alone. Stacking further enhanced calibration (ECE reduction from 0.8 to 0.04) and yielded higher net benefit across clinically relevant probability thresholds (20 to 60%). DCA indicated fusion would avoid 12 to 18 unnecessary surgical investigations per 100 patients, compared with single-modality strategies. Confidence-gating maintained performance under simulated distribution shifts to support robustness. Decision-level multimodal fusion enhanced non-invasive diagnosis of endometriosis by improving accuracy, calibration, and clinical utility. These results demonstrated the value of integrative AI in gynecological care and justify prospective validation in real-world clinical settings.
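As a rough illustration of the three fusion strategies named above (weighted averaging, stacking via logistic regression, and confidence-gating), the sketch below combines the probability outputs of two hypothetical base models on synthetic data; the weight, gating threshold, and variable names are assumptions, not the authors' settings.

# Minimal sketch of decision-level fusion of two base models' probabilities
# (an imaging-based and a clinical-data-based classifier); values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p_mri = rng.uniform(size=200)          # stand-in for Dual U-Net probabilities
p_clin = rng.uniform(size=200)         # stand-in for clinical-model probabilities
y = (0.5 * p_mri + 0.5 * p_clin + rng.normal(0, 0.1, 200) > 0.5).astype(int)

# 1) Weighted averaging of the two probability streams
w = 0.6                                # assumed weight for the imaging model
p_avg = w * p_mri + (1 - w) * p_clin

# 2) Stacking: a logistic-regression meta-learner over the base probabilities
X_base = np.column_stack([p_mri, p_clin])
stacker = LogisticRegression().fit(X_base, y)
p_stack = stacker.predict_proba(X_base)[:, 1]

# 3) Confidence-gating: use the imaging model when it is confident
#    (p > 0.8 or p < 0.2), otherwise fall back to the stacked estimate
p_gated = np.where(np.abs(p_mri - 0.5) > 0.3, p_mri, p_stack)

print(p_avg[:3], p_stack[:3], p_gated[:3])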

Abstract

Atmospheric turbulence induces severe blurring and geometric distortions in facial imagery, critically compromising the performance of downstream tasks. To overcome this challenge, a lightweight conditional diffusion model was proposed for the restoration of single-frame turbulence-degraded facial images. Super-resolution techniques were integrated with the diffusion model, and high-frequency information was incorporated as a conditional constraint to enhance structural recovery and achieve high-fidelity generation. A simplified U-Net architecture was employed within the diffusion model to reduce computational complexity while maintaining high restoration quality. Comprehensive comparative evaluations and restoration experiments across multiple scenarios demonstrate that the proposed method produces results with reduced perceptual and distributional discrepancies from ground-truth images, while also exhibiting superior inference efficiency compared to existing approaches. The presented approach not only offers a practical solution for enhancing facial imagery in turbulent environments but also establishes a promising paradigm for applying efficient diffusion models to ill-posed image restoration problems, with potential applicability to other domains such as medical and astronomical imaging.
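To give a flavour of conditional diffusion training for restoration, the toy sketch below optimizes the standard noise-prediction objective with the degraded image concatenated as a condition; it is not the paper's model, and the timestep embedding, super-resolution branch, and high-frequency conditioning described above are omitted, with a small convolutional stack standing in for the simplified U-Net.

# Hypothetical sketch of conditional diffusion training for image restoration:
# predict the added noise from the noisy target concatenated with the degraded input.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)           # cumulative noise schedule

noise_predictor = nn.Sequential(                        # toy stand-in for a U-Net
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(noise_predictor.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 32, 32)                        # toy ground-truth images
degraded = clean + 0.3 * torch.randn_like(clean)        # toy turbulence degradation

for _ in range(10):
    t = torch.randint(0, T, (clean.size(0),))
    a = alpha_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(clean)
    noisy = a.sqrt() * clean + (1 - a).sqrt() * noise   # forward diffusion q(x_t | x_0)
    pred = noise_predictor(torch.cat([noisy, degraded], dim=1))  # conditioned on input
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()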
Open Access
Research article
Multimodal Audio Violence Detection: Fusion of Acoustic Signals and Semantics
Shivwani Nadar ,
Disha Gandhi ,
Anupama Jawale ,
Shweta Pawar ,
Ruta Prabhu
|
Available online: 12-23-2025

Abstract


With public safety of paramount importance, the capacity to detect violent situations through audio monitoring has become increasingly indispensable. This paper proposed a hybrid audio-text violence detection system that combines text-based information with frequency-based features to improve accuracy and reliability. The system's two core models are a frequency-based Random Forest (RF) classifier and a natural language processing (NLP) model, Bidirectional Encoder Representations from Transformers (BERT). The RF classifier was trained on Mel-Frequency Cepstral Coefficients (MFCCs) and other spectral features, whereas BERT identified violent content in transcribed speech. The BERT model was improved through task-specific fine-tuning on a curated violence-related text dataset and balanced with class-weighting strategies to address category imbalance. This adaptation enhanced its ability to capture subtle violent language patterns beyond general-purpose embeddings. Furthermore, a meta-learner ensemble using an eXtreme Gradient Boosting (XGBoost) classifier combined the probability outputs of the two base models. The ensemble strategy proposed in this research differed from conventional multimodal fusion techniques, which depend on a single strategy, either NLP or audio. The XGBoost fusion model drew on the strengths of both base models to improve classification accuracy and robustness by creating an improved decision boundary. The proposed system was supported by a Graphical User Interface (GUI) for multiple purposes, such as smart city applications, emergency response, and security monitoring with real-time analysis. The proposed XGBoost ensemble model attained an overall accuracy of 97.37%, demonstrating the efficacy of integrating machine learning-based decision fusion.
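To make the fusion pipeline concrete, the hedged sketch below extracts MFCC features for a Random Forest audio classifier and trains an XGBoost meta-learner on the probability outputs of the audio and text models; the synthetic clips, placeholder text-model probabilities, and all parameter values are assumptions rather than the authors' configuration.

# Hypothetical sketch of the audio-text fusion: MFCC features feed a Random Forest,
# and an XGBoost meta-learner combines audio and text probabilities (illustrative only).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

def mfcc_vector(signal, sr=16000, n_mfcc=13):
    # Mean MFCCs over time as a compact frequency-domain descriptor
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Synthetic stand-ins for labelled audio clips (1 s each at 16 kHz)
clips = [rng.normal(size=16000).astype(np.float32) for _ in range(60)]
labels = rng.integers(0, 2, size=60)

X_audio = np.vstack([mfcc_vector(c) for c in clips])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_audio, labels)
p_audio = rf.predict_proba(X_audio)[:, 1]

# Placeholder for fine-tuned BERT probabilities on transcribed speech
p_text = rng.uniform(size=60)

# Meta-learner: XGBoost over the two base-model probability outputs
meta = XGBClassifier(n_estimators=50, max_depth=2, eval_metric="logloss")
meta.fit(np.column_stack([p_audio, p_text]), labels)
fused = meta.predict_proba(np.column_stack([p_audio, p_text]))[:, 1]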

Abstract

Post-traumatic stress disorder (PTSD) has been recognized as a critical global mental health challenge, and the application of natural language processing (NLP) has emerged as a promising approach for its detection and management. In this study, a systematic review was conducted to evaluate the quality, quantity, and consistency of research investigating the role of NLP in PTSD detection. Through this process, prior research was consolidated, methodological gaps were identified, and a conceptual framework was formulated to guide future investigations. To complement the systematic review, a bibliometric analysis was performed to map the intellectual landscape, assess publication trends, and visualize research networks within this domain. The systematic review involved a structured search across ScienceDirect, IEEE Xplore, PubMed, and Web of Science, resulting in the retrieval of 328 records. After rigorous screening, 56 studies were included in the final synthesis. Separately, a bibliometric analysis was conducted on 4,138 publications obtained from the Web of Science database. The findings highlight that NLP methods not only enhance the detection of PTSD but also support the development of personalized treatment strategies. Ethical and security considerations were also identified as pressing concerns requiring further attention. The results of this study underscore the significance of NLP in advancing PTSD research and emphasize its potential to transform mental health services. By identifying trends, challenges, and opportunities, this study provides a foundation for future research aimed at strengthening the role of NLP in clinical practice and mental health policy.

Abstract

Generative Artificial Intelligence (Gen-AI) has emerged as a transformative technology with considerable potential to enhance information management and decision-making processes in the public sector. The present study examined how Gen-AI, with specific attention to Microsoft Copilot, can be integrated into local government organizations to support routine operations and strategic tasks. An Integrative Literature Review (ILR) methodology was applied, through which scholarly sources were systematically evaluated and findings were synthesized across predefined research questions and thematic categories. The review emphasized three focal areas: the conceptual foundations of Gen-AI, the challenges associated with its integration, and the opportunities for improving public sector information analysis and administrative practices. Evidence indicated that Gen-AI adoption in local government contexts can substantially improve efficiency in data retrieval, accelerate decision-making processes, enhance service responsiveness, and streamline administrative workflows. At the same time, significant risks were identified, including fragmented data infrastructures, limited digital and Artificial Intelligence (AI) literacy among personnel, and ongoing ethical, transparency, and regulatory challenges. Recommendations were formulated for future research, including empirical assessments of Gen-AI deployment across diverse local government contexts and longitudinal studies to evaluate the sustainability of AI-driven transformations. The insights generated from this study provide actionable guidance for local government organizations seeking to evaluate both the benefits and the risks of integrating Gen-AI technologies into information management and decision-support systems, thereby contributing to ongoing debates on public sector innovation and digital governance.

Abstract


Accurate and efficient detection of small-scale targets on dynamic water surfaces remains a critical challenge in the deployment of unmanned surface vehicles (USVs) for maritime applications. Complex background interference—such as wave motion, sunlight reflections, and low contrast—often leads to missed or false detections, particularly when using conventional convolutional neural networks. To address these issues, this study introduces LMS-YOLO, a lightweight detection framework built upon the YOLOv8n architecture and optimized for real-time marine object recognition. The proposed network integrates three key components: (1) a C2f-SBS module incorporating StarNet-based Star Blocks, which streamlines multi-scale feature extraction while reducing parameter overhead; (2) a Shared Convolutional Lightweight Detection Head (SCLD), designed to enhance detection precision across scales using a unified convolutional strategy; and (3) a Mixed Local Channel Attention (MLCA) module, which reinforces context-aware representation under complex maritime conditions. Evaluated on the WSODD and FloW-Img datasets, LMS-YOLO achieves a 5.5% improvement in precision and a 2.3% gain in mAP@0.5 compared to YOLOv8n, while reducing parameter count and computational cost by 37.18% and 34.57%, respectively. The model operates at 128 FPS on standard hardware, demonstrating its practical viability for embedded deployment in marine perception systems. These results highlight the potential of LMS-YOLO as a deployable solution for high-speed, high-accuracy marine object detection in real-world environments.

Open Access
Research article
Real-Time Anomaly Detection in IoT Networks Using a Hybrid Deep Learning Model
Anil Kumar Pallikonda ,
Vinay Kumar Bandarapalli ,
Aruna Vipparla
|
Available online: 10-09-2025

Abstract


The rapid expansion of Internet of Things (IoT) systems and networks has led to increased challenges regarding security and system reliability. Anomaly detection has become a critical task for identifying system flaws, cyberattacks, and failures in IoT environments. This study proposes a hybrid deep learning (DL) approach combining Autoencoders (AE) and Long Short-Term Memory (LSTM) networks to detect anomalies in real-time within IoT networks. In this model, normal data trends were learned in an unsupervised manner using an AE, while temporal dependencies in time-series data were captured through the use of an LSTM network. Experiments conducted on publicly available IoT datasets, namely the Kaggle IoT Network Traffic Dataset and the Numenta Anomaly Benchmark (NAB) dataset, demonstrate that the proposed hybrid model outperforms conventional machine learning (ML) algorithms, such as Support Vector Machine (SVM) and Random Forest (RF), in terms of accuracy, precision, recall, and F1-score. The hybrid model achieved a recall of 96.2%, a precision of 95.8%, and an accuracy of 97.5%, with negligible false negatives and false positives. Furthermore, the model is capable of handling real-time data with a latency of just 75 milliseconds, making it suitable for large-scale IoT applications. The performance evaluation, which utilized a diverse set of anomaly scenarios, highlighted the robustness and scalability of the proposed model. The Kaggle IoT Network Traffic Dataset, consisting of approximately 630,000 records across six months and 115 features, along with the NAB dataset, which includes around 365,000 sensor readings and 55 features, provided comprehensive data for evaluating the model’s effectiveness in real-world conditions. These findings suggest that the hybrid DL framework offers a robust, scalable, and efficient solution for anomaly detection in IoT networks, contributing to enhanced system security and dependability.
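A rough sketch of how autoencoding and an LSTM can be combined for reconstruction-based anomaly scoring is given below; it is not the authors' architecture, and the window length, hidden size, and threshold percentile are illustrative assumptions.

# Hypothetical sketch: an LSTM autoencoder learns to reconstruct windows of normal
# IoT traffic; windows with high reconstruction error are flagged as anomalies.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, window, n_features)
        _, (h, _) = self.encoder(x)             # h: (1, batch, hidden)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat latent per time step
        dec, _ = self.decoder(z)
        return self.out(dec)                    # reconstruction of the window

# Train on windows of "normal" traffic only (synthetic stand-in data here)
torch.manual_seed(0)
normal = torch.sin(torch.linspace(0, 100, 4000)).reshape(-1, 20, 1)  # 200 windows
model = LSTMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(30):
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

# Flag anomalies by thresholding the reconstruction error
# (assumed 99th percentile of errors on the normal training windows)
with torch.no_grad():
    err = ((model(normal) - normal) ** 2).mean(dim=(1, 2))
    threshold = torch.quantile(err, 0.99)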

Open Access
Research article
Application of Artificial Intelligence on MNIST Dataset for Handwritten Digit Classification for Evaluation of Deep Learning Models
Jide Ebenezer Taiwo Akinsola ,
Micheal Adeolu Olatunbosun ,
Ifeoluwa Michael Olaniyi ,
Moruf Adedeji Adeagbo ,
Emmanuel Ajayi Olajubu ,
Ganiyu Adesola Aderounmu
|
Available online: 09-18-2025

Abstract


Handwritten digit classification represents a foundational task in computer vision and has been widely adopted in applications ranging from Optical Character Recognition (OCR) to biometric authentication. Despite the availability of large benchmark datasets, the development of models that achieve both high accuracy and computational efficiency remains a central challenge. In this study, the performance of three representative machine learning paradigms—Chi-Squared Automatic Interaction Detection (CHAID), Generative Adversarial Networks (GANs), and Feedforward Deep Neural Networks (FFDNNs)—was systematically evaluated on the Modified National Institute of Standards and Technology (MNIST) dataset. The assessment was conducted with a focus on classification accuracy, computational efficiency, and interpretability. Experimental results demonstrated that deep learning approaches substantially outperformed traditional Decision Tree (DT) methods. GANs and FFDNNs achieved classification accuracies of approximately 97%, indicating strong robustness and generalization capability for handwritten digit recognition tasks. In contrast, CHAID achieved only 29.61% accuracy, highlighting the limited suitability of DT models for high-dimensional image data. It was further observed that, despite the computational demand of adversarial training, GANs required less time per epoch than FFDNNs when executed on modern GPU architectures, thereby underscoring their potential scalability. These findings reinforce the importance of model selection in practical deployment, particularly where accuracy, computational efficiency, and interpretability must be jointly considered. The study contributes to the ongoing discourse on the role of artificial intelligence (AI) in pattern recognition by providing a comparative analysis of classical machine learning and deep learning approaches, thereby offering guidance for the development of reliable and efficient digit recognition systems suitable for real-world applications.
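For readers unfamiliar with the feedforward baseline compared above, a minimal sketch of an FFDNN classifier for 28x28 MNIST-style digits follows (written in PyTorch rather than the authors' setup); the layer sizes, training settings, and synthetic batch are illustrative assumptions.

# Minimal sketch of a feedforward deep neural network (FFDNN) for MNIST-style
# 28x28 digit images; architecture and hyperparameters are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                 # 28x28 image -> 784-dimensional vector
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),           # one logit per digit class
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in batch; in practice, load MNIST via torchvision.datasets.MNIST
images = torch.rand(64, 1, 28, 28)
targets = torch.randint(0, 10, (64,))

for _ in range(5):                # a few optimisation steps on the toy batch
    opt.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    opt.step()

accuracy = (model(images).argmax(dim=1) == targets).float().mean()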

Abstract

Electroencephalography (EEG) provides a non-invasive approach for capturing brain dynamics and has become a cornerstone in clinical diagnostics, cognitive neuroscience, and neuroengineering. The inherent complexity, low signal-to-noise ratio, and variability of EEG signals have historically posed substantial challenges for interpretation. In recent years, artificial intelligence (AI), encompassing both classical machine learning (ML) and advanced deep learning (DL) methodologies, has transformed EEG analysis by enabling automatic feature extraction, robust classification, regression-based state estimation, and synthetic data generation. This survey synthesizes developments up to 2025, structured along three dimensions. The first dimension is task category, e.g., classification, regression, generation and augmentation, clustering and anomaly detection. The second dimension is the methodological framework, e.g., shallow learners, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Transformers, Graph Neural Networks (GNNs), and hybrid approaches. The third dimension is application domain, e.g., neurological disease diagnosis, brain-computer interfaces (BCIs), affective computing, cognitive workload monitoring, and specialized tasks such as sleep staging and artifact removal. Publicly available EEG datasets and benchmarking initiatives that have catalyzed progress were reviewed in this study. The strengths and limitations of current AI models were critically evaluated, including constraints related to data scarcity, inter-subject variability, noise sensitivity, limited interpretability, and challenges of real-world deployment. Future research directions were highlighted, including federated learning (FL) and privacy-preserving learning, self-supervised pretraining of Transformer-based architectures, explainable artificial intelligence (XAI) tailored to neurophysiological signals, multimodal fusion with complementary biosignals, and the integration of lightweight on-device AI for continuous monitoring. By bridging historical foundations with cutting-edge innovations, this survey aims to provide a comprehensive reference for advancing the development of accurate, robust, and transparent AI-driven EEG systems.