Acadlore Transactions on AI and Machine Learning (ATAIML)
ISSN (print): 2957-9562
ISSN (online): 2957-9570
Current Issue: 2025, Vol. 4

Acadlore Transactions on AI and Machine Learning (ATAIML) is a peer-reviewed scholarly journal that publishes original research in artificial intelligence, machine learning, and related areas. The journal places particular emphasis on work that develops new theoretical approaches, algorithmic methods, or well-founded applications, and that provides clear technical or analytical contributions to the field. ATAIML welcomes submissions that address methodological advances, empirical validation, and system-level implementation, as well as the ethical and societal aspects of AI, where these are examined with appropriate technical or analytical depth. The journal is published quarterly by Acadlore, with four issues released in March, June, September, and December.

  • Professional Editorial Standards - All submissions are evaluated through a standard peer-review process involving independent reviewers and editorial assessment before acceptance.

  • Efficient Publication - The journal follows a defined review, revision, and production workflow to support regular and predictable publication of accepted manuscripts.

  • Open Access - ATAIML is an open-access journal. All published articles are made available online without subscription or access fees.

Editors-in-Chief (2)
Andreas Pester
Faculty of Computer Sciences and Informatics, British University in Egypt, Egypt
andreas.pester@bue.edu.eg | website
Research interests: Differential Equations; LabVIEW; MATLAB; Educational Technology; Blended Learning; M-Learning; Deep Learning
Zhuang Wu
School of Management Engineering, Capital University of Economics and Business, China
wuzhuang@cueb.edu.cn | website
Research interests: Computational intelligence and machine learning; Data-driven optimization and decision models; Intelligent information processing; Big data analytics for intelligent systems; Multi-modal data analysis and visualization

Aims & Scope

Aims

Acadlore Transactions on AI and Machine Learning (ATAIML) is a peer-reviewed open-access journal that publishes original research in artificial intelligence and machine learning, with an emphasis on theoretical analysis, algorithmic development, and carefully designed empirical studies.

The journal is primarily interested in work that offers clear methodological contributions, theoretical insights, or well-supported experimental findings, rather than papers that report only incremental applications of existing techniques.

ATAIML aims to provide a venue for research that connects foundational ideas in AI and ML with engineering practice and real-world systems, while maintaining a strong focus on scientific rigor, reproducibility, and transparency in reporting.

The journal also welcomes critical discussions on the reliability, interpretability, and broader implications of AI technologies, including their ethical and social aspects, where these issues are addressed with appropriate technical or analytical depth.

Published quarterly by Acadlore, ATAIML follows a structured peer-review process and standard editorial procedures to ensure consistency and transparency in its publication practices.

Key features of ATAIML include:

  • An emphasis on research that contributes to theoretical understanding and methodological development in artificial intelligence and machine learning;

  • A particular interest in work addressing model interpretability, robustness, reliability, and security in learning systems;

  • Contributions that connect AI methods with engineering practice, scientific domains, or socio-economic contexts in a technically grounded way;

  • Studies that examine ethical, legal, or societal aspects of AI where these are supported by clear analytical or technical frameworks;

  • A standard peer-review and editorial process intended to support consistency, transparency, and fairness in the evaluation of submissions.

Scope

ATAIML welcomes submissions across a broad range of topics in artificial intelligence and machine learning, including, but not limited to, the areas outlined below:

Foundations and Models

  • Deep learning architectures and related optimisation methods

  • Graph neural networks and representation learning

  • Probabilistic and Bayesian approaches to learning

  • Computational learning theory and statistical learning methods

  • Reinforcement learning and sequential decision models

  • Transfer, domain adaptation, federated, and meta-learning

  • Evolutionary computation and swarm-based methods

Systems, Infrastructure, and Engineering

  • Scalable, distributed, and edge-based learning systems

  • AI for Internet of Things and cyber-physical systems

  • Training infrastructure, deployment, and lifecycle management (MLOps)

  • High-performance and neuromorphic computing for AI workloads

Data-Centric and Multimodal AI

  • Data governance, quality assessment, and uncertainty modelling

  • Synthetic data, self-supervised, and weakly supervised learning

  • Multimodal learning and data fusion techniques

  • Knowledge graphs and symbolic–neural hybrid approaches

Trustworthy and Responsible AI

  • Explainability, interpretability, and reliability of learning systems

  • Robustness, safety, fairness, privacy, and security in AI and ML

  • Ethical, legal, and societal aspects of AI use and deployment

Applied AI Across Domains

  • Robotics, autonomous systems, and intelligent manufacturing

  • Healthcare analytics, medical imaging, and bioinformatics

  • Smart cities, climate-related modelling, and sustainability applications

  • Computer vision, natural language processing, and speech technologies

  • AI applications in finance, education, agriculture, and public services

  • Human–AI interaction and computational support for creativity and culture

Emerging and Future Paradigms

  • Generative models and foundation architectures

  • Quantum approaches to learning and optimisation

  • Bio-inspired and cognitive computing

  • Intelligent systems for mixed, augmented, and extended reality

Recent Articles
Open Access
Research article
Multimodal Audio Violence Detection: Fusion of Acoustic Signals and Semantics
Shivwani Nadar, Disha Gandhi, Anupama Jawale, Shweta Pawar, Ruta Prabhu
Available online: 12-23-2025

Abstract


When public safety is of paramount importance, the capacity to detect violent situations through audio monitoring becomes increasingly indispensable. This paper proposed a hybrid audio-text violence detection system that combines text-based information with frequency-based features to improve accuracy and reliability. The two core models of the system are a frequency-based Random Forest (RF) classifier and a natural language processing (NLP) model, Bidirectional Encoder Representations from Transformers (BERT). The RF classifier was trained on Mel-Frequency Cepstral Coefficients (MFCCs) and other spectral features, whereas BERT identified violent content in transcribed speech. The BERT model was improved through task-specific fine-tuning on a curated violence-related text dataset and balanced with class-weighting strategies to address category imbalance. This adaptation enhanced its ability to capture subtle violent language patterns beyond general-purpose embeddings. Furthermore, a meta-learner ensemble using an eXtreme Gradient Boosting (XGBoost) classifier combined the probability outputs of the two base models. The ensemble strategy proposed in this research differs from conventional multimodal fusion techniques, which depend on a single modality, either NLP or audio. The XGBoost fusion model draws on the strengths of both base models to improve classification accuracy and robustness by creating an ideal decision boundary. The proposed system is supported by a Graphical User Interface (GUI) for multiple purposes, such as smart city applications, emergency response, and security monitoring with real-time analysis. The proposed XGBoost ensemble model attained an overall accuracy of 97.37%, demonstrating the efficacy of integrating machine learning-based decisions.
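The late-fusion idea described in this abstract, feeding the probability outputs of an audio model and a text model into a meta-learner, can be sketched as follows. The toy scoring functions and the fixed logistic fusion weights below are hypothetical stand-ins for the authors' trained RF, BERT, and XGBoost components, not their actual pipeline.

```python
import math

# Hypothetical stand-in for the frequency-based model: maps MFCC-like
# features to a probability that the clip contains violence.
def audio_model_prob(mfcc_features):
    # Squash the mean feature value into (0, 1) with a sigmoid.
    return 1 / (1 + math.exp(-sum(mfcc_features) / len(mfcc_features)))

# Hypothetical stand-in for the fine-tuned BERT classifier: a crude
# lexical score over the transcript.
def text_model_prob(transcript):
    violent_terms = {"fight", "attack", "scream"}
    hits = sum(word in violent_terms for word in transcript.lower().split())
    return min(1.0, 0.2 + 0.4 * hits)

def fused_decision(mfcc_features, transcript, w_audio=1.5, w_text=2.0, bias=-1.8):
    # Meta-learner stand-in: a fixed logistic layer over the two base
    # probabilities, playing the role of the trained XGBoost fusion model.
    p_audio = audio_model_prob(mfcc_features)
    p_text = text_model_prob(transcript)
    z = w_audio * p_audio + w_text * p_text + bias
    p = 1 / (1 + math.exp(-z))
    return p, p >= 0.5

p, is_violent = fused_decision([2.0, 1.5, 3.0], "they attack and scream loudly")
```

The point of the stacking step is that the meta-learner sees both base probabilities jointly, so it can learn a decision boundary neither modality supports alone.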

Abstract

Post-traumatic stress disorder (PTSD) has been recognized as a critical global mental health challenge, and the application of natural language processing (NLP) has emerged as a promising approach for its detection and management. In this study, a systematic review was conducted to evaluate the quality, quantity, and consistency of research investigating the role of NLP in PTSD detection. Through this process, prior research was consolidated, methodological gaps were identified, and a conceptual framework was formulated to guide future investigations. To complement the systematic review, a bibliometric analysis was performed to map the intellectual landscape, assess publication trends, and visualize research networks within this domain. The systematic review involved a structured search across ScienceDirect, IEEE Xplore, PubMed, and Web of Science, resulting in the retrieval of 328 records. After rigorous screening, 56 studies were included in the final synthesis. Separately, a bibliometric analysis was conducted on 4,138 publications obtained from the Web of Science database. The findings highlight that NLP methods not only enhance the detection of PTSD but also support the development of personalized treatment strategies. Ethical and security considerations were also identified as pressing concerns requiring further attention. The results of this study underscore the significance of NLP in advancing PTSD research and emphasize its potential to transform mental health services. By identifying trends, challenges, and opportunities, this study provides a foundation for future research aimed at strengthening the role of NLP in clinical practice and mental health policy.

Abstract

Generative Artificial Intelligence (Gen-AI) has emerged as a transformative technology with considerable potential to enhance information management and decision-making processes in the public sector. The present study examined how Gen-AI, with specific attention to Microsoft Copilot, can be integrated into local government organizations to support routine operations and strategic tasks. An Integrative Literature Review (ILR) methodology was applied, through which scholarly sources were systematically evaluated and findings were synthesized across predefined research questions and thematic categories. The review emphasized three focal areas: the conceptual foundations of Gen-AI, the challenges associated with its integration, and the opportunities for improving public sector information analysis and administrative practices. Evidence indicated that Gen-AI adoption in local government contexts can substantially improve efficiency in data retrieval, accelerate decision-making processes, enhance service responsiveness, and streamline administrative workflows. At the same time, significant risks were identified, including fragmented data infrastructures, limited digital and Artificial Intelligence (AI) literacy among personnel, and ongoing ethical, transparency, and regulatory challenges. Recommendations were formulated for future research, including empirical assessments of Gen-AI deployment across diverse local government contexts and longitudinal studies to evaluate the sustainability of AI-driven transformations. The insights generated from this study provide actionable guidance for local government organizations seeking to evaluate both the benefits and the risks of integrating Gen-AI technologies into information management and decision-support systems, thereby contributing to ongoing debates on public sector innovation and digital governance.

Abstract


Accurate and efficient detection of small-scale targets on dynamic water surfaces remains a critical challenge in the deployment of unmanned surface vehicles (USVs) for maritime applications. Complex background interference—such as wave motion, sunlight reflections, and low contrast—often leads to missed or false detections, particularly when using conventional convolutional neural networks. To address these issues, this study introduces LMS-YOLO, a lightweight detection framework built upon the YOLOv8n architecture and optimized for real-time marine object recognition. The proposed network integrates three key components: (1) a C2f-SBS module incorporating StarNet-based Star Blocks, which streamlines multi-scale feature extraction while reducing parameter overhead; (2) a Shared Convolutional Lightweight Detection Head (SCLD), designed to enhance detection precision across scales using a unified convolutional strategy; and (3) a Mixed Local Channel Attention (MLCA) module, which reinforces context-aware representation under complex maritime conditions. Evaluated on the WSODD and FloW-Img datasets, LMS-YOLO achieves a 5.5% improvement in precision and a 2.3% gain in mAP@0.5 compared to YOLOv8n, while reducing parameter count and computational cost by 37.18% and 34.57%, respectively. The model operates at 128 FPS on standard hardware, demonstrating its practical viability for embedded deployment in marine perception systems. These results highlight the potential of LMS-YOLO as a deployable solution for high-speed, high-accuracy marine object detection in real-world environments.
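The mAP@0.5 figure cited in this abstract counts a predicted box as a true positive when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal IoU check, with an illustrative `(x1, y1, x2, y2)` box format and hypothetical function names, looks like:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) corner coordinates in pixels.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    # The "@0.5" in mAP@0.5: the match must reach IoU 0.5 to count.
    return iou(pred_box, gt_box) >= threshold
```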

Open Access
Research article
Real-Time Anomaly Detection in IoT Networks Using a Hybrid Deep Learning Model
Anil Kumar Pallikonda, Vinay Kumar Bandarapalli, Aruna Vipparla
Available online: 10-09-2025

Abstract


The rapid expansion of Internet of Things (IoT) systems and networks has led to increased challenges regarding security and system reliability. Anomaly detection has become a critical task for identifying system flaws, cyberattacks, and failures in IoT environments. This study proposes a hybrid deep learning (DL) approach combining Autoencoders (AE) and Long Short-Term Memory (LSTM) networks to detect anomalies in real-time within IoT networks. In this model, normal data trends were learned in an unsupervised manner using an AE, while temporal dependencies in time-series data were captured through the use of an LSTM network. Experiments conducted on publicly available IoT datasets, namely the Kaggle IoT Network Traffic Dataset and the Numenta Anomaly Benchmark (NAB) dataset, demonstrate that the proposed hybrid model outperforms conventional machine learning (ML) algorithms, such as Support Vector Machine (SVM) and Random Forest (RF), in terms of accuracy, precision, recall, and F1-score. The hybrid model achieved a recall of 96.2%, a precision of 95.8%, and an accuracy of 97.5%, with negligible false negatives and false positives. Furthermore, the model is capable of handling real-time data with a latency of just 75 milliseconds, making it suitable for large-scale IoT applications. The performance evaluation, which utilized a diverse set of anomaly scenarios, highlighted the robustness and scalability of the proposed model. The Kaggle IoT Network Traffic Dataset, consisting of approximately 630,000 records across six months and 115 features, along with the NAB dataset, which includes around 365,000 sensor readings and 55 features, provided comprehensive data for evaluating the model’s effectiveness in real-world conditions. These findings suggest that the hybrid DL framework offers a robust, scalable, and efficient solution for anomaly detection in IoT networks, contributing to enhanced system security and dependability.
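A common way to turn the autoencoder component described above into an anomaly decision is to flag readings whose reconstruction error exceeds a threshold fitted on normal traffic. The sketch below substitutes a trivial mean-value "reconstruction" for a trained AE-LSTM, so the model itself is a placeholder; only the thresholding logic mirrors the approach.

```python
import statistics

def fit_threshold(normal_values, k=3.0):
    # "Train" the placeholder: reconstruct every point as the mean of
    # normal data, then set the alarm threshold at mean + k * std of the
    # resulting reconstruction errors (absolute deviations).
    mu = statistics.fmean(normal_values)
    errors = [abs(v - mu) for v in normal_values]
    return mu, statistics.fmean(errors) + k * statistics.pstdev(errors)

def is_anomaly(value, mu, threshold):
    # At inference time, a reading whose reconstruction error exceeds
    # the fitted threshold is flagged as anomalous.
    return abs(value - mu) > threshold

# Hypothetical packet-rate readings standing in for normal IoT traffic.
normal_traffic = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1, 10.3, 9.7]
mu, thr = fit_threshold(normal_traffic)
```

With a real autoencoder, `abs(value - mu)` would be replaced by the reconstruction loss of the network; the threshold-fitting step is unchanged.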

Open Access
Research article
Application of Artificial Intelligence on MNIST Dataset for Handwritten Digit Classification for Evaluation of Deep Learning Models
Jide Ebenezer Taiwo Akinsola, Micheal Adeolu Olatunbosun, Ifeoluwa Michael Olaniyi, Moruf Adedeji Adeagbo, Emmanuel Ajayi Olajubu, Ganiyu Adesola Aderounmu
Available online: 09-18-2025

Abstract


Handwritten digit classification represents a foundational task in computer vision and has been widely adopted in applications ranging from Optical Character Recognition (OCR) to biometric authentication. Despite the availability of large benchmark datasets, the development of models that achieve both high accuracy and computational efficiency remains a central challenge. In this study, the performance of three representative machine learning paradigms—Chi-Squared Automatic Interaction Detection (CHAID), Generative Adversarial Networks (GANs), and Feedforward Deep Neural Networks (FFDNNs)—was systematically evaluated on the Modified National Institute of Standards and Technology (MNIST) dataset. The assessment was conducted with a focus on classification accuracy, computational efficiency, and interpretability. Experimental results demonstrated that deep learning approaches substantially outperformed traditional Decision Tree (DT) methods. GANs and FFDNNs achieved classification accuracies of approximately 97%, indicating strong robustness and generalization capability for handwritten digit recognition tasks. In contrast, CHAID achieved only 29.61% accuracy, highlighting the limited suitability of DT models for high-dimensional image data. It was further observed that, despite the computational demand of adversarial training, GANs required less time per epoch than FFDNNs when executed on modern GPU architectures, thereby underscoring their potential scalability. These findings reinforce the importance of model selection in practical deployment, particularly where accuracy, computational efficiency, and interpretability must be jointly considered. 
The study contributes to the ongoing discourse on the role of artificial intelligence (AI) in pattern recognition by providing a comparative analysis of classical machine learning and deep learning approaches, thereby offering guidance for the development of reliable and efficient digit recognition systems suitable for real-world applications.

Abstract

Electroencephalography (EEG) provides a non-invasive approach for capturing brain dynamics and has become a cornerstone in clinical diagnostics, cognitive neuroscience, and neuroengineering. The inherent complexity, low signal-to-noise ratio, and variability of EEG signals have historically posed substantial challenges for interpretation. In recent years, artificial intelligence (AI), encompassing both classical machine learning (ML) and advanced deep learning (DL) methodologies, has transformed EEG analysis by enabling automatic feature extraction, robust classification, regression-based state estimation, and synthetic data generation. This survey synthesizes developments up to 2025, structured along three dimensions. The first dimension is task category, e.g., classification, regression, generation and augmentation, clustering and anomaly detection. The second dimension is the methodological framework, e.g., shallow learners, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Transformers, Graph Neural Networks (GNNs), and hybrid approaches. The third dimension is application domain, e.g., neurological disease diagnosis, brain-computer interfaces (BCIs), affective computing, cognitive workload monitoring, and specialized tasks such as sleep staging and artifact removal. Publicly available EEG datasets and benchmarking initiatives that have catalyzed progress were reviewed in this study. The strengths and limitations of current AI models were critically evaluated, including constraints related to data scarcity, inter-subject variability, noise sensitivity, limited interpretability, and challenges of real-world deployment. 
Future research directions were highlighted, including federated learning (FL) and privacy-preserving learning, self-supervised pretraining of Transformer-based architectures, explainable artificial intelligence (XAI) tailored to neurophysiological signals, multimodal fusion with complementary biosignals, and the integration of lightweight on-device AI for continuous monitoring. By bridging historical foundations with cutting-edge innovations, this survey aims to provide a comprehensive reference for advancing the development of accurate, robust, and transparent AI-driven EEG systems.

Abstract


The integration of artificial intelligence (AI) in precision agriculture has facilitated significant advancements in crop health monitoring, particularly in the early identification and classification of foliar diseases. Accurate and timely diagnosis of plant diseases is critical for minimizing crop loss and enhancing agricultural sustainability. In this study, an interpretable deep learning model—referred to as the Multi-Crop Leaf Disease (MCLD) framework—was developed based on a Convolutional Neural Network (CNN) architecture, tailored for the classification of tomato and grapevine leaf diseases. The model architecture was derived from the Visual Geometry Group Network (VGGNet), optimized to improve computational efficiency while maintaining classification accuracy. Leaf image datasets comprising healthy and diseased samples were employed to train and evaluate the model. Performance was assessed using multiple statistical metrics, including classification accuracy, sensitivity, specificity, precision, recall, and F1-score. The proposed MCLD framework achieved a detection accuracy of 98.40% for grapevine leaf diseases and a classification accuracy of 95.71% for tomato leaf conditions. Despite these promising results, further research is required to address limitations such as generalizability across variable environmental conditions and the integration of field-acquired images. The implementation of such interpretable AI-based systems is expected to substantially enhance precision agriculture by supporting rapid and accurate disease management strategies.
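The evaluation metrics listed in this abstract (sensitivity, specificity, precision, accuracy, F1-score) all derive from the binary confusion matrix. A minimal stdlib computation, using made-up counts rather than the study's actual results, is:

```python
def classification_metrics(tp, fp, tn, fn):
    # Derive the standard binary metrics from confusion-matrix counts.
    sensitivity = tp / (tp + fn)          # recall: diseased leaves caught
    specificity = tn / (tn + fp)          # healthy leaves correctly cleared
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f1": f1}

# Hypothetical counts for a diseased-vs-healthy leaf classifier.
m = classification_metrics(tp=90, fp=5, tn=95, fn=10)
```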

Open Access
Research article
Comparative Analysis of Machine Learning Models for Predicting Indonesia’s GDP Growth
Rossi Passarella, Muhammad Ikhsan Setiawan, Zaqqi Yamani
Available online: 07-03-2025

Abstract


Accurate forecasting of Gross Domestic Product (GDP) growth remains essential for supporting strategic economic policy development, particularly in emerging economies such as Indonesia. In this study, a hybrid predictive framework was constructed by integrating fuzzy logic representations with machine learning algorithms to improve the accuracy and interpretability of GDP growth estimation. Annual macroeconomic data from 1970 to 2023 were utilised, and 19 input features were engineered by combining numerical economic indicators with fuzzy-based linguistic variables, along with a forecast label generated via the Non-Stationary Fuzzy Time Series (NSFTS) method. Six supervised learning models were comparatively assessed, including Random Forest (RF), Support Vector Regression (SVR), eXtreme Gradient Boosting (XGBoost), Huber Regressor, Decision Tree (DT), and Multilayer Perceptron (MLP). Model performance was evaluated using Mean Absolute Error (MAE) and accuracy metrics. Among the tested models, the RF algorithm demonstrated superior performance, achieving the lowest MAE and an accuracy of 99.45% in forecasting GDP growth for 2023. Its robustness in capturing non-linear patterns and short-term economic fluctuations was particularly evident when compared to other models. These findings underscore the RF model's capability to serve as a reliable tool for economic forecasting in data-limited and volatile macroeconomic environments. By enabling more precise GDP growth predictions, the proposed hybrid framework offers a valuable decision-support mechanism for policymakers in Indonesia, contributing to more informed resource allocation, proactive economic intervention, and long-term development planning. The methodological innovation of integrating NSFTS with machine learning extends the frontier of data-driven macroeconomic modelling and provides a replicable template for forecasting applications in other emerging markets.
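The fuzzy-based linguistic variables described above are typically built from membership functions that map a numeric indicator to degrees of membership in terms such as "low" or "high" growth. A minimal triangular-membership sketch follows; the breakpoints for GDP growth are invented for illustration and are not taken from the study.

```python
def triangular(x, a, b, c):
    # Triangular membership: rises linearly from a to the peak b,
    # then falls linearly to c; zero outside [a, c].
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_growth(growth_pct):
    # Map a numeric GDP growth rate (%) to membership degrees in three
    # illustrative linguistic terms (breakpoints are assumptions).
    return {
        "low": triangular(growth_pct, -5.0, 0.0, 4.0),
        "moderate": triangular(growth_pct, 2.0, 5.0, 8.0),
        "high": triangular(growth_pct, 6.0, 10.0, 15.0),
    }

memberships = fuzzify_growth(5.0)
```

These membership degrees, alongside the raw indicators, would form the kind of mixed numeric-plus-linguistic feature vector the abstract describes feeding into the RF model.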
