Acadlore Transactions on AI and Machine Learning (ATAIML)
ISSN (print): 2957-9562
ISSN (online): 2957-9570
Current Issue: 2023, Vol. 2

Acadlore Transactions on AI and Machine Learning (ATAIML) aims to spearhead the academic exploration of artificial intelligence, machine learning, and deep learning, along with their associated disciplines. Underscoring the pivotal role of AI and machine learning innovations in shaping the modern technological landscape, ATAIML strives to decode the complexities of current methodologies and applications in the AI domain. Published quarterly by Acadlore, the journal typically releases its four issues in March, June, September, and December each year.

  • Professional Service - Every article submitted undergoes an intensive yet swift peer review and editing process, adhering to the highest publication standards.

  • Prompt Publication - Thanks to our proficiency in orchestrating the peer-review, editing, and production processes, all accepted articles see rapid publication.

  • Open Access - Every published article is instantly accessible to a global readership, allowing for uninhibited sharing across various platforms at any time.

Editor-in-Chief
Andreas Pester
British University in Egypt, Egypt
andreas.pester@bue.edu.eg
Research interests: Differential Equations; LabVIEW; MATLAB; Educational Technology; Blended Learning; M-Learning; Deep Learning

Aims & Scope

Aims

Acadlore Transactions on AI and Machine Learning (ATAIML) emerges as a pivotal platform at the intersection of artificial intelligence, machine learning, and their multifaceted applications. Recognizing the profound potential of these disciplines, the journal endeavors to unravel the complexities underpinning AI and ML theories, methodologies, and their tangible real-world implications.

In a rapidly digitalizing world, ATAIML posits that AI and ML are reshaping industries at their core. From extended reality to the rise of synthetic data and the intricate design of graph neural networks, such advancements are at the forefront of innovation. With a mission to chronicle these paradigm shifts, ATAIML aims to serve as a beacon for researchers, professionals, and enthusiasts eager to fathom the vast horizons of AI and ML in the modern age.

Furthermore, ATAIML highlights the following features:

  • Every publication benefits from prominent indexing, ensuring widespread recognition.

  • A distinguished editorial team upholds unparalleled quality and broad appeal.

  • Seamless online discoverability of each article maximizes its global reach.

  • An author-centric and transparent publication process enhances the submission experience.

Scope

ATAIML's expansive scope encompasses, but is not limited to:

  • AI-Integrated Sensory Technologies: Insights into AI's role in amplifying and harmonizing sensory data.

  • Symbiosis of AI and IoT: The collaborative dance between artificial intelligence and the Internet of Things and their cumulative impact on contemporary society.

  • Mixed Realities Shaped by AI: Probing the AI-crafted mixed-reality realms and their implications.

  • Sustainable AI Innovations: A focus on 'Green AI' and its instrumental role in shaping a sustainable future.

  • Synthetic Data in the AI Era: A deep dive into the rise and relevance of synthetic data and its AI-driven generation.

  • Graph Neural Paradigms: Exploration of the nuances of graph-centric neural networks and their evolutionary trajectory.

  • Interdisciplinary AI Applications: Delving into AI's intersections with fields such as psychology, fashion, and the arts.

  • Moral and Ethical Dimensions of AI: A comprehensive study of the ethical landscapes carved by AI's advancements and the corresponding legal challenges.

  • Diverse Learning Methodologies: Exploration of revolutionary learning techniques ranging from Bayesian paradigms to statistical approaches in ML.

  • Emergent AI Narratives: Spotlight on cutting-edge AI technologies, foundational standards, computational attributes, and their transformative use cases.

  • Holistic Integration: Emphasis on multi-disciplinary submissions that combine insights from varied fields, offering a holistic perspective on AI and ML's global resonance.

Recent Articles

Abstract


In the domain of intellectual property protection, the embedding of digital watermarks has emerged as a pivotal technique for the assertion of copyright, the conveyance of confidential messages, and the endorsement of authenticity within digital media. This research delineates the implementation of a non-blind watermarking algorithm, utilizing alpha blending facilitated by discrete wavelet transform (DWT) to embed watermarks into genuine images. Thereafter, an extraction process, constituting the inverse of embedding, retrieves these watermarks. The robustness of the embedded watermark against prevalent manipulative attacks, specifically median filter, salt and pepper (SAP) noise, Gaussian noise, speckle noise, and rotation, is rigorously evaluated. The performance of the DWT-based watermarking is quantified using the peak signal-to-noise ratio (PSNR), an objective metric reflecting fidelity. It is ascertained that the watermark remains tenaciously intact under such adversarial conditions, underscoring the proposed method's suitability for applications in digital image security and copyright verification.
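The fidelity metric used in this evaluation, PSNR, is straightforward to compute. The sketch below assumes 8-bit images (peak value 255) and uses illustrative pixel values, not data from the study:

```python
import math

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images.

    Images are flat lists of pixel intensities; a higher PSNR means the
    watermarked image is closer to the original.
    """
    if len(original) != len(distorted):
        raise ValueError("images must have the same number of pixels")
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# A tiny 2x2 "image" before and after embedding a watermark.
cover = [100, 120, 130, 140]
marked = [101, 119, 131, 139]  # each pixel perturbed by 1
print(round(psnr(cover, marked), 2))  # 48.13
```

In a DWT-based scheme, the same formula would be applied to the cover image and the reconstruction obtained after embedding and the inverse transform.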

Abstract


This investigation delineates an optimised predictive model for employee attrition within a substantial workforce, identifying pertinent models tailored to the specific context of employee and organisational variables. The selection and refinement of the appropriate predictive model serve as cornerstones for enhancements and updates, which are integral to honing the model's precision in prognosticating potential departures. Through meticulous optimisation, the model demonstrates proficiency in pinpointing the pivotal factors contributing to employee turnover and elucidating the interdependencies among salient variables. A suite of 27 general and eight critical variables were scrutinised. Pertinent correlations were unearthed, notably between monthly income and job satisfaction, home-to-work distance and job satisfaction, as well as age with both job satisfaction and performance metrics. Drawing from prior studies in analogous domains, a three-stage analytical methodology encompassing data exploration, model selection, and implementation was employed. The rigorous training of the optimised model encompassed both attrition factors and variable correlations, culminating in predictive outcomes with a precision of 90% and an accuracy of 87%. Implementing the refined model projected that 113 out of 709 employees, equating to 15.93%, were at a heightened risk of exiting the organisation. This quantitative foresight equips stakeholders with a strategic tool for preemptive interventions to mitigate turnover and sustain organisational vitality.
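The reported precision (90%) and accuracy (87%) can be reproduced from a confusion matrix; the counts below are hypothetical and chosen only to illustrate the two formulas:

```python
def precision_accuracy(tp, fp, tn, fn):
    """Precision = TP/(TP+FP); accuracy = (TP+TN)/(all predictions)."""
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, accuracy

# Hypothetical confusion-matrix counts for an attrition classifier:
# tp = employees correctly predicted to leave, fp = predicted to leave
# but stayed, tn = correctly predicted to stay, fn = missed leavers.
p, a = precision_accuracy(tp=90, fp=10, tn=84, fn=16)
print(p, a)  # 0.9 0.87
```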

Abstract


In the realm of agriculture, crop yields of fundamental cereals such as rice, wheat, maize, soybeans, and sugarcane are adversely impacted by insect pest invasions, leading to significant reductions in agricultural output. Traditional manual identification of these pests is labor-intensive and time-consuming, underscoring the necessity for an automated early detection and classification system. Recent advancements in machine learning, particularly deep learning, have provided robust methodologies for the classification and detection of a diverse array of insect infestations in crop fields. However, inaccuracies in pest classification could inadvertently precipitate the use of inappropriate pesticides, further endangering both agricultural yields and the surrounding ecosystems. In light of this, the efficacy of nine distinct pre-trained deep learning algorithms was evaluated to discern their capability in the accurate detection and classification of insect pests. This assessment utilized two prevalent datasets, comprising ten pest classes of varied sizes. Among the transfer learning techniques scrutinized, adaptations of ResNet-50 and ResNet-101 were deployed. It was observed that ResNet-50, when employed in a transfer learning paradigm, achieved an exemplary classification accuracy of 99.40% in the detection of agricultural pests. Such a high level of precision represents a significant advancement in the field of precision agriculture.

Abstract


In Sub-Saharan Africa, particularly in Nigeria, Lassa fever poses a significant infectious disease threat. This investigation employed count regression and machine learning techniques to model mortality rates associated with confirmed Lassa fever cases. Utilizing weekly data from January 7, 2018, to April 2, 2023, provided by the Nigeria Centre for Disease Control (NCDC), an analytical comparison between these methods was conducted. Overdispersion was indicated (p<0.01), prompting the exclusive use of negative binomial and generalized negative binomial regression models. Machine learning algorithms, specifically medium Gaussian support vector machine (MGSVM), ensemble boosted trees, ensemble bagged trees, and exponential Gaussian Process Regression (GPR), were applied, with 80% of the data allocated for training and the remaining 20% for testing. The efficacy of these methods was evaluated using the coefficients of determination (R²) and the root mean square error (RMSE). Descriptive statistics revealed a total of 30,461 confirmed cases, 4,745 suspected cases, and 772 confirmed fatalities attributable to Lassa fever during the study period. The negative binomial regression model demonstrated superior performance (R²=0.1864, RMSE=4.33) relative to the generalized negative binomial model (R²=0.1915, RMSE=18.2425). However, machine learning algorithms surpassed the count regression models in predictive capability, with ensemble boosted trees emerging as the most effective (R²=0.85, RMSE=1.5994). Analysis also identified the number of confirmed cases as having a significant positive correlation with mortality rates (r=0.885, p<0.01). The findings underscore the importance of promoting community hygiene practices, such as preventing rodent intrusion and securing food storage, to mitigate the transmission and consequent fatalities of Lassa fever.
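The R² and RMSE criteria used to compare the models above can be sketched as follows; the weekly counts are illustrative, not NCDC data:

```python
import math

def r2_rmse(actual, predicted):
    """Coefficient of determination and root mean square error."""
    n = len(actual)
    mean = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)

# Illustrative weekly death counts vs. a model's predictions.
actual = [2, 5, 9, 4, 10]
predicted = [3, 4, 8, 5, 10]
r2, rmse = r2_rmse(actual, predicted)
print(round(r2, 3), round(rmse, 3))  # 0.913 0.894
```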

Abstract


Diabetic retinopathy, a severe ocular disease correlated with elevated blood glucose levels in diabetic patients, carries a significant risk of visual impairment. The essentiality of its timely and precise severity classification is underscored for effective therapeutic intervention. Deep learning methodologies have been shown to yield encouraging results in the detection and categorisation of severity levels of diabetic retinopathy. This study proposes a dual-level approach, wherein the MobileNetV2 model is modified for a regression task, predicting retinopathy severity levels and subsequently fine-tuned on fundus images. The refined MobileNetV2 model is then utilised for learning feature embeddings, and a Support Vector Machine (SVM) classifier is trained for grading retinopathy severity. Upon implementation, this dual-level approach demonstrated remarkable performance, achieving an accuracy rate of 87% and a kappa value of 93.76% when evaluated on the APTOS19 benchmark dataset. Additionally, the efficacy of data augmentation and the handling of class imbalance issues were explored. These findings suggest that the novel dual-level approach provides an efficient and highly effective solution for the detection and classification of diabetic retinopathy severity levels.
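The kappa statistic reported for ordinal severity grades is typically the quadratic weighted kappa. A minimal pure-Python sketch, using illustrative grades rather than the study's APTOS19 predictions:

```python
def quadratic_weighted_kappa(y_true, y_pred, num_classes):
    """Quadratic weighted kappa for ordinal grades 0..num_classes-1."""
    n = len(y_true)
    # Observed confusion matrix and its marginal histograms.
    obs = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    hist_t = [sum(row) for row in obs]
    hist_p = [sum(obs[i][j] for i in range(num_classes)) for j in range(num_classes)]
    num = den = 0.0
    for i in range(num_classes):
        for j in range(num_classes):
            w = (i - j) ** 2 / (num_classes - 1) ** 2  # quadratic penalty
            num += w * obs[i][j]
            den += w * hist_t[i] * hist_p[j] / n  # chance-agreement term
    return 1 - num / den

# Illustrative severity grades (0 = no DR .. 4 = proliferative DR).
truth = [0, 1, 2, 3, 4, 2, 1, 0]
pred  = [0, 1, 2, 3, 3, 2, 1, 1]
print(round(quadratic_weighted_kappa(truth, pred, 5), 3))  # 0.908
```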

Abstract


In addressing the challenge of obstacle scattering inversion amidst intricate noise conditions, a model predicated on convolutional neural networks (CNN) has been proposed, demonstrating high precision. Five distinct noise scenarios, encompassing Gaussian white noise, uniform distribution noise, Poisson distribution noise, Laplace noise, and impulse noise, were evaluated. Far-field data paired with the Fourier coefficients of obstacle boundary curves were employed as network input and output, respectively. Through the convolutional processes inherent to the CNN, salient features within the far-field data related to obstacles were adeptly identified. Concurrently, the statistical characteristics of the noise were assimilated, and its perturbing effects were diminished, thus facilitating the inversion of obstacle shape parameters. The intrinsic capacity of CNNs to intuitively learn and differentiate salient features from data eradicates the necessity for external intervention or manually designed feature extractors. This adaptability confers upon CNNs a significant edge in tackling obstacle scattering inversion challenges, particularly in light of fluctuating data distributions and feature variability. Numerical experiments have substantiated that the aforementioned CNN model excels in addressing scattering inversion complications within multifaceted noise conditions, consistently delivering solutions with remarkable precision.
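The boundary parameterization used as network output can be illustrated directly: a closed curve sampled as complex points has Fourier coefficients that a network of this kind would be trained to regress from far-field data. The sketch below simply computes them from the curve itself, assuming uniform sampling:

```python
import cmath

def boundary_fourier_coeffs(points, max_order):
    """Fourier coefficients c_k, |k| <= max_order, of a closed boundary
    curve sampled as complex points z_0..z_{N-1}:
        c_k = (1/N) * sum_n z_n * exp(-2*pi*i*k*n/N)
    """
    n_pts = len(points)
    coeffs = {}
    for k in range(-max_order, max_order + 1):
        coeffs[k] = sum(z * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                        for n, z in enumerate(points)) / n_pts
    return coeffs

# A circle of radius 2: all energy should sit in the k = 1 coefficient.
circle = [2 * cmath.exp(2j * cmath.pi * n / 64) for n in range(64)]
coeffs = boundary_fourier_coeffs(circle, 2)
print(abs(coeffs[1]))  # ~2.0
```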

Abstract


The task of interpreting multi-variable time series data, while also forecasting outcomes accurately, is an ongoing challenge within the machine learning domain. This study presents an advanced method of utilizing Long Short-Term Memory (LSTM) recurrent neural networks in the analysis of such data, with specific attention to both target and exogenous variables. The novel approach aims to extract hidden states that are unique to individual variables, thereby capturing the distinctive dynamics inherent in multi-variable time series and allowing the elucidation of each variable's contribution to predictive outcomes. A pioneering mixture attention mechanism is introduced, which, by leveraging the aforementioned variable-specific hidden states, characterizes the generative process of the target variable. The study further enhances this methodology by formulating associated training techniques that permit concurrent learning of network parameters, variable interactions, and temporal significance with respect to the target prediction. The effectiveness of this approach is empirically validated through rigorous experimentation on three real-world datasets, including the 2022 closing prices of three major stocks - Apple (AAPL), Amazon (AMZN), and Microsoft (MSFT). The results demonstrated superior predictive performance, attributable to the successful encapsulation of the diverse dynamics of different variables. Furthermore, the study provides a comprehensive evaluation of the interpretability outcomes, both qualitatively and quantitatively. The presented framework thus holds substantial promise as a comprehensive solution that not only enhances prediction accuracy but also aids in the extraction of valuable insights from complex multi-variable datasets.
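The mixture-attention step can be seen in isolation if each variable's LSTM hidden state is reduced to a per-variable prediction and a relevance score; everything below is a simplified sketch under that assumption, not the paper's architecture:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_attention(per_variable_preds, scores):
    """Combine per-variable predictions with softmax attention weights.

    The weights are directly interpretable: they say how much each
    variable contributed to the target prediction.
    """
    weights = softmax(scores)
    prediction = sum(w * p for w, p in zip(weights, per_variable_preds))
    return prediction, weights

# Three variables each propose a next-step value for the target series.
preds = [150.0, 148.0, 155.0]
scores = [2.0, 1.0, 0.5]  # higher score = more relevant variable
pred, weights = mixture_attention(preds, scores)
```

The prediction is a convex combination of the per-variable proposals, so it always lies between their minimum and maximum.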

Abstract


Dominant points, or control points, represent areas of high curvature on shape contours and are extensively utilized in the representation of shape outlines. Herein, we introduce a novel, descriptor-based approach for the efficient detection of these pivotal points. Each point on a shape contour is evaluated and mapped to an invariant descriptor set, accomplished through the use of point-neighborhood. These descriptors are then harnessed to discern whether a point qualifies as a dominant one. Our proposed methodology eliminates the need for costly computations typically associated with evaluating candidate dominant points. Furthermore, our algorithm significantly outperforms its predecessors in terms of speed, relying solely on integer operations and obviating the necessity for an optimization phase. Experimental outcomes, derived from the widely used MPEG7_CE-Shape-1_Part_B, denote a minimum enhancement of 2.3 times in terms of running time. This implies that the proposed methodology is particularly suitable for real-time applications or scenarios managing shapes comprising a substantial number of points.
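A minimal stand-in for such a neighborhood descriptor is the integer cross product of the two chords to a point's k-th neighbours: it is proportional to local curvature, vanishes on straight segments, and needs no floating-point work. The descriptor set in the paper is more elaborate; this sketch only illustrates the integer-only principle:

```python
def dominant_points(contour, k=2, threshold=3):
    """Flag high-curvature points on a closed contour using only integer
    arithmetic: the cross product of the chords to the k-th neighbours
    is twice the signed triangle area, zero on straight segments.
    """
    n = len(contour)
    dominant = []
    for i in range(n):
        x0, y0 = contour[(i - k) % n]
        x1, y1 = contour[i]
        x2, y2 = contour[(i + k) % n]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if abs(cross) >= threshold:
            dominant.append(i)
    return dominant

# A 4x4 axis-aligned square sampled point by point: only the four
# corners (indices 0, 3, 6, 9) should stand out.
square = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2),
          (3, 3), (2, 3), (1, 3), (0, 3), (0, 2), (0, 1)]
print(dominant_points(square, k=1, threshold=1))  # [0, 3, 6, 9]
```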

Abstract


Pharmaceutical transport logistics, especially in humanitarian and hospital contexts, is becoming increasingly essential with a growing need to monitor associated costs. In Morocco, however, studies focusing on the cost implications of pharmaceutical delivery conditions are conspicuously absent. This creates a high-dimensional classification framework, where the selection of variables becomes challenging in the face of correlated distribution predictors. The integration of Artificial Intelligence (AI) in cost prediction has emerged as a vital necessity amidst escalating complexities and cost considerations. Cost prediction, being inherently correlated with almost all variables and inputs, offers an interpretable value in performance management, financial planning, and contract negotiation. This study undertakes a comparative analysis of a broad spectrum of prediction algorithms applied to the same, albeit reduced, database. A dozen such algorithms are put into practical use, with variable selection implemented through importance measures. The primary objective of this comparative evaluation is to determine the superior performing algorithm — one that delivers optimal adaptation to the context within a fixed environment. The prediction algorithm incorporates a myriad of inputs and constraints derived from data collection systems. AI's application facilitates the inclusion of diverse variables such as transportation routes, congestion, distances, freight weight, and environmental factors, thereby enhancing the accuracy and efficiency of cost estimation. The Orthogonal Matching Pursuit model emerged as the most successful, boasting an R² value nearing unity. Accurate cost prediction in transport can yield valuable insights into budgeting, estimation, customer service, managerial risk, environmental considerations, and strategic deployment for a company. 
Improved decision-making and resource allocation can thereby be achieved, leading to enhanced profitability and sustainability.
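The best-performing model, Orthogonal Matching Pursuit, is a greedy sparse regression: it repeatedly selects the dictionary atom most correlated with the residual and refits. The sketch below simplifies it by assuming orthonormal atoms, so the least-squares refit reduces to inner-product projections; it is not the study's implementation:

```python
def dot(a, b):
    return sum(x * z for x, z in zip(a, b))

def omp_orthonormal(dictionary, y, sparsity):
    """Greedy Orthogonal Matching Pursuit, simplified by assuming the
    dictionary atoms are orthonormal (so the least-squares step reduces
    to inner-product projections onto the selected atoms).
    """
    residual = list(y)
    coeffs = {}
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual.
        best = max(range(len(dictionary)),
                   key=lambda j: abs(dot(dictionary[j], residual)))
        coeffs[best] = dot(dictionary[best], y)
        # Recompute the residual from all selected atoms.
        residual = [yi - sum(c * dictionary[j][i] for j, c in coeffs.items())
                    for i, yi in enumerate(y)]
    return coeffs

# Orthonormal 3-atom dictionary (standard basis) and a 2-sparse target.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [0.0, 3.0, 4.0]
print(omp_orthonormal(atoms, y, 2))  # {2: 4.0, 1: 3.0}
```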

Abstract


Liver cancer, one of the rapidly escalating forms of cancer, remains a principal cause of mortality globally. Its death rates can be attenuated through vigilant monitoring and early detection. This study aims to develop a sophisticated model to assist medical professionals in the classification of liver tumours using biopsy tissue images, thereby facilitating preliminary diagnosis. The study presents a novel, bio-inspired deep learning strategy designed to augment liver cancer detection. The uniqueness of this approach rests in its two-fold contribution: Firstly, an innovative hybrid segmentation technique, integrating the SegNet network, UNet network, and Al-Biruni Earth Radius (BER) procedure, is introduced to extract liver lesions from Computed Tomography (CT) images. The algorithm initially applies the SegNet to isolate the liver from the abdominal image in a CT scan. Since hyperparameters significantly influence segmentation performance, the BER algorithm is hybridized with each network for optimal tuning. The method proposed herein is inspired by the pursuit of a common objective by swarm members. Al-Biruni's methodology for calculating Earth's radius sets the search space, extending beyond local solutions that require exploration. Secondly, a pre-trained AlexNet model is utilized for diagnosis, further enhancing the method's effectiveness. The proposed segmentation and classification algorithms have been compared with contemporary state-of-the-art techniques. The results demonstrated that in terms of specificity, F1-score, accuracy, and computational time, the proposed method outperforms its competitors, indicating its potential in advancing liver cancer detection.

Open Access
Research article
Artificial Intelligence in Cervical Cancer Research and Applications
Chunhui Liu, Jiahui Yang, Ying Liu, Ying Zhang, Shuang Liu, Tetiana Chaikovska, Chan Liu
Available online: 06-13-2023

Abstract


Cervical cancer remains a leading cause of death among females, posing a severe threat to women's health. Due to the uneven distribution of resources in different regions, there are challenges regarding physicians' experience, quantity, and medical conditions. Early screening, diagnosis, and treatment of cervical cancer still face significant obstacles. In recent years, artificial intelligence (AI) has been increasingly applied to the screening, diagnosis, and treatment of various diseases. Currently, AI has many research applications in cervical cancer screening, diagnosis, treatment, and prognosis, assisting doctors and clinical experts in decision-making and improving efficiency and accuracy. This study discusses the application of AI in cervical cancer screening, including HPV typing and detection, cervical cytology screening, and colposcopy screening, as well as AI in cervical cancer diagnosis and treatment, including magnetic resonance imaging (MRI) and computed tomography (CT). Finally, the study briefly describes the current challenges faced by AI applications in cervical cancer and proposes future research directions.

Abstract


With the wide use of facial verification and authentication systems, the performance evaluation of the Spoofing Attack Detection (SAD) module in these systems is important, because poor performance leads to successful face spoofing attacks. Previous studies on face SAD used a pretrained Visual Geometry Group (VGG)-16 architecture to extract feature maps from face images using the convolutional layers, and trained a face SAD model to classify real and fake face images, obtaining poor performance for unseen face images. Therefore, this study aimed to evaluate the performance of a VGG-19 face SAD model. An experimental approach was used to build the model. The VGG-19 network was used to extract Red Green Blue (RGB) and deep neural network features from the face datasets. Evaluation results showed that the performance of the VGG-19 face SAD model improved by 6% compared with the state-of-the-art approaches, with the lowest equal error rate (EER) of 0.4%. In addition, the model had strong generalization ability across the top-1 accuracy, threshold operation, quality test, fake face test, EER, and overall test evaluation metrics.
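The EER metric balances false accepts against false rejects: it is the error rate at the threshold where the two meet. A threshold-sweep sketch with hypothetical similarity scores, not the study's data:

```python
def equal_error_rate(genuine_scores, impostor_scores):
    """Approximate EER by sweeping thresholds over all observed scores.

    Genuine (real-face) scores should be high and impostor (spoof)
    scores low; the EER is where the false accept rate (FAR) and the
    false reject rate (FRR) cross.
    """
    best = None
    for t in sorted(set(genuine_scores + impostor_scores)):
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

# Hypothetical similarity scores from a face SAD model.
genuine = [0.9, 0.8, 0.85, 0.95, 0.7]
impostor = [0.1, 0.3, 0.2, 0.75, 0.05]
eer = equal_error_rate(genuine, impostor)
print(eer)  # 0.2
```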
