Acadlore Transactions on AI and Machine Learning (ATAIML)
ISSN (print): 2957-9562
ISSN (online): 2957-9570

Acadlore Transactions on AI and Machine Learning (ATAIML) is a peer-reviewed, scholarly open access journal on artificial intelligence, machine and deep learning, and related fields. It is published quarterly by Acadlore. The publication dates of the four issues usually fall in March, June, September, and December each year.

  • Professional service - All articles submitted go through rigorous yet rapid peer review and editing, following the strictest publication standards.

  • Fast publication - All articles accepted are quickly published, thanks to our expertise in organizing peer-review, editing, and production.

  • Open access - All articles published are immediately available to a global audience, and freely shareable anywhere, anytime.

  • Additional benefits - All articles accepted enjoy free English editing, and face no length limits or color charges.

Editor-in-Chief
Andreas Pester
British University in Egypt, Egypt
andreas.pester@bue.edu.eg | website
Research interests: Differential Equations; LabVIEW; MATLAB; Educational Technology; Blended Learning; M-learning; Deep Learning

Aims & Scope

Aims

Acadlore Transactions on AI and Machine Learning (ATAIML) (ISSN 2957-9562) is an open access journal covering computer science, artificial intelligence, machine and deep learning, graph neural networks, synthetic data, and other related fields, spanning theory and methods as well as interdisciplinary applications, algorithms, data, and implementations on different platforms. ATAIML offers an advanced meeting place for studies related to AI and machine learning and their applications. We welcome original submissions in various forms, including reviews, regular research papers, and short communications, as well as Special Issues on particular topics. The journal places a special focus on the relations between AI and extended reality, synthetic data, and graph neural networks.

The aim of ATAIML is to encourage scientists to publish their concepts, theoretical and experimental results, and code in as much detail as possible. Therefore, the journal has no restrictions regarding the length of papers. Full details should be provided so that results can be reproduced. In addition, the journal has the following features:

  • Manuscripts regarding new and innovative research proposals and ideas are particularly welcome.
  • Young scientists will find a forum to exchange ideas.
  • Electronic files or software regarding the full details of the calculation and experimental procedure as well as source codes can be submitted as supplementary material.

Scope

The scope of the journal covers, but is not limited to, the following topics:

  • AI and sensorics
  • AI and IoT
  • AI and mixed reality
  • AI and smart food, agriculture and forestry
  • AI and design, fashion and arts
  • AI and psychology
  • Ethical and law issues of AI
  • Graph neural networks
  • Machine Learning in biology, chemistry, physics
  • Advanced sequential neural networks
  • Bayesian learning
  • Statistical and topological methods in machine learning
  • AI and multimedia data
  • Green AI and AI for a green world
  • Data-centric AI and synthetic data
  • AI and knowledge graphs
  • AI and geoinformatics
  • AI and ML use cases and applications
  • Mathematical methods of deep learning and graph neural networks
  • AI foundational standards
  • Computational approaches and computational characteristics of AI systems
  • Emerging AI technologies
  • Trustworthiness
Recent Articles

Abstract

It is complex to assess multi-level hierarchical teams, because the solution needs to organize their rapid dynamic adaptation to perform operational tasks and to train team members who lack sufficient competencies, skills and experience. Assessment also reveals the strengths and weaknesses of the whole team and of each team member, which provides opportunities for their further growth. Assessing the work of teams requires external knowledge and processing methods. Therefore, this study proposed an ontological approach to improve the assessment of multi-level hierarchical teams, because an ontology integrates domain knowledge with the competencies relevant to positions and levels in hierarchical teams. Information on the competencies of applicants was acquired through portfolio analysis. After subdividing the hierarchical teams, appropriate ontologies and Web services were used to obtain assessment results and competence improvement recommendations for the teams at various sublevels. A step-by-step team assessment method was described, which used elements of semantic similarity between different information objects to match applicants and equipment with team positions. This method could be used as a component of integrated multi-criteria decision-making and was targeted at specific cases of user tasks. The set of assessment criteria was pre-determined by tasks and built based on domain knowledge. However, particular criteria were dynamic, changing with the environment at different time points.
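The abstract does not specify the similarity measure used for matching; as a minimal sketch of competency matching, the hypothetical Python example below ranks applicants against a position by the Jaccard overlap of competency sets (all names and competencies are invented).

```python
def jaccard(a: set, b: set) -> float:
    """Overlap of two competency sets; 1.0 means a perfect match."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical competencies drawn from a domain ontology
position = {"project planning", "risk analysis", "python"}
applicants = {
    "A": {"python", "risk analysis"},
    "B": {"project planning", "budgeting"},
}
# Rank applicants by overlap with the position's competency profile
ranked = sorted(applicants, key=lambda k: jaccard(applicants[k], position), reverse=True)
print(ranked)  # ['A', 'B']
```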
Open Access
Research article
Diagnosis of Chronic Kidney Disease Based on CNN and LSTM
Elif Nur Yildiz, Emine Cengil, Muhammed Yildirim, Harun Bingol
Available online: 06-05-2023

Abstract


The kidney plays an extremely important role in human health, and one of its most important tasks is to purify the blood of toxic substances. Chronic Kidney Disease (CKD) means that the kidney gradually loses its function, showing symptoms such as fatigue, weakness, nausea, vomiting, and frequent urination. Early diagnosis and treatment increase the likelihood of recovery from the disease. Owing to their high classification performance, artificial intelligence techniques have been widely used to classify disease data over the last ten years. In this study, a hybrid model based on a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) was proposed to automatically classify CKD using a two-class dataset. The dataset consisted of thirteen features and one output, and CKD was diagnosed based on these features. Compared with many well-known machine learning methods, the proposed CNN-LSTM based model obtained a classification accuracy of 99.17%.
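The paper's exact architecture is not reproduced here; the following is a minimal PyTorch sketch of a CNN-LSTM hybrid for a 13-feature, two-class record, with all layer sizes chosen for illustration only.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=13, n_classes=2):
        super().__init__()
        # 1D convolution over the feature axis extracts local patterns
        self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        # LSTM summarizes the convolved feature sequence
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, n_features)
        x = self.conv(x.unsqueeze(1))     # -> (batch, 16, n_features)
        x = x.permute(0, 2, 1)            # -> (batch, n_features, 16)
        _, (h, _) = self.lstm(x)          # h: (1, batch, 32)
        return self.fc(h[-1])             # -> (batch, n_classes)

logits = CNNLSTM()(torch.randn(4, 13))    # dummy batch of 4 patient records
```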

Abstract

The rapid adoption of the Industrial Internet of Things (IIoT) paradigm has left systems vulnerable due to insufficient security measures. False data injection attacks (FDIAs) present a significant security concern in IIoT, as they aim to deceive industrial platforms by manipulating sensor readings. Traditional threat detection methods have proven inadequate in addressing FDIAs, and most existing countermeasures overlook the necessity of validating data, particularly in the context of data clustering services. To address this issue, this study proposes an innovative approach for FDIA detection using an optimized bidirectional gated recurrent unit (BiGRU) model, with the Sailfish Optimization Algorithm (SOA) employed to select optimal weights. The proposed model exploits temporal and spatial correlations in sensor data to identify fabricated information and subsequently cleanse the affected data. Evaluation results demonstrate the effectiveness of the proposed method in detecting FDIAs, outperforming state-of-the-art techniques in the same task. Furthermore, the data cleaning process showcased the ability to recover damaged or corrupted data, providing an additional advantage.
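As a rough sketch of the detector's backbone only (the SOA weight selection is a separate metaheuristic and is omitted), a bidirectional GRU scoring sensor windows might look like this in PyTorch; all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class BiGRUDetector(nn.Module):
    def __init__(self, n_sensors=8, hidden=64):
        super().__init__()
        # bidirectional GRU captures temporal correlations in both directions
        self.gru = nn.GRU(n_sensors, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)          # per-window injection score

    def forward(self, x):                             # x: (batch, time, n_sensors)
        out, _ = self.gru(x)
        return torch.sigmoid(self.head(out[:, -1]))   # probability of FDIA

scores = BiGRUDetector()(torch.randn(4, 32, 8))       # 4 windows of 32 time steps
```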

Abstract


Underwater image processing has been a central point of interest in many fields, such as the control of underwater vehicles, archaeology, and marine biology research. Underwater exploration is becoming a big part of our lives, spanning marine and creature research, pipeline and communication logistics, military use, and touristic and entertainment use. Underwater images suffer from poor visibility, distortion, and poor quality for several reasons, such as light propagation. The real problem occurs when images must be taken at depths greater than 500 feet, where artificial light needs to be introduced. This work tackles underwater environment challenges such as colour casts, lack of image sharpness, low contrast, low visibility, and blurry appearance in deep ocean images by proposing an end-to-end deep underwater image enhancement network (WGH-net) based on a convolutional neural network (CNN) algorithm. Quantitative and qualitative metric results proved that the method achieved results competitive with previous methods when experimentally tested on different images from several datasets.
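WGH-net's layer configuration is not described in the abstract; the sketch below shows only the general residual-CNN pattern that such enhancement networks commonly follow, with all sizes invented.

```python
import torch
import torch.nn as nn

class EnhanceNet(nn.Module):
    # Residual CNN: predicts a colour/contrast correction added to the input
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):                      # x: (batch, 3, H, W) in [0, 1]
        return (x + self.body(x)).clamp(0, 1)  # enhanced image, same range

out = EnhanceNet()(torch.rand(1, 3, 64, 64))
```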

Abstract


One of the biggest problems humans face today is pollution and climate change. Pollution is not a new phenomenon and remains a leading cause of disease and death. Mining, industrialization, exploration and urbanization have caused global pollution, whose burdens are shared by developed and undeveloped countries alike. Awareness and stricter laws in developed countries have contributed to environmental protection. Although all countries have paid attention to pollution, the impact and severity of its long-term consequences are being felt. There is a cause-and-effect link between the pollution of air, water and soil and the environment. This research aimed to show that the main function of the philosophy of science is a functional understanding of knowledge, which views knowledge as a tool for prediction. Prediction is the function or mission of science, the goal that must be achieved if the scientific project is to succeed. In other words, prediction is the final harvest of description and interpretation. In addition, science is primarily concerned with the prediction of events that occur in the universe. A mature prediction is what science provides to validate scientific models. This paper introduced the concepts of using machine learning techniques to enhance the results of the prediction process. A pollution dataset and data on the negative effects of polluted air were used. Various models were built, trained and tested in order to find the optimal model that could enhance the results of the prediction process.
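The abstract does not name the models compared; a generic model-selection loop of the kind described, sketched here with scikit-learn on synthetic placeholder data, could look like this.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((500, 6))                                 # placeholder pollution features
y = X @ rng.random(6) + 0.1 * rng.standard_normal(500)   # placeholder health-effect target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# Fit each candidate model and compare held-out R^2 to pick the best
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, r2_score(y_te, model.predict(X_te)))
```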

Abstract


This paper aimed to realize intelligent diagnosis of obstetric diseases using electronic medical records (EMRs). The Optimized Kernel Extreme Machine Learning (OKEML) technique was proposed to rebalance data, adopting a hybrid of the Hunger Games Search (HGS) and the Arithmetic Optimization Algorithm (AOA). The effectiveness of OKEML-HGS-AOA was tested on the Chinese Obstetric EMR (COEMR) dataset. The proposed model outperformed state-of-the-art results on the COEMR, Arxiv Academic Paper Dataset (AAPD), and Reuters Corpus Volume 1 (RCV1) datasets, with accuracies of 88%, 90%, and 91%, respectively.
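The OKEML kernel and the HGS-AOA weight optimization are beyond a short sketch; for orientation only, a plain (unoptimized) Extreme Learning Machine, the base model family being tuned, fits its output weights by least squares as below.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    # Random, untrained hidden layer; only the output weights are solved for
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y          # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```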

Open Access
Research article
Floor Segmentation Approach Using FCM and CNN
Kavya Ravishankar, Puspha Devaraj, Sharath Kumar Yeliyur Hanumathaiah
Available online: 03-27-2023

Abstract


Floor plans play an essential role in architectural design and construction, serving as an important communication tool between engineers, architects and clients. Automatic identification of the various design elements in a floor plan image can improve work efficiency and accuracy. This paper proposed a method consisting of two stages: Fuzzy C-Means (FCM) segmentation and Convolutional Neural Network (CNN) segmentation. In the FCM stage, the input image was partitioned into homogeneous regions based on similarity for merging. In the CNN stage, interactive information was introduced as markers of the object and background areas, input by users to roughly indicate the position and main features of the object and background. Segmentation quality was measured using the probabilistic rand index, variation of information, global consistency error, and boundary displacement error. Experiments were conducted on a real dataset to evaluate the performance of the proposed model, and the experimental results revealed that it was successful.
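The FCM stage follows the standard fuzzy c-means update rules; a compact NumPy version is sketched below, with the fuzzifier m, cluster count, and iteration budget chosen for illustration.

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=50, eps=1e-9, seed=0):
    # X: (n_points, n_features); returns membership matrix U and cluster centers
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + eps
        U = 1.0 / d ** (2 / (m - 1))                    # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

U, centers = fcm(np.random.rand(200, 3))                # e.g., pixel colors
```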

Abstract


In order to eliminate the interference caused by the overlapping and extrusion of adjacent plug seedlings, accurately obtain information on tomato plug seedlings, and improve the transplanting effect of automatic tomato transplanters, this study proposes a seedling information acquisition method based on the Cycle-Consistent Adversarial Network (CycleGAN). CycleGAN is an unsupervised generative deep learning method that enables free conversion between source-domain plug seedling images and target-domain plug label images. More than 500 images of tomato plug seedlings at different growth stages were collected as an image set; the plug seedling images were labeled following certain principles to obtain a label image set, and the two image sets were used to train the CycleGAN model. Finally, the trained model was used to process images of tomato plug seedlings to obtain their label images. According to the labeling principle, the model's recognition accuracy is between 91% and 97%. The recognition results show that the CycleGAN model can judge whether seedlings affected by adjacent seedling holes are suitable for transplanting, so applying this method can greatly improve the intelligence level of automatic tomato transplanters.
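CycleGAN's key training signal is cycle consistency: a seedling image translated to a label image and back should reconstruct itself. A minimal PyTorch expression of that loss, with G_ab and G_ba standing for the two generators (assumed callables, not the paper's actual modules), is:

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_loss(G_ab, G_ba, real_a, real_b, lam=10.0):
    # forward cycle: seedling -> label -> seedling should match the original
    rec_a = G_ba(G_ab(real_a))
    # backward cycle: label -> seedling -> label
    rec_b = G_ab(G_ba(real_b))
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))
```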

Abstract


Artificial intelligence (AI) and natural language processing (NLP) are relentless technologies for healthcare that can support a strong and secure digital system with embedded Internet of Things (IoT) applications. This study tried to build an AI-NLP cluster system, in which rich content is extracted using parts of speech and then classified into an understandable dataset. The lack of unique systems with standardized processes and procedures for AI and NLP across different systems to support the e-healthcare sector is a big challenge for nations and the world at large. The aim is to train a cluster system that extracts rich content and fits it into a deep learning model frame, enabling interpretation of the dataset for healthcare needs through a fast and secure digital system. The study uses behavior-oriented driven and influential functions to determine the significance of AI and NLP for e-health. Based on a selective scoring method, a 1-to-5 grading scale called the Key Benefits score was developed. The behavior-oriented driven and influential functions allow an in-depth evaluation of e-health based on the selection of text content applied to the proposed study sample. Results show a significance score of 3.947 for NLP and AI on e-health. The study concluded that well-defined AI and NLP applications are prime areas for advancing positive results in electronic healthcare services.
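As a hypothetical sketch of the rich-content extraction step, part-of-speech tagging with NLTK can keep only content-bearing words; the tag set retained here is an assumption, not the paper's.

```python
import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def rich_content(text, keep=("NN", "NNS", "NNP", "VB", "VBD", "JJ")):
    tokens = nltk.word_tokenize(text)
    # keep nouns, verbs and adjectives as the content-bearing words
    return [w for w, tag in nltk.pos_tag(tokens) if tag in keep]

print(rich_content("The patient reported severe chest pain yesterday."))
```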

Abstract


Video compression gained relevance with the boom of the internet, mobile phones, variable-resolution acquisition devices, etc. Redundant information is explored in the initial stages of compression, that is, prediction. Inter prediction, i.e., prediction between frames, generates high computational complexity when implemented with traditional signal processing procedures. This paper proposes the design of a deep convolutional neural network model to perform inter prediction, eliminating the flaws of the traditional method. It outlines the modeling of the network, the mathematics behind each stage, and the evaluation of the proposed model on a sample dataset. The input is the video frame's 64x64 coding tree unit (CTU), which the model converts and stores as a 16-element vector using the CNN. The paper also gives an overview of a deep depth decision algorithm. The evaluation shows that the model achieves better compression with less computational complexity.
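The abstract specifies a 64x64 CTU input mapped to a 16-element vector; one plausible (invented, not the paper's) CNN realizing that mapping, with one output per 16x16 sub-block, is sketched below.

```python
import torch
import torch.nn as nn

class CTUNet(nn.Module):
    # maps a 64x64 luma CTU to 16 values, one per 16x16 sub-block
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 4, stride=4), nn.ReLU(),    # 64x64 -> 16x16
            nn.Conv2d(8, 16, 4, stride=4), nn.ReLU(),   # 16x16 -> 4x4
        )
        self.head = nn.Linear(16 * 4 * 4, 16)

    def forward(self, x):                    # x: (batch, 1, 64, 64)
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

vec = CTUNet()(torch.randn(2, 1, 64, 64))    # -> (2, 16)
```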

Abstract


The proliferation of digital-age security tools is often attributed to the rise of visual surveillance. Since an individual's gait is highly indicative of their identity, it is becoming an increasingly popular biometric modality for autonomous visual surveillance and monitoring. Gait recognition frameworks involve various steps, such as segmentation, feature extraction, feature learning and similarity measurement. These steps are mutually independent, with each part fixed, which results in suboptimal performance under challenging conditions. Recognition can be performed without the user's involvement: low-resolution video and straightforward instrumentation can verify an individual's identity, making impersonation a rarity. Using the benefits of the Generative Adversarial Network (GAN), this investigation tackles the problem of unevenly distributed unlabeled data with infrequently performed tasks. A multimodal generator is applied to estimate the data distribution under various circumstances using constrained observed gait data. In terms of sharing knowledge, the variety provided by the data generated by a multimodal generator is hard to beat, and it enhances the capability to distinguish gait activities with varying patterns due to environmental dynamics. The system is more stable than other gait-based recognition methods because it can process data that is not evenly dispersed across different environments, and its reliability is enhanced by the multimodal generator's capacity to produce a wide variety of outputs. Testing results show that the algorithm is superior to other gait-based recognition methods because it can adapt to changing environments.
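The multimodal generator is not specified beyond its role; a toy version, with one output head per gait condition sharing a common latent trunk (all sizes and the mode set invented), might look like this.

```python
import torch
import torch.nn as nn

class MultimodalGenerator(nn.Module):
    def __init__(self, z_dim=64, feat_dim=128, n_modes=3):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU())
        # one head per gait condition (e.g., surface, clothing, view angle)
        self.heads = nn.ModuleList(nn.Linear(128, feat_dim) for _ in range(n_modes))

    def forward(self, z, mode):
        return self.heads[mode](self.trunk(z))

fake = MultimodalGenerator()(torch.randn(8, 64), mode=1)   # synthetic gait features
```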

Abstract


This paper deals with the topical subject of coronavirus. The disease is causing severe damage to the entire population as well as to national economies. Machine learning algorithms such as Support Vector Machines and SIR models have been used to prepare valid predictions of this disease. Total cases, recovered cases, infected cases, and reported deaths are presented in the paper as pie charts, bar graphs, and line plots. Predictions are provided for the next 20 days, in the hope that cases remain as low as possible and the peak of the disease is reached as early as possible. It should be made clear that these predictions are not clinically or globally accepted to be true, and they should not be used anywhere on a medical basis. The paper gives a clear approach and a brief idea of how machine learning can be used in such pandemic situations.
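For reference, the SIR model mentioned propagates susceptible, infected and recovered counts forward in time; a discrete-step version is shown below, with beta and gamma as illustrative parameters, not the paper's fitted values.

```python
import numpy as np

def sir(S0, I0, R0, beta, gamma, days):
    # dS = -beta*S*I/N, dI = beta*S*I/N - gamma*I, dR = gamma*I (unit steps)
    N = S0 + I0 + R0
    S, I, R = [S0], [I0], [R0]
    for _ in range(days):
        new_inf = beta * S[-1] * I[-1] / N
        new_rec = gamma * I[-1]
        S.append(S[-1] - new_inf)
        I.append(I[-1] + new_inf - new_rec)
        R.append(R[-1] + new_rec)
    return np.array(S), np.array(I), np.array(R)

S, I, R = sir(S0=999_000, I0=1_000, R0=0, beta=0.3, gamma=0.1, days=20)
```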
