Volume 3, Issue 1, 2025
Open Access
Research article
Benzene Pollution Forecasting by Recurrent Neural Networks Tuned with Adapted Elk Herd Optimizer
Dejan Bulaja,
Tamara Zivkovic,
Milos Pavkovic,
Vico Zeljkovic,
Nikola Jovic,
Branislav Radomirovic,
Miodrag Zivkovic,
Nebojsa Bacanin
Available online: 03-30-2025

Abstract

Benzene is a toxic airborne contaminant and a recognized cancer-causing agent that presents substantial health hazards even at minimal concentrations. The precise prediction of benzene concentrations is crucial for reducing exposure, guiding public health strategies, and ensuring adherence to environmental regulations. Because of benzene's high volatility and prevalence in metropolitan and industrial areas, its atmospheric levels can vary swiftly, influenced by factors such as vehicular exhaust, weather patterns, and manufacturing processes. Predictive models, especially those driven by machine learning algorithms and real-time data streams, serve as effective instruments for estimating benzene concentrations with notable precision. This research emphasizes the use of recurrent neural networks (RNNs) for this objective, acknowledging that careful selection and calibration of model hyperparameters are critical for optimal performance. Accordingly, this paper introduces a customized version of the elk herd optimization algorithm, employed to fine-tune RNNs and improve their overall efficiency. The proposed system was tested using real-world air quality datasets and demonstrated promising results for predicting benzene levels in the atmosphere.
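The hyperparameter tuning described above can be sketched as a population-based search loop. The following is a minimal, hypothetical illustration of elk-herd-style optimization: the search space, bounds, and the `evaluate` objective are all stand-ins (in the actual study, `evaluate` would train an RNN and return its validation error), and the update rule is a simplified continuous relaxation, not the authors' adapted algorithm.

```python
import random

# Hypothetical sketch: population-based tuning of RNN hyperparameters.
# Elk-herd-style optimizers rank the population, keep dominant "bulls",
# and move followers toward a chosen bull each generation.

SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-1),
    "hidden_units": (8, 128),   # treated as continuous for this sketch
    "dropout": (0.0, 0.5),
}

def evaluate(params):
    # Placeholder objective; in practice, train the RNN on the air
    # quality data and return its validation loss.
    return ((params["learning_rate"] - 0.01) ** 2
            + (params["hidden_units"] - 64) ** 2 / 1e4
            + (params["dropout"] - 0.2) ** 2)

def random_solution():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}

def tune(pop_size=20, bulls=4, generations=30, seed=0):
    random.seed(seed)
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate)          # best solutions first
        leaders = pop[:bulls]           # dominant bulls survive unchanged
        new_pop = leaders[:]
        for follower in pop[bulls:]:
            bull = random.choice(leaders)
            child = {}
            for k, (lo, hi) in SEARCH_SPACE.items():
                # Step a random fraction of the way toward the bull,
                # clipped back into the search bounds.
                step = random.uniform(0.0, 1.0) * (bull[k] - follower[k])
                child[k] = min(max(follower[k] + step, lo), hi)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=evaluate)

best = tune()
```

Because the leaders are carried over unchanged, the best loss in the population never worsens between generations; the followers supply the exploration.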

Abstract

Transit time in the transportation and logistics sector is typically governed either by contractual agreements between the customer and the service provider or by relevant regulatory frameworks, including national laws and directives. In the context of postal services, where shipment volumes frequently reach millions of items per day, individual contractual definitions of transit time are impractical. Consequently, transit time expectations are commonly established through regulatory standards. These standards, as observed in numerous European Union (EU) countries and Serbia—the focus of the present case study—define expected delivery timelines at an aggregate level, without assigning a specific transit time to individual postal items. Under this conventional model, senders are often unaware of the exact delivery schedule but are provided with general delivery expectations. An alternative approach was introduced and evaluated in this study, in which the transit time is explicitly selected by the sender for each shipment, offering predefined options such as D+1 (next-day delivery) and D+3 (three-day delivery). The impact of this individualized approach on operational efficiency and process organization within sorting facilities was examined through its implementation in a national postal company in Serbia. A comparative analysis between the traditional aggregate-based model and the proposed individualized model was conducted to assess variations in process management, throughput efficiency, and compliance with quality standards. The findings suggest that the new approach enhances the predictability of sorting operations, improves resource allocation, and facilitates more flexible workflow planning, thereby contributing to higher overall service quality and customer satisfaction. Furthermore, it was observed that aligning operational processes with explicitly defined transit time commitments can lead to more efficient industrial process management in logistics and postal centers.
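The sender-selected transit times described above lend themselves to a simple dispatch-planning rule: group items by their committed due date and process the earliest commitments first. The sketch below is purely illustrative; the item IDs, posting date, and the `plan_dispatch` helper are hypothetical and do not reflect the postal company's actual sorting system.

```python
# Hypothetical sketch: sorting items into priority streams based on the
# D+n transit time the sender selected at posting.
from collections import defaultdict
from datetime import date, timedelta

def plan_dispatch(items, posted=date(2025, 3, 30)):
    """items: list of (item_id, service) with service in {'D+1', 'D+3'}."""
    streams = defaultdict(list)
    for item_id, service in items:
        days = int(service.split("+")[1])     # 'D+1' -> 1 day
        due = posted + timedelta(days=days)
        streams[due].append(item_id)
    # Earliest due date first, so D+1 items lead the sorting plan.
    return sorted(streams.items())

plan = plan_dispatch([("A", "D+3"), ("B", "D+1"), ("C", "D+1")])
```

With an explicit due date per item, throughput planning reduces to scheduling against known deadlines rather than aggregate quality targets.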

Abstract

The food industry faces a growing challenge of improving operational efficiency and reducing waste to maintain competitiveness and meet sustainability goals. This study explores the application of the Define–Measure–Analyze–Improve–Control (DMAIC) methodology, a core component of the Lean Six Sigma (LSS) framework, as a structured, data-driven approach to identifying and eliminating raw material waste in the packaging phase of pasta production. The primary objective was to investigate the root causes of waste and implement targeted improvements to enhance industrial process performance in pasta packaging. Real production data from a pasta manufacturing facility were collected and analyzed, focusing on the packaging stage where significant losses had been observed. The DMAIC cycle guided the project through problem definition, data measurement, root cause analysis, process improvement, and long-term control strategies. The analysis identified key operational issues, including overfilling, improper equipment settings, and inadequate material handling. Equipment reconfiguration, staff training, and standardization of procedures were implemented, resulting in measurable reductions in raw material losses and improved packaging accuracy. An economic evaluation demonstrated that these improvements were not only effective from an operational standpoint but also generated a positive return on investment. The findings confirm that the DMAIC methodology provides a scalable and repeatable model for reducing waste and improving efficiency in food production environments. This research emphasizes the importance of structured problem-solving approaches in achieving ecologically and socially sustainable, as well as economically viable, process improvements in the food industry.
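The Measure and Analyze steps of such a DMAIC project can be sketched numerically: given per-pack fill weights, overfill beyond the declared target translates directly into raw-material waste. The figures below are invented for illustration; the 500 g target and the sample weights are hypothetical, not data from the study.

```python
# Hypothetical Measure/Analyze sketch for the DMAIC cycle: quantify
# overfill waste from packaging fill weights (grams) against a target.
TARGET_G = 500.0

# Illustrative fill-weight sample; in practice these come from scale logs.
fill_weights = [503.2, 507.8, 501.1, 509.4, 500.6, 505.3, 502.9, 508.1]

overfill = [max(w - TARGET_G, 0.0) for w in fill_weights]
mean_overfill = sum(overfill) / len(overfill)
waste_pct = 100.0 * sum(overfill) / (TARGET_G * len(fill_weights))

print(f"mean overfill: {mean_overfill:.2f} g per pack")
print(f"raw-material waste: {waste_pct:.2f}% of target throughput")
```

Quantifying waste this way gives the Improve step a concrete baseline, and the same calculation re-run after equipment reconfiguration serves as the Control-phase check.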
Open Access
Research article
Machine Learning-Driven IDPS in IIoT Smart Metering Networks
Qutaiba I. Ali,
Sahar L. Qaddoori
Available online: 03-30-2025

Abstract


The proliferation of the Industrial Internet of Things (IIoT) has transformed energy distribution infrastructures through the deployment of smart metering networks, enhancing operational efficiency while concurrently expanding the attack surface for sophisticated cyber threats. In response, a wide range of Machine Learning (ML)–based Intrusion Detection and Prevention Systems (IDPS) have been proposed to safeguard these networks. In this study, a systematic review and comparative analysis were conducted across seven representative implementations targeting the Internet of Things (IoT), IIoT, fog computing, and smart metering contexts. Detection accuracies reported in these studies range from 90.00% to 99.95%, with models spanning clustering algorithms, Support Vector Machine (SVM), and Deep Neural Network (DNN) architectures. It was observed that hybrid Deep Learning (DL) models, particularly those combining the Convolutional Neural Network and the Long Short-Term Memory (CNN-LSTM), achieved the highest detection accuracy (99.95%), whereas unsupervised approaches such as K-means clustering yielded comparatively lower performance (93.33%). Datasets utilized included NSL-KDD, CICIDS2017, and proprietary smart metering traces. Despite notable classification accuracy, critical evaluation metrics—such as False Positive Rate (FPR), inference latency, and computational resource consumption—were frequently underreported or omitted, thereby impeding real-world applicability, especially in edge computing environments with limited resources. To address this deficiency, a unified benchmarking framework was proposed, incorporating precision-recall analysis, latency profiling, and memory usage evaluation. Furthermore, strategic directions for future research were outlined, including the integration of federated learning to preserve data privacy and the development of lightweight hybrid models tailored for edge deployment. This review provides a data-driven foundation for the design of scalable, resource-efficient, and privacy-preserving IDPS solutions within next-generation IIoT smart metering systems.
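The core of the unified benchmarking framework described above—precision-recall analysis together with latency profiling—can be sketched from raw confusion counts. The confusion counts and the `ids_metrics`/`profile_latency` helpers below are hypothetical illustrations, not the paper's framework or results.

```python
# Hypothetical sketch of unified IDPS benchmarking: precision, recall,
# and False Positive Rate (FPR) from confusion counts, plus a simple
# per-sample latency profile for edge-deployment comparisons.
import time

def ids_metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)     # fraction of alarms that are real attacks
    recall = tp / (tp + fn)        # detection rate over actual attacks
    fpr = fp / (fp + tn)           # false alarms among benign traffic
    return precision, recall, fpr

def profile_latency(detect, samples):
    # Average wall-clock inference time per sample, in seconds.
    start = time.perf_counter()
    for s in samples:
        detect(s)
    return (time.perf_counter() - start) / len(samples)

# Illustrative counts only: 995 attacks caught, 5 false alarms,
# 8990 benign flows passed, 10 attacks missed.
precision, recall, fpr = ids_metrics(tp=995, fp=5, tn=8990, fn=10)
```

Reporting accuracy alone hides the asymmetry these metrics expose: a detector can score above 99% accuracy while its FPR still floods an operations team with false alarms on high-volume metering traffic.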
