Journal of Industrial Intelligence (JII)
ISSN (print): 2958-2687
ISSN (online): 2958-2695
Current Issue: 2025, Vol. 3

Journal of Industrial Intelligence (JII) is a premier platform in the domain of intelligent technologies and their industrial applications, distinguished in the scholarly landscape by its peer-reviewed, open-access publishing model. JII is committed to furthering academic inquiry into the integration of intelligent technologies in industrial settings, underscoring their pivotal role in transforming contemporary technological and practical paradigms. The journal sets itself apart by not merely focusing on the theoretical dimensions of industrial intelligence but also placing considerable emphasis on its practical applications and real-world impacts, highlighting the tangible effects of intelligent technologies in industry. Published quarterly by Acadlore, the journal typically releases its four issues in March, June, September, and December each year.

  • Professional Service - Every article submitted undergoes an intensive yet swift peer review and editing process, adhering to the highest publication standards.

  • Prompt Publication - Thanks to our expertise in orchestrating the peer-review, editing, and production processes, all accepted articles are published rapidly.

  • Open Access - Every published article is instantly accessible to a global readership, allowing for uninhibited sharing across various platforms at any time.

Editors-in-Chief (2)
Vladimir Simić
Faculty of Transport and Traffic Engineering, University of Belgrade, Serbia
vsima@sf.bg.ac.rs | website
Research interests: Operations Research; Decision Support Systems; Transportation Engineering; Multi-Criteria Decision-Making; Waste Management
Liang Liu
School of Economics and Management, Tiangong University, China
liuliang@tiangong.edu.cn | website
Research interests: Operations Management; Industrial and Systems Engineering; Artificial Intelligence and Digital Management; Logistics and Supply Chain Management; Digital Twin and Lean Smart Manufacturing; Modeling and Simulation of Complex Systems

Aims & Scope

Aims

Journal of Industrial Intelligence (JII) (ISSN 2958-2687) serves as an innovative forum for disseminating cutting-edge research in intelligent technologies and their practical applications in the industrial sector. It aims to bridge the gap between academic research and industrial practice, providing a platform for researchers, industrial professionals, and policymakers to present both foundational and applied research findings. JII welcomes a variety of submissions including reviews, regular research papers, short communications, and special issues on specific topics, particularly emphasizing works that combine technical rigor with real-world industrial applicability.

The journal’s objective is to foster detailed and comprehensive publication of research findings, with no constraints on paper length. This allows for in-depth presentation of theories and experimental results, facilitating reproducibility and comprehensive understanding. JII also offers distinctive features including:

  • Every publication benefits from prominent indexing, ensuring widespread recognition.

  • A distinguished editorial team upholds unparalleled quality and broad appeal.

  • Seamless online discoverability of each article maximizes its global reach.

  • An author-centric and transparent publication process enhances the submission experience.

Scope

JII covers an extensive range of topics, reflecting the diverse aspects of industrial intelligence:

  • Industry 4.0 Technologies: Exploration of the fourth industrial revolution technologies and their transformative impact on industries.

  • Multi-agent Systems: Studies on collaborative sensing and control using multi-agent systems in industrial contexts.

  • Data Analytics in Industry: Research on feature extraction, knowledge acquisition, industrial data modeling, and visualization.

  • Intelligent Sensing and Perception: Innovations in industrial perception, cognition, and decision-making processes.

  • Smart Factories and IoT: Examination of smart factory concepts and the integration of the Internet of Things in industrial operations.

  • Quality Surveillance and Fault Diagnosis: Techniques for product quality monitoring and fault diagnosis in manufacturing.

  • Remote Monitoring and Integrated Systems: Studies on internet-based remote monitoring and the integration of sensors and machines.

  • Predictive Maintenance and Abnormal Situation Monitoring: Research on predictive maintenance strategies and monitoring of abnormal situations in industrial settings.

  • Control Systems: Advanced research in cooperative, autonomous, and optimization control systems.

  • Intelligent Decision Systems: Development and application of intelligent decision-making systems in industrial contexts.

  • Virtual Manufacturing and Smart Grids: Innovations in virtual manufacturing, smart grids, and their industrial applications.

  • Autonomous Vehicles and UAVs: Research on unmanned vehicles and unmanned aerial vehicles (UAVs) in industrial applications.

  • Reinforcement Learning in Real-Time Optimization: Application of reinforcement learning for real-time optimization in industrial processes.

  • Weak AI Development: Exploration of weak AI development and its implications in industrial intelligence.

Recent Articles

Abstract


Equipment failure in paper mills represents a critical barrier to operational efficiency and the adoption of Industry 4.0 principles. To address this, a systematic literature review was conducted to identify the multifactorial determinants of such failures. A novel hybrid methodology was proposed, integrating the Functional Analysis Systems Technique (FAST), enhanced by Lean 5S (Sort “Seiri”, Set in Order “Seiton”, Shine “Seiso”, Standardize “Seiketsu”, Sustain “Shitsuke”) principles, to structure the qualitative data collection. The analysis was performed using a Pugh matrix, followed by a Principal Component Analysis (PCA) to extract knowledge systematically. This approach facilitated the development of a conceptual model for downtime causation. The PCA results indicate that two principal components collectively explain 58.5% of the observed variance in failure data. The first component was strongly correlated with maintenance practices and operational errors, while the second was associated with intrinsic equipment characteristics and their operating conditions. This data-driven modeling elucidates underlying correlations between disparate factors, providing a robust foundation for prioritizing targeted maintenance optimization actions. This research contributes to the field of industrial intelligence by demonstrating an original methodology for transforming qualitative systematic review data into a quantifiable analytical framework. The application of PCA to this corpus enables the identification of multidimensional interactions that are frequently overlooked in conventional analyses, thereby enriching root-cause failure analysis and informing strategic decision making for predictive maintenance. The identified factors underscore the imperative of a balanced integration between technical data and human factors for the successful digital transformation of production systems.
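
A minimal sketch of the PCA step described above, assuming the review findings have already been coded into a numeric factor matrix (rows are reviewed failure cases, columns are scored determinants); the data and dimensions here are placeholders, not the study's corpus:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical factor matrix: rows are reviewed failure cases, columns are
# scored determinants (e.g., maintenance practice, operator error,
# equipment age, operating conditions).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))  # placeholder for the coded review data

# Standardize before PCA so each factor contributes on the same scale.
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)

# Variance explained by the two retained components
# (the paper reports 58.5% for its corpus).
print(pca.explained_variance_ratio_.sum())

# Loadings reveal which factors drive each component.
print(pca.components_)
```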

Abstract


Material extrusion additive manufacturing (MEX-AM) has emerged as a transformative technology with the potential to redefine industrial production; however, persistent challenges remain regarding variability in part quality, the absence of robust in-process defect detection, and limited capacity for process optimization. To address these limitations, an integrated multi-sensor and machine learning (ML) framework was developed to enhance real-time monitoring and defect detection during MEX-AM. Data were acquired from thermocouples, accelerometers, and high-resolution cameras, and subsequently processed through a multi-sensor data fusion pipeline to ensure robustness against noise and variability. A Multi-Criteria Decision Analysis (MCDA) framework was employed to evaluate candidate ML algorithms based on accuracy, computational cost, and interpretability. Random Forest (RF) and Artificial Neural Network (ANN) models were identified as the most suitable approaches for MEX-AM applications. Validation experiments demonstrated a 92% success rate in corrective interventions, with a reduction of defective components by 38% compared with conventional monitoring methods. The integration of sensor fusion with advanced learning models provided improved predictive capability, enhanced process stability, and significant progress toward intelligent, self-optimizing manufacturing systems. The proposed methodology advances statistical quality control and reduces material waste while aligning with the objectives of Industry 4.0 and smart manufacturing. By demonstrating the efficacy of multi-sensor fusion and ML in real-world AM environments, this study highlights a pathway toward scalable, autonomous, and sustainable industrial production.
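
The fusion-then-classify pattern can be sketched as follows: per-sensor feature vectors are concatenated (feature-level fusion) and fed to a Random Forest. The sensor features, labels, and dimensions are illustrative stand-ins, not the study's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 500

# Placeholder per-layer features from each sensor stream:
temp_feats = rng.normal(210, 5, size=(n, 2))  # thermocouple mean/std (°C)
vib_feats = rng.normal(0, 1, size=(n, 3))     # accelerometer band energies
img_feats = rng.normal(0, 1, size=(n, 4))     # camera texture statistics

# Feature-level fusion: concatenate the per-sensor feature vectors.
X = np.hstack([temp_feats, vib_feats, img_feats])
y = rng.integers(0, 2, size=n)                # 0 = sound layer, 1 = defect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```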

Open Access
Research article
Machine Learning-Driven IDPS in IIoT Smart Metering Networks
Qutaiba I. Ali, Sahar L. Qaddoori
Available online: 03-30-2025

Abstract


The proliferation of the Industrial Internet of Things (IIoT) has transformed energy distribution infrastructures through the deployment of smart metering networks, enhancing operational efficiency while concurrently expanding the attack surface for sophisticated cyber threats. In response, a wide range of Machine Learning (ML)–based Intrusion Detection and Prevention Systems (IDPS) have been proposed to safeguard these networks. In this study, a systematic review and comparative analysis were conducted across seven representative implementations targeting the Internet of Things (IoT), IIoT, fog computing, and smart metering contexts. Detection accuracies reported in these studies range from 90.00% to 99.95%, with models spanning clustering algorithms, Support Vector Machine (SVM), and Deep Neural Network (DNN) architectures. It was observed that hybrid Deep Learning (DL) models, particularly those combining the Convolutional Neural Network and the Long Short-Term Memory (CNN-LSTM), achieved the highest detection accuracy (99.95%), whereas unsupervised approaches such as K-means clustering yielded comparatively lower performance (93.33%). Datasets utilized included NSL-KDD, CICIDS2017, and proprietary smart metering traces. Despite notable classification accuracy, critical evaluation metrics—such as False Positive Rate (FPR), inference latency, and computational resource consumption—were frequently underreported or omitted, thereby impeding real-world applicability, especially in edge computing environments with limited resources. To address this deficiency, a unified benchmarking framework was proposed, incorporating precision-recall analysis, latency profiling, and memory usage evaluation. Furthermore, strategic directions for future research were outlined, including the integration of federated learning to preserve data privacy and the development of lightweight hybrid models tailored for edge deployment. This review provides a data-driven foundation for the design of scalable, resource-efficient, and privacy-preserving IDPS solutions within next-generation IIoT smart metering systems.
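
A minimal sketch of the proposed benchmarking idea, measuring precision, recall, per-record inference latency, and peak memory around a stand-in classifier; the random data and the SVC model are placeholders for NSL-KDD-style traffic records and a reviewed IDPS model:

```python
import time
import tracemalloc
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Placeholder traffic features/labels standing in for NSL-KDD-style records.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 20))
y = rng.integers(0, 2, size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = SVC().fit(X_tr, y_tr)

# Latency profiling: average per-record inference time.
t0 = time.perf_counter()
pred = model.predict(X_te)
latency_ms = (time.perf_counter() - t0) / len(X_te) * 1e3

# Memory profiling around inference.
tracemalloc.start()
model.predict(X_te[:100])
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"precision={precision_score(y_te, pred):.3f} "
      f"recall={recall_score(y_te, pred):.3f} "
      f"latency={latency_ms:.3f} ms/record peak_mem={peak / 1024:.1f} KiB")
```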

Abstract

The food industry faces a growing challenge in improving operational efficiency and reducing waste to maintain competitiveness and meet sustainability goals. This study explores the application of the Define–Measure–Analyze–Improve–Control (DMAIC) methodology, a core component of the Lean Six Sigma (LSS) framework, as a structured, data-driven approach to identifying and eliminating raw material waste in the packaging phase of pasta production. The primary objective was to investigate the root causes of waste and implement targeted improvements to enhance industrial process performance in pasta packaging. Real production data from a pasta manufacturing facility were collected and analyzed, focusing on the packaging stage where significant losses had been observed. The DMAIC cycle guided the project through problem definition, data measurement, root cause analysis, process improvement, and long-term control strategies. The analysis identified key operational issues, including overfilling, incorrect equipment settings, and inadequate material handling. Equipment reconfiguration, staff training, and standardization of procedures were implemented, resulting in measurable reductions in raw material losses and improved packaging accuracy. An economic evaluation demonstrated that these improvements were not only effective from an operational standpoint but also generated a positive return on investment. The findings confirm that the DMAIC methodology provides a scalable and repeatable model for reducing waste and improving efficiency in food production environments. This research emphasizes the importance of structured problem-solving approaches in achieving ecologically and socially sustainable, as well as economically viable, process improvements in the food industry.
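
As a hypothetical illustration of how the Measure phase and the economic evaluation connect, overfill loss can be quantified as below; every number here is invented for the example and is not taken from the study:

```python
# Hypothetical Measure-phase numbers for pasta packaging overfill;
# all values are illustrative, not from the study.
target_g = 500.0          # nominal pack weight (g)
mean_fill_before = 507.2  # measured mean fill before improvement (g)
mean_fill_after = 501.1   # after equipment reconfiguration (g)
packs_per_year = 12_000_000
cost_per_kg = 0.9         # raw material cost (EUR/kg)

def annual_overfill_cost(mean_fill):
    """Raw material given away per year through overfilling, in EUR."""
    overfill_kg = (mean_fill - target_g) / 1000 * packs_per_year
    return overfill_kg * cost_per_kg

saving = annual_overfill_cost(mean_fill_before) - annual_overfill_cost(mean_fill_after)
print(f"estimated annual saving: EUR {saving:,.0f}")
```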

Abstract

Transit time in the transportation and logistics sector is typically governed either by contractual agreements between the customer and the service provider or by relevant regulatory frameworks, including national laws and directives. In the context of postal services, where shipment volumes frequently reach millions of items per day, individual contractual definitions of transit time are impractical. Consequently, transit time expectations are commonly established through regulatory standards. These standards, as observed in numerous European Union (EU) countries and Serbia—the focus of the present case study—define expected delivery timelines at an aggregate level, without assigning specific transit time to individual postal items. Under this conventional model, senders are often unaware of the exact delivery schedule but are provided with general delivery expectations. An alternative approach was introduced and evaluated in this study, in which the transit time is explicitly selected by the sender for each shipment, offering predefined options such as D+1 (next-day delivery) and D+3 (three-day delivery). The impact of this individualized approach on operational efficiency and process organization within sorting facilities was examined through its implementation in a national postal company in Serbia. A comparative analysis between the traditional aggregate-based model and the proposed individualized model was conducted to assess variations in process management, throughput efficiency, and compliance with quality standards. The findings suggest that the new approach enhances the predictability of sorting operations, improves resource allocation, and facilitates more flexible workflow planning, thereby contributing to higher overall service quality and customer satisfaction. Furthermore, it was observed that aligning operational processes with explicitly defined transit time commitments can lead to more efficient industrial process management in logistics and postal centers.
Open Access
Research article
Benzene Pollution Forecasting by Recurrent Neural Networks Tuned with Adapted Elk Herd Optimizer
Dejan Bulaja, Tamara Zivkovic, Milos Pavkovic, Vico Zeljkovic, Nikola Jovic, Branislav Radomirovic, Miodrag Zivkovic, Nebojsa Bacanin
Available online: 03-30-2025

Abstract

Benzene is a toxic airborne contaminant and a recognized cancer-causing agent that presents substantial health hazards even at minimal concentrations. The precise prediction of benzene concentrations is crucial for reducing exposure, guiding public health strategies, and ensuring adherence to environmental regulations. Because of benzene's high volatility and prevalence in metropolitan and industrial areas, its atmospheric levels can vary swiftly, influenced by factors such as vehicular exhaust, weather patterns, and manufacturing processes. Predictive models, especially those driven by machine learning algorithms and real-time data streams, serve as effective instruments for estimating benzene concentrations with notable precision. This research emphasizes the use of recurrent neural networks (RNNs) for this objective, acknowledging that careful selection and calibration of model hyperparameters are critical for optimal performance. Accordingly, this paper introduces a customized version of the elk herd optimization algorithm, employed to fine-tune RNNs and improve their overall efficiency. The proposed system was tested using real-world air quality datasets and demonstrated promising results for predicting benzene levels in the atmosphere.
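
The adapted elk herd optimizer itself is not reproduced here; the sketch below shows the generic evaluate-select-perturb loop that such metaheuristic tuners follow, with a toy objective standing in for training the forecasting RNN and measuring its validation error:

```python
import numpy as np

rng = np.random.default_rng(3)

def validation_error(units, lr):
    """Stand-in objective: in the study this would train the benzene
    forecasting RNN and return its validation error."""
    return (np.log2(units) - 6) ** 2 + (np.log10(lr) + 3) ** 2 + rng.normal(0, 0.05)

# Initial population of candidate hyperparameter pairs (units, learning rate).
pop = [(int(2 ** rng.uniform(4, 9)), 10 ** rng.uniform(-5, -1)) for _ in range(10)]

for generation in range(20):
    pop.sort(key=lambda p: validation_error(*p))
    elites = pop[:3]  # strongest candidates survive; the rest are replaced
    pop = elites[:]
    while len(pop) < 10:
        u, lr = elites[rng.integers(len(elites))]
        # Offspring perturb an elite solution (the herd reproduction step).
        pop.append((max(8, int(u * 2 ** rng.normal(0, 0.3))),
                    float(np.clip(lr * 10 ** rng.normal(0, 0.3), 1e-6, 1e-1))))

best = min(pop, key=lambda p: validation_error(*p))
print("best (units, learning rate):", best)
```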

Abstract


Accurate smoke detection in complex industrial environments, such as chemical plants, remains a significant challenge due to the inherently low contrast, transparency, and weak texture features of smoke, which often exhibits blurred boundaries and diverse spatial scales. To address these limitations, YOLOv8n-AM, an enhanced lightweight detection framework belonging to the YOLO (You Only Look Once) series, was developed by integrating advanced architectural components into the baseline YOLOv8n model. Specifically, the conventional Spatial Pyramid Pooling-Fast (SPPF) module was replaced with an Attention-based Intra-scale Feature Interaction (AIFI) Convolution Synergistic Feature Processing Module (SFPM), i.e., AIFC-SFPM, enabling more effective semantic feature representation and an improvement in detection accuracy. In parallel, the original convolutional module was optimized using a Multi-Scale Downsampling (MSDown) module, which reduces model redundancy and computational overhead, increasing the detection speed. Experimental evaluations demonstrate that the YOLOv8n-AM model achieves a 1.7% improvement in mean Average Precision (mAP), accompanied by a 9.1% reduction in Giga Floating-point Operations Per Second (GFLOPs) and a 15.4% decrease in parameter count when compared to the original YOLOv8n framework. These improvements collectively underscore the model’s suitability for real-time deployment in resource-constrained industrial settings where rapid and reliable smoke detection is critical. The proposed architecture thus provides a computationally efficient and high-precision solution for safety-critical visual monitoring applications.

Open Access
Research article
Click Fraud Detection with Recurrent Neural Networks Optimized by Adapted Crayfish Optimization Algorithm
Lepa Babic, Vico Zeljkovic, Luka Jovanovic, Stefan Ivanovic, Aleksandar Djordjevic, Tamara Zivkovic, Miodrag Zivkovic, Nebojsa Bacanin
Available online: 12-30-2024

Abstract


Click fraud is a deceptive, malicious strategy that relies on repetitive mimicking of human clicking on online advertisements, without any actual intention to complete a purchase. This fraud can result in significant financial losses for both advertising companies and marketers, while simultaneously damaging their public image. Nevertheless, detection of these illegitimate clicks is very challenging, as they closely resemble authentic human engagement. This study examines the utilization of artificial intelligence approaches to detect deceptive clicks by identifying subtle correlations in click timing, together with geographical, network, and linked application sources, as indicators that separate legitimate from malicious activity. This study highlights the application of recurrent neural networks (RNNs) for this task, keeping in mind that the selection and tuning of the model's hyperparameters play a vital role in performance. An adapted implementation of the crayfish optimization algorithm (COA) was consequently proposed in this paper and used to optimize RNN models to enhance their general performance. The developed framework was evaluated on actual operational datasets and yielded encouraging outcomes.
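
One way the timing indicators mentioned above can be turned into model inputs is sketched below: per-source inter-click intervals, whose short and near-constant gaps are typical of bots. The log fields and values are hypothetical:

```python
import pandas as pd

# Hypothetical click log: source IP, app identifier, click timestamp.
clicks = pd.DataFrame({
    "ip": ["10.0.0.1"] * 5 + ["10.0.0.2"] * 3,
    "app": ["ad_app"] * 8,
    "ts": pd.to_datetime([
        "2024-01-01 10:00:00", "2024-01-01 10:00:01",
        "2024-01-01 10:00:02", "2024-01-01 10:00:03",
        "2024-01-01 10:00:04",
        "2024-01-01 09:00:00", "2024-01-01 12:30:00",
        "2024-01-01 18:45:00",
    ]),
})

# Inter-click intervals per source; bots tend to show short, regular gaps.
clicks = clicks.sort_values(["ip", "ts"])
gaps = clicks.groupby("ip")["ts"].diff().dt.total_seconds()
features = clicks.assign(gap=gaps).groupby("ip")["gap"].agg(["mean", "std"])
print(features)  # per-source statistics like these would feed the RNN input
```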

Abstract

The detection of image defects under low-illumination conditions presents significant challenges due to unstable and uneven lighting, which introduces substantial noise and shadow artifacts. These artifacts can obscure actual defect points while simultaneously increasing the likelihood of false positives, thereby complicating accurate defect identification. To address these limitations, a novel defect detection method based on machine vision was proposed in this study. Low-illumination images were captured and decomposed using a noise assessment-based framework to enhance defect visibility. A spatial transformation technique was then employed to distinguish between target regions and background components based on localized variations. To maximize the contrast between these components, the Hue-Saturation-Intensity (HSI) color space was leveraged, enabling precise segmentation of low-illumination images. Subsequently, an energy local binary pattern (LBP) operator was applied to the segmented images for defect detection, ensuring improved robustness against noise and illumination inconsistencies. Experimental results demonstrate that the proposed method significantly enhances detection accuracy, as confirmed by both subjective visual assessments and objective performance evaluations. The findings indicate that the proposed approach effectively mitigates the adverse effects of low illumination, thereby improving the accuracy and reliability of defect detection in challenging imaging environments.
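
A minimal sketch of the LBP texture step using scikit-image, with the HSI intensity channel approximated as the RGB channel mean; the paper's energy-weighted LBP variant and its segmentation stage are not reproduced here:

```python
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(4)
rgb = rng.random((64, 64, 3))  # placeholder low-illumination image

# Intensity channel of the HSI model: I = (R + G + B) / 3.
intensity = rgb.mean(axis=2)

# Uniform LBP over the intensity channel; the paper applies an
# energy-weighted LBP variant to segmented regions instead.
P, R = 8, 1
lbp = local_binary_pattern(intensity, P, R, method="uniform")

# Histogram of LBP codes as a texture descriptor for defect classification.
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
print(hist)
```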

Abstract

A wide range of safety hazards exist in underground coal mines, characterized by unpredictability, randomness, and coupling effects. The increasing structural complexity and diversity of underground equipment present new challenges for fault state monitoring and diagnosis. To address the unique characteristics of underground equipment fault diagnosis, a characterization model of vibration hazards was proposed, integrating a time-frequency mask-based non-stationary filtering technique and sparse representation. Experimental analysis demonstrates that the time-frequency mask algorithm effectively filters out sharp non-stationary noise, restoring the original stationary healthy signal. Compared to Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Principal Component Analysis (PCA), the sparse representation algorithm exhibits superior performance in characterizing vibration hazards, achieving the highest accuracy.
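
The time-frequency masking idea can be sketched with a standard STFT: bins whose magnitude spikes far above the per-frequency median are treated as non-stationary noise and suppressed before inversion. This is a simplified stand-in for the paper's mask algorithm, with a synthetic vibration signal:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 1000
t = np.arange(0, 2, 1 / fs)
# Stationary "healthy" vibration plus sharp non-stationary bursts.
signal = np.sin(2 * np.pi * 50 * t)
signal[300:310] += 5.0   # impulsive disturbance
signal[1200:1210] += 5.0

f, tt, Z = stft(signal, fs=fs, nperseg=128)

# Time-frequency mask: suppress bins far above the per-frequency median,
# which this sketch treats as non-stationary noise.
mag = np.abs(Z)
threshold = 4 * np.median(mag, axis=1, keepdims=True)
mask = mag < threshold
Z_filtered = Z * mask

_, recovered = istft(Z_filtered, fs=fs, nperseg=128)
print(recovered.shape)  # approximately the stationary component
```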

Abstract

Power-domain non-orthogonal multiple access (NOMA) is one of the key technologies in 5G communications, enabling efficient multi-user transmission over the same time-frequency resources through power multiplexing. In this study, an improved max-min relay selection strategy was proposed for NOMA cooperative communication systems to address the issue of insufficient channel fairness in conventional strategies. The proposed strategy optimizes the relay selection process with the objective of ensuring channel fairness. Theoretical derivations and simulation analyses were conducted to comprehensively evaluate the proposed strategy from the perspectives of user throughput and system outage probability. The results demonstrate that, compared to the conventional max-min strategy and other commonly used relay selection methods, the proposed strategy significantly reduces the system outage probability while enhancing user throughput, thereby verifying its superiority in improving system reliability and stability.
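
For reference, the conventional max-min rule picks the relay whose weaker hop is strongest; the paper's improved strategy modifies this criterion to account for channel fairness. A toy sketch with Rayleigh-fading channel gains:

```python
import numpy as np

rng = np.random.default_rng(5)
n_relays = 6

# Rayleigh-fading channel gains |h|^2 for source->relay and relay->destination.
g_sr = rng.exponential(1.0, n_relays)
g_rd = rng.exponential(1.0, n_relays)

# Conventional max-min selection: each relay is rated by its bottleneck hop,
# and the relay with the strongest bottleneck is chosen.
bottleneck = np.minimum(g_sr, g_rd)
best = int(np.argmax(bottleneck))
print(f"selected relay {best}, bottleneck gain {bottleneck[best]:.3f}")
```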

Abstract

Job scheduling for a single machine (JSSM) remains a core challenge in manufacturing and service operations, where optimal job sequencing is essential to minimize flow time, reduce delays, prioritize high-value tasks, and enhance overall system efficiency. This study addresses JSSM by developing a hybrid solution aimed at balancing multiple performance objectives and minimizing overall processing time. Eight established scheduling rules were examined through a comprehensive simulation based on randomly generated scenarios, each defined by three parameters: processing time, customer weight, and job due date. Performance was evaluated using six key metrics: flow time, total delay, number of delayed jobs, maximum delay, average delay of delayed jobs, and average weight of delayed jobs. A multi-criteria decision-making (MCDM) framework was applied to identify the most effective scheduling rule. This framework combines two approaches: the Analytic Hierarchy Process (AHP), used to assign relative importance to each criterion, and the Evaluation based on Distance from Average Solution (EDAS) method, applied to rank the scheduling rules. AHP weights were determined by surveying expert assessments, whose averaged responses formed a consensus on priority ranking. Results indicate that the Earliest Due Date (EDD) rule consistently outperformed other rules, likely due to the high weighting of delay-sensitive criteria within the AHP, which positions EDD favourably in scenarios demanding stringent adherence to deadlines. Following this initial rule-based scheduling phase, an optimization stage was introduced, involving four Tabu Search (TS) techniques: job swapping, block swapping, job insertion, and block insertion. The TS optimization yielded marked improvements, particularly in scenarios with high job volumes, significantly reducing delays and improving performance metrics across all criteria. The adaptability of this hybrid MCDM framework is highlighted as a primary contribution, with demonstrated potential for broader application. By adjusting weights, criteria, or search parameters, the proposed method can be tailored to diverse real-time scheduling challenges across different sectors. This integration of rule-based scheduling with metaheuristic search underscores the efficacy of hybrid approaches for complex scheduling problems.
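
A minimal sketch of the EDD dispatching rule and the delay metrics used to score the rules, with illustrative job data; the AHP/EDAS weighting and the Tabu Search stage are not reproduced:

```python
# Each job: (processing time, customer weight, due date); values illustrative.
jobs = [(4, 2, 7), (2, 5, 6), (6, 1, 12), (3, 3, 8)]

# Earliest Due Date rule: sequence jobs by ascending due date.
sequence = sorted(jobs, key=lambda j: j[2])

clock, flow_time, tardy = 0, 0, []
for p, w, due in sequence:
    clock += p
    flow_time += clock
    if clock > due:
        tardy.append((clock - due, w))  # (delay, customer weight)

print("total flow time:", flow_time)
print("delayed jobs:", len(tardy))
print("max delay:", max((d for d, _ in tardy), default=0))
# A Tabu Search stage would then apply job/block swaps and insertions to
# this sequence, accepting moves that improve the weighted metrics.
```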