
Acadlore takes over the publication of IJCMEM from 2025 Vol. 13, No. 3. The preceding volumes were published under a CC BY 4.0 license by the previous owner and are displayed here as agreed between Acadlore and the previous owner. ✯: This issue/volume is not published by Acadlore.

Volume 13, Issue 2, 2025

Abstract


In today's digitalized production environments, AI-supported systems not only transform production processes but also complicate the nature of the decisions taken within them. Especially in smart production scenarios where edge and cloud computing infrastructures are used together, decision processes must be managed with both low-latency local data and large-volume centralized analyses. This bidirectional data flow gives rise to multi-criteria decision problems that cannot easily be solved with classical algorithms because the available information is incomplete, uncertain, and unstable. This study proposes a new decision support model for such multi-criteria, uncertain decision problems arising in computer-aided production environments. Unlike classical data analytics methods, our model is based on T-Spherical Hesitant Fuzzy Rough Set (T-SHFR) theory. T-SHFR evaluates decision alternatives along the three dimensions of truth, falsity, and uncertainty, while its hesitant membership and rough set logic allow incomplete or contradictory data to be processed systematically. In this respect, the model goes beyond the artificial intelligence applications commonly found in the literature and offers a structure in which uncertainty is modeled directly. In the study, this method was integrated with edge and cloud computing architectures, and the multi-criteria performance of Edge-only, Cloud-only, and Hybrid approaches was evaluated; scenario-based analyses were conducted on basic parameters such as production efficiency, downtime, cost, and resource usage. The findings show that the T-SHFR-based model significantly increases decision quality, especially in hybrid architectures, and offers higher stability and flexibility in situations where classical methods struggle. The proposed approach thus provides a holistic framework that strengthens decision-making under uncertainty in computer-driven production systems.

Abstract


Each employee carries a workload in the form of tasks and responsibilities that must be completed within a certain period of time. Several aspects are assessed in performance evaluations, such as the number of working hours per week, the number of projects handled, the number of overtime hours needed to complete work, the number of sick days, the number of team members, the number of hours spent developing skills, job promotion offers, and so on. All of these aspects affect performance scores, employee satisfaction scores, and retention. An excessive workload has a negative impact on physical and mental health, performance, and employee satisfaction. This study aims to analyze the results of employee performance evaluations based on workload factors using machine learning approaches such as linear regression and random forest. The computational results are used to compare the effectiveness of the machine learning models and to analyze the accuracy of the assessment results. The significance of this study lies in its potential to enhance employee performance management systems by providing accurate, data-driven insights for decision-making processes such as promotions, compensation, and workforce planning. Practical and fair employee performance assessments will enable decision-makers to make informed choices regarding job promotions, salary increases, annual bonuses, and employee career development.

Abstract


Fiber-plastic composites are increasingly used in the aerospace, automotive, and wind energy industries, where they are often exposed to multi-axial mechanical loads and severe climatic stresses. The objective of this study is to investigate the fatigue behavior of these composites under multi-axial mechanical stress using a newly developed degradation model based on continuum damage mechanics. The model's simulation performance was examined and shown to be applicable in engineering practice. CFRC composites exhibit a tensile strength of 74.5 MPa, whereas GF(MLG)/EP glass fiber reinforced composites show considerably lower stiffness and irregular deformation up to ultimate failure. The failure of textile-reinforced plastic composites occurred in three stages of degradation. Compared with woven roving reinforced composites, the tensile strength of the biaxial NCF glass-reinforced polyester material was 13 percent higher and its fatigue endurance 20 percent higher. Damage onset occurred at 25-35% of the initial stage; the degradation then stabilized at 10-15% before final failure. In GF-MLG/EP, a direction-dependent pattern of stiffness change was observed, with transverse cracks reducing the stiffness to 75% of its initial value after 10,000 cycles. Biaxial NCF composites are more resistant to fatigue damage than woven fabric composites.

Abstract


Wireless propagation is a crucial technology in modern communication systems, requiring highly accurate prediction. Path loss is influenced by various parameters that must be accounted for to predict the signal route over the entire distance and to refine breakpoint models with precise interference calculations. The breakpoint distance is defined as the point separating two distinct trends of path loss, each following a different path loss exponent. This paper reviews the Fresnel, Perera, and True breakpoints in a dual-slope reference model at 2 GHz, using a fixed exponent of n₁ = 2 before the breakpoint and n₂ = 4 after it. It then proposes a distance-adaptive exponent model that maintains a steady path loss curve by incorporating a flexible exponent based on environmental factors, mitigating the abrupt change in path loss exponent at the breakpoint observed in the dual-slope model, which leads to discontinuities. Comparison under similar conditions demonstrates that both models perform similarly over short distances of up to 100 meters, while the dual-slope model is more suitable for distances of up to 1 km. However, due to its stability and consistency, the distance-adaptive exponent model is more appropriate for longer distances. Validation using RMSE, followed by comparative analysis, confirms that the proposed model offers higher stability in interference scenarios. These findings will assist researchers and wireless designers in predicting and selecting the most accurate and effective propagation model.
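As a minimal illustration of the dual-slope behaviour this abstract reviews, the sketch below implements the classic dual-slope path loss model with n₁ = 2 before the breakpoint and n₂ = 4 after it, together with the standard Fresnel breakpoint distance d_b = 4·h_t·h_r/λ. The antenna heights and the 38.5 dB reference loss (free-space loss at a 1 m reference for 2 GHz) are illustrative assumptions, not values from the paper.

```python
import math

def fresnel_breakpoint(h_t, h_r, freq_hz, c=3e8):
    """Fresnel breakpoint distance d_b = 4*h_t*h_r/lambda (metres)."""
    wavelength = c / freq_hz
    return 4.0 * h_t * h_r / wavelength

def dual_slope_path_loss(d, d_b, pl_ref_db, n1=2.0, n2=4.0, d_ref=1.0):
    """Dual-slope path loss in dB: exponent n1 up to the breakpoint, n2 beyond."""
    if d <= d_b:
        return pl_ref_db + 10.0 * n1 * math.log10(d / d_ref)
    return (pl_ref_db + 10.0 * n1 * math.log10(d_b / d_ref)
            + 10.0 * n2 * math.log10(d / d_b))

# Example at 2 GHz with assumed 10 m / 2 m antenna heights.
d_b = fresnel_breakpoint(10.0, 2.0, 2e9)              # ~533 m
loss_100m = dual_slope_path_loss(100.0, d_b, 38.5)    # before the breakpoint
loss_1km = dual_slope_path_loss(1000.0, d_b, 38.5)    # after the breakpoint
```

The abrupt jump in slope at d_b is exactly the discontinuity in the exponent that the paper's distance-adaptive model is designed to smooth out.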

Open Access
Research article
A Machine Learning-Based Tool for Indoor Lighting Compliance and Energy Optimization
Abderraouf Seniguer,
Abdelhamid Iratni,
Mustapha Aouache,
Hadja Yakoubi,
Haithem Mekhermeche
Available online: 06-29-2025

Abstract


Adequate indoor lighting is essential for visual comfort, energy efficiency, and compliance with architectural standards. This study presents a novel smartphone-based platform for real-time illuminance estimation and visual mapping that leverages a lightweight machine learning model. The application uses the smartphone's built-in camera to capture images of a scene and predicts the illuminance of each image patch with a trained regression model, offering a cost-effective alternative to a physical lux meter grid. The mobile application generates color-coded heat maps that visualize the spatial distribution of illuminance and assesses its compliance with an established lighting norm. The advantages of the proposed system include its affordability, portability, and the prediction accuracy enabled by a machine learning model trained on image intensity features. Experimental tests in a controlled indoor setting demonstrate high prediction accuracy and low computational requirements, confirming the platform's suitability for real-world applications. The tool enables effective and precise analysis of light and is therefore applicable to architectural diagnostics, energy audits, and spatial design optimization. In addition, the user-friendly interface benefits both professional and non-professional users, facilitating real-time adjustment and optimization of indoor lighting.

Abstract


Time series count data, such as daily Covid-19 cases, require adequate modelling and forecasting. Traditional time series models have limitations in modelling time series count data, also known as unbounded N-valued data. This study involved in-depth analyses of various models for fitting unbounded N-valued time series data. Models popularly used to fit time series counts, such as the zero-inflated Poisson, zero-inflated binomial, and ARIMA models, were compared with integer-valued generalized autoregressive conditional heteroscedasticity (INGARCH) models. The investigation involved two critical aspects: simulation and real-life data analysis. First, we simulated time series count data, then modelled it and compared the performance of the competing models. The simulation outcomes consistently favoured the Negative Binomial (NB) INGARCH models, highlighting their suitability for count data modelling. Subsequently, we examined real-life Covid-19 data from Nigeria, which also yielded strong support for the NB INGARCH model. This study recommends further exploration of the NB INGARCH model, as it exhibits substantial promise in effectively modelling over-dispersed, zero-inflated data. The current study contributes valuable insights into selecting appropriate models for time series count data, addressing the intricate challenges posed by this specialized data type. The overall outcome of the study also supports national planning and resource allocation for people needing health interventions.
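To make the favoured model class concrete, the sketch below simulates a Negative Binomial INGARCH(1,1) count series, in which the conditional mean follows lam_t = omega + alpha·X_{t-1} + beta·lam_{t-1} and the counts are over-dispersed (variance exceeding the mean). All parameter values are illustrative assumptions, not estimates from the Nigerian Covid-19 data.

```python
import numpy as np

def simulate_nb_ingarch(n, omega=0.5, alpha=0.3, beta=0.4, r=5.0, seed=0):
    """Simulate an INGARCH(1,1) count series with Negative Binomial innovations.

    Conditional mean: lam_t = omega + alpha * X_{t-1} + beta * lam_{t-1};
    X_t | past ~ NegBin with mean lam_t and dispersion r (variance lam + lam^2/r).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=int)
    lam = np.zeros(n)
    lam[0] = omega / (1.0 - alpha - beta)   # start at the unconditional mean
    x[0] = rng.negative_binomial(r, r / (r + lam[0]))
    for t in range(1, n):
        lam[t] = omega + alpha * x[t - 1] + beta * lam[t - 1]
        x[t] = rng.negative_binomial(r, r / (r + lam[t]))
    return x, lam

counts, lam = simulate_nb_ingarch(5000)
# The sample variance should exceed the sample mean (over-dispersion).
```

Stationarity requires alpha + beta < 1, which the default parameters satisfy; the unconditional mean is then omega / (1 - alpha - beta).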

Abstract


A bridge is a construction that enables traffic to cross a barrier while remaining connected to roads or railroads. Throughout history, bridges have played a crucial role in human civilisation and remain an essential component of any transportation network. The main purpose of this study is to evaluate the seismic resistance of a bridge structure under earthquake action using force-displacement yielding point and performance point methods. The results revealed that the transversal yielding points were greater than the longitudinal yielding points and performance points. This indicates that seismic action has little effect on the transversal bents and that no damage will occur if they are subjected to this action alone. In the longitudinal direction, however, the force-displacement yielding points and performance points were lower, indicating that the seismic resistance of the bridge bents is small, with low elasticity and stiffness and high plasticity, meaning that the capacity of the bridge bents cannot resist the demand. Therefore, this study suggests improving the structural performance and seismic resistance of the bridge bents by increasing the diameter of the bridge piers to 1.6 m, 1.8 m, and 2 m. After thickening the piers, the yielding point and performance point values increased with increasing pier diameter, while the seismic displacement decreased. This indicates that the elastic limit of the bridge bents will increase and the piers will resist the earthquake action, in line with the increased stiffness and bearing capacity of the bents.

Open Access
Research article
Artificial Intelligence-Based Intelligent Navigation System for Alleviating Traffic Congestion: A Case Study in Batam City, Indonesia
Luki Hernando,
Ririt Dwiputri Permatasari,
Sri Dwi Ana Melia,
M. Ansyar Bora,
Alhamidi,
Aulia Agung Dermawan
Available online: 06-29-2025

Abstract


Traffic congestion is a major issue faced by Batam, a city that continues to grow rapidly as an economic and logistics hub. This study adopts the Design Science Research Methodology (DSRM) to develop an intelligent navigation system based on artificial intelligence (AI) aimed at optimizing urban traffic management in Batam. The system integrates real-time traffic data, machine learning algorithms, and reinforcement learning to predict traffic flow and optimize route selection. Using the DSRM framework, the system was designed, implemented, and evaluated iteratively to ensure its effectiveness in addressing the city's unique traffic challenges. The results of the study indicate that the implementation of the AI-based navigation system successfully reduced the average travel time by 22.8%, distributed traffic loads more evenly, and improved travel efficiency. Furthermore, the system demonstrated a route prediction accuracy of 91.3%, higher than conventional GPS systems. Performance evaluation also showed high responsiveness, with an average latency of only 423 milliseconds. This study concludes that the AI-based navigation system, developed through the DSRM framework, can be an effective solution to address traffic congestion in rapidly developing cities like Batam and can be applied to other cities with similar characteristics.

Abstract


Multiple-input multiple-output (MIMO) techniques and effective precoding algorithms are required due to the inherent difficulties of millimeter-wave (mmWave) propagation. These challenges include the significant path loss associated with high-frequency operation. Further, the sparse channel matrix hinders proper channel estimation (CE), resulting in erroneous reception, which limits the implementation of mmWave technology. Therefore, beamforming is required to direct power toward the user by exploiting the spatial multiplexing of MIMO. The above limitations, along with the hardware constraint of using fewer radio frequency (RF) chains, can be mitigated through effective precoding at the transmitter. Efficient utilization of the mmWave bandwidth is crucial for a spectrally efficient system, as it helps conserve this scarce resource. Hence, this study examines the spectral efficiency (SE) of mmWave MIMO systems for various precoding strategies, including minimum mean-square error (MMSE) precoding, fully digital and hybrid zero-forcing (ZF) precoding, and analog beamforming. Performance in terms of achievable SE is studied over a varying number of users and of transmit and receive antennas. Simulation results show that the MMSE precoder outperforms the ZF precoder. Furthermore, the fully digital MMSE precoder approaches the SE of single-user MIMO as the number of users increases, unlike the fully digital ZF precoder.
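A minimal sketch of the two fully digital precoders compared in this abstract: zero-forcing inverts the channel exactly, nulling inter-user interference, while the MMSE-style (regularized ZF) precoder adds a noise-dependent regularization term. The channel dimensions, power budget, and noise level are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

def sum_spectral_efficiency(H, W, noise_var=1.0):
    """Sum SE (bits/s/Hz) for linear precoding: user k is served by column W[:, k]."""
    K = H.shape[0]
    se = 0.0
    for k in range(K):
        sig = abs(H[k] @ W[:, k]) ** 2
        interf = sum(abs(H[k] @ W[:, j]) ** 2 for j in range(K) if j != k)
        se += np.log2(1.0 + sig / (interf + noise_var))
    return se

def zf_precoder(H, power=1.0):
    """Zero-forcing: pseudo-inverse of H, scaled to the total power budget."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W * np.sqrt(power / np.trace(W @ W.conj().T).real)

def mmse_precoder(H, power=1.0, noise_var=1.0):
    """Regularized ZF (MMSE-style): regularization term K*noise_var/power."""
    K, _ = H.shape
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + (K * noise_var / power) * np.eye(K))
    return W * np.sqrt(power / np.trace(W @ W.conj().T).real)

rng = np.random.default_rng(1)
K, M = 4, 8   # 4 single-antenna users, 8 transmit antennas (assumed)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
se_zf = sum_spectral_efficiency(H, zf_precoder(H, power=10.0))
se_mmse = sum_spectral_efficiency(H, mmse_precoder(H, power=10.0))
```

Under ZF the effective channel H @ W is diagonal, so each user sees zero interference; the MMSE regularization trades a little residual interference for less noise amplification.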

Abstract


The objective of this paper is to examine the effect of gas flow field design on fuel cell performance. A polymer electrolyte membrane (PEM) fuel cell with 10 W power output operating at 3 A and 4.5 V has been simulated. The study investigates seven configurations of fuel cell assemblies featuring a Z-shaped flow field and explores the effects of various flow fields and flow channel designs. Single Z-type serpentine flow fields with a channel width of 1 mm were modeled to create interconnected pathways. The CFD software COMSOL Multiphysics 6.1 was used to analyze a three-dimensional, steady-state, isothermal fuel cell model with an active area of 9.84 cm². The study focused on pressure loss, reactant and product distributions, and current density within the fuel cell. Results showed that Model E2 achieved the lowest anode pressure drop at 7 Pa, while Model A1 exhibited the highest pressure drop at 180 Pa, indicating Model E2's superior pressure management. Cathode pressure analysis revealed that Models A1 and A2 generated the highest pressures. Polarization curve analysis determined that Model A2 delivered the highest current density, but at elevated pressures up to 1200 Pa. Among the tested configurations, Model E2 emerged as the optimal design, offering excellent performance with minimal pressure drop and enhanced current density. It enabled uniform reactant gas dispersion, leading to a consistent and reliable current distribution across the electrode surface. Moreover, the Model E2 design promoted improved lateral species transfer and uniform species distribution within the gas diffusion layer, contributing to its superior performance.

Abstract


This study introduces the Chebyshev Metaheuristic Solver Approach (CMSA), a new computational approach for obtaining high-accuracy approximate solutions to a vast range of linear and non-linear differential equations (DEs). The main idea is to transform the differential problem into a continuous optimization task. First, the approximate solution is written as a truncated series of Chebyshev polynomials, chosen for their numerical stability and optimal approximation properties. The undetermined coefficients of this series become the decision variables of the optimization task. The objective function is derived from the residual of the differential equation, combined with penalty terms to enforce the initial or boundary conditions. The Flower Pollination Algorithm (FPA), a nature-inspired metaheuristic, is then used to find the optimal polynomial coefficients by minimizing this objective function. This hybrid approach combines the exponential convergence of spectral methods with the powerful global search capabilities of metaheuristics. The efficiency and robustness of the approach are demonstrated through rigorous computational tests on benchmark problems, including integro-differential and non-linear boundary value problems. Comparison of the computed results with known exact solutions validates this optimization-driven spectral technique, showing excellent agreement. The approach is simple to implement and displays outstanding potential for tackling complex DE systems where traditional methods may struggle.
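The optimization-driven spectral idea can be sketched for the simple test problem y' = y, y(0) = 1 on [0, 1] (an assumed example, not one of the paper's benchmarks). Since the Flower Pollination Algorithm is not available in SciPy, differential evolution is used below as a stand-in metaheuristic; the degree, collocation grid, and penalty weight are likewise illustrative choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import differential_evolution

DEGREE = 5
X = np.linspace(0.0, 1.0, 20)     # collocation points on the problem domain
T = 2.0 * X - 1.0                 # map [0, 1] onto the Chebyshev domain [-1, 1]

def objective(c):
    """Squared DE residual at the collocation points plus a boundary penalty."""
    y = C.chebval(T, c)
    dy = C.chebval(T, C.chebder(c)) * 2.0   # chain rule for the domain map
    residual = dy - y                        # y' - y should vanish
    bc = C.chebval(-1.0, c) - 1.0            # enforce y(0) = 1
    return np.sum(residual ** 2) + 100.0 * bc ** 2

result = differential_evolution(objective, bounds=[(-3, 3)] * (DEGREE + 1), seed=0)
y1 = C.chebval(1.0, result.x)               # approximation of y(1) = e
```

The metaheuristic explores the coefficient space globally, while the Chebyshev basis guarantees that a low-degree series can represent the smooth solution accurately.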

Abstract


This research addresses the challenge of optimizing the dynamic performance of functionally graded porous (FGP) plates, which are widely used in aerospace, automotive, and structural applications, where internal porosities can significantly affect their stability and functionality. The study mainly uses an effective layerwise model to examine how internal porosities affect the mechanical stability and natural frequencies of FGP plates. The vibration behavior of the plates is analyzed according to the porosity distribution ratio, the porosity locations, and three typical thicknesses. The effective material properties are modeled using a power law. The primary objective is to understand how these factors influence the vibrational behavior of FGP plates and to provide validated results that can serve as a reliable reference for future research. The model's validity and efficiency were established through a rigorous comparison with the existing literature. The findings highlight the significant influence of porosity distribution on the mechanical behavior of functionally graded plates. The highest frequency obtained was 259.81 Hz for a plate thickness of 20 mm and a porosity ratio of 0.3 with the porous layer located in the middle, corresponding to an 11.9% increase in frequency compared with the other porosity distributions. These results hold potential as a valuable reference point for future research in this domain.

Abstract


Aluminium alloys are widely used in various industries due to their excellent strength-to-weight ratio, corrosion resistance, and formability. However, their wear resistance and ductility can be limiting factors in certain applications. This study investigates the effects of reduction ratio and extrusion temperature on the wear rate and ductility of Al 6063 extrudates. A systematic experimental approach was employed, involving the extrusion of aluminium samples at varying reduction ratios (20%, 40%, and 60%) and temperatures (400℃, 450℃, and 500℃). Quadratic models were developed to predict the wear rate and ductility of the extrudates, revealing reduction ratio and temperature as significant factors (p<0.05). The results showed that an increased reduction ratio led to a decreased wear rate, while larger grain sizes were obtained with increased reduction ratio and temperature. This research provides valuable insights for optimizing extrusion parameters to improve the wear resistance and ductility of formed Al 6063, which can be applied in industries such as aerospace, automotive, and construction, where high-performance aluminium alloys are critical.

Abstract


To solve the Traveling Salesman Problem (TSP), this research compares three swarm-based optimization algorithms: Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Elephant Herding Optimization (EHO). Finding the shortest path to visit each city once and return to the starting point is the goal of the traditional combinatorial optimization problem, TSP. Exact techniques such as Branch and Bound (BB) and Dynamic Programming (DP) can effectively handle smaller TSP cases, but they become unfeasible as the number of cities increases. The solutions offered by metaheuristic algorithms are more scalable. The algorithms' performance is assessed in this study based on execution time, scalability, and solution quality for a range of city sizes (5 to 150). Results reveal that EHO surpasses the others in achieving lower optimal costs.
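For context on why the exact techniques mentioned above become infeasible, the sketch below implements the Held-Karp dynamic programming algorithm for TSP, whose O(n²·2ⁿ) cost explodes well before 150 cities. The 4-city distance matrix is a common textbook example, not data from this study.

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP via Held-Karp DP: O(n^2 * 2^n) time, feasible only for small n."""
    n = len(dist)
    # dp[(S, j)] = cheapest cost to start at city 0, visit bitmask S, and end at j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)          # same subset without city j
                dp[(S, j)] = min(dp[(prev, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = (1 << n) - 2                      # all cities except city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# 4-city example; the optimal tour 0-1-3-2-0 has cost 80.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
best = held_karp(dist)
```

Even at 25 cities the DP table needs on the order of 2²⁵ × 25 entries, which is why swarm-based metaheuristics such as PSO, ACO, and EHO are the practical choice at the 150-city scale studied here.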

Abstract


In this work, we introduce a hybrid method that combines Long Short-Term Memory (LSTM) neural networks with Taylor Series Expansion (TSE) to solve high-dimensional systems of Fredholm integral equations of the second kind (SFIEs). Specifically, we focus on systems with up to 10,000 dimensions, which are common in fields like fluid dynamics, electromagnetics, and quantum mechanics. Traditional methods for solving these equations, such as discretization, collocation, and iterative solvers, face significant challenges in high-dimensional spaces due to their computational cost and slow convergence. LSTM networks approximate the solution functions, and the Taylor Series Expansion refines the approximation, ensuring higher accuracy and computational efficiency. Numerical experiments demonstrate that the hybrid method significantly outperforms traditional approaches in both accuracy and stability. This method provides a promising approach to solving complex high-dimensional integral equations efficiently in scientific and engineering applications.
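As a point of reference for the traditional discretization methods mentioned above, the sketch below solves a one-dimensional second-kind Fredholm equation with the classical Nyström method (trapezoidal quadrature). The kernel and right-hand side are chosen so that the exact solution is u(x) = x, purely for illustration; this is not one of the paper's test problems.

```python
import numpy as np

def fredholm2_nystrom(f, kernel, lam=1.0, a=0.0, b=1.0, n=201):
    """Solve u(x) = f(x) + lam * integral_a^b K(x,t) u(t) dt via Nystrom."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))  # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])  # kernel matrix K(x_i, t_j)
    A = np.eye(n) - lam * K * w         # (I - lam * K * W) u = f
    u = np.linalg.solve(A, f(x))
    return x, u

# K(x,t) = x*t with f(x) = 2x/3 gives the exact solution u(x) = x,
# since integral_0^1 x*t*t dt = x/3 and x = 2x/3 + x/3.
x, u = fredholm2_nystrom(lambda x: 2.0 * x / 3.0, lambda x, t: x * t)
```

The linear system here is n × n for one dimension; in d dimensions a tensor-product grid grows as nᵈ, which is exactly the curse of dimensionality that motivates the LSTM-based approach.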

Abstract


Neck injuries remain a critical concern in vehicle safety, particularly during dynamic movements and terrain-induced impacts. Traditional test dummies and wearable devices often fail to capture real-time biomechanical neck responses under such conditions. This study introduces a smart mannequin system designed to measure axial forces and cervical moments in realistic vehicle environments. The system integrates S-type load cells and HX711 amplifiers with a Raspberry Pi 4 for real-time processing, enhanced by Kalman filtering for signal clarity. Calibration was conducted using reference weights from 5 N to 40 N in 5 N increments, with each step validated against a force gauge. The mannequin was tested across various terrains, including straight tracks, inclines, sinusoidal roads, and uneven surfaces, representing realistic military and civilian vehicle conditions. Results showed minimal calibration deviation (2–4 N), with peak force measurements reaching 30.63 N and moment readings up to 1.25 Nm. Higher speeds reduced axial loading on stable tracks, while irregular terrain increased neck strain. The system consistently captured neck loading dynamics, offering a safe, repeatable alternative to human-based testing. Its practical application spans ergonomic vehicle design, occupant safety analysis, and fatigue detection in transport environments.
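The Kalman filtering step described above can be sketched, for a single load-cell channel, as a scalar filter on a random-walk force model. The noise levels and the 20 N load below are hypothetical values for illustration, not the study's calibration data.

```python
import numpy as np

def kalman_1d(z, q=1e-3, r=4.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter for a slowly varying load signal.

    Model: x_t = x_{t-1} + w (process noise variance q);
           z_t = x_t + v   (measurement noise variance r).
    """
    x, p = x0, p0
    out = np.empty(len(z))
    for i, zi in enumerate(z):
        p = p + q                  # predict: uncertainty grows by q
        k = p / (p + r)            # Kalman gain
        x = x + k * (zi - x)       # update with the new measurement
        p = (1.0 - k) * p
        out[i] = x
    return out

# Noisy readings around a steady 20 N load (assumed sensor noise sigma = 2 N).
rng = np.random.default_rng(42)
true_force = 20.0
z = true_force + 2.0 * rng.standard_normal(500)
est = kalman_1d(z)
```

A small q relative to r tells the filter the load changes slowly, so it averages aggressively over the HX711 samples; raising q would make it track rapid terrain-induced force spikes more closely at the cost of more residual noise.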

Abstract


This work analyzes how inlet tube velocities (2, 4, and 6 m/s) affect the flow behavior of the water-ethylene glycol mixture within the inlet tube of an engine oil heat exchanger cooling system, in terms of temperature distribution, oil viscosity, pressure difference, and flow velocity distribution. The simulation findings show that the oil's viscosity decreased from 0.021 Pa·s at a flow velocity of 2 m/s to 0.015 Pa·s at 6 m/s, suggesting a direct relationship between the thermal field and the flow rate. The pressure drop rises as the inlet velocity increases from 2 to 6 m/s, with values of 0.45 and 0.92 Pa, respectively. In the tube-end bending investigation, effects on the emulsion velocity profile were observed: in the curved tubes, at an inlet velocity of 2 m/s the maximum velocity was 2.3 m/s at the sharply curved wall and 1.7 m/s at the inner wall, while the velocity gradient was 1.6 m/s at an inlet velocity of 4 m/s and 3.0 m/s at 6 m/s. The heat transfer coefficient increases with velocity, from 500 W/m²·K at 2 m/s to 950 W/m²·K at 6 m/s, showing a remarkable enhancement in convective heat transfer resulting from increased turbulence. There is also significant fluctuation of the velocity inside the tubes; as the velocity increases, the trend toward uniform flow distribution improves heat transfer within the tubes. This reduces uneven heat distribution and helps increase the flow rate, especially when temperature differences grow and the main fluid experiences strong heat transfer. The heat transfer rate rises from 15 kW at an inlet velocity of 2 m/s to 35 kW at 6 m/s, and the efficiency increases to 70% with increasing inlet velocity.

Abstract


This article proposes an optimization framework for bus elimination in power system networks using the Kron Reduction Method (KRM), aimed at reducing system complexity while maintaining computational accuracy. Using the IEEE 14-bus system as a testbed, we evaluate seven sequential reduction scenarios, reducing the network from 14 to 7 buses. To improve the quality of reduction, the study integrates Kron’s Loss Equation (KLE) with electrical centrality measures to prioritize passive bus elimination based on loss sensitivity and network topology. The results demonstrate that indiscriminate bus removal can cause substantial deviations in voltage profiles and power loss estimations, whereas the proposed loss-aware approach achieves improved accuracy and stability in reduced models. Visualizations of Y-bus matrix transformations and voltage deviation metrics illustrate the trade-offs between model simplification and fidelity. The proposed methodology supports real-time system modelling and is scalable for larger grid applications. Future extensions include automated reduction strategies leveraging machine learning and applications in dynamic grid optimization.
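The Kron Reduction Method at the core of this framework eliminates passive buses from the Y-bus via the Schur complement, Y_red = Y_aa - Y_ab·Y_bb⁻¹·Y_ba. The sketch below applies it to a small 4-bus network with illustrative branch admittances, not the IEEE 14-bus data used in the paper.

```python
import numpy as np

def kron_reduce(Y, keep):
    """Kron reduction: eliminate all buses not listed in `keep`.

    Partition Y into kept (a) and eliminated (b) buses, then return
    Y_red = Y_aa - Y_ab @ inv(Y_bb) @ Y_ba  (the Schur complement).
    """
    keep = np.asarray(keep)
    elim = np.array([i for i in range(Y.shape[0]) if i not in keep])
    Yaa = Y[np.ix_(keep, keep)]
    Yab = Y[np.ix_(keep, elim)]
    Yba = Y[np.ix_(elim, keep)]
    Ybb = Y[np.ix_(elim, elim)]
    return Yaa - Yab @ np.linalg.inv(Ybb) @ Yba

# Build a 4-bus Y-bus from branch admittances (purely inductive, per unit).
branches = {(0, 1): -5j, (0, 2): -4j, (1, 2): -3j, (2, 3): -2j}
Y = np.zeros((4, 4), dtype=complex)
for (i, j), yij in branches.items():
    Y[i, j] -= yij          # off-diagonal: -y_ij
    Y[j, i] -= yij
    Y[i, i] += yij          # diagonal: sum of incident branch admittances
    Y[j, j] += yij

Y_red = kron_reduce(Y, keep=[0, 1, 2])   # eliminate passive leaf bus 3
```

With no shunt elements, each row of the Y-bus sums to zero, and Kron reduction preserves both this property and symmetry, which is a useful sanity check after each of the sequential reduction steps described above.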

Abstract


In real-life conditions, the rubber components of truck tires are exposed to fluctuating loads, often leading to failure through the formation and growth of cracks, an issue especially common in retreaded tires. Tire retreading is one of the earliest methods of recycling tires by extending their life cycle, but the lack of knowledge and of extensive research on quality tire retreading puts the lives of road users at stake. The main objective of this research is to study spliced pre-cured treaded liners (PTLs) in terms of their mechanical properties and their endurance life under variable stresses. Two types of PTL rubber compounds were studied: Compound 1, designed for steering axle tires, and Compound 2, designed for driving axle tires. These positions on a truck typically bear the highest loads, requiring materials with strong mechanical properties. The compounds were evaluated using three tests: hardness, tensile strength, and endurance. The hardness test measured the resistance of the rubber to indentation on the Shore A scale, a standard method for rubber materials. Compound 1 (for steering tires) showed a Shore A hardness of 62, while Compound 2 (for driving tires) had a value of 66. Both values fall within the industry standard range of 50-70 for tire rubber, indicating that the splicing process did not negatively affect the curing or hardness of the PTL materials. The tensile test demonstrated that the spliced joints maintained strong performance, with only a minor reduction in maximum load and elongation compared with the unspliced material. The endurance test further confirmed the durability of the spliced PTL under simulated real-world conditions. Overall, the results show that splicing and using pre-cured treaded liners incorporating recycled tread waste can maintain the mechanical properties needed for demanding truck tire applications, while also supporting more sustainable and efficient production practices.

Abstract


Additive manufacturing (AM), also known as 3D printing, is a process of creating physical objects directly from digital 3D models by adding material layer by layer. Unlike conventional manufacturing methods—such as subtractive machining or injection molding—which remove material or shape it within molds, AM builds parts incrementally, typically using heat, lasers, or electron beams to bond each layer. Among the seven standardized AM processes, Fused Deposition Modeling (FDM) is the most widely used. FDM works by heating and extruding thermoplastic filaments to form successive layers of a part. While AM offers unique advantages such as complex geometries, lightweight structures, and customization, it also introduces specific design constraints that differ from traditional manufacturing. This paper reviews key design challenges associated with AM, with a focus on FDM, and evaluates current methodologies developed to address these issues. A new design methodology is proposed to optimize part design according to specific machine and material constraints, leveraging the advantages of AM while minimizing its limitations. This approach ensures that designs are not only manufacturable but also meet performance requirements and are optimized for the given specifications. A case study applying this methodology to the FDM process highlights its effectiveness and suggests pathways for further improvements. The findings offer insights into how the new approach can contribute to future research and development in AM design optimization.
