International Journal of Computational Methods and Experimental Measurements (IJCMEM)
ISSN (print): 2046-0546
ISSN (online): 2046-0554

International Journal of Computational Methods and Experimental Measurements (IJCMEM) is a peer-reviewed open-access journal dedicated to advancing research that integrates computational modeling with experimental measurement across scientific and engineering disciplines. The journal provides a platform for high-quality studies focusing on the development, validation, and application of numerical and experimental approaches to improve prediction accuracy, reliability, and engineering relevance. IJCMEM encourages contributions that explore the interplay between theory, simulations, and laboratory or field experiments in areas such as material behavior, structural dynamics, multiphysics coupling, fluid–structure interaction, thermal processes, and data-driven modeling. The journal particularly values research leveraging digital technologies, artificial intelligence, and advanced sensing and instrumentation for enhanced computational–experimental synergy. Committed to rigorous peer-review standards, research integrity, and timely dissemination of knowledge, IJCMEM is published quarterly by Acadlore, with issues released in March, June, September, and December.

  • Professional Editorial Standards - Every submission undergoes a rigorous and well-structured peer-review and editorial process, ensuring integrity, fairness, and adherence to the highest publication standards.

  • Efficient Publication - Streamlined review, editing, and production workflows enable the timely publication of accepted articles while ensuring scientific quality and reliability.

  • Gold Open Access - All articles are freely and immediately accessible worldwide, maximizing visibility, dissemination, and research impact.

Editor-in-Chief
Giulio Lorenzini
Department of Industrial Systems and Technologies Engineering, University of Parma, Italy
giulio.lorenzini@unipr.it
Research interests: Vapotron and Enhanced Boiling Heat Transfer; Constructal Theory and Heat Exchanger Optimization; Droplet Evaporation and Thermal Cooling Applications; Chimney Effect and Thermal Stratification, etc.

Aims & Scope

Aims

International Journal of Computational Methods and Experimental Measurements (IJCMEM) is an international peer-reviewed open-access journal devoted to advancing the integration of computational modeling and experimental measurement in science and engineering. The journal provides a platform for high-quality studies aimed at improving prediction accuracy, reliability, and engineering applicability through combined numerical–experimental approaches.

IJCMEM fosters interdisciplinary research that bridges theoretical analysis, simulation techniques, experimental methodologies, and advanced data analytics. The journal welcomes conceptual, numerical, and laboratory-based investigations focusing on materials mechanics, dynamic loading, multiphysics coupling, fluid–structure interaction, thermal analysis, and related domains.

Through its commitment to connecting academic innovation with practical engineering challenges, IJCMEM promotes rigorous research that enhances digital simulation capabilities, strengthens measurement fidelity, and supports informed engineering decision-making. The journal particularly values contributions introducing hybrid modeling strategies, validation frameworks, and instrumentation-driven advancements for improved computational–experimental synergy.

Key features of IJCMEM include:

  • A strong emphasis on numerical–experimental integration for enhanced engineering accuracy and reliability;

  • Support for research that advances computational methods, field and laboratory measurements, and hybrid validation techniques;

  • Encouragement of studies leveraging digital technologies, AI, and advanced instrumentation for improved simulation fidelity;

  • Promotion of practical insights addressing real-world engineering challenges and decision-support needs;

  • A commitment to rigorous peer-review standards, research integrity, and timely open-access dissemination of knowledge.

Scope

The International Journal of Computational Methods and Experimental Measurements (IJCMEM) welcomes high-quality contributions that explore the development, application, and validation of computational and experimental techniques across a wide range of scientific and engineering domains. The journal invites submissions covering, though not limited to, the following key areas:

  • Computational–Experimental Integration and Hybrid Approaches

    Studies emphasizing the coupling of computational simulations with physical experiments for enhanced accuracy, reliability, and predictive capability. Topics include computer-assisted experimental control, data-driven calibration, hybrid modeling, and closed-loop simulation frameworks that combine real-time experiments with numerical solvers.

  • Numerical Modeling and Simulation Technologies

    Research focusing on the development and implementation of advanced numerical methods for solving nonlinear, multiphysics, and multiscale problems. Areas include finite element, boundary element, meshless, and particle-based methods; computational fluid dynamics; heat transfer and diffusion modeling; and dynamic system simulation.

  • Experimental Measurement, Validation, and Verification

    Innovative experimental methods designed for model validation and verification. Topics include direct, indirect, and in-situ measurements, uncertainty quantification, error propagation, and the establishment of benchmarking standards for computational models.

  • Data Acquisition, Signal Processing, and Digital Experimentation

    Studies addressing new instrumentation, sensor networks, and digital data acquisition systems for experimental analysis. Research in this area covers signal filtering, feature extraction, noise minimization, big-data processing for experiments, and AI-assisted data interpretation.

  • Material Behavior, Characterization, and Testing

    Comprehensive analyses of material response under static, dynamic, and cyclic loading conditions. Topics include fatigue and fracture mechanics, corrosion and wear, contact mechanics, surface effects, environmental degradation, and material property evolution under extreme conditions.

  • Thermal and Fluid Dynamics

    Research in computational and experimental thermofluid sciences, including convection and conduction modeling, multiphase and turbulent flow analysis, phase change processes, and heat transfer in porous or composite media.

  • Dynamic Loading, Impact, and Seismic Analysis

    Studies on structures subjected to shock, blast, impact, or seismic excitations. The journal welcomes integrated computational–experimental work on dynamic testing, structural resilience, and safety evaluation under extreme environments.

  • Nano- and Microscale Modeling and Measurement

    Research focusing on nanomechanics, microscale heat transfer, and interface phenomena. Topics include nanoindentation testing, microstructural modeling, atomic-scale simulations, and the development of nano-enabled experimental and computational methodologies.

  • Process Control, Optimization, and Digital Twins

    Contributions integrating simulation and experimentation for industrial process control, real-time optimization, and virtual prototyping. Emphasis is given to the application of digital twin technology and machine learning for predictive monitoring, fault detection, and system optimization.

  • Artificial Intelligence and Data-Driven Modeling

    Explorations of machine learning, deep learning, and data analytics applied to experimental data interpretation, model calibration, and uncertainty reduction. Research may include surrogate modeling, neural network-based simulations, and hybrid AI–physics-driven computational frameworks.

  • Multiscale and Multiphysics Coupling

    Studies addressing the hierarchical modeling of systems involving coupled physical phenomena—thermal, mechanical, chemical, or electromagnetic interactions—supported by experimental validation across scales.

  • Instrumentation, Sensors, and Measurement Innovation

    Advances in sensor design, optical measurement systems, imaging technologies, and non-invasive diagnostic methods. Topics include digital holography, 3D scanning, tomography, and infrared thermography for computational verification.

  • Environmental, Structural, and Biomedical Applications

    Applications of integrated computational–experimental approaches to environmental degradation, corrosion analysis, seismic and blast resilience, and biomedical problems such as tissue modeling, prosthetic design, and fluid–structure interaction in biological systems.

  • Reliability, Risk Analysis, and Uncertainty Quantification

    Research on model reliability, safety assessment, probabilistic methods, and vulnerability studies. Topics include stochastic simulations, sensitivity analysis, and reliability-based design supported by experimental evidence.

  • Emerging Fields and Cross-Disciplinary Studies

    Explorations into new experimental and computational frontiers, such as additive manufacturing, smart materials, robotics, and metamaterials. Studies highlighting cross-disciplinary methods that integrate physics-based simulations with experimental insights are particularly encouraged.

  • Case Studies and Applied Innovations

    Empirical and applied works demonstrating the use of computational–experimental integration in solving practical engineering challenges. IJCMEM values contributions that translate theoretical advances into real-world design, testing, and performance optimization.

Recent Articles

Abstract

In the 1960s, coinciding with the massive demand for credit cards, financial companies needed a method to assess their exposure to insolvency risk, and they began applying credit-scoring techniques. In the 1980s, credit-scoring techniques were extended to loans owing to the increased demand for credit and advances in computing. In 2004, new recommendations of the Basel Committee on Banking Supervision (known as Basel II) appeared. With the ensuing global financial crisis, a new document, Basel III, followed, introducing more demanding controls on borrowed capital.

Nowadays, one of the main unresolved problems is the presence of large datasets. This research focuses on calculating probabilities of default in home equity loans and on measuring the computational efficiency of several statistical and data mining methods. To this end, a series of Monte Carlo experiments with well-known techniques and algorithms has been carried out.

These computational experiments reveal that large datasets require big-data techniques and algorithms that yield faster, unbiased estimators.
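
As a hedged illustration of the kind of experiment this abstract describes, the sketch below draws synthetic loan data and estimates probabilities of default with logistic regression inside a small Monte Carlo loop; the covariates, sample sizes, and coefficients are invented for the example and are not taken from the paper.

```python
# A minimal Monte Carlo sketch of probability-of-default estimation.
# All coefficients and sample sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
TRUE_BETA = np.array([1.5, -2.0])  # assumed effects: loan-to-value, income

def simulate_loans(n):
    X = rng.normal(size=(n, 2))                  # standardized covariates
    p = 1.0 / (1.0 + np.exp(-(X @ TRUE_BETA)))   # true default probability
    y = rng.binomial(1, p)                       # observed default indicator
    return X, y

estimates = []
for _ in range(100):                             # Monte Carlo replications
    X, y = simulate_loans(10_000)
    model = LogisticRegression().fit(X, y)
    estimates.append(model.coef_.ravel())

estimates = np.array(estimates)
print("mean estimated beta:", estimates.mean(axis=0))   # close to TRUE_BETA
print("Monte Carlo std:    ", estimates.std(axis=0))    # sampling variability
```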

Abstract

Using oil-spill booms as floating barriers must respect environmental conditions, mechanical limitations and operational constraints. Numerical modelling of boom behaviour can be used to prepare or validate booming plans that respect these constraints. We present simulations of boom behaviour during an exercise in Galicia to support existing contingency plans. The main inputs of the simulations are environmental data on meteorology and oceanography, pollution field data and the technical specifications of commercially available booms. The barrier structural analysis uses four-step modelling with an adaptive geometry. The modelled results are used in two ways. Firstly, a preparedness approach is conducted with a three-section boom plan to protect a mussel farm near Puebla del Caramiñal. Secondly, a post-experiment analysis is made with a four-section plan and time-dependent boundary conditions given by the position records of the five GPS buoys deployed during the experiment. This numerical validation of the boom plan is complementary to the operational training of the boom deployment. The model results reproduce the barriers' behaviour during the exercise and improve contingency planning for future response. The proposed approach has been generalized to other environments such as estuaries, ports and lakes.

Abstract

The depth of closure of the beach profile, hereafter termed the DoC, is a key parameter for effective evaluation of beach nourishments or coastal defence works. It is defined, for a given time interval, as the closest depth to the shore at which there is no significant change in seabed elevation and no significant net sediment transport between the nearshore and the offshore. To obtain this point it is necessary to compare profile surveys over a given period of time and evaluate them to find the point in the profile where the depth variation is equal to, or less than, a pre-selected criterion. In order to manage all this information, a software application has been developed. On providing the beach profiles as input, this tool offers the possibility of selecting the dates of the desired period of study, graphing the profiles and then obtaining, for each XY coordinate, all the required parameters, such as offshore distance, maximum, average and minimum depth, standard deviation and area difference between profiles. By evaluating each point along the profile, the DoC can be obtained at the point that meets the criterion. Moreover, this tool can graph not only the initial and final profiles of the period but all the beach profiles recorded, creating their maximum and minimum envelopes. In addition, if the user introduces the parameters of the equilibrium beach profile, the tool also corrects the area difference, taking into account the morphological changes (erosion-accretion) that may have occurred during the period studied. In conclusion, this tool has a friendly interface for obtaining the DoC accurately through interactive selection of the period of study. It also stores all the information and exports it to different formats.
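
A minimal sketch of the DoC criterion described here, assuming two surveyed profiles sampled at common offshore distances and a fixed depth-change threshold; the array layout and the 0.25 m criterion are assumptions for illustration, not values from the tool.

```python
# Illustrative depth-of-closure (DoC) search over two beach profiles.
# Profiles are depths (m) sampled at the same offshore distances (m).
import numpy as np

def depth_of_closure(distance, depth_t0, depth_t1, criterion=0.25):
    """Return (distance, depth) of the first seaward point beyond which
    all depth changes stay within the pre-selected criterion."""
    change = np.abs(depth_t1 - depth_t0)
    for i in range(len(distance)):
        if np.all(change[i:] <= criterion):   # no significant change seaward
            return distance[i], depth_t1[i]
    return None                               # criterion never met

distance = np.linspace(0, 1000, 201)              # m offshore
depth_t0 = 0.01 * distance                        # simple sloping profile
depth_t1 = depth_t0 + 0.8 * np.exp(-distance / 150)  # change decays seaward

print(depth_of_closure(distance, depth_t0, depth_t1))
```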

Open Access
Research article
Factors Influencing the Retreat of the Coastline
M. López, J.I. Pagán, I. López, A.J. Tenza-Abril, J. García-Barba
Available online: 08-31-2027

Abstract

One of the main problems of coastlines around the world is their erosion. Many studies have tried to link coastal erosion with different parameters, such as maritime climate, sediment transport and sea level rise. However, it is unclear to what extent these factors influence coastal erosion. For example, the Intergovernmental Panel on Climate Change (IPCC) has predicted an increase in sea level at a much faster rate than that experienced in the first part of this century, reaching 1 m of elevation in some areas. Another factor to consider is the lack of sediment supply, since the contribution of new sediments from rivers or ravines is currently interrupted by anthropic activities carried out in their basins (dams, channelling, etc.). Large storms, increasingly frequent due to climate change, should also be considered, since they produce offshore sediment transport that carries material beyond the depth of closure, preventing its return to the beach. The sediment also undergoes a process of wear for various reasons, such as dissolution of the carbonate fraction and/or breakage and separation of the components of the particles. All these elements, to a greater or lesser extent, lead to the retreat of the coastline. Therefore, the aim of this study is to analyse the different factors causing the retreat of the coastline, in order to determine the degree of involvement of each of them and thus be able to pose different proposals to reduce the consequences of coastal erosion.

Abstract

An axle bearing is one of the most important components for guaranteeing the service life of a rail car. To ensure a stable and reliable bearing life, it is essential to estimate the fatigue life of an axle bearing under its loading conditions. The fatigue life of a bearing is affected by many parameters, such as material properties, heat treatment, lubrication conditions, operating temperature, loading conditions, bearing geometry and the internal clearance of the bearing. Because these factors are intricately related to one another, it is important to investigate their effects on axle bearing life. This paper presents a process for estimating the fatigue life of a railroad roller bearing that takes the geometric parameters of the bearing into account in the life calculation. The load distributions of the bearing were determined by numerically solving the force and moment equilibrium equations with Lundberg's approximate model. The paper focuses on analyzing the effects of the bearing geometric parameters on the fatigue life using the Taguchi method.
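
As a hedged sketch of the load-distribution step, the snippet below uses the classical zero-clearance simplification for a radial roller bearing, where the roller load follows Q(psi) = Qmax * cos(psi)^(10/9) over the loaded zone and radial equilibrium fixes Qmax. This textbook shortcut stands in for the paper's Lundberg-based force-and-moment solution, and the roller count and applied load are invented values.

```python
# Zero-clearance radial load distribution for a roller bearing
# (textbook simplification; load-deflection exponent 10/9 for rollers).
import numpy as np

Z = 16                  # number of rollers (assumed)
Fr = 50_000.0           # applied radial load, N (assumed)

psi = 2 * np.pi * np.arange(Z) / Z           # roller angular positions
loaded = np.cos(psi) > 0                     # loaded zone |psi| < 90 deg
shape = np.where(loaded, np.cos(psi), 0.0) ** (10 / 9)

# Radial equilibrium: Fr = sum_i Q(psi_i) * cos(psi_i)  ->  solve for Qmax
Qmax = Fr / np.sum(shape * np.cos(psi) * loaded)
Q = Qmax * shape                             # per-roller loads

print(f"Qmax = {Qmax:.0f} N")
print(np.round(Q, 0))                        # load carried by each roller
```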

Abstract

The design and optimisation of a latent heat thermal storage system require knowledge of flow, heat and mass transfer during the melting (charging) and solidification (discharging) processes of high-temperature phase change materials (PCMs). Using Fluent, numerical modelling was performed to study the impact of natural convection and turbulence on the melting process of a high-temperature PCM in a latent heat storage system at Ra = 10^12. The numerical calculation considered a two-dimensional symmetric grid of a dual-tube element in a parallel-flow shell-and-tube configuration, where the heat transfer fluid passes through the tube and the PCM fills the shell. Three melting models were considered: pure conduction; conduction with natural convection; and the latter with turbulence. The first showed a one-dimensional melt front evolving parallel to the tube, which results in lower peak temperatures and temperature gradients and a larger heat transfer area for a longer period of time, but a lower heat transfer rate because natural convection is ignored. The second presented a two-dimensional melt front evolving mainly perpendicular to the tube and shrinking downward, resulting in a loss of heat transfer area and higher peak temperatures and temperature gradients, but a higher heat transfer rate due to the creation of convection cells that facilitate mass and heat transfer. Including turbulence led to a stronger mixing effect due to the higher velocity of the convection cells, resulting in a more uniform process with lower peak temperatures and temperature gradients and a higher heat transfer rate. For a melting process with Ra > 10^11, including convection and turbulence provides more realistic data on flow, mass and heat transfer.
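
For orientation, the Rayleigh number quoted above follows the standard definition Ra = g*beta*dT*L^3/(nu*alpha); the snippet below evaluates it for invented but plausible property values, purely to show how a value of order 10^12 can arise. None of the numbers are taken from the paper.

```python
# Standard Rayleigh number: Ra = g * beta * dT * L**3 / (nu * alpha).
# Property values below are illustrative assumptions, not the paper's data.
g = 9.81         # gravity, m/s^2
beta = 3.0e-4    # thermal expansion coefficient, 1/K
dT = 100.0       # driving temperature difference, K
L = 1.0          # characteristic height, m
nu = 1.0e-6      # kinematic viscosity, m^2/s
alpha = 3.0e-7   # thermal diffusivity, m^2/s

Ra = g * beta * dT * L**3 / (nu * alpha)
print(f"Ra = {Ra:.2e}")   # ~ 1e12, in the convection-dominated regime
```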

Abstract

Polyhydroxyalkanoates (PHAs) are a family of biodegradable and biocompatible polyesters that have recently attracted much industrial attention. The most representative PHA is poly(3-hydroxybutyrate) (PHB), though it presents several shortcomings, such as brittleness and poor impact resistance. 3-Hydroxyhexanoate units can be incorporated into PHB to obtain poly(3-hydroxybutyrate-co-3-hydroxyhexanoate) (PHBHHx), a copolymer with improved mechanical properties, processability and biodegradability that is more suitable for biomedical applications. In this study, chitosan-grafted polycaprolactone (CS-g-PCL)/PHBHHx fiber blends in different compositions were developed by wet electrospinning, and their morphology, biodegradability, and mechanical and tribological properties were investigated. A direct correlation was found between the wear rate and the mechanical properties, indicating that fiber breakage is the mechanism responsible for both abrasive wear and yield. The interactions between the components led to a synergistic effect on tensile and tribological properties at a blend composition of 70/30, resulting in an optimum combination of maximum stiffness, strength, ductility and toughness and minimum coefficient of friction and wear rate, ascribed to the lower porosity and higher crystallinity of this sample. Furthermore, it exhibits the slowest degradation rate. These fiber blends are ideal candidates as scaffolds for tissue engineering applications.

Abstract

This paper discusses the consequences of using different filler metals in the metal inert gas (MIG) welding of aluminium alloy Al 7075 sheet metal joints. Al 7075 is widely used in the automobile and aviation industries owing to its light weight, high strength and high hardness. Fusion welding processes such as MIG and TIG are commonly used for joining aluminium alloys because of their low cost. However, defects often occur in fusion welding because of inaccurate welding parameters and the type of filler metal used. The purpose of this study is to determine whether filler metals with different alloying elements, together with the welding parameters, affect the mechanical properties of welded Al 7075. The welding parameters were current, voltage, welding speed, and argon (Ar) as the shielding gas. Two types of filler metal were used: electrode rod (ER) 4043 and ER5356, which are Al-Si and Al-Mg based, respectively. Microstructure analysis showed that the fusion zone (FZ) of the sample welded with ER4043 has a smaller grain size than that welded with ER5356. Both fillers produced equiaxed dendritic grains in the FZ. Samples welded with ER4043 and ER5356 both have lower hardness in the FZ than in the heat-affected zone (HAZ) and base metal (BM), owing to the differences in their alloying elements (Al-Si for ER4043 and Al-Mg for ER5356). The weld efficiency of the sample welded with ER5356 was 61%, higher than the 43% of the sample welded with ER4043, and both samples fractured in a brittle manner. The sample welded with ER5356 fractured at the HAZ due to porosity, while the sample welded with ER4043 fractured at the FZ due to oxide inclusions.

Abstract

In this study, we develop an efficient topology optimisation method based on the H-matrix method and the boundary element method (BEM). In the sensitivity analyses of topology optimisation, we often need to solve a set of two algebraic equations whose coefficient matrices are identical. For such cases, using a direct solver such as LU decomposition to factorise the coefficient matrix once reduces the computational time of the sensitivity analysis. A coefficient matrix derived by the BEM is, however, fully populated, which makes the LU decomposition numerically expensive. In this research, the LU decomposition is accelerated with the H-matrix method for the sensitivity analyses of topology optimisation problems. We demonstrate the efficiency of the proposed method with a numerical example of a multi-objective optimisation problem for a 2D electromagnetic field.
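
The cost saving this abstract relies on, factorising once and solving for several right-hand sides, can be illustrated with a dense matrix and SciPy's LU routines. This small example assumes a generic dense system in place of the BEM/H-matrix machinery, which the paper uses precisely because plain dense LU scales poorly for large boundary element matrices.

```python
# Factor once, solve twice: the reuse pattern behind the sensitivity analysis.
# A generic dense system stands in for the BEM matrix; the H-matrix method
# in the paper accelerates this factorisation for large, fully populated A.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 500
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
b1 = rng.normal(size=n)                       # first right-hand side
b2 = rng.normal(size=n)                       # second (adjoint) right-hand side

lu, piv = lu_factor(A)        # O(n^3) factorisation, done once
x1 = lu_solve((lu, piv), b1)  # each subsequent solve is only O(n^2)
x2 = lu_solve((lu, piv), b2)

print(np.allclose(A @ x1, b1), np.allclose(A @ x2, b2))  # True True
```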

Abstract

Accurate fruit recognition in natural orchard environments remains a major challenge due to heavy occlusion, illumination variation, and dense clustering. Conventional object detectors, even those incorporating attention mechanisms such as YOLOv7 with attribute attention, often fail to preserve fine spatial details and lose robustness under complex visual conditions. To overcome these limitations, this study proposes DeepHarvestNet, a YOLOv8-based hybrid network that jointly learns depth and visual representations for precise apple detection and localization. The architecture integrates three key modules: (1) Efficient Bidirectional Cross-Attention (EBCA) for handling overlapping fruits and contextual dependencies; (2) Focal Modulation (FM) for enhancing visible apple regions under partial occlusion; and (3) KernelWarehouse Convolution (KWConv) for extracting scale-aware features across varying fruit sizes. In addition, a transformer-based AdaBins depth estimation module enables pixel-wise depth inference, effectively separating foreground fruits from the background to support accurate 3D positioning. Experimental results on a drone-captured orchard dataset demonstrate that DeepHarvestNet achieves a precision of 0.94, recall of 0.95, and F1-score of 0.95—surpassing the enhanced YOLOv7 baseline. The integration of depth cues significantly improves detection reliability and facilitates depth-aware decision-making, underscoring the potential of DeepHarvestNet as a foundation for intelligent and autonomous harvesting systems in precision agriculture.

Abstract

Phenol is a persistent and toxic pollutant in industrial wastewater, demanding efficient and sustainable removal technologies. Conventional treatment methods often suffer from high operational costs, incomplete degradation, and secondary contamination. In this study, ZnO–Fe₂O₃ nanocomposites were synthesized using pulsed laser ablation in liquid (PLAL), a clean, surfactant-free, and environmentally benign route, to develop eco-friendly adsorbents for phenol removal. The structural, morphological, and optical characteristics of the as-prepared nanoparticles were examined using X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-visible spectroscopy, and zeta potential analysis. The 50:50 ZnO–Fe₂O₃ composite demonstrated moderate colloidal stability (−28.54 mV), nanoscale crystallinity, and a heterogeneous surface morphology conducive to adsorption. Batch adsorption experiments at an initial phenol concentration of 100 mg/L revealed a maximum removal efficiency of 68.44% under 600 laser pulses after 50 minutes of contact time. The consistent optical band gap values (2.48–2.50 eV) across all samples indicated structural and electronic stability. The enhanced adsorption efficiency was attributed to synergistic interfacial interactions between ZnO and Fe₂O₃ within the nanocomposite matrix. Although the present work is limited to batch-scale trials under fixed conditions, future studies will investigate the effects of pH, adsorption kinetics, isotherm behavior, and material reusability. Overall, the findings highlight the potential of PLAL-fabricated ZnO–Fe₂O₃ nanocomposites as sustainable adsorbents for aqueous phenol remediation.
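
For reference, batch removal efficiency is conventionally computed as R = (C0 − Ce)/C0 × 100. The one-liner below back-calculates the equilibrium concentration implied by the reported 68.44% removal of a 100 mg/L solution; the relation is a standard one rather than anything specific to this paper.

```python
# Conventional batch adsorption removal efficiency: R = (C0 - Ce) / C0 * 100.
C0 = 100.0                    # initial phenol concentration, mg/L (reported)
R = 68.44                     # removal efficiency, % (reported)
Ce = C0 * (1 - R / 100)       # implied equilibrium concentration
print(f"Ce = {Ce:.2f} mg/L")  # -> 31.56 mg/L of phenol left in solution
```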

Open Access
Research article
Enhancing Real-Time Face Detection Performance Through YOLOv11 and Slicing-Aided Hyper Inference
Muhammad Fachrurrozi, Muhammad Naufal Rachmatullah, Akhiar Wista Arum, Fiber Monado
Available online: 10-13-2025

Abstract

Real-time face detection in crowded scenes remains challenging due to small-scale facial regions, heavy occlusion, and complex illumination, which often degrade detection accuracy and computational efficiency. This study presents an enhanced detection framework that integrates Slicing-Aided Hyper Inference (SAHI) with the YOLOv11 architecture to improve small-face recognition under diverse visual conditions. While YOLOv11 provides a high-speed single-stage detection backbone, it tends to lose fine spatial information through downsampling, limiting its sensitivity to tiny faces. SAHI addresses this limitation by partitioning high-resolution images into overlapping slices, enabling localized inference that preserves structural detail and strengthens feature representation for small targets. The proposed YOLOv11–SAHI system was trained and evaluated on the WIDER Face dataset across Easy, Medium, and Hard difficulty levels. Experimental results demonstrate that the integrated framework achieves Average Precision (AP) scores of 96.33%, 95.87%, and 90.81% for the respective subsets—outperforming YOLOv7, YOLOv5, and other lightweight detectors, and closely approaching RetinaFace accuracy. Detailed error analysis reveals that the combined model substantially enhances small-face detection in dense crowds but remains sensitive to severe occlusion, motion blur, and extreme pose variations. Overall, YOLOv11 coupled with SAHI offers a robust and computationally efficient solution for real-time face detection in complex environments, establishing a foundation for future work on pose-invariant feature learning and adaptive slicing optimization.
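
The slicing idea is straightforward to sketch: tile the image into overlapping windows, run the detector per tile, shift boxes back to image coordinates, and merge duplicates with non-maximum suppression. The `detect` callable below is a hypothetical stand-in for a YOLOv11 model, and the tile size, overlap, and IoU threshold are illustrative choices rather than the paper's settings.

```python
# Sketch of slicing-aided inference: tile, detect per tile, merge with NMS.
# `detect(tile)` is a hypothetical detector returning [x1, y1, x2, y2, score].
import numpy as np

def sliced_inference(image, detect, tile=640, overlap=0.2, iou_thr=0.5):
    H, W = image.shape[:2]
    step = int(tile * (1 - overlap))
    boxes = []
    for y0 in range(0, max(H - tile, 0) + 1, step):
        for x0 in range(0, max(W - tile, 0) + 1, step):
            for x1, y1, x2, y2, s in detect(image[y0:y0 + tile, x0:x0 + tile]):
                boxes.append([x1 + x0, y1 + y0, x2 + x0, y2 + y0, s])
    return nms(np.array(boxes), iou_thr) if boxes else np.empty((0, 5))

def nms(b, thr):
    """Greedy non-maximum suppression over [x1, y1, x2, y2, score] rows."""
    order = b[:, 4].argsort()[::-1]
    keep = []
    area = lambda q: (q[:, 2] - q[:, 0]) * (q[:, 3] - q[:, 1])
    while order.size:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(b[i, 0], b[order[1:], 0])
        yy1 = np.maximum(b[i, 1], b[order[1:], 1])
        xx2 = np.minimum(b[i, 2], b[order[1:], 2])
        yy2 = np.minimum(b[i, 3], b[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (area(b[i:i + 1]) + area(b[order[1:]]) - inter)
        order = order[1:][iou < thr]
    return b[keep]
```

Overlapping tiles keep small faces intact at slice borders; the final NMS pass removes the duplicates that overlap inevitably creates.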

Open Access
Research article
Empirical Modeling of Sediment Deposition in Iraqi Water Channels Through Laboratory Experiments and Field Validation
Atheer Zaki Al-Qaisi, Israa Hussein Ali, Zena Hussein Ali, Fatima Al-Zahraa K. Al-Saeedy, Mustafa A. Al Yousif
Available online: 10-13-2025

Abstract

Sediment deposition in Iraqi water channels represents a persistent constraint on agricultural irrigation and industrial water supply systems. Existing predictive models often neglect the unique hydraulic and sedimentological conditions of arid-region channels, limiting their applicability. This study integrates controlled laboratory experiments with statistical modeling to establish an empirical equation that quantifies sediment deposition mass (D) as a function of flow velocity (V), sediment concentration (C), and channel slope (S). A series of 54 experiments were conducted in a recirculating flume under precisely monitored conditions, including triplicate trials to ensure statistical robustness. The resulting power-law model, D = 0.024·V^(-1.32)·C^(0.89)·S^(-0.75), exhibited strong predictive capability with R² = 0.93, identifying flow velocity as the dominant governing parameter (56% influence). Optimal channel slopes between 5° and 7° were found to minimize deposition. Field validation within the Al-Diwaniyah irrigation network confirmed the model's reliability, achieving 89% agreement between predicted and observed deposition values. These findings provide a practical and region-specific framework for improving channel design and maintenance strategies in arid environments. Future extensions will incorporate computational fluid dynamics (CFD) simulations and IoT-based monitoring to support adaptive sediment management.
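
Power-law models of this form are usually fitted by ordinary least squares after a log transform, since log D = log a + b·log V + c·log C + d·log S is linear in the coefficients. The sketch below generates synthetic data from the reported equation and recovers the exponents; the noise level and variable ranges are assumptions, not the study's measurements.

```python
# Log-linear least-squares fit of D = a * V**b * C**c * S**d.
# Synthetic data generated from the reported coefficients, plus noise.
import numpy as np

rng = np.random.default_rng(42)
n = 54                                        # matches the experiment count
V = rng.uniform(0.2, 2.0, n)                  # flow velocity (assumed range)
C = rng.uniform(50, 500, n)                   # sediment concentration (assumed)
S = rng.uniform(1.0, 9.0, n)                  # channel slope, degrees (assumed)

D = 0.024 * V**-1.32 * C**0.89 * S**-0.75     # reported model
D *= np.exp(rng.normal(0, 0.05, n))           # multiplicative measurement noise

X = np.column_stack([np.ones(n), np.log(V), np.log(C), np.log(S)])
coef, *_ = np.linalg.lstsq(X, np.log(D), rcond=None)

print("a =", np.exp(coef[0]))                 # ~0.024
print("b, c, d =", coef[1:])                  # ~(-1.32, 0.89, -0.75)
```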

Abstract

The integration of heterogeneous medical data remains a major challenge for clinical decision support systems (CDSS). Most existing deep learning (DL) approaches rely primarily on imaging modalities, overlooking the complementary diagnostic value of electronic health records (EHR) and physiological signals such as electrocardiograms (ECG). This study introduces MIMIC-EYE, a secure and explainable multi-modal framework that fuses ECG, chest X-ray (CXR), and MIMIC-III EHR data to enhance diagnostic performance and interpretability. The framework employs a rigorous preprocessing pipeline combining min–max scaling, multiple imputation by chained equations (MICE), Hidden Markov Models (HMMs), Deep Kalman Filters (DKF), and denoising autoencoders to extract robust latent representations. Multi-modal features are fused through concatenation and optimized using a Hybrid Slime Mould–Moth Flame (HSMMF) strategy for feature selection. The predictive module integrates ensemble DL architectures with attention mechanisms and skip connections to capture complex inter-modal dependencies. Model explainability is achieved through Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), enabling transparent clinical reasoning. Experimental results demonstrate superior performance, achieving 98.41% accuracy, 98.99% precision, and 98.0% sensitivity—outperforming state-of-the-art baselines. The proposed MIMIC-EYE framework establishes a secure, interpretable, and generalizable foundation for trustworthy AI-driven decision support in critical care environments.
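
Of the preprocessing steps listed, min-max scaling and concatenation fusion are the simplest to make concrete. The sketch below scales two hypothetical modality feature blocks to [0, 1] and concatenates them; it stands in for the far richer ECG/CXR/EHR pipeline described above, and all array shapes are invented.

```python
# Per-modality min-max scaling, then concatenation fusion (illustrative only).
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(7)
ecg_features = rng.normal(60, 15, size=(100, 32))  # hypothetical ECG block
ehr_features = rng.normal(0, 5, size=(100, 12))    # hypothetical EHR block

fused = np.concatenate(
    [MinMaxScaler().fit_transform(ecg_features),   # each block scaled to [0, 1]
     MinMaxScaler().fit_transform(ehr_features)],
    axis=1,
)
print(fused.shape)  # (100, 44) fused feature matrix for the predictive module
```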

Abstract

Command transmission to drones under Ultra-Reliable Low Latency Communication (URLLC) requirements is challenging. This paper addresses the minimization of the Packet Error Rate (PER) in an Unmanned Aerial Vehicle (UAV) relay system that transmits commands under URLLC requirements. The problem is solved through joint optimization of block-length allocation and UAV placement. To tackle these challenges, the optimization problem was split into two sub-problems so as to analyze the convexity and monotonicity of each. An iterative optimization algorithm for PER minimization was then formulated, combining the Alternating Direction Method of Multipliers (ADMM) with the bisection search method through a perturbation-based iterative approach. Simulation results confirm that the proposed algorithm achieves up to a 16.42% improvement in computation time and up to 57.14% in convergence speed compared with an algorithm using the bisection method alone for both sub-problems, while matching the performance of the exhaustive search method.
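
To make the block-length sub-problem concrete, a hedged sketch: with a total of M channel uses split between the two hops and a unimodal end-to-end error surrogate, a simple ternary search finds the best allocation. The error model below is a generic placeholder (exponentially decaying per-hop error), not the paper's finite-blocklength expressions, and all constants are assumptions.

```python
# Ternary search over the block-length split m -> (m, M - m) of a UAV relay,
# assuming a unimodal end-to-end packet-error surrogate (placeholder model).
import math

M = 400                              # total channel uses (assumed)

def per(m):
    """Placeholder end-to-end PER: each hop's error decays with its
    blocklength; the second hop is assumed to have the weaker channel."""
    e1 = math.exp(-0.05 * m)         # hop 1 error, illustrative
    e2 = math.exp(-0.03 * (M - m))   # hop 2 error, illustrative
    return e1 + e2 - e1 * e2         # packet fails if either hop fails

lo, hi = 1, M - 1
while hi - lo > 2:                   # ternary search on a unimodal function
    m1, m2 = lo + (hi - lo) // 3, hi - (hi - lo) // 3
    if per(m1) < per(m2):
        hi = m2
    else:
        lo = m1

best = min(range(lo, hi + 1), key=per)
print(f"best split: hop1 = {best}, hop2 = {M - best}, PER = {per(best):.3e}")
```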
