Information Dynamics and Applications (IDA)
ISSN (print): 2958-1486
ISSN (online): 2958-1494
2025: Vol. 4

Information Dynamics and Applications (IDA) is a peer-reviewed, open-access journal devoted to the dynamic nature and diverse applications of information technology and its related fields. The journal explores both the underlying principles and the practical impacts of information technology, bridging theoretical research with real-world applications. Beyond the traditional aspects of the field, IDA also covers emerging trends and innovations. Published quarterly by Acadlore, the journal releases its four issues in March, June, September, and December each year.

  • Professional Service - Every article submitted undergoes an intensive yet swift peer review and editing process, adhering to the highest publication standards.

  • Prompt Publication - Thanks to our expertise in orchestrating the peer-review, editing, and production processes, all accepted articles are published rapidly.

  • Open Access - Every published article is instantly accessible to a global readership, allowing for uninhibited sharing across various platforms at any time.

Editors-in-Chief (2)
Turker Tuncer
Digital Forensics Engineering, Firat University, Turkey
turkertuncer@firat.edu.tr | website
Research interests: Feature Engineering; Image Processing; Signal Processing; Information Security; Pattern Recognition
Gengxin Sun
Computer Science and Technology, Qingdao University, China
sungengxin@qdu.edu.cn | website
Research interests: Big Data; Artificial Intelligence; Complex Networks; Social Computing

Aims & Scope

Aims

Information Dynamics and Applications (IDA), as an international open-access journal, stands at the forefront of exploring the dynamics and expansive applications of information technology. This fully refereed journal delves into the heart of interdisciplinary research, focusing on critical aspects of information processing, storage, and transmission. With a commitment to advancing the field, IDA serves as a crucible for original research, encompassing reviews, research papers, short communications, and special issues on emerging topics. The journal particularly emphasizes innovative analytical and application techniques in various scientific and engineering disciplines.

IDA aims to provide a platform where detailed theoretical and experimental results can be published without constraints on length, encouraging comprehensive disclosure for reproducibility. The journal prides itself on the following attributes:

  • Every publication benefits from prominent indexing, ensuring widespread recognition.

  • A distinguished editorial team upholds unparalleled quality and broad appeal.

  • Seamless online discoverability of each article maximizes its global reach.

  • An author-centric and transparent publication process enhances submission experience.

Scope

The scope of IDA is diverse and expansive, encompassing a wide range of topics within the realm of information technology:

  • Artificial Intelligence (AI) and Machine Learning (ML): Investigating the latest developments in AI and ML, and their applications across various industries.

  • Digitalization and Data Science: Exploring the transformation brought about by digital technologies and the analytical power of data science.

  • Signal Processing and Simulation Optimization: Advancements in the field of signal processing, including audio, video, and communication signal processing, and the development of optimization techniques for simulations.

  • Social Networking and Ubiquitous Computing: Research on the impact of social media on society and the pervasiveness of computing in everyday life.

  • Industrial Engineering and Information Architecture: Studies on the integration of information technology in industrial engineering and the structuring of information systems.

  • Internet of Things (IoT): Delving into the connected world of IoT and its implications for smart cities, healthcare, and more.

  • Data Mining, Storage, and Manipulation: Techniques and innovations in extracting valuable insights from large data sets, and the management of data storage and manipulation.

  • Database Management and Decision Support Systems: Exploring advanced database management systems and the development of decision support systems.

  • Enterprise Systems and E-Commerce: The evolution and future of enterprise resource planning systems and the impact of e-commerce on global markets.

  • Knowledge-Based Systems and Robotics: The intersection of knowledge-based systems with robotics and automation.

  • Cybersecurity and Software as a Service (SaaS): Cutting-edge research in cybersecurity and the growing trend of SaaS in business and consumer applications.

  • Supply Chain Management and Systems Analysis: Innovations in supply chain management driven by information technology, and systems analysis in complex IT environments.

  • Quantum Computing and Optimization: The role of quantum computing in solving complex problems and its future potential.

  • Virtual and Augmented Reality: Exploring the implications of virtual and augmented reality technologies in education, training, entertainment, and more.

Articles
Recent Articles

Abstract


Dealing with privacy and security has become more complicated with the emergence of the big data era. Privacy, data value, and system efficiency must be balanced using multiple solutions in current analytics. In this paper, privacy-preserving techniques were selected and reviewed for big data analysis to reduce threats to healthcare data. Various security solutions, including k-anonymity, differential privacy, homomorphic encryption, and secure multi-party computation (SMPC), were implemented and examined using the Medical Information Mart for Intensive Care III (MIMIC-III) healthcare dataset. Each method was assessed with respect to security, running time, capacity for handling large data sets, data utility, and regulatory compliance. Differential privacy maintained a balance between privacy and utility at the cost of additional computational resources. Homomorphic encryption secured the data but was difficult to operate and slowed down computation. Moreover, achieving scalability with SMPC required a significant amount of computing power. Although k-anonymity preserved data utility, it was vulnerable to certain types of attacks. Protecting privacy in big data limits system performance; differential privacy and SMPC proved highly effective for analyzing private data, but such approaches should be further optimized to handle real-time processing in big data applications. Experimental evaluation showed that processing 10,000 patient records using differential privacy took an average of 2.3 seconds per query and retained 92% of data utility, while homomorphic encryption required 15.7 seconds per query with 88% utility retention. SMPC achieved a high degree of privacy at 12.5 seconds per query but slightly reduced scalability. As recommended in this study, the implementation of privacy-focused solutions in big data could help researchers and companies establish appropriate privacy policies in healthcare and other similar areas.
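To make the differential-privacy trade-off concrete, the standard Laplace mechanism for a counting query can be sketched as follows. This is a minimal illustration of the general technique, not the authors' code; the toy patient records, the predicate, and the epsilon value are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Epsilon-DP count query: a counting query has L1 sensitivity 1,
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical toy records standing in for MIMIC-III rows
patients = [{"age": a} for a in (34, 67, 45, 71, 52, 80, 29)]
noisy = private_count(patients, lambda p: p["age"] >= 65, epsilon=1.0)
# noisy is a randomized estimate near the true count of 3
```

Smaller epsilon injects more noise (stronger privacy, lower utility), which is the balance the abstract reports tuning with additional resources.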

Abstract

Accurate and reliable detection of apples in complex orchard environments remains a challenging task due to varying illumination, cluttered backgrounds, and overlapping fruits. In this paper, these difficulties were tackled with a novel edge-enhanced detection framework that integrates dynamic image smoothing, entropy-based edge amplification, and directional energy-driven contour extraction. An adaptive smoothing filter was adopted with a sigmoid-based weighting function to selectively preserve edge structures while suppressing noise in homogeneous regions. The input Red-Green-Blue (RGB) image was subsequently transformed into the Hue, Saturation, and Value (HSV) color space to exploit hue information, thereby improving color-based feature discrimination. Edge detection was strengthened by a hybrid entropy-weighted gradient scheme, in which local image entropy modulated gradient magnitudes to emphasize structured regions. A global threshold was then applied to refine the enhanced edge map. Finally, continuous apple contours were extracted using a direction-constrained energy propagation approach, in which connected edge pixels were traced along compass orientations, ensuring accurate contour assembly even under occlusion or low contrast. Experimental evaluations confirmed that the proposed framework substantially improved boundary detection accuracy across diverse imaging conditions, highlighting its potential for automated fruit detection and precision harvesting.
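The sigmoid-based edge-preserving weighting described in this abstract can be sketched in a few lines. The threshold and steepness parameters below are illustrative assumptions, not values from the paper, and the per-pixel blend is a generic stand-in for the authors' adaptive filter.

```python
import math

def sigmoid_weight(grad_mag: float, t: float = 20.0, k: float = 0.3) -> float:
    """Sigmoid edge weight: close to 1 on strong gradients (preserve the
    edge), close to 0 in flat regions (allow smoothing). t is a
    hypothetical gradient threshold, k a hypothetical steepness."""
    return 1.0 / (1.0 + math.exp(-k * (grad_mag - t)))

def smooth_pixel(center: float, neighbor_mean: float, grad_mag: float) -> float:
    # Blend the raw pixel with its local mean according to edge strength:
    # strong gradient -> keep the pixel; weak gradient -> average it out.
    w = sigmoid_weight(grad_mag)
    return w * center + (1.0 - w) * neighbor_mean
```

Applied across the image, this selectively suppresses noise in homogeneous regions while leaving apple boundaries sharp, which is the behavior the framework relies on before edge amplification.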

Abstract


Hyperparameter search often makes poor use of compute resources, as surrogate-based optimizers consume extensive memory and demand long set-up times, while projects running on fixed budgets require lean tuning tools. The current study presents the Bounding Box Tuner (BBT) and tests its capability to attain maximum validation accuracy while reducing tuning time and memory use. The project team compared BBT with Random Search, Gaussian Processes for Bayesian Optimization (GP-Bayesian), the Tree-Structured Parzen Estimator (TPE), Evolutionary Search, and Local Search. Modified National Institute of Standards and Technology (MNIST) classification with a multilayer perceptron (0.11 M weights) and a Tiny Vision Transformer (TinyViT) (9.5 M weights) was adopted. Each optimizer was assigned 50 trials. During each trial, early pruning stopped a run if validation loss rose for four consecutive epochs. All tests used a single NVIDIA GTX 1650 Ti GPU; the key metrics were best validation accuracy, total search time, and time per trial. On the perceptron task, BBT reached 97.88% validation accuracy in 1994 s, whereas TPE obtained 97.98% in 2976 s. On TinyViT, BBT achieved 94.92% in 2364 s, while GP-Bayesian gained 94.66% in 2191 s. BBT kept accuracy within 0.1 percentage points of the best competitor and reduced tuning time by one-third. The algorithm renders a surrogate model unnecessary, enforces constraints by design, and exposes only three user parameters. Supported by these benefits, BBT was considered a practical option for rapid, resource-aware hyperparameter optimization in deep-learning pipelines.
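The early-pruning rule the experiments apply (stop a trial when validation loss rises for four consecutive epochs) can be sketched as a small stateful helper. This is a generic illustration of that rule, not code from the paper; the loss sequence is made up.

```python
class EarlyPruner:
    """Stop a trial when validation loss has risen for `patience`
    consecutive epochs, matching the rule described in the abstract."""

    def __init__(self, patience: int = 4):
        self.patience = patience
        self.prev_loss = float("inf")
        self.rising = 0

    def should_stop(self, val_loss: float) -> bool:
        # Count consecutive epochs where the loss got worse
        if val_loss > self.prev_loss:
            self.rising += 1
        else:
            self.rising = 0
        self.prev_loss = val_loss
        return self.rising >= self.patience

pruner = EarlyPruner(patience=4)
losses = [0.90, 0.70, 0.60, 0.62, 0.65, 0.69, 0.74]  # rises 4 epochs in a row
stops = [pruner.should_stop(l) for l in losses]
# the final epoch triggers the stop; all earlier epochs continue
```

Pruning hopeless trials early is what lets all six optimizers fit 50 trials into the reported wall-clock budgets.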

Abstract

A novel electronic voting system (EVS) was developed by integrating blockchain technology and advanced facial recognition to enhance electoral security, transparency, and accessibility. The system integrates a public, permissionless blockchain—specifically the Ethereum platform—to ensure end-to-end transparency and immutability throughout the voting lifecycle. To reinforce identity verification while preserving voter privacy, a facial recognition technology based on the ArcFace algorithm was employed. This biometric approach enables secure, contactless voter authentication, mitigating risks associated with identity fraud and multiple voting attempts. The confluence of blockchain technology and facial recognition in a unified architecture was shown to improve system robustness against tampering, data breaches, and unauthorized access. The proposed system was designed within a rigorous research framework, and its technical implementation was critically assessed in terms of security performance, scalability, user accessibility, and system latency. Furthermore, potential ethical implications and privacy considerations were addressed through the use of decentralized identity management and encrypted biometric data storage. The integration strategy not only enhances the verifiability and auditability of election outcomes but also promotes greater inclusivity by enabling remote participation without compromising system integrity. This study contributes to the evolving field of electronic voting by demonstrating how advanced biometric verification and distributed ledger technologies can be synchronously leveraged to support democratic processes. The findings are expected to inform future deployments of secure, accessible, and transparent electoral platforms, offering practical insights for governments, policymakers, and technology developers aiming to modernize electoral systems in a post-digital era.

Abstract


To address the issue of estimating energy consumption in computer systems, this study investigates the contribution of various hardware parameters to energy fluctuations, as well as the correlation between these parameters. Based on this analysis, the CMP (Chip Multiprocessors) model was proposed, selecting the most representative and monitorable parameters that reflect changes in system energy consumption. The CMP model adapts to different task states of the computer system by identifying the primary components driving energy consumption under varying conditions. Energy consumption was then estimated by monitoring these dominant parameters. Experiments across various task states demonstrate that the CMP model outperforms the traditional FAN (Fuzzy Attack Net) and Cubic models, particularly when the computer system engages in data-intensive tasks.

Open Access
Research article
Crowd Density Estimation via a VGG-16-Based CSRNet Model
Damla Tatlıcan, Nafiye Nur Apaydin, Orhan Yaman, Mehmet Karakose
Available online: 04-29-2025

Abstract

Accurate crowd density estimation has become critical in applications ranging from intelligent urban planning and public safety monitoring to marketing analytics and emergency response. In recent developments, various methods have been used to enhance the precision of crowd analysis systems. In this study, a Convolutional Neural Network (CNN)-based approach was presented for crowd density detection, wherein the Congested Scene Recognition Network (CSRNet) architecture was employed with a Visual Geometry Group (VGG)-16 backbone. This method was applied to two benchmark datasets—Mall and Crowd-UIT—to assess its effectiveness in real-world crowd scenarios. Density maps were generated to visualize spatial distributions, and performance was quantitatively evaluated using Mean Squared Error (MSE) and Mean Absolute Error (MAE) metrics. For the Mall dataset, the model achieved an MSE of 0.08 and an MAE of 0.10, while for the Crowd-UIT dataset, an MSE of 0.05 and an MAE of 0.15 were obtained. These results suggest that the proposed VGG-16-based CSRNet model yields high accuracy in crowd estimation tasks across varied environments and crowd densities. Additionally, the model demonstrates robustness in generalizing across different dataset characteristics, indicating its potential applicability in both surveillance systems and public space management. The outcomes of this investigation offer a promising direction for future research in data-driven crowd analysis, particularly in enhancing predictive reliability and real-time deployment capabilities of deep learning models for population monitoring tasks.
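Models such as CSRNet are trained against ground-truth density maps in which each annotated head is replaced by a normalized Gaussian, so that the map integrates to the person count. A minimal pure-Python sketch of that standard construction (the image size, head coordinates, and sigma below are illustrative assumptions, not dataset values):

```python
import math

def density_map(h, w, points, sigma=4.0):
    """Build a ground-truth crowd density map: one Gaussian per annotated
    head, each normalized so it contributes exactly 1 to the total sum."""
    dmap = [[0.0] * w for _ in range(h)]
    for (px, py) in points:
        # Evaluate an unnormalized Gaussian centered on the head location
        g = [[math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * sigma ** 2))
              for x in range(w)] for y in range(h)]
        s = sum(map(sum, g))
        for y in range(h):
            for x in range(w):
                dmap[y][x] += g[y][x] / s
    return dmap

heads = [(16, 20), (32, 32), (50, 45)]   # hypothetical (x, y) annotations
dmap = density_map(64, 64, heads)
count = sum(map(sum, dmap))   # sums to 3.0, one unit per annotated head
```

Summing the network's predicted map in the same way is how a count is read off at inference time, and the MSE/MAE figures above compare such predicted counts against the annotations.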

Abstract

The accurate segmentation of visual data into semantically meaningful regions remains a critical task across diverse domains, including medical diagnostics, satellite imagery interpretation, and automated inspection systems, where precise object delineation is essential for subsequent analysis and decision-making. Conventional segmentation techniques often suffer from limitations such as sensitivity to noise, intensity inhomogeneity, and weak boundary definition, resulting in reduced performance under complex imaging conditions. Although fuzzy set-based approaches have been proposed to improve adaptability under uncertainty, they frequently fail to maintain a balance between segmentation precision and robustness. To address these challenges, a novel segmentation framework was developed based on Pythagorean Fuzzy Sets (PyFSs) and local averaging, offering enhanced performance in uncertain and heterogeneous visual environments. By incorporating both membership and non-membership degrees, PyFSs allow a more flexible representation of uncertainty compared to classical fuzzy models. A local average intensity function was introduced, wherein the contribution of each pixel was adaptively weighted according to its PyFS membership degree, improving resistance to local intensity variations. An energy functional was formulated by integrating PyFS-driven intensity constraints, local statistical deviation measures, and regularization terms, ensuring precise boundary localization through level set evolution. Convexity of the energy formulation was analytically demonstrated to guarantee the stability of the optimization process. Experimental evaluations revealed that the proposed method consistently outperforms existing fuzzy and non-fuzzy segmentation algorithms, achieving superior accuracy in applications such as medical image analysis and natural scene segmentation. These results underscore the potential of PyFS-based models as a powerful and generalizable solution for uncertainty-resilient image segmentation in real-world applications.
Open Access
Research article
RTCNet: A Robust Hybrid Deep Learning Model for Soil Property Prediction Under Noisy Conditions
Pape El Hadji Abdoulaye Gueye, Cherif Bachir Deme, Adrien Basse
Available online: 03-30-2025

Abstract

Accurate prediction of soil fertility and soil organic carbon (SOC) plays a critical role in precision agriculture and sustainable soil management. However, the high spatial-temporal variability inherent in soil properties, compounded by the prevalence of noisy data in real-world conditions, continues to pose significant modeling challenges. To address these issues, a robust hybrid deep learning model, termed RTCNet, was developed by integrating Recurrent Neural Networks (RNNs), Transformer architectures, and Convolutional Neural Networks (CNNs) into a unified predictive framework. Within RTCNet, a one-dimensional convolutional layer was employed for initial feature extraction, followed by MaxPooling for dimensionality reduction, while sequential dependencies were captured using RNN layers. A multi-head attention mechanism was embedded to enhance the representation of inter-variable relationships, thereby improving the model’s ability to handle complex soil data patterns. RTCNet was benchmarked against two conventional models—Artificial Neural Network (ANN) optimized with a Genetic Algorithm (GA), and a Transformer-CNN hybrid model. Under noise-free conditions, RTCNet achieved the lowest Mean Squared Error (MSE) of 0.1032 and Mean Absolute Error (MAE) of 0.1852. Notably, under increasing noise levels, RTCNet consistently maintained stable performance, whereas the comparative models exhibited significant performance degradation. These findings underscore RTCNet’s superior resilience and adaptability, affirming its utility in field-scale agricultural applications where sensor noise, data sparsity, and environmental fluctuations are prevalent. The demonstrated robustness and predictive accuracy of RTCNet position it as a valuable tool for optimizing nutrient management strategies, enhancing SOC monitoring, and supporting informed decision-making in sustainable farming systems.

Abstract

Enhancing the sharpness of blurred images remains a critical and persistent issue in image restoration and processing, requiring precise techniques to recover lost details and improve visual clarity. This study proposes a novel model that combines the strengths of fuzzy systems with mathematical transformations to address the complexities of blurred image restoration. The model operates through a multi-stage framework, beginning with pixel coordinate transformations and corrections to account for geometric distortions caused by blurring. Fuzzy logic is employed to handle uncertainties in blur estimation, utilizing membership functions to categorize blur levels and a rule-based system to dynamically adapt corrective actions. The fusion of fuzzy logic and mathematical transformations ensures localized and adaptive corrections, effectively restoring sharpness in blurred regions while preserving regions with minimal distortion. Additionally, fuzzy edge enhancement is introduced to emphasize edges and suppress noise, further improving image quality. The final restoration process includes normalization and structural constraints to ensure the output aligns with the original unblurred image. Experimental results demonstrate the performance and reliability of the developed framework in restoring clarity, preserving fine details, and minimizing artifacts, making it a robust solution for diverse blurring scenarios. The proposed approach offers a significant advancement in blurred image restoration, combining the adaptability of fuzzy logic with the precision of mathematical computations to achieve superior results.

Abstract

Quantum-enhanced sensing has emerged as a transformative technology with the potential to surpass classical sensing modalities in precision and sensitivity. This study explores the advancements and applications of quantum-enhanced sensing, emphasizing its capacity to bridge fundamental physics and practical implementations. The current progress in experimental demonstrations of quantum-enhanced sensing systems was reviewed, focusing on breakthroughs in metrology and the development of physically realizable sensor architectures. Two practical implementations of quantum-enhanced sensors based on trapped ions were proposed. The first design utilizes Ramsey interferometry with spin-squeezed atomic ensembles, employing laser-induced spin-exchange interactions to reconstruct the sensing Hamiltonian. This approach enables measurement rates to scale with the number of sensing atoms, achieving sensitivity enhancements beyond the standard quantum limit (SQL). The second implementation introduces mean-field interactions mediated by coupled optical cavities that share coherent atomic probes, enabling the realization of high-performance sensing systems. Both sensor systems were demonstrated to be feasible on state-of-the-art ion-trap platforms, offering promising benchmarks for future applications in metrology and imaging. Particular attention was given to the integration of quantum-enhanced sensing with complementary imaging technologies, which continues to gain traction in medical imaging and other fields. The mutual reinforcement of quantum and complementary technologies is increasingly supported by significant investments from governmental, academic, and commercial entities. The ongoing pursuit of improved measurement resolution and imaging fidelity underscores the interdependence of these innovations, advancing the transition of quantum-enhanced sensing from fundamental research to widespread practical use.

Abstract

Accurate detection of road cracks is essential for maintaining infrastructure integrity, ensuring road safety, and preventing costly structural damage. However, challenges such as varying illumination conditions, noise, irregular crack patterns, and complex background textures often hinder reliable detection. To address these issues, a novel Fuzzy-Powered Multi-Scale Optimization (FMSO) model was proposed, integrating adaptive fuzzy operators, multi-scale level set evolution, Dynamic Graph Energy Minimization (GEM), and Hybrid Swarm Optimization (HSO). The FMSO model employs multi-resolution segmentation, entropy-based fuzzy weighting, and adaptive optimization strategies to enhance detection accuracy, while adaptive fuzzy operators mitigate the impact of illumination variations. Multi-scale level set evolution refines crack boundaries with high precision, and GEM effectively separates cracks from intricate backgrounds. Furthermore, HSO dynamically optimizes segmentation parameters, ensuring improved accuracy. The model was rigorously evaluated using multiple benchmark datasets, with performance metrics including accuracy, precision, recall, and F1-score. Experimental results demonstrate that the FMSO model surpasses existing methods, achieving superior accuracy, enhanced precision, and higher recall. Notably, the model effectively reduces false positives while maintaining sensitivity to fine crack details. The integration of fuzzy logic and multi-scale optimization techniques renders the FMSO model highly adaptable to varying road conditions and imaging environments, making it a robust solution for infrastructure maintenance. This approach not only advances the field of road crack detection but also provides a scalable framework for addressing similar challenges in other domains of image analysis and pattern recognition.

Abstract

This study presents a novel image restoration method, designed to enhance defective fuzzy images, by utilizing the Fuzzy Einstein Geometric Aggregation Operator (FEGAO). The method addresses the challenges posed by non-linearity, uncertainty, and complex degradation in defective images. Traditional image enhancement approaches often struggle with the imprecision inherent in defect detection. In contrast, FEGAO employs the Einstein t-norm and t-conorm for non-linear aggregation, which refines pixel coordinates and improves the accuracy of feature extraction. The proposed approach integrates several techniques, including pixel coordinate extraction, regional intensity refinement, multi-scale Gaussian correction, and a layered enhancement framework, thereby ensuring superior preservation of details and minimization of artifacts. Experimental evaluations demonstrate that FEGAO outperforms conventional methods in terms of image resolution, edge clarity, and noise robustness, while maintaining computational efficiency. Comparative analysis further underscores the method’s ability to preserve fine details and reduce uncertainty in defective images. This work offers significant advancements in image restoration by providing an adaptive, efficient solution for defect detection, machine vision, and multimedia applications, establishing a foundation for future research in fuzzy logic-based image processing under degraded conditions.
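The Einstein t-norm and t-conorm named in this abstract are standard fuzzy aggregation operators with closed-form definitions. The sketch below shows only those two operators in isolation, as an illustration of the aggregation step; it is not the FEGAO pipeline itself.

```python
def einstein_tnorm(a: float, b: float) -> float:
    """Einstein product on [0, 1]: T(a, b) = ab / (1 + (1-a)(1-b)).
    Dampens the aggregate more strongly than the plain product."""
    return (a * b) / (2.0 - (a + b - a * b))

def einstein_tconorm(a: float, b: float) -> float:
    """Einstein sum on [0, 1], the dual operator: S(a, b) = (a+b) / (1+ab).
    Grows more slowly than the probabilistic sum near 1."""
    return (a + b) / (1.0 + a * b)

# Aggregating two illustrative membership degrees
low = einstein_tnorm(0.5, 0.5)      # 0.2  (stricter than 0.25 product)
high = einstein_tconorm(0.5, 0.5)   # 0.8  (De Morgan dual of the above)
```

The two operators satisfy the usual De Morgan duality, S(a, b) = 1 - T(1-a, 1-b), which is what makes them a consistent pair for combining membership and non-membership evidence.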