Information Dynamics and Applications (IDA)
ISSN (print): 2958-1486
ISSN (online): 2958-1494

Information Dynamics and Applications (IDA) is a peer-reviewed, open-access journal focusing primarily on the dynamic nature and diverse applications of information technology and its related fields. Distinguishing itself from other journals in the domain, IDA explores both the underlying principles and the practical impacts of information technology, bridging theoretical research with real-world applications. The journal covers not only the traditional aspects of information technology but also emerging trends and innovations. Published quarterly by Acadlore, IDA typically releases its four issues in March, June, September, and December each year.

  • Professional Service - Every article submitted undergoes an intensive yet swift peer review and editing process, adhering to the highest publication standards.

  • Prompt Publication - Thanks to our expertise in orchestrating the peer-review, editing, and production processes, all accepted articles are published rapidly.

  • Open Access - Every published article is instantly accessible to a global readership, allowing for uninhibited sharing across various platforms at any time.

Editor-in-Chief
Turker Tuncer
Digital Forensics Engineering, Firat University, Turkey
turkertuncer@firat.edu.tr
Research interests: Feature Engineering; Image Processing; Signal Processing; Information Security; Pattern Recognition

Aims & Scope

Aims

Information Dynamics and Applications (IDA), as an international open-access journal, stands at the forefront of exploring the dynamics and expansive applications of information technology. This fully refereed journal delves into the heart of interdisciplinary research, focusing on critical aspects of information processing, storage, and transmission. With a commitment to advancing the field, IDA serves as a crucible for original research, encompassing reviews, research papers, short communications, and special issues on emerging topics. The journal particularly emphasizes innovative analytical and application techniques in various scientific and engineering disciplines.

IDA aims to provide a platform where detailed theoretical and experimental results can be published without constraints on length, encouraging comprehensive disclosure for reproducibility. The journal prides itself on the following attributes:

  • Every publication benefits from prominent indexing, ensuring widespread recognition.

  • A distinguished editorial team upholds unparalleled quality and broad appeal.

  • Seamless online discoverability of each article maximizes its global reach.

  • An author-centric and transparent publication process enhances the submission experience.

Scope

The scope of IDA is diverse and expansive, encompassing a wide range of topics within the realm of information technology:

  • Artificial Intelligence (AI) and Machine Learning (ML): Investigating the latest developments in AI and ML, and their applications across various industries.

  • Digitalization and Data Science: Exploring the transformation brought about by digital technologies and the analytical power of data science.

  • Signal Processing and Simulation Optimization: Advancements in the field of signal processing, including audio, video, and communication signal processing, and the development of optimization techniques for simulations.

  • Social Networking and Ubiquitous Computing: Research on the impact of social media on society and the pervasiveness of computing in everyday life.

  • Industrial Engineering and Information Architecture: Studies on the integration of information technology in industrial engineering and the structuring of information systems.

  • Internet of Things (IoT): Delving into the connected world of IoT and its implications for smart cities, healthcare, and more.

  • Data Mining, Storage, and Manipulation: Techniques and innovations in extracting valuable insights from large data sets, and the management of data storage and manipulation.

  • Database Management and Decision Support Systems: Exploring advanced database management systems and the development of decision support systems.

  • Enterprise Systems and E-Commerce: The evolution and future of enterprise resource planning systems and the impact of e-commerce on global markets.

  • Knowledge-Based Systems and Robotics: The intersection of knowledge-based systems with robotics and automation.

  • Cybersecurity and Software as a Service (SaaS): Cutting-edge research in cybersecurity and the growing trend of SaaS in business and consumer applications.

  • Supply Chain Management and Systems Analysis: Innovations in supply chain management driven by information technology, and systems analysis in complex IT environments.

  • Quantum Computing and Optimization: The role of quantum computing in solving complex problems and its future potential.

  • Virtual and Augmented Reality: Exploring the implications of virtual and augmented reality technologies in education, training, entertainment, and more.

Recent Articles

Abstract

In a highly competitive telecommunications environment, customer behavior data has become an important source of organizational knowledge for service innovation and strategic decision-making. The ability to transform large-scale user data into actionable knowledge is essential for effective customer retention and sustainable business development. This study develops a knowledge discovery framework that integrates a denoising autoencoder with an enhanced stacking learning strategy to support customer retention innovation. The denoising autoencoder is employed to extract latent behavioral representations from complex and noisy user data, enabling the identification of underlying patterns that are difficult to capture through conventional statistical features. These latent representations are further combined with structured indicators and integrated through a stacking ensemble composed of decision trees, random forests, and XGBoost to achieve robust knowledge fusion. Empirical results show that the proposed framework provides more reliable identification of high-risk customers and improves decision support quality in terms of accuracy and area under the curve (AUC). The study demonstrates how artificial intelligence can serve as a mechanism for organizational knowledge creation and offers practical implications for data-driven service innovation and resource allocation in the telecommunications sector.
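The study's exact pipeline is not reproduced here, but the knowledge-fusion stage it describes, a stacking ensemble over decision trees, random forests, and XGBoost, can be sketched with scikit-learn. In the sketch below the autoencoder features are stood in for by random placeholders, and the logistic-regression meta-learner is an assumption, not the authors' stated configuration.

```python
# Minimal sketch of the stacking stage (placeholder data; the denoising-
# autoencoder features are assumed to be precomputed and concatenated
# with the structured indicators).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))                        # fused feature vectors
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # churn labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=5)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
    ],
    final_estimator=LogisticRegression(),   # meta-learner: an assumption
    cv=5,
)
stack.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```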
Open Access | Research Article
An AI-Powered Adaptive Learning Framework for Personalized Education
Habitam Asimare Sendeku, Ravuri Daniel, Gaddam Venu Gopal
Available online: 12-17-2025

Abstract


An adaptive learning framework driven by artificial intelligence (AI) was proposed in which cognitive, emotional, and cultural dimensions of learner diversity were jointly modeled to address heterogeneous educational needs in a personalized and inclusive manner. Within the proposed system, learner adaptation was achieved through the coordinated deployment of multiple machine learning paradigms: models based on Decision Trees (DTs) were employed to dynamically align instructional content with learners’ cognitive profiles, Recurrent Neural Networks (RNNs) were utilized to capture temporal patterns in emotional engagement, and Collaborative Filtering (CF) techniques were applied to accommodate cultural preferences. The framework operates as a continuously adaptive system, enabling instructional content to be refined based on learner data derived from a dataset comprising 10,000 students. Experimental evaluation demonstrated that the proposed approach yielded statistically significant improvements in learning outcomes when compared with conventional instructional methods. Specifically, mean quiz and assignment scores were increased by 15.7% and 14.4%, respectively, while emotional engagement indicators exhibited an improvement of 35.8%. In addition, cultural satisfaction metrics were enhanced by 24.2%. These results suggest that the synergistic integration of cognitive, emotional, and cultural adaptation mechanisms contributes substantively to academic performance gains, heightened learner engagement, and improved educational equity. Beyond performance improvements, the proposed framework is designed with scalability and robustness, allowing for deployment across personalized educational contexts. As such, the framework offers a viable pathway for the development of next-generation personalized education systems capable of supporting diverse learners at scale while maintaining pedagogical effectiveness and inclusivity.
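Of the three paradigms named in the abstract, the collaborative-filtering component admits the shortest illustration. The sketch below is a generic cosine-similarity recommender over a hypothetical learner-item ratings matrix; the authors' actual CF design is not detailed in the abstract.

```python
# Generic user-based collaborative filtering (illustrative data only).
import numpy as np

ratings = np.array([            # rows: learners, columns: content items
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, k=1):
    others = [u for u in range(len(ratings)) if u != user]
    sims = np.array([cosine(ratings[user], ratings[u]) for u in others])
    scores = sims @ ratings[others]       # similarity-weighted item scores
    scores[ratings[user] > 0] = -np.inf   # mask items already consumed
    return np.argsort(scores)[::-1][:k]

print(recommend(0))   # -> [2]: the only unseen item for learner 0
```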

Abstract


Skin burns represent a major clinical concern due to their association with pain, functional impairment, sensory damage, and even life-threatening complications. Early and accurate assessment is critical for first aid, clinical intervention, and the prevention of secondary complications. However, conventional burn diagnosis remains highly dependent on visual inspection and clinical expertise, which can introduce subjectivity and delay timely decision-making. To address these limitations, a hybrid automated skin burn detection framework was proposed, integrating transformer-based feature extraction with classical machine learning classification. In this framework, discriminative visual features were extracted using multiple Vision Transformer (ViT) architectures, including ViT-B/16, ViT-L/16, ViT-B/32, and DINOv2 (a self-supervised Vision Transformer model). The extracted features were subsequently fused. Given the resulting high-dimensional feature space, dimensionality reduction was performed using the Chi-square (Chi²) algorithm, through which 500 features were retained, reducing computational complexity and mitigating the risk of model overfitting. The reduced feature set was then employed for burn classification using six classifiers. Model effectiveness was assessed using accuracy, precision, sensitivity, and F1-score metrics. Experimental results demonstrated that the Support Vector Machine (SVM) classifier achieved the highest classification performance, yielding an accuracy of 82.29%. Comparable yet slightly lower accuracies were observed for the Light Gradient Boosting Machine (LGBM) (80.51%) and Extreme Gradient Boosting (XGBoost) (80.17%) classifiers. Overall, the proposed hybrid model consistently outperformed baseline models, highlighting its superior discriminative capability. These findings indicate that the proposed framework holds strong potential for integration into clinical decision support systems, offering a reliable and objective tool for automated skin burn detection.
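The selection-and-classification stage maps directly onto scikit-learn. In the sketch below, the fused ViT features are random placeholders, and a MinMaxScaler is inserted because sklearn's chi2 scorer requires non-negative inputs; only the k=500 selection and the SVM classifier follow the abstract.

```python
# Chi-square feature selection followed by an SVM (placeholder features).
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3072))   # fused ViT features (placeholder)
y = rng.integers(0, 3, size=400)   # burn-severity labels (placeholder)

clf = make_pipeline(
    MinMaxScaler(),                # chi2 requires non-negative inputs
    SelectKBest(chi2, k=500),      # retain 500 features, as in the abstract
    SVC(kernel="rbf"),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```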

Abstract


Facial expression recognition (FER) remains a challenging problem in computer vision owing to subtle inter-class visual differences, substantial intra-class variability, and severe class imbalance in commonly adopted benchmark datasets. In this study, a statistically rigorous comparative evaluation of two pretrained Convolutional Neural Network (CNN) architectures, MobileNetV2 and EfficientNet-B0, was conducted using the FER2013 dataset. To ensure methodological fairness and reproducibility, both architectures were fine-tuned and evaluated under strictly identical experimental conditions. Model performance was systematically assessed using overall classification accuracy and macro-averaged precision, recall, and F1-score to account for class imbalance, complemented by confusion matrix analysis and multi-class receiver operating characteristic area under the curve (ROC–AUC) evaluation. Beyond conventional performance reporting, the reliability and robustness of the observed differences were examined through McNemar’s test and paired bootstrap confidence intervals (CIs). The experimental results demonstrate that EfficientNet-B0 consistently outperforms MobileNetV2 across all evaluation criteria. Statistical analysis confirms that the observed performance gains are significant at the 5% significance level. These findings provide empirically grounded evidence for informed model selection in FER tasks and highlight the importance of integrating statistical validation into comparative deep learning studies. The results further suggest that EfficientNet-B0 offers a favorable balance between recognition accuracy and computational efficiency, making it a compelling candidate for real-world FER applications, including human–computer interaction, affect-aware systems, and assistive computing environments.
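For readers who want to replicate the statistical validation, McNemar's test compares two classifiers on their paired per-sample correctness. The sketch below uses statsmodels with simulated correctness vectors; the accuracy levels are placeholders, not the paper's results.

```python
# McNemar's test on paired model correctness (simulated placeholders).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(2)
n = 3589                              # size of a FER2013 test split
correct_a = rng.random(n) < 0.66      # MobileNetV2 correct? (placeholder)
correct_b = rng.random(n) < 0.70      # EfficientNet-B0 correct? (placeholder)

# 2x2 contingency table of the two models' agreements and disagreements
table = [[np.sum(correct_a & correct_b),  np.sum(correct_a & ~correct_b)],
         [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)]]
result = mcnemar(table, exact=False, correction=True)
print(f"chi2={result.statistic:.2f}, p={result.pvalue:.4f}")
```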

Abstract


This study introduces a grammar-based, chaotic-oriented programming language, termed ChaosL, to address persistent numerical precision and reproducibility challenges in the computational analysis of chaotic systems. The language, along with its compiler and parser, is designed end-to-end with consideration of chaotic maps. Numerical accuracy is systematically managed through grammar-level precision specification and automated error monitoring mechanisms, enabling exact control over floating-point representations, including single precision, double precision, and arbitrary-precision BigDecimal arithmetic with configurable decimal resolution of up to 100 digits. The proposed grammar natively supports ten widely studied one-dimensional and two-dimensional discrete chaotic maps, which may be composed using newly defined hybrid composition paradigms, namely alternate, blend, cascade, and feedback-driven coupling. To ensure computational reliability, multiple error assessment strategies are integrated, including direct error estimation, shadow computation, and interval arithmetic. In addition, ensemble-based simulation capabilities are incorporated to evaluate trajectory separation and estimate predictability horizons. The automated computation of Lyapunov exponents is embedded at the language level, achieving an accuracy of up to 99.6% while simultaneously enabling code-size reductions of approximately 85–92%. The adaptable architecture of ChaosL establishes a reproducible computational framework for discrete chaos research and facilitates the systematic identification of emergent behaviors in hybrid dynamical systems. Moreover, the design provides a scalable foundation for future extensions toward continuous-time systems, interactive visualization environments, and cloud-based collaborative experimentation, thereby advancing precision-aware computational practices in nonlinear dynamics and chaos theory.
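ChaosL itself cannot be reproduced from the abstract, but the quantity its language-level feature automates, the Lyapunov exponent of a one-dimensional map, reduces to averaging log|f'(x)| along an orbit. A plain-Python version for the logistic map x_{n+1} = r·x·(1−x):

```python
# Lyapunov exponent of the logistic map by direct orbit averaging.
import math

def lyapunov_logistic(r, x0=0.4, n=100_000, burn_in=1_000):
    x, acc = x0, 0.0
    for i in range(n + burn_in):
        if i >= burn_in:
            acc += math.log(abs(r * (1 - 2 * x)))   # log|f'(x)|
        x = r * x * (1 - x)
    return acc / n

print(lyapunov_logistic(4.0))   # ~ln 2 ~ 0.693 at the fully chaotic r = 4
```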

Abstract

To strengthen competitiveness in the digital banking environment, the functional prioritization and optimization of customer service systems enabled by artificial intelligence (AI) must be systematically examined from a user-demand perspective. In this study, user requirements for AI-powered banking customer service were identified, classified, and prioritized through the Kano model combined with a structured questionnaire survey. Four functional dimensions comprising fourteen sub-functions were evaluated to determine their respective impacts on user satisfaction. The results demonstrate that priority should be assigned to rapid transfer to human agents, high response accuracy, risk alerts, service continuity and stability, privacy protection, secure identity verification, rapid response speed, comprehensive business coverage, multi-turn dialogue capability, and accurate user intent understanding. Based on these findings, a set of optimization strategies was proposed. A precise knowledge base should be constructed, and a high-availability system architecture should be deployed. Key algorithmic challenges related to semantic understanding and multi-turn dialogue management should be addressed. Full business-scenario coverage was recommended, while tiered authentication mechanisms and proactive risk-alert strategies should be implemented. Investment in non-core functions may be strategically deferred to achieve optimal resource allocation. By systematically categorizing user demand attributes and clarifying functional priorities, this study provides a robust theoretical foundation and practical decision-making framework for banks seeking to optimize AI-powered customer service systems and maximize user satisfaction in resource-constrained digital environments.
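The Kano classification behind such a survey follows a standard evaluation table that crosses each respondent's answer to a functional question (feature present) with the dysfunctional one (feature absent). The lookup below uses the conventional table, not anything specific to this study's questionnaire.

```python
# Kano-category lookup via the standard evaluation table.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]
# Rows: answer when the feature is present; columns: when it is absent.
# A=Attractive, O=One-dimensional, M=Must-be, I=Indifferent,
# R=Reverse, Q=Questionable.
TABLE = [
    ["Q", "A", "A", "A", "O"],   # like
    ["R", "I", "I", "I", "M"],   # must-be
    ["R", "I", "I", "I", "M"],   # neutral
    ["R", "I", "I", "I", "M"],   # live-with
    ["R", "R", "R", "R", "Q"],   # dislike
]

def kano_category(functional, dysfunctional):
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

print(kano_category("like", "dislike"))      # 'O': one-dimensional quality
print(kano_category("neutral", "dislike"))   # 'M': must-be quality
```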

Abstract

Tomato farming in Upper Dir, a mountainous region of Khyber Pakhtunkhwa in Pakistan, faces significant agro-ecological challenges such as fluctuating temperatures, irregular rainfall, soil infertility, and limited access to modern farming techniques. The region has complex topography, characterized by steep slopes and varying elevations, which further constrains agricultural planning and productivity. To address these issues, this study proposed a Hybrid Nonlinear Environmental Response Model (H-NERM) integrated with a Fuzzy Logic-Based Decision Support System (FL-DSS) to cater for the unique agro-climatic conditions of the area. The model was validated with comprehensive field and climate data collected from 2020 to 2024, including soil samples from 30 agricultural sites, 5-year meteorological records from the Pakistan Meteorological Department (PMD), and farmer-reported tomato yields across Upper Dir. All simulations were performed in Matrix Laboratory (MATLAB) R2015a using the Fuzzy Logic Toolbox and custom nonlinear solvers. Comparative analysis was conducted with conventional regression-based and rule-based decision systems to evaluate model performance. Results demonstrated that the proposed H-NERM + FL-DSS framework significantly enhanced the accuracy of yield prediction, optimized irrigation efficiency, and improved resilience to climate variability. The model provides a robust, data-driven, and scalable solution for sustainable tomato farming in Upper Dir, with strong potential for application in other mountainous or climate-sensitive agricultural regions.
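The fuzzy-inference side of an FL-DSS can be conveyed with a two-input toy. The membership functions and the single rule pair below are invented for illustration; the study's actual rule base, built with MATLAB's Fuzzy Logic Toolbox, is not reproduced.

```python
# Toy Mamdani-style fuzzy inference for irrigation need (illustrative).
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def irrigation_need(temp_c, soil_moisture_pct):
    hot = tri(temp_c, 25, 35, 45)
    dry = tri(soil_moisture_pct, 0, 10, 30)
    wet = tri(soil_moisture_pct, 20, 60, 100)
    high = min(hot, dry)   # rule 1: hot AND dry -> high need
    low = wet              # rule 2: wet soil    -> low need
    # Defuzzify by a weighted average of rule strengths (0-100 scale).
    return (high * 90 + low * 10) / (high + low + 1e-9)

print(f"{irrigation_need(38, 8):.1f}")   # hot, dry field -> high need (90.0)
```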
Open Access | Research Article
Application of Deep Learning Techniques in the Diagnosis and Grading of Knee Osteoarthritis (OA)
Varshita Yeddula, Ranganadha Reddy Aluru, Parvathi Devi Budda
Available online: 08-25-2025

Abstract


Osteoarthritis (OA) affects approximately 240 million individuals globally. Knee OA, a crippling ailment marked by joint stiffness, discomfort, and functional impairment, is the most widespread form of arthritis among the elderly. Severity has conventionally been assessed from physical symptoms, medical history, and joint screening examinations such as radiography, Magnetic Resonance Imaging (MRI), and Computed Tomography (CT) scans, but early disease is difficult to identify because such methods can be subjective. Clinicians therefore apply the Kellgren and Lawrence (KL) scale to grade knee OA severity from X-ray or MRI images. To automate this grading, deep learning models, namely Xception, ResNet-50, and Inception-ResNet-v2, were trained to determine the KL grade of knee OA suffered by patients. The experimental results revealed that the Xception network achieved the highest classification accuracy of 67%, surpassing ResNet-50 and Inception-ResNet-v2 and demonstrating its superior ability to automatically grade OA severity from radiographic images.
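Transfer-learning setups of this kind conventionally pair a frozen pretrained backbone with a small classification head. The Keras sketch below is one plausible configuration rather than the authors' exact architecture or training schedule; the five output units correspond to KL grades 0-4, and the datasets are assumed to exist.

```python
# Plausible Xception transfer-learning head for KL grading (assumptions noted).
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                   # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),                    # assumed head design
    tf.keras.layers.Dense(5, activation="softmax"),  # KL grades 0-4
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```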

Abstract


Dealing with privacy and security has become more complicated with the emergence of the big data era, and privacy, data value, and system efficiency must be balanced across multiple solutions in current analytics. In this paper, privacy-preserving techniques for big data analysis were selected and reviewed to reduce threats to healthcare data. Various security solutions, including k-anonymity, differential privacy, homomorphic encryption, and secure multi-party computation (SMPC), were implemented and examined using the Medical Information Mart for Intensive Care III (MIMIC-III) healthcare dataset. Each method was carefully assessed with respect to security, execution time, scalability to large data sets, data utility, and regulatory compliance. Differential privacy maintained a balance between privacy and utility at the cost of additional computational resources. Homomorphic encryption provided strong data security, though it was difficult to operate and reduced computational speed. Achieving scalability with the SMPC likewise required significant computing power. Although k-anonymity preserved data utility, it was vulnerable to certain types of attacks. Protecting privacy in big data inevitably constrains system performance; nevertheless, differential privacy and the SMPC proved highly effective for analyzing private data, though such approaches should be further optimized to handle real-time processing in big data applications. Experimental evaluation showed that processing 10,000 patient records using differential privacy took an average of 2.3 seconds per query and retained 92% of data utility, while homomorphic encryption required 15.7 seconds per query with 88% utility retention. The SMPC achieved a high degree of privacy at 12.5 seconds per query but slightly reduced scalability. As recommended in this study, the implementation of privacy-focused solutions in big data could help researchers and companies establish appropriate privacy policies in healthcare and other similar areas.
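As a concrete taste of one reviewed technique, the Laplace mechanism realizes epsilon-differential privacy for a counting query by adding noise scaled to the query's sensitivity. The sketch below is generic, not the study's MIMIC-III implementation.

```python
# Laplace mechanism for an epsilon-DP count query (generic sketch).
import numpy as np

def dp_count(records, predicate, epsilon=1.0, rng=np.random.default_rng()):
    true_count = sum(predicate(r) for r in records)
    # A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

patients = [{"age": a} for a in (34, 71, 65, 80, 52)]  # toy records
print(dp_count(patients, lambda r: r["age"] > 60, epsilon=0.5))
```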

Abstract

Accurate and reliable detection of apples in complex orchard environments remains a challenging task due to varying illumination, cluttered backgrounds, and overlapping fruits. In this paper, these difficulties were tackled with a novel edge-enhanced detection framework that integrates dynamic image smoothing, entropy-based edge amplification, and directional energy-driven contour extraction. An adaptive smoothing filter with a sigmoid-based weighting function was adopted to selectively preserve edge structures while suppressing noise in homogeneous regions. The input Red-Green-Blue (RGB) image was subsequently transformed into the Hue, Saturation, and Value (HSV) color space to exploit hue information, thereby improving color-based feature discrimination. Edge detection was strengthened by a hybrid entropy-weighted gradient scheme, in which local image entropy modulated gradient magnitudes to emphasize structured regions. A global threshold was then applied to refine the enhanced edge map. Finally, continuous apple contours were extracted using a direction-constrained energy propagation approach, in which connected edge pixels were traced according to compass orientations, ensuring accurate contour assembly even under occlusion or low contrast. Experimental evaluations confirmed that the proposed framework substantially improved the accuracy of boundary detection across diverse imaging conditions, highlighting its potential application in automated fruit detection and precision harvesting.
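The entropy-weighted gradient step can be sketched directly: local entropy computed over a sliding window rescales Sobel gradient magnitudes so that structured regions dominate. The window size, bin count, and threshold rule below are illustrative guesses, not the paper's parameters.

```python
# Entropy-modulated gradient magnitudes with a global threshold (sketch).
import numpy as np
from scipy import ndimage

def entropy_weighted_edges(gray, win=7, bins=16):
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    grad = np.hypot(gx, gy)

    def local_entropy(patch):
        hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    ent = ndimage.generic_filter(gray, local_entropy, size=win)
    return grad * (ent / np.log2(bins))   # entropy-weighted magnitudes

img = np.random.default_rng(3).random((64, 64))   # placeholder image in [0, 1]
edges = entropy_weighted_edges(img)
mask = edges > edges.mean() + edges.std()         # global-threshold step
```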

Abstract


Hyperparameter search often makes poor use of compute resources, as surrogate-based optimizers consume extensive memory and demand long set-up times, while projects running on fixed budgets require lean tuning tools. The current study presents the Bounding Box Tuner (BBT) and tests its capability to attain maximum validation accuracy while reducing tuning time and memory use. BBT was compared with Random Search, Gaussian Processes for Bayesian Optimization, the Tree-Structured Parzen Estimator (TPE), Evolutionary Search, and Local Search. Modified National Institute of Standards and Technology (MNIST) classification with a multilayer perceptron (0.11 M weights) and a Tiny Vision Transformer (TinyViT) (9.5 M weights) were adopted as benchmarks. Each optimizer was assigned 50 trials, and early pruning stopped a run if validation loss rose for four epochs. All tests ran on a single NVIDIA GTX 1650 Ti GPU; the key metrics were best validation accuracy, total search time, and time per trial. On the perceptron task, BBT reached 97.88% validation accuracy in 1994 s, whereas TPE obtained 97.98% in 2976 s. On TinyViT, BBT achieved 94.92% in 2364 s, while GP-Bayesian reached 94.66% in 2191 s. BBT thus kept accuracy within 0.1 percentage points of the best competitor while reducing tuning time by one-third. The algorithm renders a surrogate model unnecessary, enforces constraints by design, and exposes only three user parameters. Supported by these benefits, BBT is considered a practical option for rapid, resource-aware hyperparameter optimization in deep-learning pipelines.
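BBT's internals are not given in the abstract; the sketch below is a guessed minimal variant of the stated idea, uniform sampling inside a box that re-centers and shrinks around the best configuration found so far, which also illustrates why no surrogate model is required.

```python
# Guessed minimal bounding-box tuner (not the published BBT algorithm).
import random

def bbt(objective, bounds, trials=50, shrink=0.7):
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_y = None, float("inf")
    for _ in range(trials):
        x = [random.uniform(l, h) for l, h in zip(lo, hi)]
        y = objective(x)
        if y < best_y:
            best_x, best_y = x, y
            # Re-center and shrink the box around the new best point,
            # clipping to the original bounds.
            for i in range(len(bounds)):
                half = (hi[i] - lo[i]) * shrink / 2
                lo[i] = max(bounds[i][0], best_x[i] - half)
                hi[i] = min(bounds[i][1], best_x[i] + half)
    return best_x, best_y

# Toy objective standing in for "validation loss of a trained model".
print(bbt(lambda x: (x[0] - 0.3) ** 2 + (x[1] + 1) ** 2, [(-2, 2), (-2, 2)]))
```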

Abstract

A novel electronic voting system (EVS) was developed by integrating blockchain technology and advanced facial recognition to enhance electoral security, transparency, and accessibility. The system integrates a public, permissionless blockchain—specifically the Ethereum platform—to ensure end-to-end transparency and immutability throughout the voting lifecycle. To reinforce identity verification while preserving voter privacy, a facial recognition technology based on the ArcFace algorithm was employed. This biometric approach enables secure, contactless voter authentication, mitigating risks associated with identity fraud and multiple voting attempts. The confluence of blockchain technology and facial recognition in a unified architecture was shown to improve system robustness against tampering, data breaches, and unauthorized access. The proposed system was designed within a rigorous research framework, and its technical implementation was critically assessed in terms of security performance, scalability, user accessibility, and system latency. Furthermore, potential ethical implications and privacy considerations were addressed through the use of decentralized identity management and encrypted biometric data storage. The integration strategy not only enhances the verifiability and auditability of election outcomes but also promotes greater inclusivity by enabling remote participation without compromising system integrity. This study contributes to the evolving field of electronic voting by demonstrating how advanced biometric verification and distributed ledger technologies can be synchronously leveraged to support democratic processes. The findings are expected to inform future deployments of secure, accessible, and transparent electoral platforms, offering practical insights for governments, policymakers, and technology developers aiming to modernize electoral systems in a post-digital era.
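The immutability property that the abstract attributes to the blockchain layer can be illustrated with a toy hash chain; Ethereum smart contracts and ArcFace-based verification are out of scope for this sketch.

```python
# Toy hash chain: each block commits to its predecessor's hash.
import hashlib, json, time

def add_block(chain, vote):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"vote": vote, "prev": prev_hash, "ts": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(block)

chain = []
add_block(chain, {"ballot_id": "b1", "choice": "A"})
add_block(chain, {"ballot_id": "b2", "choice": "B"})
# Tampering with block 0 would break block 1's 'prev' linkage.
assert chain[1]["prev"] == chain[0]["hash"]
```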