Information Dynamics and Applications (IDA)
ISSN (print): 2958-1486
ISSN (online): 2958-1494
Current Issue
2024: Volume 3

Information Dynamics and Applications (IDA) is a peer-reviewed, open-access journal focused on the dynamic nature and diverse applications of information technology and its related fields. Distinguishing itself from other journals in the domain, IDA explores both the underlying principles and the practical impacts of information technology, bridging theoretical research and real-world applications. Beyond the traditional aspects of information technology, the journal also covers emerging trends and innovations. Published quarterly by Acadlore, IDA typically releases its four issues in March, June, September, and December each year.

  • Professional Service - Every article submitted undergoes an intensive yet swift peer review and editing process, adhering to the highest publication standards.

  • Prompt Publication - Thanks to our proficiency in orchestrating the peer-review, editing, and production processes, all accepted articles see rapid publication.

  • Open Access - Every published article is instantly accessible to a global readership, allowing for uninhibited sharing across various platforms at any time.

Balamurugan Balusamy
Shiv Nadar University, India
Research interests: Big Data; Network Security; Cloud Computing; Blockchain; Data Science; Engineering Education
Gengxin Sun
Qingdao University, China
Research interests: Big Data; Artificial Intelligence; Complex Networks

Aims & Scope


Information Dynamics and Applications (IDA), as an international open-access journal, stands at the forefront of exploring the dynamics and expansive applications of information technology. This fully refereed journal delves into the heart of interdisciplinary research, focusing on critical aspects of information processing, storage, and transmission. With a commitment to advancing the field, IDA serves as a crucible for original research, encompassing reviews, research papers, short communications, and special issues on emerging topics. The journal particularly emphasizes innovative analytical and application techniques in various scientific and engineering disciplines.

IDA aims to provide a platform where detailed theoretical and experimental results can be published without constraints on length, encouraging comprehensive disclosure for reproducibility. The journal prides itself on the following attributes:

  • Every publication benefits from prominent indexing, ensuring widespread recognition.

  • A distinguished editorial team upholds unparalleled quality and broad appeal.

  • Seamless online discoverability of each article maximizes its global reach.

  • An author-centric and transparent publication process enhances submission experience.


The scope of IDA is diverse and expansive, encompassing a wide range of topics within the realm of information technology:

  • Artificial Intelligence (AI) and Machine Learning (ML): Investigating the latest developments in AI and ML, and their applications across various industries.

  • Digitalization and Data Science: Exploring the transformation brought about by digital technologies and the analytical power of data science.

  • Signal Processing and Simulation Optimization: Advancements in the field of signal processing, including audio, video, and communication signal processing, and the development of optimization techniques for simulations.

  • Social Networking and Ubiquitous Computing: Research on the impact of social media on society and the pervasiveness of computing in everyday life.

  • Industrial Engineering and Information Architecture: Studies on the integration of information technology in industrial engineering and the structuring of information systems.

  • Internet of Things (IoT): Delving into the connected world of IoT and its implications for smart cities, healthcare, and more.

  • Data Mining, Storage, and Manipulation: Techniques and innovations in extracting valuable insights from large data sets, and the management of data storage and manipulation.

  • Database Management and Decision Support Systems: Exploring advanced database management systems and the development of decision support systems.

  • Enterprise Systems and E-Commerce: The evolution and future of enterprise resource planning systems and the impact of e-commerce on global markets.

  • Knowledge-Based Systems and Robotics: The intersection of knowledge-based systems with robotics and automation.

  • Cybersecurity and Software as a Service (SaaS): Cutting-edge research in cybersecurity and the growing trend of SaaS in business and consumer applications.

  • Supply Chain Management and Systems Analysis: Innovations in supply chain management driven by information technology, and systems analysis in complex IT environments.

  • Quantum Computing and Optimization: The role of quantum computing in solving complex problems and its future potential.

  • Virtual and Augmented Reality: Exploring the implications of virtual and augmented reality technologies in education, training, entertainment, and more.

Recent Articles


Traditional methods for keyword extraction predominantly rely on statistical relationships between words, neglecting the cohesive structure of the extracted keyword set. This study introduces an enhanced method for keyword extraction, utilizing the Watts-Strogatz model to construct a word network graph from candidate words within the text. By leveraging the characteristics of small-world networks (SWNs), i.e., short average path lengths and high clustering coefficients, the method ascertains the relevance between words and their impact on sentence cohesion. A comprehensive weight for each word is calculated through a linear weighting of features including part of speech, position, and Term Frequency-Inverse Document Frequency (TF-IDF), subsequently improving the impact factors of the TextRank algorithm for obtaining the final weight of candidate words. This approach facilitates the extraction of keywords based on the final weight outcomes. Through uncovering the deep hidden structures of feature words, the method effectively reveals the connectivity within the word network graph. Experiments demonstrate superiority over existing methods in terms of precision, recall, and F1-measure.
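The core ranking step behind such graph-based keyword extraction can be illustrated with a minimal sketch. The function below is hypothetical and greatly simplified: it builds a co-occurrence graph from a pre-tokenized word list and runs a plain TextRank-style iteration, omitting the paper's Watts-Strogatz construction and its part-of-speech, position, and TF-IDF weighting.

```python
from collections import defaultdict

def textrank_keywords(words, window=2, damping=0.85, iters=50):
    """Rank candidate words by a simple TextRank iteration over a
    co-occurrence graph (words within `window` positions are linked)."""
    graph = defaultdict(set)
    for i in range(len(words)):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[i] != words[j]:
                graph[words[i]].add(words[j])
                graph[words[j]].add(words[i])
    scores = {w: 1.0 for w in graph}
    for _ in range(iters):
        # Each word receives rank from its neighbors, split by their degree.
        scores = {
            w: (1 - damping)
            + damping * sum(scores[n] / len(graph[n]) for n in graph[w])
            for w in graph
        }
    return sorted(scores, key=scores.get, reverse=True)
```

In the full method, the final weight of each candidate would further combine this graph score with the linearly weighted part-of-speech, position, and TF-IDF features described above.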



The decentralised nature of cryptocurrency, coupled with its potential for significant financial returns, has elevated its status as a sought-after investment opportunity on a global scale. Nonetheless, the inherent unpredictability and volatility of the cryptocurrency market present considerable challenges for investors aiming to forecast price movements and secure profitable investments. In response to this challenge, the current investigation was conducted to assess the efficacy of three Machine Learning (ML) algorithms, namely, Gradient Boosting (GB), Random Forest (RF), and Bagging, in predicting the daily closing prices of six major cryptocurrencies, namely, Binance, Bitcoin, Ethereum, Solana, USD, and XRP. The study utilised historical price data spanning from January 1, 2015 to January 26, 2024 for Bitcoin, from January 1, 2018 to January 26, 2024 for Ethereum and XRP, from January 1, 2021 to January 26, 2024 for Solana, and from January 1, 2019 to January 26, 2024 for USD. A novel approach was adopted wherein the lagged prices of the cryptocurrencies were employed as features for prediction, as opposed to the conventional method of using opening, high, and low prices, which are not predictive in nature. The data set was divided into a training set (80%) and a testing set (20%) for the evaluation of the algorithms. The performance of these ML algorithms was systematically compared using a suite of metrics, including R², adjusted R², Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The findings revealed that the GB algorithm exhibited superior performance in predicting the prices of Bitcoin and Solana, whereas the RF algorithm demonstrated greater efficacy for Ethereum, USD, and XRP. This comparative analysis underscores the relative advantages of RF over GB and Bagging algorithms in the context of cryptocurrency price prediction. The outcomes of this study not only contribute to the existing body of knowledge on the application of ML algorithms in financial markets but also provide actionable insights for investors navigating the volatile cryptocurrency market.
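The lagged-feature construction and chronological 80/20 split described in this abstract can be sketched as follows; the helper names and lag count are illustrative assumptions, and the actual study fits GB, RF, and Bagging regressors on the resulting matrices.

```python
def make_lagged_dataset(prices, n_lags=3):
    """Turn a closing-price series into (features, target) pairs where
    the features are the n_lags previous closing prices."""
    X, y = [], []
    for t in range(n_lags, len(prices)):
        X.append(prices[t - n_lags:t])
        y.append(prices[t])
    return X, y

def train_test_split_chronological(X, y, train_frac=0.8):
    # No shuffling: the test set lies strictly after the training set
    # in time, which matters for price-forecasting evaluation.
    cut = int(len(X) * train_frac)
    return X[:cut], X[cut:], y[:cut], y[cut:]
```

A chronological (rather than shuffled) split avoids leaking future prices into training, which is the natural choice when lagged prices are the only features.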

Open Access
Research article
Enhancing 5G LTE Communications: A Novel LDPC Decoder for Next-Generation Systems
Divyashree Yamadur Venkatesh,
Komala Mallikarjunaiah,
Mallikarjunaswamy Srikantaswamy,
Ke Huang
Available online: 03-21-2024



The advent of fifth-generation (5G) long-term evolution (LTE) technology represents a critical leap forward in telecommunications, enabling unprecedented high-speed data transfer essential for today’s digital society. Despite the advantages, the transition introduces significant challenges, including elevated bit error rate (BER), diminished signal-to-noise ratio (SNR), and the risk of jitter, undermining network reliability and efficiency. In response, a novel low-density parity check (LDPC) decoder optimized for 5G LTE applications has been developed. This decoder is tailored to significantly reduce BER and improve SNR, thereby enhancing the performance and reliability of 5G communications networks. Its design accommodates advanced switching and parallel processing capabilities, crucial for handling complex data flows inherent in contemporary telecommunications systems. A distinctive feature of this decoder is its dynamic adaptability in adjusting message sizes and code rates, coupled with the augmentation of throughput via reconfigurable switching operations. These innovations allow for a versatile approach to optimizing 5G networks. Comparative analyses demonstrate the decoder’s superior performance relative to the quasi-cyclic low-density parity check (QC-LDPC) method, evidencing marked improvements in communication quality and system efficiency. The introduction of this LDPC decoder thus marks a significant contribution to the evolution of 5G networks, offering a robust solution to the pressing challenges faced by next-generation communication systems and establishing a new standard for high-speed wireless connectivity.
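The parity-check principle underlying any LDPC decoder can be shown with a minimal sketch. The function below is a generic illustration, not the decoder proposed in the article: a received word is a valid codeword exactly when every parity check (row of the sparse matrix H) sums to zero modulo 2, and a nonzero syndrome triggers iterative correction.

```python
def syndrome(H, codeword):
    """Compute the syndrome Hx mod 2 for a binary parity-check matrix H
    (list of rows) and a binary codeword. All-zero syndrome => valid."""
    return [sum(h * c for h, c in zip(row, codeword)) % 2 for row in H]
```

In a real decoder, a nonzero syndrome would feed a belief-propagation loop that iteratively flips the least reliable bits until all checks are satisfied or an iteration limit is reached.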

Open Access
Research article
A Comparative Review of Internet of Things Model Workload Distribution Techniques in Fog Computing Networks
Nandini Gowda Puttaswamy,
Anitha Narasimha Murthy,
Houssem Degha
Available online: 03-17-2024



In the realm of fog computing (FC), a vast array of intelligent devices collaborates within an intricate network, a synergy that, while promising, has not been without its challenges. These challenges, including data loss, difficulties in workload distribution, a lack of parallel processing capabilities, and security vulnerabilities, have necessitated the exploration and deployment of a variety of solutions. Among these, software-defined networks (SDN), double-Q learning algorithms, service function chains (SFC), virtual network functions (VNF) stand out as significant. An exhaustive survey has been conducted to explore workload distribution methodologies within Internet of Things (IoT) architectures in FC networks. This investigation is anchored in a parameter-centric analysis, aiming to enhance the efficiency of data transmission across such networks. It delves into the architectural framework, pivotal pathways, and applications, aiming to identify bottlenecks and forge the most effective communication channels for IoT devices under substantial workload conditions. The findings of this research are anticipated to guide the selection of superior simulation tools, validate datasets, and refine strategies for data propagation. This, in turn, is expected to facilitate optimal power consumption and enhance outcomes in data transmission and propagation across multiple dimensions. The rigorous exploration detailed herein not only illuminates the complexities of workload distribution in FC networks but also charts a course towards more resilient and efficient IoT ecosystems.
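One of the simplest workload-distribution baselines surveyed in this area is greedy least-loaded assignment, sketched below. The function and its parameters are illustrative, not drawn from any specific surveyed system: each incoming task is routed to the fog node with the smallest current load, tracked with a min-heap.

```python
import heapq

def assign_tasks(task_sizes, n_nodes):
    """Greedy least-loaded scheduling: each task (by index) goes to the
    fog node with the smallest accumulated load so far."""
    heap = [(0, node) for node in range(n_nodes)]  # (load, node_id)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(n_nodes)}
    for task, size in enumerate(task_sizes):
        load, node = heapq.heappop(heap)
        assignment[node].append(task)
        heapq.heappush(heap, (load + size, node))
    return assignment
```

More sophisticated approaches in the literature (e.g., double-Q learning) replace this greedy rule with a learned policy that also accounts for latency, energy, and security constraints.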

Open Access
Research article
Enhancing Image Captioning and Auto-Tagging Through a FCLN with Faster R-CNN Integration
Shalaka Prasad Deore,
Taibah Sohail Bagwan,
Prachiti Sunil Bhukan,
Harsheen Tejindersingh Rajpal,
Shantanu Bharat Gade
Available online: 02-02-2024



In the realm of automated image captioning, which entails generating descriptive text for images, the fusion of Natural Language Processing (NLP) and computer vision techniques is paramount. This study introduces the Fully Convolutional Localization Network (FCLN), a novel approach that concurrently addresses localization and description tasks within a singular forward pass. It maintains spatial information and avoids detail loss, streamlining the training process with consistent optimization. The foundation of FCLN is laid by a Convolutional Neural Network (CNN), adept at extracting salient image features. Central to this architecture is a Localization Layer, pivotal in precise object detection and caption generation. The FCLN architecture amalgamates a region detection network, reminiscent of Faster Region-CNN (R-CNN), with a captioning network. This synergy enables the production of contextually meaningful image captions. The incorporation of the Faster R-CNN framework facilitates region-based object detection, offering precise contextual understanding and inter-object relationships. Concurrently, a Long Short-Term Memory (LSTM) network is employed for generating captions. This integration yields superior performance in caption accuracy, particularly in complex scenes. Evaluations conducted on the Microsoft Common Objects in Context (MS COCO) test server affirm the model's superiority over existing benchmarks, underscoring its efficacy in generating precise and context-rich image captions.

Open Access
Research article
Optimizing Misinformation Control: A Cloud-Enhanced Machine Learning Approach
Muhammad Daniyal Baig,
Waseem Akram,
Hafiz Burhan Ul Haq,
Hassan Zahoor Rajput,
Muhammad Imran
Available online: 01-24-2024


The digital age has witnessed the rampant spread of misinformation, significantly impacting the medical and financial sectors. This phenomenon, fueled by various sources, contributes to public distress and information warfare, necessitating robust countermeasures. In response, a novel model has been developed, integrating cloud computing with advanced machine learning techniques. This model prioritizes the identification and mitigation of false information through optimized classification strategies. Utilizing diverse datasets for predictive analysis, the model employs state-of-the-art algorithms, including K-Nearest Neighbors (KNN) and Random Forest (RF), to enhance accuracy and efficiency. A distinctive feature of this approach is the implementation of cloud-empowered transfer learning, providing a scalable and optimized solution to address the challenges posed by the vast, yet often unreliable, information available online. By harnessing the potential of cloud computing and machine learning, this model offers a strategic approach to combating the prevalent issue of misinformation in the digital world.
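The K-Nearest Neighbors classifier mentioned in this abstract reduces to a distance computation plus a majority vote, as in the minimal stdlib sketch below; the feature vectors and labels are hypothetical stand-ins for the text features the model would actually use.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among the k training points nearest
    to it in Euclidean distance."""
    dists = sorted(
        (math.dist(point, x), label) for point, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In practice, a misinformation pipeline would first embed each article as a numeric vector (e.g., TF-IDF or transfer-learned features, as the cloud-empowered approach above suggests) before applying any such classifier.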



This study examines the aspects that can impact an organisation's cloud security posture and the consequences for its cloud adoption strategies. Based on a thorough examination of existing literature, a conceptual framework is developed that includes several aspects such as organisational, technical, regulatory, operational, and human elements. Cloud security readiness is influenced by these five types of characteristics. A research instrument is utilised to evaluate the hypotheses pertaining to those aspects. The pilot survey showcases the research tool within the framework of a representative sample of organisations. In addition to conducting instrument testing, the initial responses also validate the importance of several elements that impact cloud security. The prominence of technical capabilities as a key factor underscores their vital contribution to bolstering cybersecurity readiness. Regulatory factors play a significant role in emphasising the necessity of compliance in cloud security. Organisational elements, such as managerial support, training, budget allocation, policy adherence, and governance, have a moderate impact. The presence of human elements also appears to contribute to and emphasise the necessity of promoting security awareness and alertness. This study enhances the existing body of knowledge on cloud security by offering insights into the various complex issues involved. The results can provide guidance to professionals seeking to enhance the cloud security of enterprises and scholars studying the changing cloud environment.



The landscape of livestock farming is undergoing a significant transformation, primarily influenced by the integration of the Internet of Things (IoT) technology. This systematic literature review (SLR) critically examines the role of IoT in enhancing cow health monitoring, a burgeoning field of research drawing considerable attention in recent years. Spanning articles published from 2017 to 2023 in eminent academic forums, this study meticulously selected and analyzed thirty publications. These were chosen through a structured process, evaluating each for its relevance based on title and abstract. The review encapsulates a thorough investigation of the applications, sensors, and devices underpinning IoT-based cow health monitoring systems. It is observed that the current research landscape is dynamically evolving, marked by emerging trends and noticeable gaps in technology and application. This synthesis of existing literature offers an insightful overview of the potential and limitations inherent in current IoT solutions, highlighting their efficacy in real-world scenarios. Furthermore, this review delineates the challenges faced and posits future research directions to address unresolved issues in cow health monitoring. The primary objective of this systematic analysis is to consolidate research findings, thereby advancing the understanding of IoT's impact in this field. It also aims to foster a comprehensive dialogue on the technological advancements and their implications for future research endeavors in livestock farming.



The swift global spread of Corona Virus Disease 2019 (COVID-19), identified merely four months prior, necessitates rapid and precise diagnostic methods. Currently, the diagnosis largely depends on computed tomography (CT) image interpretation by medical professionals, a process susceptible to human error. This research delves into the utility of Convolutional Neural Networks (CNNs) in automating the classification of COVID-19 from medical images. An exhaustive evaluation and comparison of prominent CNN architectures, namely Visual Geometry Group (VGG), Residual Network (ResNet), MobileNet, Inception, and Xception, are conducted. Furthermore, the study investigates ensemble approaches to harness the combined strengths of these models. Findings demonstrate the distinct advantage of ensemble models, with the novel deep learning (DL)+ ensemble technique notably surpassing the accuracy, precision, recall, and F-score of individual CNNs, achieving an exceptional rate of 99.5%. This remarkable performance accentuates the transformative potential of CNNs in COVID-19 diagnostics. The significance of this advancement lies not only in its reliability and automated nature, surpassing traditional, subjective human interpretation but also in its contribution to accelerating the diagnostic process. This acceleration is pivotal for the effective implementation of containment and mitigation strategies against the pandemic. The abstract delineates the methodological choices, highlights the unparalleled efficacy of the DL+ ensemble technique, and underscores the far-reaching implications of employing CNNs for COVID-19 detection.
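The ensemble step at the heart of this result can be illustrated with a soft-voting sketch: average the class-probability vectors produced by the individual CNNs and pick the class with the highest mean probability. The function is a generic illustration, not the article's exact "DL+ ensemble" technique.

```python
def ensemble_average(prob_lists):
    """Soft voting: average each model's class-probability vector and
    return the index of the class with the highest mean probability."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [
        sum(probs[c] for probs in prob_lists) / n_models
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=avg.__getitem__)
```

Soft voting typically outperforms hard (majority) voting when the member models are well calibrated, since a confident model can outweigh two uncertain ones.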

Open Access
Research article
Enhancing Healthcare Data Security in IoT Environments Using Blockchain and DCGRU with Twofish Encryption
Kumar Raja Depa Ramachandraiah,
Naga Jagadesh Bommagani,
Praveen Kumar Jayapal
Available online: 11-30-2023



In the rapidly evolving landscape of digital healthcare, the integration of cloud computing, Internet of Things (IoT), and advanced computational methodologies such as machine learning and artificial intelligence (AI) has significantly enhanced early disease detection, accessibility, and diagnostic scope. However, this progression has concurrently elevated concerns regarding the safeguarding of sensitive patient data. Addressing this challenge, a novel secure healthcare system employing a blockchain-based IoT framework, augmented by deep learning and biomimetic algorithms, is presented. The initial phase encompasses a blockchain-facilitated mechanism for secure data storage, authentication of users, and prognostication of health status. Subsequently, the modified Jellyfish Search Optimization (JSO) algorithm is employed for optimal feature selection from datasets. A unique health status prediction model is introduced, leveraging a Deep Convolutional Gated Recurrent Unit (DCGRU) approach. This model ingeniously combines Convolutional Neural Network (CNN) and Gated Recurrent Unit (GRU) processes, where the GRU network extracts pivotal directional characteristics, and the CNN architecture discerns complex interrelationships within the data. Security of the data management system is fortified through the implementation of the twofish encryption algorithm. The efficacy of the proposed model is rigorously evaluated using standard medical datasets, including Diabetes and EEG Eyestate, employing diverse performance metrics. Experimental results demonstrate the model's superiority over existing best practices, achieving a notable accuracy of 0.884. Furthermore, comparative analyses with the Advanced Encryption Standard (AES) and Elliptic Curve Cryptography (ECC) models reveal enhanced performance metrics, with the proposed model achieving a processing time and throughput of 40 and 45.42, respectively.

Open Access
Research article
Comparative Analysis of Seizure Manifestations in Alzheimer’s and Glioma Patients via Magnetic Resonance Imaging
Jayanthi Vajiram,
Sivakumar Shanmugasundaram,
Rajeswaran Rangasami,
Utkarsh Maurya
Available online: 10-24-2023



A notable association between Alzheimer's Disease and Epilepsy, two divergent neurological conditions, has been established through previous research, illustrating an elevated seizure development risk in individuals diagnosed with Alzheimer’s Disease (AD). The hippocampus, fundamental in both seizure and tumour pathology, is intricately investigated herein. The subsequent aberrant electrical activity within this brain region, frequently implicated in seizure onset and propagation, underpins a complex relationship between degenerative cerebral changes and seizure incidence. Symptomatic manifestations in hippocampal glioma include, but are not limited to, seizures, memory deficits, and language difficulties, contingent upon the tumour's location and size. Thus, the critical importance of proficient seizure detection and analysis is underscored. Employing canny edge detection and thresholding to delineate contours and boundaries within images, an analysis was conducted by transmuting grayscale or colour images into a binary format. The input dataset, utilised for the training and testing of machine and deep-learning models, comprised images of seizures. These models were subsequently trained to discern patterns and features within the images, facilitating the differentiation between two predefined classes. Consequently, the models predicted, with a defined accuracy level, the presence or absence of a seizure within a new image. The Support Vector Machine (SVM) and Convolutional Neural Network (CNN) models demonstrated classification accuracies of 96% and 95%, respectively. By analysing performance metrics on a per-slice basis, the localization of seizure activity within the brain could be visualised, offering valuable insights into regions affected by this activity. The amalgamation of edge detection, feature extraction, and classification models proficiently discriminated between seizure and non-seizure activities, providing pivotal insights for the diagnosis and therapeutic strategies for epilepsy. Further, studying these neurological alterations can illuminate the progression and severity of cognitive and emotional deficits within affected individuals, whilst advancements in diagnostic methodologies, such as Magnetic Resonance Imaging (MRI), facilitate an enriched comparative analysis.
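The thresholding step that converts grayscale MRI slices into binary masks, as described above, amounts to a per-pixel intensity test. The sketch below is a minimal stdlib illustration (nested lists standing in for an image array); the study's full pipeline additionally applies Canny edge detection and then feeds features to the SVM/CNN classifiers.

```python
def binarize(image, threshold=128):
    """Convert a grayscale image (rows of 0-255 intensities) into a
    binary mask: 1 where the pixel exceeds the threshold, else 0."""
    return [[1 if px > threshold else 0 for px in row] for row in image]
```

On real data this would be done with NumPy/OpenCV for speed, but the logic per pixel is identical.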
