Volume 4, Issue 2, 2025
Open Access
Research article
Crowd Density Estimation via a VGG-16-Based CSRNet Model
Damla Tatlıcan, Nafiye Nur Apaydin, Orhan Yaman, Mehmet Karakose
Available online: 04-29-2025

Abstract

Accurate crowd density estimation has become critical in applications ranging from intelligent urban planning and public safety monitoring to marketing analytics and emergency response. In recent developments, various methods have been used to enhance the precision of crowd analysis systems. In this study, a Convolutional Neural Network (CNN)-based approach was presented for crowd density detection, wherein the Congested Scene Recognition Network (CSRNet) architecture was employed with a Visual Geometry Group (VGG)-16 backbone. This method was applied to two benchmark datasets—Mall and Crowd-UIT—to assess its effectiveness in real-world crowd scenarios. Density maps were generated to visualize spatial distributions, and performance was quantitatively evaluated using Mean Squared Error (MSE) and Mean Absolute Error (MAE) metrics. For the Mall dataset, the model achieved an MSE of 0.08 and an MAE of 0.10, while for the Crowd-UIT dataset, an MSE of 0.05 and an MAE of 0.15 were obtained. These results suggest that the proposed VGG-16-based CSRNet model yields high accuracy in crowd estimation tasks across varied environments and crowd densities. Additionally, the model demonstrates robustness in generalizing across different dataset characteristics, indicating its potential applicability in both surveillance systems and public space management. The outcomes of this investigation offer a promising direction for future research in data-driven crowd analysis, particularly in enhancing predictive reliability and real-time deployment capabilities of deep learning models for population monitoring tasks.
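In CSRNet-style models, the crowd count for a frame is obtained by summing the predicted density map, and MAE/MSE are then computed over per-frame counts. The following minimal sketch illustrates that evaluation step; the density maps here are made-up placeholders, not data from the study.

```python
import numpy as np

# Hypothetical density maps for 3 frames: ground truth vs. model prediction.
# In CSRNet-style models, the crowd count is the sum over the density map.
gt_maps = [np.full((4, 4), 0.5), np.full((4, 4), 1.0), np.full((4, 4), 0.25)]
pred_maps = [np.full((4, 4), 0.45), np.full((4, 4), 1.05), np.full((4, 4), 0.25)]

gt_counts = np.array([m.sum() for m in gt_maps])      # [8.0, 16.0, 4.0]
pred_counts = np.array([m.sum() for m in pred_maps])  # ≈ [7.2, 16.8, 4.0]

# Per-frame count errors aggregated into the two reported metrics.
mae = np.abs(pred_counts - gt_counts).mean()
mse = ((pred_counts - gt_counts) ** 2).mean()
print(round(mae, 3), round(mse, 3))
```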

Abstract


To address the issue of estimating energy consumption in computer systems, this study investigates the contribution of various hardware parameters to energy fluctuations, as well as the correlation between these parameters. Based on this analysis, the CMP (Chip Multiprocessors) model was proposed, selecting the most representative and monitorable parameters that reflect changes in system energy consumption. The CMP model adapts to different task states of the computer system by identifying the primary components driving energy consumption under varying conditions. Energy consumption estimation was then conducted by monitoring these dominant parameters. Experiments across various task states demonstrate that the CMP model outperforms traditional FAN (Fuzzy Attack Net) and Cubic models, particularly when the computer system engages in data-intensive tasks.
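One simple way to estimate energy from a handful of monitorable parameters is to fit a linear model over recorded counter values. The sketch below is illustrative only (it is not the paper's CMP model): the counter names, weights, and synthetic measurements are assumptions for demonstration.

```python
import numpy as np

# Illustrative sketch: estimate system energy as a linear combination of a
# few monitorable hardware counters. All names and coefficients are made up.
rng = np.random.default_rng(0)
n = 200
cpu_util = rng.uniform(0, 1, n)   # fraction of CPU busy time
mem_bw = rng.uniform(0, 10, n)    # memory bandwidth, GB/s
disk_io = rng.uniform(0, 5, n)    # disk throughput, MB/s

# Synthetic "measured" energy generated from known weights plus noise.
energy = 20 + 50 * cpu_util + 3 * mem_bw + 2 * disk_io + rng.normal(0, 0.1, n)

# Least-squares fit recovers the weights from the monitored parameters.
X = np.column_stack([np.ones(n), cpu_util, mem_bw, disk_io])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
print(coef)  # close to [20, 50, 3, 2]
```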

Abstract

A novel electronic voting system (EVS) was developed by integrating blockchain technology and advanced facial recognition to enhance electoral security, transparency, and accessibility. The system integrates a public, permissionless blockchain—specifically the Ethereum platform—to ensure end-to-end transparency and immutability throughout the voting lifecycle. To reinforce identity verification while preserving voter privacy, a facial recognition technology based on the ArcFace algorithm was employed. This biometric approach enables secure, contactless voter authentication, mitigating risks associated with identity fraud and multiple voting attempts. The confluence of blockchain technology and facial recognition in a unified architecture was shown to improve system robustness against tampering, data breaches, and unauthorized access. The proposed system was designed within a rigorous research framework, and its technical implementation was critically assessed in terms of security performance, scalability, user accessibility, and system latency. Furthermore, potential ethical implications and privacy considerations were addressed through the use of decentralized identity management and encrypted biometric data storage. The integration strategy not only enhances the verifiability and auditability of election outcomes but also promotes greater inclusivity by enabling remote participation without compromising system integrity. This study contributes to the evolving field of electronic voting by demonstrating how advanced biometric verification and distributed ledger technologies can be synchronously leveraged to support democratic processes. The findings are expected to inform future deployments of secure, accessible, and transparent electoral platforms, offering practical insights for governments, policymakers, and technology developers aiming to modernize electoral systems in a post-digital era.
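ArcFace-style verification typically compares L2-normalized face embeddings by cosine similarity against a threshold. The sketch below shows only that comparison step; the embedding vectors and the threshold value are made-up placeholders, not parameters from the proposed system.

```python
import numpy as np

# Illustrative sketch of embedding-based face verification: compare
# L2-normalized embeddings via cosine similarity against a threshold.
def cosine_similarity(a, b):
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

def verify(enrolled, probe, threshold=0.5):
    # Accept the probe only if it is similar enough to the enrolled template.
    return cosine_similarity(enrolled, probe) >= threshold

# Placeholder embeddings (real ArcFace embeddings are high-dimensional).
enrolled = np.array([0.2, 0.9, 0.1, 0.4])
same_person = enrolled + np.array([0.01, -0.02, 0.0, 0.01])  # small drift
impostor = np.array([-0.8, 0.1, 0.5, -0.3])

print(verify(enrolled, same_person), verify(enrolled, impostor))
```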

Abstract


Hyperparameter search often makes poor use of compute resources, as surrogate-based optimizers consume extensive memory and demand long set-up times, while projects running on fixed budgets require lean tuning tools. The current study presents the Bounding Box Tuner (BBT) and tests its capability to attain maximum validation accuracy while reducing tuning time and memory use. BBT was compared with Random Search, Bayesian Optimization with Gaussian Processes (GP-Bayesian), the Tree-Structured Parzen Estimator (TPE), Evolutionary Search, and Local Search. Modified National Institute of Standards and Technology (MNIST) classification with a multilayer perceptron (0.11 M weights) and a Tiny Vision Transformer (TinyViT) (9.5 M weights) was adopted as the benchmark. Each optimizer was assigned 50 trials; during each trial, early pruning stopped a run if validation loss rose for four consecutive epochs. All tests used a single NVIDIA GTX 1650 Ti GPU, and the key metrics were best validation accuracy, total search time, and time per trial. On the perceptron task, BBT reached 97.88% validation accuracy in 1994 s, whereas TPE obtained 97.98% in 2976 s. On TinyViT, BBT achieved 94.92% in 2364 s, while GP-Bayesian reached 94.66% in 2191 s. BBT thus kept accuracy within 0.1 percentage points of the best competitor while reducing tuning time by one-third. The algorithm renders a surrogate model unnecessary, enforces constraints by design, and exposes only three user parameters. Given these benefits, BBT is a practical option for rapid, resource-aware hyperparameter optimization in deep-learning pipelines.
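The abstract does not specify BBT's update rules, but a surrogate-free, bounding-box style search can be sketched as follows: sample uniformly inside the current box, then shrink the box around the incumbent best point each round, clamped to the original bounds so constraints hold by design. The three user parameters assumed here (trials per round, number of rounds, shrink factor) are illustrative, not BBT's actual parameters.

```python
import random

# Illustrative bounding-box search: uniform sampling inside a box that
# shrinks around the best point found so far. Not the actual BBT algorithm.
def bounding_box_search(objective, bounds, trials_per_round=10, rounds=5, shrink=0.5):
    best_x, best_y = None, float("inf")
    for _ in range(rounds):
        for _ in range(trials_per_round):
            x = [random.uniform(lo, hi) for lo, hi in bounds]
            y = objective(x)
            if y < best_y:
                best_x, best_y = x, y
        # Shrink each dimension around the incumbent best point, clamped to
        # the current bounds so the search never leaves the feasible region.
        bounds = [
            (max(lo, cx - shrink * (hi - lo) / 2), min(hi, cx + shrink * (hi - lo) / 2))
            for (lo, hi), cx in zip(bounds, best_x)
        ]
    return best_x, best_y

random.seed(42)
# Toy objective standing in for validation loss; minimum at (0.3, -1.2).
x, y = bounding_box_search(lambda v: (v[0] - 0.3) ** 2 + (v[1] + 1.2) ** 2,
                           bounds=[(-2, 2), (-2, 2)])
print(x, y)
```

Because the box only ever shrinks within the initial bounds, every sampled point is feasible by construction, which mirrors the "constraints by design" property the abstract attributes to BBT.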
