References

1. M. Shi and I. Hussain, “Improved region-based active contour segmentation through divergence and convolution techniques,” AIMS Math., vol. 10, no. 1, pp. 654–671, 2025.
2. R. Archana and P. S. E. Jeevaraj, “Deep learning models for digital image processing: A review,” Artif. Intell. Rev., vol. 57, no. 1, p. 11, 2024.
3. I. Hussain and J. Muhammad, “Efficient convex region-based segmentation for noising and inhomogeneous patterns,” Inverse Probl. Imaging, vol. 17, no. 3, pp. 708–725, 2023.
4. R. Jabbar, E. Dhib, A. B. Said, M. Krichen, N. Fetais, E. Zaidan, and K. Barkaoui, “Blockchain technology for intelligent transportation systems: A systematic literature review,” IEEE Access, vol. 10, pp. 20995–21031, 2022.
5. T. Yuan, W. da Rocha Neto, C. E. Rothenberg, K. Obraczka, C. Barakat, and T. Turletti, “Machine learning for next-generation intelligent transportation systems: A survey,” Trans. Emerg. Telecommun. Technol., vol. 33, no. 4, p. e4427, 2022.
6. H. Behrooz and Y. M. Hayeri, “Machine learning applications in surface transportation systems: A literature review,” Appl. Sci., vol. 12, no. 18, p. 9156, 2022.
7. O. Fokina, A. Mottaeva, and A. Mottaeva, “Transport infrastructure in the system of environmental projects for sustainable development of the region,” E3S Web Conf., vol. 515, p. 01015, 2024.
8. J. Milewicz, D. Mokrzan, and G. M. Szymański, “Environmental impact evaluation as a key element in ensuring sustainable development of rail transport,” Sustainability, vol. 15, no. 18, p. 13754, 2023.
9. H. Hmamed, A. Benghabrit, A. Cherrafi, and N. Hamani, “Achieving a sustainable transportation system via economic, environmental, and social optimization: A comprehensive AHP-DEA approach from the waste transportation sector,” Sustainability, vol. 15, no. 21, p. 15372, 2023.
10. A. Kwilinski, O. Lyulyov, and T. Pimonenko, “Reducing transport sector CO2 emissions patterns: Environmental technologies and renewable energy,” J. Open Innov. Technol. Mark. Complex., vol. 10, no. 1, p. 100217, 2024.
11. S. K. Rajput, J. C. Patni, S. S. Alshamrani, et al., “Automatic vehicle identification and classification model using the YOLOv3 algorithm for a toll management system,” Sustainability, vol. 14, no. 15, p. 9163, 2022.
12. P. Sharma, A. Singh, K. K. Singh, and A. Dhull, “Vehicle identification using modified region based convolution network for intelligent transportation system,” Multimed. Tools Appl., vol. 81, no. 24, pp. 34893–34917, 2022.
13. S. S. Tippannavar and Y. SD, “Real-time vehicle identification for improving the traffic management system—A review,” J. Trends Comput. Sci. Smart Technol., vol. 5, no. 3, pp. 323–342, 2023.
14. F. Ni, J. Zhang, and E. Taciroglu, “Development of a moving vehicle identification framework using structural vibration response and deep learning algorithms,” Mech. Syst. Signal Process., vol. 201, p. 110667, 2023.
15. O. Nasr, E. Alsisi, K. Mohiuddin, and A. Alqahtani, “Designing an intelligent QR code-based mobile application: A novel approach for vehicle identification and authentication,” Indian J. Sci. Technol., vol. 16, no. 37, pp. 3139–3147, 2023.
16. W. Wang, C. Huai, L. Meng, Z. Wang, and H. Zhang, “Research on the detection and recognition system of target vehicles based on fusion algorithm,” Math. Syst. Sci., vol. 2, no. 2, p. 2760, 2024.
17. M. Zohaib, M. Asim, and M. ELAffendi, “Enhancing emergency vehicle detection: A deep learning approach with multimodal fusion,” Mathematics, vol. 12, no. 10, p. 1514, 2024.
18. A. H. F. de Córdova, J. L. Olazagoitia, and C. Gijón-Rivera, “Non-invasive identification of vehicle suspension parameters: A methodology based on synthetic data analysis,” Mathematics, vol. 12, no. 3, p. 397, 2024.
19. H. Moussaoui, N. E. Akkad, M. Benslimane, W. El-Shafai, A. Baihan, C. Hewage, and R. S. Rathore, “Enhancing automated vehicle identification by integrating YOLOv8 and OCR techniques for high-precision license plate detection and recognition,” Sci. Rep., vol. 14, no. 1, p. 14389, 2024.
20. S. Kanagamalliga, P. Kovalan, K. Kiran, and S. Rajalingam, “Traffic management through cutting-edge vehicle detection, recognition, and tracking innovations,” Procedia Comput. Sci., vol. 233, pp. 793–800, 2024.
21. N. Islam, S. K. Ray, M. A. Hossain, M. A. R. Hasan, and M. B. A. Z. Shammo, “Vehicle classification and detection using YOLOv8: A study on highway traffic analysis,” in 2024 International Conference on Recent Progresses in Science, Engineering and Technology (ICRPSET), Rajshahi, Bangladesh, 2024, pp. 1–4.
22. I. El Mallahi, J. Riffi, H. Tairi, and M. A. Mahraz, “Enhancing traffic safety with advanced machine learning techniques and intelligent identification,” Res. Sq., 2024.
23. A. C. Bovik, The Essential Guide to Image Processing. Academic Press, 2009.
Open Access
Research article

A Hybrid Soft Computing Framework for Robust Classification of Heavy Transport Vehicles in Visual Traffic Surveillance

Ibrar Hussain*
Department of Mathematics, University of Peshawar, 25120 Peshawar, Pakistan
Mechatronics and Intelligent Transportation Systems | Volume 4, Issue 2, 2025 | Pages 61-71
Received: 03-09-2025 | Revised: 04-14-2025 | Accepted: 04-27-2025 | Available online: 05-05-2025

Abstract:

The efficient classification of transport vehicles is critical to the optimization of modern transportation systems, yet significant challenges persist, particularly in distinguishing Heavy Transport Vehicles (HTVs) from Light Transport Vehicles (LTVs). These challenges arise due to considerable variations in vehicle size, shape, orientation, and external factors such as camera perspective, lighting conditions, and occlusions. In this study, a novel classification framework is proposed, integrating geometric feature extraction with a soft computing approach based on fuzzy logic. Key geometric attributes, including bounding box length, width, area, and aspect ratio, are extracted through image processing techniques. Initial classification is performed via threshold-based rules to eliminate non-HTV instances using predefined feature thresholds. To address uncertainties inherent in real-world surveillance conditions, fuzzy logic inference is subsequently applied, enabling flexible and robust decision-making in the presence of imprecise or noisy data. This hybrid methodology, combining deterministic thresholding and soft computing principles, enhances classification reliability and adaptability under diverse environmental and operational conditions. Extensive real-world experiments have been conducted to validate the proposed framework, demonstrating superior performance in terms of accuracy, robustness, and computational efficiency when compared with conventional classification methods. The results underscore the potential of the framework for deployment in intelligent traffic monitoring systems where precise vehicle categorization is essential for traffic management, infrastructure planning, and safety enforcement.

Keywords: Image processing, Geometric feature extraction, Fuzzy logic, Threshold-based classification, Vehicle detection, Transportation systems

1. Introduction

Image processing techniques play a crucial role in isolating and analyzing specific objects within complex visual environments, aiding tasks such as object recognition, tracking, and scene understanding [1], [2], [3]. The transport system is a critical component of modern infrastructure, facilitating the movement of people, goods, and services across different geographical locations. It encompasses a variety of modes, including road, rail, air, and water transport, each designed to meet specific needs based on distance, speed, and capacity [4], [5], [6]. Efficient transport systems are essential for economic development, enabling the exchange of resources, promoting trade, and improving accessibility to education, healthcare, and employment. They play a vital role in shaping urbanization, social integration, and environmental sustainability [7], [8], [9], [10]. The evolution of transportation technologies and associated infrastructure continues to shape the functioning of societies, influencing settlement patterns, commercial activities, and global interactions. In recent years, there has been an increasing focus on sustainable transport solutions, aiming to reduce congestion, emissions, and energy consumption, while improving safety and accessibility for all users.

The transport system has witnessed significant advancements with the integration of intelligent technologies aimed at improving vehicle identification, classification, and traffic management. Several studies have focused on utilizing machine learning and deep learning techniques to automate these processes. For example, Rajput et al. [11] developed an automatic vehicle identification and classification model using the YOLOv3 algorithm for a toll management system, demonstrating its effectiveness in real-time vehicle detection and classification for smoother toll operations. Similarly, Sharma et al. [12] proposed a modified region-based convolution network for vehicle identification, enhancing the accuracy and efficiency of intelligent transportation systems. In the realm of traffic management, Tippannavar and SD [13] provided a comprehensive review of real-time vehicle identification techniques aimed at improving traffic flow and reducing congestion. Additionally, Ni et al. [14] introduced a novel moving vehicle identification framework based on structural vibration response and deep learning algorithms, offering a promising solution for dynamic vehicle tracking. Nasr et al. [15] explored the design of an intelligent QR code-based mobile application for vehicle identification and authentication, presenting a new approach to enhancing security and streamlining vehicle registration processes. These advancements, through innovative identification and classification models, contribute significantly to optimizing the efficiency, safety, and sustainability of modern transport systems.

Recent advancements in vehicle detection and recognition systems have significantly enhanced the functionality of transport systems, especially in urban settings where efficient traffic management is crucial. Wang et al. [16] explored a fusion algorithm for target vehicle detection and recognition, providing a robust solution for identifying vehicles under varying environmental conditions. This method, integrating multiple sources of data, helps improve the accuracy and reliability of vehicle recognition systems. However, the approach still faces challenges in handling real-time processing and the complexity of dynamic environments, where rapid changes in lighting, weather, and occlusions can impact performance. Similarly, Zohaib et al. [17] proposed a deep learning approach with multimodal fusion to enhance emergency vehicle detection, offering a solution that improves the response times and prioritization of emergency vehicles in traffic. While the method demonstrates effectiveness, its reliance on large datasets and high computational power for training deep learning models can be a limitation, especially in low-resource environments. Additionally, the fusion of multiple data sources requires complex data preprocessing, which can introduce delays in real-time applications. In line with this, de Córdova et al. [18] introduced a methodology for non-invasive identification of vehicle suspension parameters using synthetic data analysis, expanding the scope of vehicle identification beyond mere visual recognition and integrating performance-based parameters for better overall vehicle analysis. Although this approach improves the depth of vehicle analysis, the reliance on synthetic data may limit its applicability to real-world scenarios, where actual vehicle performance data might differ due to various external factors such as road conditions or vehicle wear and tear. Furthermore, Moussaoui et al. [19] integrated YOLOv8 and optical character recognition (OCR) techniques for high-precision license plate detection, marking an important step toward enhancing automated vehicle identification. While this combination of advanced detection techniques ensures greater accuracy in recognizing and classifying vehicles, even in complex traffic environments, the system has notable limitations. It relies heavily on the clear visibility of license plates, making it vulnerable to issues such as occlusions, dirty or worn-out plates, and varying plate designs across regions. Additionally, the integration of YOLOv8 and OCR significantly increases computational complexity, which may hinder real-time performance and scalability in large-scale or resource-constrained deployments.

These innovations, along with the previously mentioned advancements in intelligent transport systems, significantly contribute to improving safety, efficiency, and security within modern transportation infrastructure. However, challenges related to real-time processing, data preprocessing, reliance on synthetic data, and environmental conditions still persist, limiting the full potential of these systems in practical deployment. The fusion of deep learning, multimodal data, and advanced recognition techniques continues to evolve, offering promising solutions to enhance the overall functionality of transport systems worldwide.

In this paper, we propose a novel and robust vehicle classification model that differentiates between HTVs and LTVs. The model leverages a combination of geometric features, threshold-based criteria, and fuzzy logic to achieve high accuracy in real-world vehicle classification scenarios (see Figure 1). The classification process starts with the extraction of key geometric features from the bounding box surrounding the detected vehicle. These features include length, width, area, and aspect ratio, all of which play a crucial role in identifying HTVs, which are typically larger than regular vehicles.

The classification process is implemented in the following steps:

Feature extraction: The necessary geometric features—length, width, area, and aspect ratio—are computed from the vehicle's bounding box. These features form the foundation for distinguishing between HTVs and LTVs.

Threshold-based filtering: Predefined threshold rules are applied to filter out vehicles that do not meet the criteria for HTVs based on the extracted features. This step ensures that only vehicles likely to be HTVs proceed for further classification.

Fuzzy logic application: To account for uncertainties and variability in the measurements (such as changes in camera angle, vehicle orientation, or environmental factors), fuzzy logic is introduced. The fuzzy logic system models the imprecise nature of vehicle classification, providing flexibility and robustness in decision-making.

Defuzzification: The fuzzy outputs, representing the uncertainty in classification, are defuzzified to generate a crisp decision. This final output classifies the vehicle as either an HTV or LTV.

The integration of both crisp threshold rules and fuzzy logic offers the advantages of accuracy and robustness. The threshold-based approach ensures that clear-cut decisions are made when possible, while the fuzzy logic handles the uncertainties that arise in real-world conditions, such as variation in vehicle orientation and camera angles. This combination of techniques allows the system to adapt and make more reliable decisions in complex and dynamic environments. The key contribution of the proposed model lies in its innovative approach to vehicle classification, which combines the precision of threshold rules with the adaptability of fuzzy logic. By incorporating fuzzy logic, our model effectively addresses the uncertainties inherent in vehicle measurements and classification, improving the system’s robustness to real-world conditions. This hybrid approach enables the model to be highly effective in traffic management systems, where accurate and reliable vehicle classification is critical. The proposed model offers a significant advancement over traditional methods, providing a more flexible, adaptive, and accurate solution for HTV detection.

Figure 1. Flowchart of the proposed vehicle classification method using geometric features, threshold-based criteria, and fuzzy logic
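The four-stage pipeline described above can be sketched as follows. This is an illustrative Python outline (the paper's own implementation is in MATLAB); `fuzzy_score` is a hypothetical placeholder for the full fuzzy inference system detailed in Section 3, while the threshold values are those reported there.

```python
# Sketch of the four-stage pipeline: feature extraction -> threshold filter
# -> fuzzy inference -> defuzzified decision. Illustrative only; fuzzy_score
# stands in for the full rule-based inference system of Section 3.

def extract_features(bbox):
    """bbox = (x, y, w, h) in pixels -> dict of geometric features."""
    x, y, w, h = bbox
    return {"L": h, "W": w, "A": h * w, "AR": h / w}

def passes_thresholds(f):
    """Stage 2: crisp pre-filter using the thresholds of Section 3.2."""
    return f["A"] > 9000 and f["AR"] < 1.5 and f["L"] > 100 and f["W"] > 80

def fuzzy_score(f):
    """Stage 3 placeholder: a smooth confidence in [0, 1]; the real system
    uses the membership functions and rule base of Sections 3.4-3.6."""
    return min(1.0, f["A"] / 20000)

def classify(bbox, cutoff=0.6):
    """Stage 4: defuzzified score compared against the decision cutoff."""
    f = extract_features(bbox)
    if not passes_thresholds(f):
        return "non-HTV"
    return "HTV" if fuzzy_score(f) > cutoff else "non-HTV"

print(classify((0, 0, 120, 150)))   # large, wide bounding box
print(classify((0, 0, 40, 90)))     # small bounding box
```

The crisp pre-filter cheaply rejects obvious non-HTVs so that the (costlier) fuzzy stage only runs on plausible candidates.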

2. Literature Review

In recent years, vehicle detection and classification have emerged as critical components in intelligent transportation systems, enabling a wide range of applications such as traffic monitoring, congestion management, toll collection, and road safety enforcement. Numerous studies have explored various computational techniques, ranging from traditional image processing methods to advanced deep learning and fuzzy logic-based frameworks, to improve the accuracy and robustness of vehicle classification systems. In particular, distinguishing between HTVs and non-HTVs is essential for optimizing road usage and enforcing regulatory compliance. This section reviews recent advancements in vehicle detection and classification, highlighting their methodologies, key contributions, and existing limitations to establish the context for the proposed model.

Kanagamalliga et al. [20] developed a traffic management framework integrating advanced vehicle detection, recognition, and tracking using machine learning and computer vision. Their approach enhances real-time monitoring and traffic flow by maintaining vehicle identities across frames, reducing data redundancy. Although effective in structured environments, the system depends on high-resolution cameras and stable lighting, limiting its performance in poor conditions. Moreover, it lacks detailed strategies for distinguishing between vehicle types, which is important for specific transportation policies.

Islam et al. [21] investigated vehicle classification and detection using the YOLOv8 model. They evaluated YOLOv8’s performance for real-time highway monitoring in Bangladesh, highlighting its ability to process images quickly with minimal accuracy loss. The model, trained on diverse vehicle types, showed high classification accuracy for cars, motorcycles, and trucks. However, it struggled in congested scenes, particularly with closely packed or partially occluded heavy vehicles. Despite its speed and accuracy, YOLOv8's computational complexity can cause bottlenecks in large datasets or resource-limited environments. Additionally, the study did not address the interpretability of the model's outputs, a key factor for intelligent transportation systems.

El Mallahi et al. [22] proposed a hybrid model combining multiple machine learning algorithms to enhance road safety and vehicle identification. Their system uses supervised and unsupervised learning to detect vehicles, pedestrians, and dynamic road objects in real-time, adapting to traffic behavior and environmental changes. A key feature is the addition of intelligent identification layers that provide contextual information such as speed, distance, and lane behavior. However, the framework lacks extensive cross-validation across diverse datasets, raising concerns about its generalizability and robustness under varying conditions. It also prioritizes detection over precise vehicle classification, limiting its applicability in areas like tolling, load regulation, and environmental compliance.

3. Methodology

To achieve accurate classification of vehicles into HTVs and LTVs, we propose a multi-stage methodology that integrates geometric feature extraction, threshold-based filtering, and fuzzy logic inference. The process begins by detecting vehicles in the input frames and computing key geometric features from their corresponding bounding boxes—specifically, length, width, area, and aspect ratio. These features are selected due to their strong correlation with the physical dimensions of HTVs, which are generally larger than standard vehicles. Following feature extraction, a set of predefined threshold rules is employed to filter out non-HTVs based on dimensional constraints. To address the inherent uncertainties and variability in real-world conditions—such as camera perspective, occlusion, and environmental noise—a fuzzy logic system is incorporated. This system interprets the geometric data within a flexible framework that tolerates imprecision and ambiguity. The final classification decision is obtained through defuzzification, converting the fuzzy output into a crisp label that determines whether the vehicle belongs to the HTV or LTV category. The entire classification pipeline is outlined in the following subsections.

3.1 Feature Extraction

To classify a vehicle as an HTV or otherwise, it is essential to extract meaningful geometric features from the input image. These features are derived from the bounding box that tightly encloses the detected vehicle object. The primary features considered in this work include the vehicle’s length, width, area, and aspect ratio, all of which provide significant indicators for distinguishing between LTVs and HTVs. However, it is important to note that feature extraction in this approach is based on two-dimensional pixel dimensions, and does not account for practical scene limitations such as camera calibration, perspective distortion, or depth estimation. These limitations may cause variations in the appearance of vehicles depending on their distance from the camera or their positioning in the scene. As a result, perspective distortion can affect the accuracy of the feature extraction, particularly for distant vehicles.

3.1.1 Length, width, and area

The bounding box around the detected vehicle provides two key spatial measurements: the vertical extent (height) and the horizontal extent (width). The length of the vehicle, denoted by L, is defined as the vertical height of the bounding box in pixels. Similarly, the width, denoted by W, is the horizontal width of the bounding box in pixels.

Once the height and width are obtained, the area A covered by the vehicle is computed as the product of these two dimensions. This area represents the overall spatial footprint of the vehicle and serves as a key discriminative feature, particularly since HTVs typically occupy a larger image region than lighter vehicles. The area is calculated using the following equation:

$ A=L \times W $

However, due to potential scale variations caused by camera placement, perspective distortion, or the vehicle’s distance from the camera, this feature is combined with other characteristics to improve classification robustness. For more accurate classification, depth estimation or camera calibration would be necessary to compensate for the effects of perspective distortion, which can cause distant vehicles to appear smaller than they are.

3.1.2 Aspect ratio

Another important feature is the aspect ratio, denoted by AR, which is the ratio of the vehicle’s height to its width. It is defined as:

$ \mathrm{AR}=\frac{L}{W} $

This ratio captures the structural profile of the vehicle. For instance, HTVs such as trucks or buses typically exhibit a lower aspect ratio, as they are wider relative to their height. In contrast, smaller vehicles may appear taller and narrower, leading to a higher aspect ratio.
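The four features of Section 3.1 can be computed directly from a binary vehicle mask. The sketch below uses plain NumPy in place of the MATLAB `regionprops` call used in the paper, and a synthetic rectangular blob stands in for a detected vehicle.

```python
import numpy as np

# Synthetic binary mask with one 120x120 px "vehicle" blob (illustrative).
mask = np.zeros((200, 200), dtype=np.uint8)
mask[40:160, 30:150] = 1

# Bounding box: first/last occupied row and column.
rows = np.any(mask, axis=1)
cols = np.any(mask, axis=0)
min_r, max_r = np.where(rows)[0][[0, -1]]
min_c, max_c = np.where(cols)[0][[0, -1]]

L = int(max_r - min_r + 1)   # vertical extent (length) in px
W = int(max_c - min_c + 1)   # horizontal extent (width) in px
A = L * W                    # bounding-box area, A = L x W
AR = L / W                   # aspect ratio, AR = L / W

print(L, W, A, AR)
```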

3.2 Threshold-Based Mathematical Classification

In the initial stage of classification, threshold values are applied to geometric features extracted from labeled vehicle image datasets. These thresholds were empirically determined based on statistical observations and distribution analysis of features across multiple annotated vehicle categories. A preliminary dataset exploration revealed distinct separations in size and shape between HTVs and non-HTVs. These insights guided the setting of appropriate boundary values for effective filtering. Additionally, sensitivity testing was conducted by slightly adjusting each threshold to verify classification stability and robustness.

The following threshold-based conditions are defined for classifying a vehicle as HTV based on its extracted features (area, aspect ratio, length, and width).

3.2.1 Threshold criteria

Area (A): The area of the bounding box correlates directly with vehicle size. Data distribution showed that HTVs typically exceed 9000 px² in image space, while smaller vehicles remain below this threshold. A sensitivity range of 8500–9500 px² was tested, with 9000 px² yielding the best trade-off between precision and recall. Therefore:

$ A>9000 \mathrm{px}^2 $

Aspect ratio (AR): The ratio of height to width (AR) was analyzed across different vehicle classes. HTVs generally exhibit broader and flatter profiles, resulting in lower AR values. Histograms revealed that the majority of HTVs fall below an AR of 1.5, which was thus selected after validation:

$ \mathrm{AR}<1.5 $

Length (L): Vehicle height was also examined, and vehicles categorized as HTVs consistently showed vertical lengths above 100 px in the image dataset. This threshold was confirmed through iterative adjustments in the range of 90–120 px, with 100 px providing a balanced cutoff:

$ L>100 \mathrm{px} $

Width (W): Similarly, HTVs demonstrated consistently higher width values, typically exceeding 80 px. Below this value, the majority of vehicles were non-HTVs such as cars or bikes. Thus, 80 px was established as a reliable threshold after testing several values from 70 to 90 px:

$ W>80 \mathrm{px} $

These empirically derived thresholds provide a reliable initial filtering layer before applying fuzzy logic. They are not arbitrarily chosen but rather supported by both feature distribution analysis and sensitivity tuning on the labeled dataset, ensuring robustness and adaptability to real-world classification scenarios.

Figure 2 shows the feature distributions for HTVs and non-HTVs, demonstrating the effectiveness of the proposed model in distinguishing between the two categories. Key features, including Area (px²), Length (px), and Width (px), are analyzed. Empirically determined thresholds—9000 px² for Area, 1.5 for the Length-to-Width ratio, and 80 px for Width—are indicated with dashed lines to highlight optimal class separation. The histograms reveal distinct patterns, with HTVs generally exhibiting larger dimensions than non-HTVs. These results validate the model’s feature extraction process and suggest strong potential for accurate HTV detection in real-world applications.

Figure 2. Feature distributions for HTV and Non-HTV classes
3.3 Classification Rule

The overall classification decision is made by applying all the above threshold conditions simultaneously. A vehicle is classified as an HTV if it satisfies all the following conditions:

$ \text{IF } A > 9000 \text{ AND } \mathrm{AR} < 1.5 \text{ AND } L > 100 \text{ AND } W > 80 \Rightarrow \text{Vehicle is an HTV} $

This rule is based on the combination of the four extracted features: area, aspect ratio, length, and width. Vehicles meeting all the specified conditions are classified as HTVs. If any condition is not satisfied (e.g., if the vehicle has a high aspect ratio or a small area), it is classified as a non-HTV (LTV or other types of vehicles).
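The rule transcribes directly into code. A Python sketch (thresholds from Section 3.2; the two example feature vectors are hypothetical):

```python
def is_htv(A, AR, L, W):
    """Crisp classification rule of Section 3.3: all four conditions must hold."""
    return A > 9000 and AR < 1.5 and L > 100 and W > 80

print(is_htv(A=14400, AR=1.2, L=120, W=120))  # HTV-sized box
print(is_htv(A=4800, AR=3.0, L=120, W=40))    # tall, narrow box (e.g., pedestrian-like blob)
```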

3.4 Fuzzy Input Variables and Membership Functions

To classify vehicles into HTVs or non-HTVs, fuzzy logic is used to model the uncertainty and imprecision associated with visual input features. The primary features considered for classification are Length (L), Width (W), Area (A), and Aspect Ratio (AR). These features are fuzzified using linguistic terms such as Short, Medium, Long, etc., and corresponding membership functions (see Table 1). The membership functions were constructed using a combination of trapezoidal and Gaussian curves, with parameters derived from empirical analysis of the dataset.

Membership function parameters: The trapezoidal membership functions are defined using four parameters: a, b, c, and d, where [a, b] is the rising edge, [b, c] is the plateau, and [c, d] is the falling edge. These parameters were chosen based on percentile thresholds calculated from the training dataset to reflect typical ranges for each class of vehicle. For Gaussian functions, the mean ($\mu$) and standard deviation ($\sigma$) were similarly estimated using statistical analysis of the feature distribution.

Table 1. Fuzzy input variables and membership functions
Feature      | Linguistic Terms       | Membership Function
Length       | Short, Medium, Long    | Trapezoidal
Width        | Narrow, Medium, Wide   | Trapezoidal
Area         | Small, Moderate, Large | Gaussian
Aspect Ratio | Low, Medium, High      | Gaussian

The trapezoidal membership function is defined as:

$\mu(x)= \begin{cases}0 & x \leq a \\ \frac{x-a}{b-a} & a < x \leq b \\ 1 & b < x \leq c \\ \frac{d-x}{d-c} & c < x \leq d \\ 0 & x > d\end{cases}$

And the Gaussian membership function is given by:

$ \mu(x)=e^{-\frac{(x-\mu)^2}{2 \sigma^2}} $

These functions allow for smooth transitions between linguistic categories and enable robust handling of noisy or imprecise inputs.
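Both membership families implement in a few lines. The parameters below are hypothetical placeholders; the paper derives its actual parameters from percentile analysis of the training dataset.

```python
import math

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rising edge [a, b], plateau [b, c], falling [c, d]."""
    if x <= a or x > d:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def gaussian(x, mu, sigma):
    """Gaussian membership centred at mu with spread sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical parameters for a "Long" length set and a "Large" area set.
print(trapezoid(150, a=90, b=120, c=200, d=260))        # inside the plateau
print(round(gaussian(9000, mu=12000, sigma=3000), 3))   # one sigma below the mean
```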

3.5 Fuzzy Rule Base

A comprehensive rule base was developed to cover a wider range of vehicle configurations. Each rule uses combinations of input linguistic terms to infer the vehicle class. Below is an extended set of representative rules:

Rule 1: If L is Long AND W is Wide AND A is Large AND AR is Low THEN Vehicle is HTV.

Rule 2: If L is Medium AND W is Wide AND A is Moderate AND AR is Medium THEN Vehicle is LTV.

Rule 3: If L is Short AND W is Narrow AND A is Small AND AR is High THEN Vehicle is Two-Wheeler.

Rule 4: If L is Medium AND W is Medium AND A is Moderate AND AR is Medium THEN Vehicle is Car.

Rule 5: If L is Long AND W is Medium AND A is Large AND AR is Low THEN Vehicle is Mini-Bus.

This expanded rule base ensures coverage across a broad range of real-world vehicle geometries and enhances the classification accuracy of the fuzzy inference system. All rules were defined using expert knowledge and validated against annotated samples from the dataset.

3.6 Rule Firing Strength

In fuzzy logic, rule firing strength represents the degree to which a rule’s conditions are satisfied. For each fuzzy rule, the firing strength is calculated by taking the minimum of the membership values of the involved fuzzy sets. This represents the idea that a rule can only be as strong as the weakest condition it is based on.

For Rule 1, the firing strength $\alpha$ is computed as follows:

$ \alpha=\min \left(\mu_{\text{Long}}(L), \mu_{\text{Wide}}(W), \mu_{\text{Large}}(A), \mu_{\text{LowAR}}(AR)\right) $

where,

$\mu_{\text {Long}}(L)$: The membership value of the length feature $L$ in the fuzzy set “Long”.

$\mu_{\text {Wide}}(W)$: The membership value of the width feature $W$ in the fuzzy set “Wide”.

$\mu_{\text {Large}}(A)$: The membership value of the area feature $A$ in the fuzzy set “Large”.

$\mu_{\text {LowAR}}(A R)$: The membership value of the aspect ratio feature $A R$ in the fuzzy set “Low”.

The firing strength $\alpha$ reflects how well the given vehicle satisfies the conditions of Rule 1. A lower firing strength indicates that the vehicle does not satisfy the rule as strongly. This same calculation is repeated for other rules to assess their individual firing strengths.
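In code the computation is a single `min` over the antecedent memberships. The membership values below are hypothetical, standing in for outputs of the Section 3.4 membership functions for one detected vehicle:

```python
# Firing strength of Rule 1: the rule is only as strong as its weakest antecedent.
memberships = {
    "Long(L)": 0.8,    # illustrative membership values
    "Wide(W)": 0.6,
    "Large(A)": 0.9,
    "LowAR(AR)": 0.7,
}
alpha = min(memberships.values())
print(alpha)
```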

3.7 Defuzzification and Decision

Defuzzification is the process of converting the fuzzy output of the inference system into a single crisp value that can be used for decision-making. One of the most common methods for defuzzification is the “centroid method” (also known as the center-of-gravity method).

The centroid defuzzification method calculates the crisp output as the weighted average of the possible output values, where the weights are determined by the membership degrees of the fuzzy sets in the output domain.

The formula for centroid defuzzification is:

$ y_{\text {crisp }}=\frac{\int x \cdot \mu_{\text {Output }}(x) d x}{\int \mu_{\text {Output }}(x) d x} $

where $x$ represents the possible output values and $\mu_{\text {Output}}(x)$ is the membership function of the fuzzy output. The numerator integrates each output value weighted by its degree of membership, while the denominator is the total membership mass over the output space, which normalizes the weights.

This process results in a crisp value $y_{\text {crisp}}$ that represents the best estimation of the output from the fuzzy system. Once the defuzzified value is obtained, a decision is made by comparing it against a threshold. For this system, we use the following decision rule:

If $y_{\text {crisp}}>0.6$, then the vehicle is classified as an HTV. Alternatively, if $y_{\text {crisp}} \leq 0.6$, the vehicle is classified as a non-HTV or classified differently based on the system's logic.
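A hedged numerical sketch of this step (in Python rather than the paper's MATLAB, using a discrete approximation of the integrals and an illustrative aggregated output shape) could look like:

```python
def centroid_defuzzify(xs, mus):
    """Discrete centroid: sum(x * mu) / sum(mu) approximates the integral form."""
    num = sum(x * m for x, m in zip(xs, mus))
    den = sum(mus)
    return num / den

# Illustrative aggregated output: a triangular membership peaking at 0.8
xs = [i / 100 for i in range(101)]                      # output universe [0, 1]
mus = [max(0.0, 1.0 - abs(x - 0.8) / 0.3) for x in xs]

y_crisp = centroid_defuzzify(xs, mus)
label = "HTV" if y_crisp > 0.6 else "non-HTV"
```

With the output mass concentrated near 0.8, the crisp value lands above the 0.6 threshold and the vehicle is labeled an HTV.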

4. Discussion

The proposed vehicle classification model demonstrates an effective approach for detecting and classifying HTVs in real-world traffic scenarios. By integrating geometric feature extraction with fuzzy logic, the model handles challenges such as vehicle occlusion, congestion, and varying vehicle types. The dataset used in this study is the Stanford Cars Dataset, a publicly available benchmark containing 16,185 images of 196 vehicle classes, including both LTVs and HTVs. For this study, a relevant subset of 500 images was selected and labeled accordingly, and all images were resized to 255×255 pixels for uniformity and efficient processing. The use of a standard public dataset supports reproducibility and benchmarking of the proposed model. The model was implemented in MATLAB, whose built-in image processing tools, such as the regionprops function for feature extraction, were combined with a custom fuzzy inference system for decision-making. The MATLAB environment offers a flexible platform for implementing and fine-tuning the model, with an intuitive interface for debugging and optimization. The code combines bounding box extraction, thresholding, and fuzzy logic evaluation, creating a seamless integration of geometric and fuzzy approaches.

In the proposed model, several key parameters define the fuzzy membership functions used to classify the vehicle features Length (L), Width (W), Area (A), and Aspect Ratio (AR). These parameters were selected based on real-world traffic data to ensure accurate classification of HTVs. For Length (L), the trapezoidal membership function is defined with parameters $a$=4.5, $b$=6.5, $c$=10.5, and $d$=12.5, which correspond to the categories “Short”, “Medium”, and “Long”. Similarly, for Width (W), the trapezoidal function is defined using $e$=2, $f$=4, $g$=6, and $h$=8, representing the categories “Narrow”, “Medium”, and “Wide”. For Area (A), the Gaussian membership function uses the parameters $\mu_{\text {Small}}=1000, \mu_{\text {Large}}=4000, \sigma_{\text {Small}}$=200, and $\sigma_{\text {Large}}$=500, which define the categories “Small” and “Large”. The Aspect Ratio (AR) is likewise classified with a Gaussian function, with parameters $\mu_{\text {Low}}$=1, $\mu_{\text {High}}$=3, $\sigma_{\text {Low}}$=0.5, and $\sigma_{\text {High}}$=1.0, capturing the “Low” and “High” aspect ratios. The firing strength of each rule is calculated as the minimum of the membership values, and defuzzification is performed using the centroid method. The vehicle is classified as an HTV if the defuzzified output $y_{\text {crisp}}$ is greater than 0.6. These parameter values keep the model robust and accurate across various real-world traffic conditions.
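The reported membership functions can be transcribed directly into Python as a sketch (the parameters are those stated above; the sample feature values fed to them are illustrative, not drawn from the dataset):

```python
import math

def trapmf(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def gaussmf(x, mu, sigma):
    """Gaussian membership centred at mu with spread sigma."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Parameters reported in the text, evaluated at illustrative feature values
mu_len  = trapmf(8.0, 4.5, 6.5, 10.5, 12.5)   # length on the plateau -> 1.0
mu_wid  = trapmf(3.0, 2.0, 4.0, 6.0, 8.0)     # width on the rising edge -> 0.5
mu_area = gaussmf(4000.0, 4000.0, 500.0)      # "Large" area at its centre -> 1.0
mu_ar   = gaussmf(1.5, 1.0, 0.5)              # "Low" AR, one sigma away -> exp(-0.5)
```

Evaluating a feature at the centre of its Gaussian (or on the trapezoid's plateau) yields full membership of 1.0, with membership decaying smoothly as the feature moves away.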

The MATLAB code for the proposed vehicle classification model will be made available for research purposes. Interested researchers can request the code by sending an email to the corresponding author. This open access to the code is intended to facilitate further research and allow other practitioners to apply and build upon the proposed methodology for vehicle classification in intelligent traffic monitoring systems.

Figure 3 illustrates the step-by-step process of the proposed vehicle classification model, which operates in two main stages: feature extraction and classification. In the feature extraction stage, geometric attributes—including length, width, area, and aspect ratio—are obtained from each vehicle’s bounding box. These features serve as key indicators for distinguishing HTVs from other vehicle types. By focusing on fundamental geometric properties, the model maintains high accuracy even when vehicles are partially occluded or overlapping.

Figure 3. Overview of the proposed HTV detection framework

After feature extraction, threshold-based classification is performed, followed by entropy and fuzzy logic-based reasoning to account for image variations and enhance detection robustness. The final output identifies the detected HTVs within the input traffic scene, demonstrating the model’s effectiveness under diverse and challenging conditions.

In the second stage, the model applies a threshold-based classification system. The predefined thresholds are set based on empirical observations and statistical analysis of HTV dimensions, ensuring that only vehicles with characteristics above these limits are classified as HTVs. This step effectively filters out non-HTVs, which may include smaller cars, motorcycles, or light trucks that fall outside the size range typically associated with heavy vehicles. By leveraging these geometric thresholds, the system is able to make rapid and accurate decisions regarding vehicle classification.
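The crisp pre-filter described above can be sketched as follows; note that the numeric cut-offs here are hypothetical placeholders, since the paper's empirically derived thresholds are not listed in this section:

```python
# Hypothetical cut-offs for illustration only; the paper derives its
# thresholds empirically from HTV dimension statistics.
HTV_MIN_LENGTH = 6.5
HTV_MIN_WIDTH = 2.0
HTV_MIN_AREA = 2500.0

def passes_htv_thresholds(length, width, area):
    """Crisp pre-filter: only candidates above all limits proceed to fuzzy inference."""
    return (length >= HTV_MIN_LENGTH
            and width >= HTV_MIN_WIDTH
            and area >= HTV_MIN_AREA)
```

A candidate failing any single limit is discarded immediately, which is what makes this stage a fast filter for smaller cars, motorcycles, and light trucks before the costlier fuzzy evaluation.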

To address the challenges posed by uncertain or noisy data, fuzzy logic is incorporated into the model. Traditional threshold-based methods may struggle when dealing with imprecise measurements, such as those caused by low-quality images, occlusions, or ambiguous vehicle shapes. Fuzzy logic provides a mechanism to handle these uncertainties, allowing the model to make more flexible and adaptive decisions. Specifically, fuzzy inference is used to assess the degree to which a vehicle’s features match the expected characteristics of an HTV, enabling the system to classify vehicles even in edge cases where strict geometric thresholds might not be met.

This combination of crisp thresholding and fuzzy inference enhances the robustness of the detection framework. The model becomes less sensitive to noise, occlusions, and other variations in traffic conditions, which are common in real-world environments. By integrating these two approaches, the model achieves a higher level of flexibility, allowing it to handle a broader range of traffic scenarios while maintaining high classification accuracy.

Figure 4 demonstrates the performance of the proposed vehicle classification model in detecting and classifying HTVs across a variety of real-world traffic scenarios. The top row shows the original traffic images captured under diverse conditions, including congestion, occlusion, and the presence of multiple vehicle types. These challenging environments often involve overlapping vehicles, obstructed views, and varying angles of observation.

The bottom row presents the detection outputs, where accurately identified HTVs are highlighted with red bounding boxes. The results illustrate the model’s robustness and reliability across different traffic conditions, confirming its effectiveness in complex, real-world environments.

Figure 4. HTV detection results in real-world traffic scenes

Overall, the proposed vehicle classification model not only demonstrates superior performance in identifying HTVs but also showcases its adaptability in complex, dynamic traffic environments. The fusion of geometric feature extraction, threshold-based classification, and fuzzy logic results in a robust system capable of providing reliable vehicle classification even under challenging conditions, making it highly suitable for intelligent traffic monitoring and management systems. Table 2 presents a comprehensive evaluation of the proposed model's performance in detecting HTVs under varying lighting and angle conditions. The performance metrics are based on standard image quality assessment techniques as described by Bovik [23]. The results demonstrate that the model maintains high accuracy across diverse scenarios, highlighting its robustness and adaptability.

Table 2. Performance of proposed model under varying light and angle conditions

| Condition | Precision (%) | Recall (%) | F1-Score (%) | IoU |
|---|---|---|---|---|
| Bright Light (Day) | 92.5 | 91.1 | 91.8 | 0.80 |
| Low Light (Night) | 88.6 | 85.3 | 86.9 | 0.74 |
| Shadows/Overcast | 89.7 | 86.8 | 88.2 | 0.76 |
| Front View (0°) | 93.1 | 91.0 | 92.0 | 0.81 |
| Side View (90°) | 90.2 | 88.5 | 89.3 | 0.77 |
| Diagonal View (45°/135°) | 87.9 | 85.6 | 86.7 | 0.73 |

Under bright daylight conditions, the model achieves the highest precision of 92.5%, with a recall of 91.1%, an F1-score of 91.8%, and an Intersection over Union (IoU) score of 0.80, indicating strong detection capability in well-lit environments. Even in low-light conditions, such as nighttime, the model sustains reasonable performance with an F1-score of 86.9% and an IoU of 0.74, reflecting its ability to handle challenging visual settings.

In scenarios involving shadows or overcast lighting, the model performs reliably, achieving an F1-score of 88.2% and IoU of 0.76, demonstrating effective generalization. Regarding vehicle orientation, the model performs best when vehicles are in the front view (0°), yielding an F1-score of 92.0% and an IoU of 0.81. However, performance slightly decreases for side views (90°) and diagonal views (45°/135°), with F1-scores of 89.3% and 86.7%, and IoU values of 0.77 and 0.73, respectively.

This decline can be attributed to the geometric distortion and reduced visibility of key distinguishing features (such as frontal grilles, headlights, and bumper structures) when vehicles are viewed from non-frontal angles. These features play a significant role in fuzzy rule-based classification; thus, their partial occlusion leads to a slight drop in detection accuracy. Moreover, some false detections were observed, particularly with large SUVs occasionally being misclassified as HTVs due to similar dimensional features.

5. Conclusion

The proposed fuzzy logic-based model provides an efficient and interpretable approach for the classification of HTVs using key geometric features—Length, Width, Area, and Aspect Ratio—extracted from traffic surveillance images. The use of trapezoidal and Gaussian membership functions enables the system to manage uncertainties and gradual transitions between vehicle categories. Implemented in MATLAB R2015a and tested on 255×255-pixel images, the model achieves robust performance in distinguishing HTVs from other vehicle types, making it suitable for real-time intelligent transport systems and traffic analysis.

Despite its effectiveness, the model has two limitations. First, it shows reduced accuracy in detecting HTVs that appear far from the camera, because their smaller, less distinct bounding boxes weaken the membership values. Second, the model relies solely on geometric features, which may limit its performance in crowded scenes where vehicles are partially occluded or overlapping.

To address these limitations, future work will aim to incorporate distance-aware normalization techniques or depth estimation to better handle faraway vehicles. Moreover, integrating additional features such as motion patterns, texture information, or deep feature embeddings can enhance the model's capability in complex environments with occlusions or dense traffic. These improvements will help increase the model's robustness while preserving its interpretability and computational efficiency.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest.

References
1. M. Shi and I. Hussain, “Improved region-based active contour segmentation through divergence and convolution techniques,” AIMS Math., vol. 10, no. 1, pp. 654–671, 2025.
2. R. Archana and P. S. E. Jeevaraj, “Deep learning models for digital image processing: A review,” Artif. Intell. Rev., vol. 57, no. 1, p. 11, 2024.
3. I. Hussain and J. Muhammad, “Efficient convex region-based segmentation for noising and inhomogeneous patterns,” Inverse Probl. Imaging, vol. 17, no. 3, pp. 708–725, 2023.
4. R. Jabbar, E. Dhib, A. B. Said, M. Krichen, N. Fetais, E. Zaidan, and K. Barkaoui, “Blockchain technology for intelligent transportation systems: A systematic literature review,” IEEE Access, vol. 10, pp. 20995–21031, 2022.
5. T. Yuan, W. da Rocha Neto, C. E. Rothenberg, K. Obraczka, C. Barakat, and T. Turletti, “Machine learning for next-generation intelligent transportation systems: A survey,” Trans. Emerg. Telecommun. Technol., vol. 33, no. 4, p. e4427, 2022.
6. H. Behrooz and Y. M. Hayeri, “Machine learning applications in surface transportation systems: A literature review,” Appl. Sci., vol. 12, no. 18, p. 9156, 2022.
7. O. Fokina, A. Mottaeva, and A. Mottaeva, “Transport infrastructure in the system of environmental projects for sustainable development of the region,” E3S Web Conf., vol. 515, p. 01015, 2024.
8. J. Milewicz, D. Mokrzan, and G. M. Szymański, “Environmental impact evaluation as a key element in ensuring sustainable development of rail transport,” Sustainability, vol. 15, no. 18, p. 13754, 2023.
9. H. Hmamed, A. Benghabrit, A. Cherrafi, and N. Hamani, “Achieving a sustainable transportation system via economic, environmental, and social optimization: A comprehensive AHP-DEA approach from the waste transportation sector,” Sustainability, vol. 15, no. 21, p. 15372, 2023.
10. A. Kwilinski, O. Lyulyov, and T. Pimonenko, “Reducing transport sector CO2 emissions patterns: Environmental technologies and renewable energy,” J. Open Innov. Technol. Mark. Complex., vol. 10, no. 1, p. 100217, 2024.
11. S. K. Rajput, J. C. Patni, S. S. Alshamrani, et al., “Automatic vehicle identification and classification model using the YOLOv3 algorithm for a toll management system,” Sustainability, vol. 14, no. 15, p. 9163, 2022.
12. P. Sharma, A. Singh, K. K. Singh, and A. Dhull, “Vehicle identification using modified region based convolution network for intelligent transportation system,” Multimed. Tools Appl., vol. 81, no. 24, pp. 34893–34917, 2022.
13. S. S. Tippannavar and Y. SD, “Real-time vehicle identification for improving the traffic management system—A review,” J. Trends Comput. Sci. Smart Technol., vol. 5, no. 3, pp. 323–342, 2023.
14. F. Ni, J. Zhang, and E. Taciroglu, “Development of a moving vehicle identification framework using structural vibration response and deep learning algorithms,” Mech. Syst. Signal Process., vol. 201, p. 110667, 2023.
15. O. Nasr, E. Alsisi, K. Mohiuddin, and A. Alqahtani, “Designing an intelligent QR code-based mobile application: A novel approach for vehicle identification and authentication,” Indian J. Sci. Technol., vol. 16, no. 37, pp. 3139–3147, 2023.
16. W. Wang, C. Huai, L. Meng, Z. Wang, and H. Zhang, “Research on the detection and recognition system of target vehicles based on fusion algorithm,” Math. Syst. Sci., vol. 2, no. 2, p. 2760, 2024.
17. M. Zohaib, M. Asim, and M. ELAffendi, “Enhancing emergency vehicle detection: A deep learning approach with multimodal fusion,” Mathematics, vol. 12, no. 10, p. 1514, 2024.
18. A. H. F. de Córdova, J. L. Olazagoitia, and C. Gijón-Rivera, “Non-invasive identification of vehicle suspension parameters: A methodology based on synthetic data analysis,” Mathematics, vol. 12, no. 3, p. 397, 2024.
19. H. Moussaoui, N. E. Akkad, M. Benslimane, W. El-Shafai, A. Baihan, C. Hewage, and R. S. Rathore, “Enhancing automated vehicle identification by integrating YOLOv8 and OCR techniques for high-precision license plate detection and recognition,” Sci. Rep., vol. 14, no. 1, p. 14389, 2024.
20. S. Kanagamalliga, P. Kovalan, K. Kiran, and S. Rajalingam, “Traffic management through cutting-edge vehicle detection, recognition, and tracking innovations,” Procedia Comput. Sci., vol. 233, pp. 793–800, 2024.
21. N. Islam, S. K. Ray, M. A. Hossain, M. A. R. Hasan, and M. B. A. Z. Shammo, “Vehicle classification and detection using YOLOv8: A study on highway traffic analysis,” in 2024 International Conference on Recent Progresses in Science, Engineering and Technology (ICRPSET), Rajshahi, Bangladesh, 2024, pp. 1–4.
22. I. El Mallahi, J. Riffi, H. Tairi, and M. A. Mahraz, “Enhancing traffic safety with advanced machine learning techniques and intelligent identification,” Res. Sq., 2024.
23. A. C. Bovik, The Essential Guide to Image Processing. Academic Press, 2009.

Cite this article as: I. Hussain, "A Hybrid Soft Computing Framework for Robust Classification of Heavy Transport Vehicles in Visual Traffic Surveillance," Mechatron. Intell. Transp. Syst., vol. 4, no. 2, pp. 61-71, 2025. https://doi.org/10.56578/mits040201
©2025 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.