Hospital-Specific Automated Waste Segregation for High-Accuracy Real-Time Classification
Abstract:
Healthcare facilities generate heterogeneous waste streams that must be accurately segregated at the point of disposal to mitigate occupational exposure risks, reduce downstream treatment costs, and ensure compliance with stringent biomedical waste regulations. However, most existing automated waste segregation systems have been developed for domestic or general-purpose scenarios and are poorly adapted to the operational complexity and safety requirements of hospital environments. In this study, a hospital-specific automated waste segregation system was designed, implemented, and experimentally evaluated for real-time classification of five clinically relevant waste categories: infectious waste, sharps, pharmaceutical waste, recyclable waste, and general waste. The proposed system integrates an ultrasonic sensor and a camera module with a Raspberry Pi 4B platform executing a lightweight MobileNetV2 model, coupled with a motorised mechanical sorting mechanism. A curated dataset comprising 6,868 labelled hospital-waste images was constructed and used to fine-tune the model to ensure robustness under embedded deployment constraints. Experimental validation under simulated hospital disposal scenarios demonstrated an overall classification accuracy of 97%, with end-to-end segregation cycle times ranging from 8 to 12 seconds per item across repeated trials. These results indicate that high-accuracy, real-time waste classification can be achieved using low-cost embedded hardware and compact deep learning architectures. The proposed approach establishes a practical and scalable foundation for intelligent healthcare waste management at the point of disposal, offering a viable pathway toward safer clinical environments, improved operational efficiency, and the broader adoption of edge AI solutions in resource-constrained healthcare settings.
1. Introduction
Healthcare facilities generate a wide range of waste streams, including infectious materials, sharps, pharmaceutical residues, recyclables, and general refuse (Nwachukwu et al., 2013; Olukanni et al., 2022). Appropriate segregation of these waste categories at the point of generation is necessary to prevent cross-contamination, reduce occupational hazards, and ensure compliance with national and international health regulations such as WHO guidelines and country-specific policies (Ibrahim et al., 2023). In developing nations, poor segregation practices mean that up to 60% of this waste is improperly handled, heightening the risk of infections, injuries, and environmental contamination (Olukanni et al., 2022). In many hospitals, segregation is still performed manually, a process that is error-prone, time-consuming, and hazardous to healthcare staff through injury and infection (Olukanni et al., 2022). Misclassification or mixing of hazardous and non-hazardous waste increases waste treatment costs and complicates downstream disposal (Abdel-Shafy & Mansour, 2018).
Although several automated waste segregation systems already exist, most are designed for household or municipal solid waste and address only a small subset of waste categories (Akinlade et al., 2022; Flores & Tan, 2019; Pučnik et al., 2024). These existing systems are inappropriate for the complex and safety-demanding environment of healthcare facilities (Miamiliotis & Talias, 2023). In addition, many reported approaches rely on cloud-based image processing or bulky industrial equipment, which limits their real-time performance and makes integration difficult in small and medium-sized hospitals and healthcare facilities, especially in resource-constrained settings (Alowais et al., 2023; Taiwo et al., 2025).
In medical waste contexts, several studies have made progress but still fall short of comprehensive hospital waste management. Cuarto et al. (2023) designed a sensor-based system for hospital waste, classifying three categories (electronic, pathological and sharp wastes) using microcontrollers driven by a You Only Look Once (YOLO) v5 algorithm. While the study addressed hospital needs, its category coverage was limited, omitting pharmaceutical and recyclable waste in particular. Gan et al. (2024) developed an embedded Convolutional Neural Network (CNN) system using GoogLeNet to classify medical waste into three categories (general infection, dangerous infection, and general garbage), achieving a high accuracy of 99.34% on a curated dataset of 2,025 images. However, their system focused on detection without physical sorting and did not include pharmaceutical or recyclable waste, limiting its applicability in diverse hospital settings (Ajagbe et al., 2024; Balogun et al., 2025). Similarly, Hermawan et al. (2023) implemented a YOLOv5-based system on a Raspberry Pi for hazardous medical waste classification, achieving 85–96% accuracy with a latency of up to 5 seconds. While their use of a lightweight platform aligns with this study, their system’s higher latency and focus on detection rather than sorting make it less efficient for high-throughput hospital environments.
Ahmed & Shanto (2024) conducted a comparative analysis of YOLO variants (YOLOv5, YOLOv7, and YOLOv8-m) for surgical waste detection, reporting a mean Average Precision of 82.4% for YOLOv8-m. Their work set a benchmark for precision in hazardous waste detection but did not incorporate physical sorting or address all hospital waste categories. Huang et al. (2023) proposed a capsule-network-based waste classification approach known as the ResMsCapsule network, which integrates residual connections and multi-scale feature extraction to enhance spatial relationship encoding in garbage images. By preserving part–whole relationships often lost in conventional CNNs, their model achieved a classification accuracy of 91.41% on the TrashNet dataset while using only 40% of the parameters of ResNet18. Despite its improved accuracy and lightweight design, the approach was evaluated solely on static images and did not address real-time waste sorting or deployment in automated segregation systems. Bruno et al. (2023) achieved 100% accuracy in offline medical waste classification using a curated dataset, demonstrating the potential of CNNs with high-quality data, but their lack of real-time processing limits the study’s practicality in dynamic hospital settings. Kabilan et al. (2024) proposed a CNN-based system for smart cities, classifying waste into biodegradable, non-biodegradable, biomedical, and electronic types with 91.87% accuracy. While their system supports edge-device deployment, it is not tailored specifically for hospital waste management and does not address physical sorting.
Zhou et al. (2022) proposed one of the pioneering deep learning frameworks for medical waste classification, using a ResNeXt model with transfer learning to classify eight categories of hospital waste. Trained on a dataset of 3,480 labelled images collected from a single hospital, their model achieved an accuracy of 97.2% and an F1-score of 97%. Although their results demonstrated the effectiveness of deep learning for medical waste categorisation, the study was limited to offline classification and did not address real-time deployment, physical sorting, or challenges such as mixed waste scenarios commonly encountered in hospital settings. Building on this direction, Kunwar & Rai (2025) introduced a healthcare waste classification approach aligned with national and WHO colour-coding guidelines. Their work compared multiple architectures, including ResNeXt-50, EfficientNet-B0, MobileNetV3, and YOLOv5-s, achieving a best accuracy of 95.06% using YOLOv5-s. While the study demonstrated the value of aligning AI systems with regulatory frameworks, it did not incorporate mechanical sorting or real-time deployment in clinical environments, and the dataset lacked broader contextual diversity.
Nafiz et al. (2023) designed an innovative prototype named ConvoWaste, which integrated computer vision and automation for real-time waste segregation. The system combined a deep CNN with a conveyor-based sorting mechanism controlled by servo motors. It utilised a camera to capture images of waste items, which were then processed and classified automatically before being mechanically separated into respective bins. A classification accuracy of 98% was reported, illustrating the potential for merging AI with electromechanical systems for automated waste management. However, ConvoWaste primarily targeted general domestic waste rather than medical waste. The lack of biomedical-specific datasets and safety considerations limited its applicability in hospital environments, where materials often include hazardous items such as needles, contaminated gloves, and body-fluid-soaked gauze. Therefore, while the system proved that automation is feasible, it did not meet the rigorous accuracy, safety, and specificity standards required for hospital use.
Another perspective was presented by Wulandari (2020), who explored a sensor-based approach rather than a vision-based one. Their system utilised a combination of proximity, moisture, and gas sensors integrated with an Arduino microcontroller to detect and classify waste materials. The categorised waste was then directed into designated bins using simple automation controls. This approach demonstrated how non-visual data could aid waste segregation while minimising human contact with hazardous materials. However, the study provided no quantitative performance metrics such as accuracy or precision, and it lacked scalability. Moreover, because medical waste can vary widely in texture, shape, and chemical composition, relying solely on sensor readings may not be sufficient to achieve high classification accuracy. The system also did not account for visual differentiation between infectious and non-infectious items, a critical requirement in hospital waste management.
In response to the limitations identified in existing studies, this research presents the development of a hospital-specific AI-driven waste segregation system capable of classifying waste in real time into five categories: infectious waste, sharps, pharmaceutical waste, recyclable waste, and general waste. The system integrates a lightweight image-based recognition model with a motorised sorting mechanism coordinated by an embedded controller, enabling automated segregation directly at the point of disposal. Experimental testing demonstrates high classification accuracy and reliable processing suitable for deployment in small- to medium-sized hospitals, outpatient clinics, and medium-traffic departments, particularly within African healthcare settings where resource limitations and inconsistent segregation practices remain major challenges. By focusing on hospital-relevant categories, real-time operation, and low-cost embedded deployment, the system aims to improve segregation accuracy, reduce occupational risk, and support safer waste management practices.
The remainder of this paper is organised as follows. Section 2 presents the methodology, covering the system design, hardware selection, model development, sorting procedure, and experimental setup. Section 3 describes the experimental results, Section 4 discusses these results, and Section 5 presents the conclusion and directions for future work.
2. Methodology
The automated waste segregation system was designed to classify and sort hospital waste into five categories (infectious waste, sharps, pharmaceutical waste, recyclable waste, and general waste) in real time at the point of disposal by patients and healthcare personnel. The architecture of the system, shown in Figure 1, integrates three modules: (i) a sensing and image acquisition unit, (ii) an embedded recognition module, and (iii) a motorised sorting mechanism. When waste is detected, the camera captures an image that is processed by the embedded classifier, and the corresponding bin is automatically aligned for disposal. This design supports segregation directly at the source and significantly reduces the need for downstream manual sorting in hospital environments.

A Raspberry Pi 4 Model B (4 GB RAM) was selected as the embedded controller due to its low power consumption, built-in interfaces, and compatibility with lightweight deep learning models. Image capture was performed using a Raspberry Pi camera module (8 MP), providing sufficient resolution for image recognition of waste items. Waste proximity detection at the input chute was achieved using an HC-SR04 ultrasonic sensor, which triggers lid opening, image capture, and classification into the correct category. The specifications of the major hardware components are summarised in Table 1 below.
Component Type | Specification/Model | Function |
Embedded controller | Raspberry Pi 4 Model B (4 GB RAM) | System control and image classification |
Camera module | Raspberry Pi camera (8 MP, CSI interface) | Image acquisition |
Sensor | Ultrasonic sensor (HC-SR04) | Waste detection and trigger |
Motors | NEMA 17 stepper motor; MG995 & SG90 servo motors | Bin rotation and lid/chute actuation |
Drivers & power | TMC2206 stepper driver; 12 V DC (motors); 5 V 3 A (controller) | Motion control and power regulation |
Supporting hardware | Prototype board, buck converter, connectors, custom frame with five bins | Mechanical and electrical integration |
For actuation, a NEMA-17 stepper motor driven by a TMC2206 stepper motor driver was used to rotate the circular base, which housed the five compartments and allowed alignment of the appropriate compartment under the chute system. An MG995 servo motor controlled the main lid, while an SG90 servo actuated the opening and closing of the chute. The motors were powered by a regulated 12 V direct current supply (8.5 A), whereas the Raspberry Pi and sensors operated at 5 V. All components were mounted on a fabricated frame whose dimensions, shown in Figure 2, were chosen to accommodate typical hospital waste items. The electrical wiring of the system is shown in Figure 3.
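As an indicative calculation (assuming the standard 1.8° step angle of a NEMA-17 motor and direct drive of the base, neither of which is fixed by the description above), aligning each of the five equally spaced compartments corresponds to a rotation of 360°/5 = 72°, i.e. 72°/1.8° = 40 full steps per bin position, with proportionally more steps when the TMC2206 driver is operated in microstepping mode.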


The dataset contained a mixture of custom images and image data obtained from an open-source repository (Pitakaso et al., 2023). Of the 6,868 images used in this study, 5,393 were obtained from Kaggle after removing low-quality samples (e.g. blurred or cut-off images), while an additional 1,475 images were manually captured using the Raspberry Pi camera module, with the different waste categories placed on a variety of surfaces.
The dataset comprised hospital waste images that were organised into the desired five categories: infectious waste, sharps, pharmaceutical waste, recyclable waste, and general waste, as shown in Table 2. The combined dataset included images captured under different lighting conditions and backgrounds, allowing the model to learn more realistic variations. A total of 6,868 images were collected at different angles to approximate real hospital scenarios. Representative sample images from each class are presented in Figure 4.
Category | Colour Code | Examples |
Infectious waste | Yellow | Blood-soaked dressings and lab protective gear |
Sharps | Red | Needles, scalpels, and broken glass |
Pharmaceutical waste | Blue | Antibiotics and residual drugs |
General waste | Black | Food scraps, paper, and packaging |
Recyclable waste | White/transparent | Plastic bottles and glass containers |

To prepare the images for training, each image was resized to 224 × 224 pixels and its pixel intensities were normalised to the range 0–1. This resolution was selected after preliminary experiments with alternative sizes (160 × 160 and 448 × 448), which resulted in lower accuracy and increased computational cost, respectively. The 224 × 224 input size therefore provided the best balance between performance and efficiency.
To improve generalisation to unseen data, augmentation techniques such as horizontal and vertical flipping, random rotation, and brightness adjustment were applied. Because the prototype includes an internal light source, brightness was relatively stable during real-time use, but augmentation was still applied to improve robustness. Grayscale inputs were also tested but resulted in poorer performance, indicating the importance of colour features for classification. The dataset was then divided into training, validation, and test sets, as shown in Table 3.
Colour Code | Waste Category | Train Images | Validation Images | Test Images |
Black | General waste | 1,050 | 268 | 331 |
Blue | Pharmaceutical waste | 156 | 39 | 87 |
Red | Sharps | 1,423 | 356 | 446 |
White | Recyclable waste | 357 | 89 | 112 |
Yellow | Infectious waste | 1,373 | 343 | 438 |
Total | - | 4,359 | 1,095 | 1,414 |
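A minimal sketch of the preprocessing and augmentation pipeline described above, assuming TensorFlow/Keras and a class-per-folder directory layout (the directory path and augmentation parameter values here are illustrative, not the exact settings used), could look as follows:

import tensorflow as tf

IMG_SIZE = (224, 224)   # resolution selected after the preliminary experiments
BATCH_SIZE = 32

# Load the training split from one sub-folder per waste class (path is illustrative)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE,
    label_mode="categorical")

# Rescale pixel intensities to the 0-1 range
normalise = tf.keras.layers.Rescaling(1.0 / 255)

# Augmentation: horizontal/vertical flips, random rotation, brightness adjustment
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomBrightness(0.2, value_range=(0.0, 1.0)),
])

train_ds = train_ds.map(lambda x, y: (augment(normalise(x), training=True), y))

The same rescaling step, without augmentation, would be applied to the validation and test splits.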
A MobileNetV2 CNN was used due to its lightweight depthwise separable convolutions and suitability for embedded deployment. The architecture comprised an input layer (224 × 224 × 3), a sequence of inverted residual blocks, and a global average pooling layer connected to a fully connected softmax classifier for five output categories. Transfer learning was applied using a MobileNetV2 model pretrained on the ImageNet dataset, with the final classification layer replaced to predict the five target waste classes. Several training configurations were explored during development, including adjustments to learning rate, batch size, and training duration, in order to achieve stable convergence and optimal performance on the dataset. To improve generalisation and reduce overfitting, a Dropout layer with a rate of 0.3 was incorporated within the classifier head. The model was trained using the Adam optimizer (learning rate = 0.0001) over 50 epochs with a batch size of 32, employing categorical cross-entropy as the loss function. Model performance was continuously monitored using validation accuracy and loss, and the learning curves showed consistent convergence without signs of severe overfitting, indicating that the selected configuration was appropriate for the task. After training, the model was converted to TensorFlow Lite format for deployment on a Raspberry Pi 4, where real-time inference tests were conducted to evaluate classification accuracy and latency under operational conditions.
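Under the same assumptions, the transfer-learning configuration and TensorFlow Lite conversion can be sketched as follows (train_ds and val_ds denote the prepared training and validation datasets; whether the backbone is frozen or fine-tuned end-to-end is a design choice not fixed by the description above):

import tensorflow as tf

NUM_CLASSES = 5   # infectious, sharps, pharmaceutical, recyclable, general

# MobileNetV2 backbone pretrained on ImageNet, original classifier removed
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")

# Classifier head: global average pooling, dropout (0.3), 5-way softmax
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

history = model.fit(train_ds, validation_data=val_ds, epochs=50)

# Convert the trained model to TensorFlow Lite for the Raspberry Pi 4B
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("waste_classifier.tflite", "wb") as f:
    f.write(converter.convert())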
The sorting mechanism automatically directs each classified waste item into its designated compartment, eliminating manual handling. When the ultrasonic sensor detects waste at the input chute, the camera captures an image that is processed by the embedded MobileNetV2 model for real-time classification. Based on the predicted category, the control algorithm drives a NEMA-17 stepper motor (via a TMC2206 controller) to rotate the circular bin to the appropriate position. Servo motors then open the lid and release the item into the selected compartment before resetting to their default states for the next cycle, as depicted in Figure 5 below, ensuring fast and reliable operation across all waste types.

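The detect–classify–sort cycle can be illustrated with the simplified control loop below; the sensor, servo, and stepper helper functions, the class ordering, and the bin index mapping are illustrative assumptions rather than the exact firmware used in the prototype.

import numpy as np
from tflite_runtime.interpreter import Interpreter   # tf.lite.Interpreter also works

CLASSES = ["general", "pharmaceutical", "sharps", "recyclable", "infectious"]   # assumed label order
BIN_INDEX = {name: i for i, name in enumerate(CLASSES)}   # compartment per class (illustrative)

interpreter = Interpreter(model_path="waste_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame):
    """Run one inference on a 224x224 RGB frame with pixel values scaled to 0-1."""
    x = (frame.astype(np.float32) / 255.0)[np.newaxis, ...]
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return CLASSES[int(np.argmax(interpreter.get_tensor(out["index"])))]

while True:
    if item_detected():                      # hypothetical HC-SR04 proximity check
        open_lid()                           # hypothetical MG995 lid servo helper
        frame = capture_frame()              # hypothetical camera helper returning a 224x224 RGB array
        category = classify(frame)
        rotate_bin_to(BIN_INDEX[category])   # hypothetical NEMA-17/TMC2206 stepper helper
        open_chute()                         # hypothetical chute servo helper
        close_chute()
        rotate_bin_to(0)                     # return base and lid to their default states
        close_lid()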
The performance of the hospital-specific automated waste segregation system was evaluated under controlled laboratory conditions simulating a hospital disposal environment. Five waste categories, namely, infectious waste, sharps, pharmaceutical waste, recyclable waste, and general waste, were used for testing. The camera was positioned approximately 11 cm from the chute platform, with consistent bright lighting provided by the internal illumination of the system. The background surface was wood-brown in colour. Waste items were presented at different orientations, although object size was limited by the chute dimensions, with the largest items approximately the size of an IV fluid bottle.
Each item was presented individually at the input chute. Upon detection, the system captured an image, classified the waste using the embedded MobileNetV2 model and directed it into the appropriate compartment using the motorised sorting mechanism. For each trial, classification output, inference time and total cycle time from detection to deposition were recorded. The experiment was repeated 20 times per category to obtain average values. Performance metrics included overall accuracy and per-class accuracy. Timing metrics measured inference time per image (model latency) and total sorting time per item (mechanical response + inference).
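The per-class accuracies and timing averages reported in the next section can be derived from the logged trials with a short summary script of this kind (the record fields are illustrative):

from collections import defaultdict

def summarise(trials):
    """trials: list of (true_class, predicted_class, inference_s, cycle_s) records."""
    per_class = defaultdict(lambda: [0, 0])          # class -> [correct, total]
    inference_times, cycle_times = [], []
    for true_cls, pred_cls, t_inf, t_cycle in trials:
        per_class[true_cls][0] += int(pred_cls == true_cls)
        per_class[true_cls][1] += 1
        inference_times.append(t_inf)
        cycle_times.append(t_cycle)
    correct = sum(c for c, _ in per_class.values())
    total = sum(n for _, n in per_class.values())
    return {
        "per_class_accuracy": {k: c / n for k, (c, n) in per_class.items()},
        "overall_accuracy": correct / total,
        "mean_inference_s": sum(inference_times) / len(inference_times),
        "mean_cycle_s": sum(cycle_times) / len(cycle_times),
    }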
3. Experimental Results
The dataset consisted of 6,868 labelled images spanning five distinct hospital waste categories. The allocation of images across training, validation, and test sets is detailed in Table 3. The MobileNetV2 model was trained over 50 epochs with a batch size of 32. Both training and validation accuracy exhibited a consistent upward trend, ultimately converging at 98.12%, with negligible signs of overfitting. The validation loss remained low and stable throughout the training process, suggesting strong generalisation capability. Validation accuracy and loss were monitored throughout training, and the learning curves (Figure 6) show steady improvement without evidence of severe overfitting. On the test set, the model achieved an overall classification accuracy of 98.12%. The confusion matrix of predicted versus actual categories is presented in Figure 7, illustrating the model’s performance for each class.


The embedded system implemented on the Raspberry Pi 4B achieved total cycle times (detection to deposition) of 8 to 12 seconds per item, with the duration varying slightly across waste categories and showing no consistent pattern between classes. The system maintained consistent performance across repeated cycles, with no missed steps or actuator failures observed over 20 consecutive operations. A functional test confirmed the correct operation of the sensing, classification and sorting subsystems under repeated use, with real-time testing performance shown in Table 4. Table 5 shows a comparative analysis of automated medical waste classification systems.
Waste Category | Samples Tested | Correctly Classified | Accuracy (%) |
General waste | 20 | 20 | 100.0 |
Pharmaceutical waste | 20 | 17 | 85.0 |
Sharps | 20 | 20 | 100.0 |
Recyclable waste | 20 | 20 | 100.0 |
Infectious waste | 20 | 20 | 100.0 |
Overall | 100 | 97 | 97.0 |
Study | Waste Categories | Approach | Platform | Accuracy | Real-Time Sorting | Deployment Practicality | Key Limitations |
This study | 5 categories (WHO-aligned: infectious waste, sharps, pharmaceutical waste, recyclable waste, general waste) | MobileNetV2 + mechanical sorter | Raspberry Pi 4B + camera + motors | 97.0% (real-time test) | Yes (8–12 s full cycle) | High (low-cost, embedded, point-of-disposal sorting) | Cannot process multiple items simultaneously; struggles with transparent objects |
Zhou et al. (2022) | 8 categories (e.g., syringes, gloves, gauze, and bottles) | ResNeXt + transfer learning | PC-based (offline) | 97.2% | No | Medium (classification only, no deployment) | No real-time use, no physical sorting |
Kunwar & Rai (2025) | Multiple (aligned with WHO color codes) | YOLOv5-s, EfficientNet-B0, etc. | Smartphones/edge devices | 95.06% (best model) | Partial (software-level only) | Medium (no mechanical system) | No automated sorting mechanism |
Gan et al. (2024) | 3 categories | GoogLeNet CNN | Embedded system | 99.34% | No | Low–medium | Limited categories; no sorting |
Hermawan et al. (2023) | Hazardous waste only | YOLOv5 | Raspberry Pi | 85–96% | Partial (detection only) | Medium | Focused only on detection |
Nafiz et al. (2023) | Domestic waste | CNN + conveyor system | Microcontroller + motors | 98% | Yes | Medium | Not designed for medical waste |
4. Discussion
The results of this study demonstrate that a hospital-specific automated waste segregation system, powered by a lightweight CNN, can achieve high classification accuracy and reliable real-time sorting performance on an embedded platform. The MobileNetV2 model achieved an overall classification accuracy of 97%, with a full sorting cycle time ranging between 8 and 12 seconds per item. These results compare favourably with previously reported systems focusing on municipal or household waste (Jouhara et al., 2017). By expanding the classification scope to five hospital waste categories and enabling near-real-time operation on a Raspberry Pi, the system addresses key limitations of earlier approaches, which often supported fewer classes or relied on cloud-based processing. For instance, Zhou et al. (2022) reported 97.2% accuracy using a ResNeXt model on eight waste categories, but their system was limited to offline classification without mechanical sorting or real-time deployment. Similarly, Kunwar & Rai (2025) achieved 95.06% accuracy using YOLOv5-s while aligning classification with WHO colour-coding standards, yet their system lacked physical sorting and broader contextual diversity. Other approaches proposed by Gan et al. (2024) and Hermawan et al. (2023) demonstrated high accuracy on small datasets or on lightweight platforms, but omitted pharmaceutical or recyclable waste categories and focused on detection rather than real-time sorting. In comparison, the system proposed in this study combines high-accuracy, near-real-time operation on a Raspberry Pi with direct mechanical sorting, addressing the practical limitations of previous methods.
Functional testing showed consistent performance across repeated trials, confirming the system’s suitability for deployment directly at the point of disposal in hospital settings. The absence of actuator failures or missed steps across 20 consecutive operations also indicates that the mechanical design is sufficiently robust for continuous use. When compared to manual segregation, which is often inconsistent, slow, and prone to human error, the system clearly shows efficiency benefits. With a sorting cycle of 8 to 12 seconds per item, it can process around 300 to 450 items per hour, which would normally require several human operators. Automated sorting also lowers the risk of staff being exposed to infectious or hazardous waste, which can reduce safety-related incidents and associated costs. While the initial cost of the hardware is higher than using simple manual bins, the savings from reduced labour, fewer misclassifications, and easy maintenance with features like the enclosed camera and nylon-lined bins make the system more cost-effective over time. Altogether, these points show that automated segregation is both practical and economically beneficial for hospitals.
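The throughput figure quoted above follows directly from the cycle time: at 3,600 seconds per hour, a 12-second cycle yields 3,600/12 = 300 items per hour and an 8-second cycle yields 3,600/8 = 450 items per hour, before allowing for any idle time between disposals.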
Practical usability was also considered: the camera module is enclosed to reduce dust exposure, lens cleaning is minimal, and each waste bin is lined with nylon to facilitate easy waste removal and prevent waste from sticking to the container walls. These design choices enhance operational convenience and ensure that the system remains effective in busy hospital environments.
A notable outcome is the model’s ability to maintain high classification accuracy across all categories, despite variations in lighting conditions and object orientation introduced during testing. This highlights the effectiveness of the data augmentation strategies used during training and shows that the model generalises reasonably well. However, occasional misclassifications were observed in the pharmaceutical waste (blue) category. In particular, single-colour tablets were sometimes mistaken for general waste, while some drug containers were confused with recyclable items due to their similar visual appearance. This suggests that visual similarity, rather than model instability, was the main cause of these errors. These observations provide useful direction for further improvement, especially through the collection of more diverse pharmaceutical samples. Future work will focus on expanding the dataset to include a broader diversity of waste types and increasing the representation of underperforming classes, such as pharmaceutical (blue) waste. The integration of additional sensors, such as near-infrared or weight-based sensors, may further improve discrimination between visually similar materials. Design improvements will also target multi-object handling and the incorporation of battery backup to enhance robustness during power interruptions. At the system level, a lightweight monitoring dashboard could be developed to support data logging and deployment management across multiple hospital units. These enhancements would further improve scalability and practical adoption across healthcare facilities.
Despite its effectiveness, the proposed system has several limitations. Transparent materials such as glass and clear plastic packaging can reduce classification reliability due to visual ambiguity. The system is designed to process one waste item at a time; therefore, classification accuracy decreases when multiple objects are deposited simultaneously, which remains a key limitation of the current chute-based design. In addition, the system does not include backup power support and becomes non-operational during power outages.
5. Conclusion
This study successfully developed and evaluated a hospital-specific automated waste segregation system integrating a Raspberry Pi 4B, camera module, stepper motor, servo motors and an ultrasonic sensor with a MobileNetV2-based image classification model. The prototype was designed to detect incoming waste at the point of disposal, classify it into five standardised hospital waste categories (infectious waste, sharps, pharmaceutical waste, general non-hazardous waste, and recyclable waste), and direct each item into the appropriate compartment using a motorised sorting mechanism. The primary objectives were to design and build a multi-compartment waste bin tailored to hospital requirements, to integrate a lightweight deep learning model capable of high-accuracy classification on embedded hardware, and to test the system’s performance under realistic conditions. These objectives were achieved through the combined development of hardware and software modules, resulting in a functional system capable of real-time classification and sorting directly at the source of waste generation.
By focusing specifically on hospital waste and by employing a Raspberry Pi instead of industrial Programmable Logic Controller-based solutions, the system addresses gaps in existing research. It demonstrates that cost-effective, embedded AI solutions can handle multiple waste categories with high precision. With a classification accuracy of 97% and an average sorting time between 8 and 12 seconds per item, the system offers a practical alternative to conventional manual segregation, which is often inconsistent, error-prone and hazardous to waste handlers. Importantly, the system is designed to be extensible to additional waste categories. Scaling can be achieved through targeted dataset expansion to include new waste types, coupled with transfer learning or fine-tuning of the MobileNetV2 model on these augmented datasets. Additional data augmentation strategies and incremental learning approaches could further ensure high classification accuracy as new categories are added without retraining the model from scratch. Overall, the findings confirm the feasibility of deploying lightweight deep learning models for real-time hospital waste classification. These implementation strategies provide a clear technical path for scaling the system to larger facilities, more complex waste streams, or integration with broader hospital waste management infrastructures.
Conceptualisation, methodology, writing–original draft, and data curation, S.A. Atanda, S.O.A., S.A. Ajagbe and A.K.A.; project administration, resources, methodology, validation, writing–original draft, visualisation, editing and software, A.K.A. and S.A. Ajagbe; editing, review and supervision, A.K.A. and O.K.A. All authors have read and agreed to the published version of the manuscript.
The data used to support the research findings are available from the corresponding author upon request.
The authors acknowledge the Department of Electrical and Biomedical Engineering, Abiola Ajimobi Technical University, Ibadan, Nigeria, and the Department of Computer Science, University of Zululand, Kwadlangezwa 3886, KZN, South Africa.
The authors declare no conflict of interest.
