
Open Access
Research article

Enhanced SSD Algorithm-Based Object Detection and Depth Estimation for Autonomous Vehicle Navigation

Vaibhav Saini¹, MVV Prasad Kantipudi¹*, Pramoda Meduri²

¹ Symbiosis Institute of Technology (SIT), Symbiosis International (Deemed University) (SIU), 412115 Pune, India
² Verolt Engineering Pvt. Ltd, 411001 Pune, India
International Journal of Transport Development and Integration | Volume 7, Issue 4, 2023 | Pages 341-351

Received: 09-20-2023 | Revised: 11-27-2023 | Accepted: 12-07-2023 | Available online: 12-27-2023

Abstract:

Autonomous vehicles necessitate robust stability and safety mechanisms for effective navigation, relying heavily upon advanced perception and precise environmental awareness. This study addresses the object detection challenge intrinsic to autonomous navigation, with a focus on the system architecture and the integration of cutting-edge hardware and software technologies. The efficacy of various object recognition algorithms, notably the Single Shot Detector (SSD) and You Only Look Once (YOLO), is rigorously compared. Prior research has indicated that SSD, when augmented with depth estimation techniques, demonstrates superior performance in real-time applications within complex environments. Consequently, this research proposes an optimized SSD algorithm paired with a ZED camera system. Through this integration, a notable improvement in detection accuracy is achieved, with a precision increase to 87%. This advancement marks a significant step towards resolving the critical challenges faced by autonomous vehicles in object detection and distance estimation, thereby enhancing their operational safety and reliability.

Keywords: Autonomous vehicle, Safe driving performance, Perception & vision, Object recognition, Camera, LiDAR, RADAR, SSD, YOLO, ZED camera

1. Introduction

Recent advancements in autonomous vehicles (AVs) have garnered noteworthy attention, with a commensurate increase in research dedicated to this domain [1]. A critical component of AV technology is the object detection mechanism, which incorporates artificial intelligence and sensor-based methodologies to ensure driver safety [2]. Autonomous vehicles promise to enhance driving comfort and to reduce incidents caused by vehicle collisions. These vehicles are engineered to sense and navigate their highway environment autonomously, without human intervention [3, 4].

The suite of sensors distributed throughout the vehicle is integral to its functionality. An array of sensors, including LiDARs, radars, and cameras, is employed to survey and interpret the surrounding milieu [5]. The process of environmental sensing, or perception, encompasses several sub-tasks: object detection, object classification, 3D position estimation, and simultaneous localization and mapping (SLAM). Object detection itself involves localization—determining an object's position within an image—and classification—assigning a category to each detected object, such as a traffic light, vehicle, or pedestrian [6].

In autonomous driving systems, object detection is deemed one of the most crucial processes for safe navigation: it enables the vehicle's controller to anticipate and maneuver around potential obstacles [7]. The employment of precise object detection algorithms is therefore imperative. The requisite system architecture is correspondingly complex, since a multitude of features within the vehicle must be processed [8].

In the present study, the objective is to refine object detection accuracy using robust tools such as the ZED camera in conjunction with algorithms like SSD, which have demonstrated superior performance in real-time scenarios. The ZED camera, in particular, has proven to be an invaluable sensor for the collection of depth data, especially in challenging and dynamic environments. A robust perception system, integrating multiple sensors and sophisticated algorithms such as the proposed SSD algorithm, is requisite for AVs to achieve accurate object recognition and informed decision-making. To enhance the vehicles' perceptual capabilities, reliability, and safety, a synthesis of various sensors and algorithms, including the ZED camera, is often pursued by researchers in the field of autonomous vehicles.

2. Related Work

Object identification is one of the most researched topics in computer vision and self-driving vehicles. The process of object detection often begins with the extraction of features from the input image using algorithms such as R-CNN, SSD, and YOLO. During the training phase, the CNN learns the features of each object class and uses them to detect objects at inference time. Object detection comprises two sub-tasks: localization, which locates an item within an image, and classification, which assigns the object a class (such as "pedestrian," "vehicle," or "obstacle"), as illustrated by the sketch below.
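To make the two sub-tasks concrete, a single detection pairs a classification result with a localization result. The following minimal Python sketch of that record is for illustration only; the field names are ours and are not drawn from any cited detector.

from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object: the classification output (label,
    confidence) plus the localization output (a pixel bounding box)."""
    label: str         # e.g., "pedestrian", "vehicle", "obstacle"
    confidence: float  # classifier score in [0, 1]
    box: tuple         # (x1, y1, x2, y2) in pixels

# Example: a pedestrian localized in the left third of a 640x480 frame.
d = Detection(label="pedestrian", confidence=0.91, box=(120, 80, 210, 360))

A full detector such as SSD emits a list of such records per frame, one for each object that clears a confidence threshold.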

Carranza-García et al. [19] contrasted single-stage detectors such as YOLOv3 with two-stage detectors such as Faster R-CNN. Before diving deeper into object detection algorithms, the taxonomy involved in the process is explained in the next section.

3. Methodology

Object detection is a very important feature for making an autonomous vehicle more advanced. Multiple objects need to be recognized in a single image, and detecting multiple items while estimating their distances is a difficult problem; with our approach, however, it can be done accurately and in real time [32]. We have implemented an improved SSD ("Single Shot Detector") in our model to obtain accurate and reliable results. SSD is a well-known object detection method recognized for its real-time accuracy and speed. By utilizing both the camera's precise depth data and the algorithm's object detection abilities, we combined the SSD with stereo depth information from the ZED camera, potentially improving object detection capabilities.
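As a concrete illustration of this fusion, the sketch below is a minimal Python example, not the authors' exact implementation. It runs a pre-trained Caffe MobileNet-SSD (in the spirit of [33]) through OpenCV's DNN module on a color frame, then takes the median of the pixel-aligned depth map inside each detected box as the object's distance. The model file names are illustrative assumptions, and the depth map is assumed to be in meters and aligned with the color image, as a stereo camera such as the ZED provides.

import cv2
import numpy as np

# Assumed model files for a Caffe MobileNet-SSD (names are illustrative).
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

# The 20 PASCAL VOC classes plus background, as used by MobileNet-SSD.
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]

def detect_with_distance(color_bgr, depth_m, conf_thresh=0.5):
    """Run SSD on a color frame and attach a distance (meters) to each
    detection using the pixel-aligned stereo depth map. Returns a list
    of (label, confidence, box, distance_m)."""
    h, w = color_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(color_bgr, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: (1, 1, N, 7)

    results = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence < conf_thresh:
            continue
        class_id = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        x1, y1, x2, y2 = box.astype(int)
        # Median depth inside the box is robust to holes and outliers
        # in the stereo depth map (NaN/inf pixels are excluded).
        roi = depth_m[max(y1, 0):y2, max(x1, 0):x2]
        valid = roi[np.isfinite(roi) & (roi > 0)]
        distance = float(np.median(valid)) if valid.size else None
        results.append((CLASSES[class_id], confidence,
                        (x1, y1, x2, y2), distance))
    return results

In a live pipeline, both inputs would be grabbed from the stereo camera each frame, and any SSD variant with the same output layout could be substituted.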

4. Results and Discussion

This section examines the implementation and the results derived from the analysis. The implementation can be divided into the categories defined below:

Input Data:

The main task in deep learning is constructing an algorithm that can learn from data and make predictions on it; the SSD algorithm is used here for such data-driven predictions. For our implementation, a ZED camera was mounted on the vehicle to capture images. The camera is installed at the front of the vehicle so that it captures the scene ahead. For this application we captured both color and depth images, shown in Figures 7 and 8. The advantage of using the depth image is that the distance of an object from the vehicle can be calculated from it.
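That calculation is direct because stereo depth is triangulated from disparity. As a point of reference (standard rectified-stereo geometry, not a result specific to this paper), with focal length $f$ in pixels, stereo baseline $B$, and disparity $d$:

% Depth from disparity for a rectified stereo pair,
% and the first-order sensitivity of depth to disparity error.
Z = \frac{f \, B}{d}, \qquad \Delta Z \approx \frac{Z^{2}}{f \, B} \, \Delta d

The quadratic growth of $\Delta Z$ with $Z$ is why nearby obstacles are ranged far more precisely than distant ones, and why a dense, well-aligned depth image is valuable for vehicle-to-object distance estimation.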

5. Conclusion

In this research, we have studied the autonomous vehicle and its system architecture. The architecture has two parts. The hardware comprises sensors such as cameras, LiDAR, and RADAR, which perceive the environment; their data are fed into the software part of the vehicle. The software architecture is the core of the entire system: it contains the operating system and the algorithms that take input from the different sensors and apply logic for decision-making. The output of this logic is consumed by the control modules, which regulate the acceleration and motion of the vehicle. Advanced technologies such as machine learning and computer vision are applied throughout this process.

Various algorithms are available, such as Convolutional Neural Networks (CNN), R-CNN, and YOLO, but our customized SSD model is preferable for real-time predictions: it has considerably fewer localization errors, is computationally inexpensive, and requires less storage and processing power for obstacle detection. The object distance estimation algorithm was created using the mono-depth technique; the overall model was trained on stereo data and draws inferences from monocular views. We have also tested the proposed software model and algorithm in a real-time environment with the ZED camera mounted on the vehicle, which yielded outstanding results with an accuracy of 87%. The object detection technique may be combined with distance estimation to share the feature extraction layers, thereby improving efficiency.

The potential benefits of incorporating the SSD algorithm with the ZED camera in self-driving vehicles are demonstrated by applications such as autonomous golf buggies on golf courses, load vehicles on construction sites, and other autonomous industries. Such applications allow for improved perception, increased safety, and effective navigation in a variety of dynamic environments. Autonomous vehicles will be far more reliable if their algorithms can adjust to varied lighting conditions, diverse surroundings, and different object orientations.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References
[1] Ahangar, M.N., Ahmed, Q.Z., Khan, F.A., Hafeez, M. (2021). A survey of autonomous vehicles: Enabling communication technologies and challenges. Sensors, 21(3): 706. [Crossref]
[2] Faisal, A., Kamruzzaman, M., Yigitcanlar, T., Currie, G. (2019). Understanding autonomous vehicles: A systematic literature review on capability, impact, planning and policy. Journal of Transport and Land Use, 12(1): 45-72.
[3] Balasubramaniam, A., Pasricha, S. (2022). Object detection in autonomous vehicles: Status and open challenges. arXiv preprint arXiv:2201.07706. [Crossref]
[4] Gomez, V.V., Cortes, A.S., Noguer, F.M. (2015). Object detection for autonomous driving using deep learning. In Meeting of the Universitat Politecnica de Catalunya, Spain.
[5] Li, Y., Ibanez-Guzman, J. (2020). Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Processing Magazine, 37(4): 50-61. [Crossref]
[6] Fernandes, D., Afonso, T., Girão, P., Gonzalez, D., Silva, A., Névoa, R., Novais, P., Monteiro, J., Melo-Pinto, P. (2021). Real-time 3D object detection and SLAM fusion in a low-cost LiDAR test vehicle setup. Sensors, 21(24): 8381. [Crossref]
[7] Benciolini, T., Wollherr, D., Leibold, M. (2023). Non-conservative trajectory planning for automated vehicles by estimating intentions of dynamic obstacles. IEEE Transactions on Intelligent Vehicles, 8(3): 2463-2481. [Crossref]
[8] Zhao, J.F., Liang, B.D., Chen, Q.X. (2018). The key technology toward the self-driving car. International Journal of Intelligent Unmanned Systems, 6(1): 2-20. [Crossref]
[9] Eggers, F., Eggers, F. (2022). Drivers of autonomous vehicles—analyzing consumer preferences for self-driving car brand extensions. Marketing Letters, 33: 89-112. [Crossref]
[10] Chen, S.T., Jian, Z.Q., Huang, Y.H., Chen, Y., Zhou, Z.L., Zheng, N.N. (2019). Autonomous driving: cognitive construction and situation understanding. Science China Information Sciences, 62: 81101. [Crossref]
[11] Peiris, S., Berecki-Gisolf, J., Chen, B., Fildes, B. (2020). Road trauma in regional and remote Australia and New Zealand in preparedness for ADAS technologies and autonomous vehicles. Sustainability, 12(11): 4347. [Crossref]
[12] Gopalswamy, S., Rathinam, S. (2018). Infrastructure enabled autonomy: A distributed intelligence architecture for autonomous vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, pp. 986-992. [Crossref]
[13] Bagschik, G., Nolte, M., Ernst, S., Maurer, M. (2018). A system's perspective towards an architecture framework for safe automated vehicles. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, pp. 2438-2445. [Crossref]
[14] Yeong, D.J., Velasco-Hernandez, G., Barry, J., Walsh, J. (2021). Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors, 21(6): 2140. [Crossref]
[15] Parekh, D., Poddar, N., Rajpurkar, A., Chahal, M., Kumar, N., Joshi, G.P., Cho, W. (2022). A review on autonomous vehicles: Progress, methods and challenges. Electronics, 11(14): 2162. [Crossref]
[16] Nguyen, N.D., Do, T., Ngo, T.D., Le, D.D. (2020). An evaluation of deep learning methods for small object detection. Journal of Electrical and Computer Engineering, 2020: 1-18. [Crossref]
[17] Chu, W.Q., Cai, D. (2018). Deep feature based contextual model for object detection. Neurocomputing, 275: 1035-1042. [Crossref]
[18] Changalvala, R., Malik, H. (2019). LiDAR data integrity verification for autonomous vehicle. IEEE Access, 7: 138018-138031. [Crossref]
[19] Carranza-García, M., Torres-Mateo, J., Lara-Benítez, P., García-Gutiérrez, J. (2020). On the performance of one-stage and two-stage object detectors in autonomous vehicles using camera data. Remote Sensing, 13(1): 89. [Crossref]
[20] Zhao, X.M., Sun, P.P., Xu, Z.G., Min, H.G., Yu, H.K. (2020). Fusion of 3D LiDAR and camera data for object detection in autonomous vehicle applications. IEEE Sensors Journal, 20(9): 4901-4913. [Crossref]
[21] Pathak, A.R., Pandey, M., Rautaray, S. (2018). Application of deep learning for object detection. Procedia Computer Science, 132: 1706-1717. [Crossref]
[22] Kocur, V., Ftáčnik, M. (2020). Detection of 3D bounding boxes of vehicles using perspective transformation for accurate speed measurement. Machine Vision and Applications, 31: 62. https://doi.org/10.1007/s00138-020-01117-x
[23] Han, Y.Z., Liu, X.F., Sheng, Z.F., Ren, Y.T., Han, X., You, J., Liu, R.S., Luo, Z.S. (2020). Wasserstein loss-based deep object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 998-999.
[24] Xu, H., Yao, L.W., Zhang, W., Liang, X.D., Li, Z.G. (2019). Auto-fpn: Automatic network architecture adaptation for object detection beyond classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649-6658.
[25] Li, Y.F., Wang, H.X., Dang, L.M., Nguyen, T.N., Han, D., Lee, A., Jang, I., Moon, H. (2020). A deep learning-based hybrid framework for object detection and recognition in autonomous driving. IEEE Access, 8: 194228-194239. [Crossref]
[26] Patole, S.M., Torlak, M., Wang, D., Ali, M. (2017). Automotive radars: A review of signal processing techniques. IEEE Signal Processing Magazine, 34(2): 22-35. [Crossref]
[27] Unlu, E., Zenou, E., Riviere, N., Dupouy, P.E. (2019). Deep learning-based strategies for the detection and tracking of drones using several cameras. IPSJ Transactions on Computer Vision and Applications, 11(1): 1-13. [Crossref]
[28] Angesh, A., Trivedi, M.M. (2019). No blind spots: Full-surround multi-object tracking for autonomous vehicles using cameras and lidars. IEEE Transactions on Intelligent Vehicles, 4(4): 588-599. [Crossref]
[29] Chaudhary, S., Wuttisittikulkij, L., Saadi, M., Sharma, A., Al Otaibi, S., Nebhen, J., Rodriguez, D.Z., Kumar, S., Sharma, V., Phanomchoeng, G., Chancharoen, R. (2021). Coherent detection-based photonic radar for autonomous vehicles under diverse weather conditions. PLoS ONE, 16(11): e0259438. [Crossref]
[30] Wei, Z., Zhang, F., Chang, S., Liu, Y., Wu, H., Feng, Z. (2022). Mmwave radar and vision fusion for object detection in autonomous driving: A review. Sensors, 22(7): 2542. [Crossref]
[31] Manoharan, S. (2019). An improved safety algorithm for artificial intelligence enabled processors in self driving cars. Journal of Artificial Intelligence, 1(02): 95-104. [Crossref]
[32] Zaarane, A., Slimani, I., Al Okaishi, W., Atouf, I., Hamdoun, A. (2020). Distance measurement system for autonomous vehicles using stereo camera. Array, 5: 100016. [Crossref]
[33] Younis, A., Li, S.X., Jn, S., Hai, Z. (2020). Real-time object detection using pre-trained deep learning models MobileNet-SSD. In Proceedings of 2020 6th International Conference on Computing and Data Engineering, New York, USA, pp. 44-48. [Crossref]
[34] Deng, J., Xuan, X.J., Wang, W.F., Li, Z., Yao, H.W., Wang, Z.Q. (2020). A review of research on object detection based on deep learning. In Journal of Physics: Conference Series: The 2020 International Seminar on Artificial Intelligence, Networking and Information Technology, Shanghai, China, 1684: 012028. [Crossref]
[35] Ferdous, S.N., Mostofa, M., Nasrabadi, N.M. (2019). Super resolution-assisted deep aerial vehicle detection. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 11006: 432-443.
[36] Jain, A., Nandan, D., Meduri, P. (2023). Data export and optimization technique in connected vehicle. Ingénierie des Systèmes d'Information, 28(2): 517-525. [Crossref]

Cite this:
Saini, V., Kantipudi, M.P., & Meduri, P. (2023). Enhanced SSD Algorithm-Based Object Detection and Depth Estimation for Autonomous Vehicle Navigation. International Journal of Transport Development and Integration, 7(4), 341-351. https://doi.org/10.18280/ijtdi.070408