Open Access
Research article

Deploying Mobile Applications for Emergency Flood Response in Geographically Isolated Areas: A Data-Driven Approach

Hoang Ha Nguyen 1, Ha Huy Cuong Nguyen 2, Chiranjibe Jana 3*

1 Department of Information Technology, Hue University of Sciences, 49000 Hue, Vietnam
2 Software Development Centre, The University of Danang, 50000 Da Nang, Vietnam
3 Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences (SIMATS), 602105 Chennai, India
Journal of Intelligent Management Decision | Volume 3, Issue 1, 2024 | Pages 15-21
Received: 01-06-2024,
Revised: 02-14-2024,
Accepted: 02-22-2024,
Available online: 03-04-2024

Abstract:

In geographically isolated regions, where infrastructure limitations and remote locations pose significant challenges, a mobile application has been developed to facilitate an efficient emergency response system. This system, designed to bridge the gap in emergency support, employs a multi-faceted strategy that combines human expertise with advanced machine learning (ML) technologies. Upon activation through the application, a coordinated mechanism is triggered, dispatching local mechanics equipped with the necessary tools, resources, vehicles, and spare parts to the site of the emergency. This immediate on-site assistance is essential for addressing mechanical failures and ensuring timely support for individuals in remote areas. At the heart of the application lies a sophisticated ML model, trained on an extensive dataset comprising a wide array of emergencies likely to occur in rural settings. This model, characterized by its convolutional neural network (CNN) architecture and optimized for mobile deployment through TensorFlow Lite (TFLite), demonstrates an impressive diagnostic accuracy rate of 98%. Such precision significantly enhances the application’s capacity to diagnose issues accurately, prioritize response efforts, and optimize resource allocation. Moreover, the application leverages data-driven insights not only to streamline the emergency response process but also to facilitate predictive maintenance. By continuously learning from incoming data, the ML model can predict potential problems and suggest preventative measures to users, thereby minimizing the likelihood of future breakdowns. This predictive capability underscores the application’s role in promoting resilience within rural communities. Community engagement is further encouraged through the inclusion of local mechanics in the emergency response network. This initiative not only expands the pool of available skilled professionals but also fosters a sense of community solidarity, crucial for enhancing the system’s overall effectiveness. In summary, the development of this mobile application represents a significant advancement in emergency assistance for rural communities. By integrating real-time response capabilities with sophisticated ML models, the system not only addresses the immediate challenges of emergency support in remote areas but also contributes to the creation of a more resilient and interconnected community fabric.

Keywords: Vehicle servicing and transporting (VSERV) emergency, Machine learning (ML) classifier, TensorFlow Lite (TFLite), Convolutional neural network (CNN)

1. Introduction

The VSERV Mobile app is a comprehensive solution designed to address the challenges faced by travelers during unexpected breakdowns. With its integrated ML system, the application provides a quick and accurate diagnosis of the vehicle's issues, allowing users to understand the nature of the problem promptly. The app ensures that travelers can stay informed about their vehicle's status, such as fuel levels, tire pressure, and overall mechanical condition, even before embarking on a journey. By providing real-time updates, users can take preventive measures to avoid breakdowns and ensure a smoother travel experience. In the event of a breakdown, the ML system within the VSERV Mobile app assists the traveler in identifying the specific problem with the vehicle. This not only saves valuable time but also empowers users with knowledge, enabling them to communicate effectively with locals or service personnel [1], [2]. The application goes beyond diagnosis by offering a range of support services. If the breakdown is severe, the VSERV Mobile app facilitates the arrangement of transportation for the traveler, ensuring they can continue their journey with minimal disruption. Simultaneously, it takes care of the servicing and delivery of the stranded vehicle, providing a seamless experience for the user [3], [4].

Moreover, the VSERV Mobile app eliminates the need for travelers to rely solely on local assistance, which may be limited or delayed. By streamlining the process of finding nearby service centers and arranging repairs, the app ensures a quicker resolution to the issue, minimizing downtime and enhancing safety [5], [6].

With the VSERV Mobile app, travelers can confidently navigate through unexpected challenges during their journeys [7], [8], knowing that they have a reliable and efficient solution at their fingertips. This innovative application not only enhances the overall travel experience but also promotes safety, convenience, and peace of mind for users on the move [9], [10].

2. Literature Survey

Delivering news content through social networks offers several advantages, including cost-effectiveness, easy accessibility, and rapid transmission. These benefits have led many individuals to prefer obtaining their news from these platforms. The expansive growth of social networks has transformed various social media platforms into efficient channels for news dissemination. The increasing reliance on social media for news consumption is primarily driven by its convenience. However, this convenience also poses a challenge, as false information can quickly spread and have detrimental effects on individuals and society. Microblogs like Twitter and Weibo, among the most widely used online platforms, enable the swift sharing and forwarding of tweets. Notably, tweets featuring both text and photos tend to attract more attention than those with only text.

ML is the dominant approach among these techniques and is supported by a substantial body of research. Given a labeled dataset containing both genuine and false news, a classification model is trained on a set of novel attributes and then applied to assess the accuracy of the provided information. Two categories of features are commonly used in these methods: (1) content-based features and (2) context-based features. Features derived from the text or the actual content of the news are known as content-based features.

This comprehensive review delves into the latest developments in auditory signal analysis, encompassing a range of applications. From COVID-19 identification using auditory signals to voice recognition and sound event categorization, this survey explores significant research contributions. The review highlights studies utilizing audio analysis, CNNs, and deep learning techniques, shedding light on their implications and relevance. Additionally, innovative approaches in automated surveillance systems and acoustic emissions analysis are examined, offering potential solutions for real-world challenges. The article provides an up-to-date insight into the evolving landscape of auditory signal analysis and its diverse applications.

In their groundbreaking research on COVID-19 identification through auditory signals, Jose Gomez Aleixandre and Mohamed Elgendi delved into an extensive analysis of 48 publications sourced from a comprehensive search spanning 659 databases, including reputable platforms such as PubMed, IEEE Xplore, Embase, and Google Scholar. The meticulous inclusion of both publicly available and researcher-obtained datasets underscored the thoroughness of their investigation. One notable aspect of their methodology was the utilization of crowdsourcing for data collection, reflecting a collaborative and inclusive approach to gathering diverse sets of information. However, a critical observation highlighted by the authors pertained to the size of some datasets employed in these studies, with a significant portion relying on relatively small datasets comprising fewer than 200 instances. Despite this limitation, the research outcomes demonstrated promise, with 13 out of the 48 publications showcasing encouraging results in the realm of COVID-19 identification via auditory signals [11], [12]. The success of these studies has far-reaching implications for the development of innovative diagnostic tools. In terms of algorithmic performance, CNNs and support vector machines (SVMs) emerged as the frontrunners, showcasing their efficacy in processing and interpreting auditory data for disease identification. The authors noted that these ML techniques exhibited superior capabilities, underscoring the potential for advanced technology in the field of healthcare.

Furthermore, the authors shed light on the specific features that contributed to the success of these algorithms. Mel-frequency cepstral coefficients (MFCCs) and zero-crossing rate emerged as the preferred features, showcasing their importance in extracting relevant information from auditory signals for accurate disease identification. Intriguingly, the exploration of non-linear characteristics in the auditory signals also yielded positive outcomes, expanding the repertoire of effective approaches beyond traditional linear methods. This innovation in analytical techniques holds promise for the continued development of robust and reliable systems for COVID-19 identification. The research spearheaded by Gomez Aleixandre and Elgendi represents a significant stride in leveraging auditory signals for COVID-19 identification. Their comprehensive review and analysis of the existing literature provide valuable insights into the potential of ML algorithms and specific features in advancing the field of medical diagnostics through auditory data analysis [13], [14].

Nasiri's [12] research introduced novel methods for sound event detection, acoustic emissions analysis, and ambient sound categorization. The AudioMask technique, based on Mask R-CNN and frame-level audio analysis, showed potential for identifying relevant audio events, especially those with distinctive forms in Mel spectrograms. The Audioset dataset, containing over 2 million labeled sound segments, facilitated the exploration of sound events. SoundCLR, a supervised contrastive learning-based system, demonstrated state-of-the-art performance in ambient sound categorization. Hybrid deep network models achieved remarkable accuracy on benchmark datasets, reinforcing the progress in sound event analysis [15], [16].

The review delves into an innovative online surveillance system proposed by Alain Dufaux, Laurent Besacier, and others. This system combines microphone recordings and detection modules to trigger recognition processes and subsequent human interventions. Acoustic emissions analysis, specifically in SiC composite deterioration, was investigated using a random forest approach and deep neural networks, contributing to material degradation assessment [17]. The article discusses advancements in voice recognition, emphasizing CNN-based approaches for tonal speech identification and continuous voice signals. These studies exhibit the potential of deep learning techniques to achieve remarkable accuracy rates, reinforcing the significance of feature extraction methodologies [18], [19]. Reviewed literature underscores the dynamic landscape of auditory signal analysis, spanning applications from healthcare to surveillance and voice recognition. Emerging technologies like CNNs and deep learning models exhibit considerable potential, fueling progress in the field. As researchers continue to push boundaries, this comprehensive survey offers valuable insights into the current state and potential future directions of auditory signal analysis [20], [21].

3. Dataset Creation

We met a few service center car mechanics to learn “how engine sounds vary for different vehicle problems.” Surprisingly, we got to know about many problems with vehicles that can be “identified through engine sound.” For the dataset, we went to different showrooms of car companies and service centers like Hyundai, Honda, Kia, etc. We explained the problem to the managers of the companies. Many of them were impressed by the idea and approved of us collecting data from their showrooms and service centers. In each showroom and service center, an experienced mechanic was assigned to explain the problem, the reason for the occurrence of the problem, and the solution. It took almost one month to collect data on 20 different classes of problems for different car companies. From this experience, we got to know the varieties of problems that can be identified through engine sound. Examples of some of the problems include starter motor faults, diesel injector noise, brake switch faults, turbo problems, etc. Table 1 illustrates the vehicle problems with the associated label.

Table 1. Vehicle problems with the associated label

Label | Fault of Vehicle | Label | Fault of Vehicle
1 | Starter motor fault | 10 | Clipping loose
2 | Diesel injector noise | 11 | Alternator output not coming
3 | Brake switch noise | 12 | Injector not working
4 | No problem | 13 | Air filter damage hose
5 | No problem | 14 | Wall tappet clearance
6 | Turbo problem | 15 | AC compressor and air filter assembler fault
7 | Water leakage | 16 | Timing chain noise
8 | Injector, spares fail / wiring cuts | 17 | AC noise
9 | Nozzle diesel flow stops | 18 | Nozzle noise / injector noise

The dataset consists of 600 audio samples of engine sounds and other vehicle fault-related sounds. The dataset is split in the ratio 8:2, with 80%, i.e., 480 samples for training and 20%, i.e., 120 samples for testing. The samples are labeled from 0-18. The main goal of this paper is to detect problems in the vehicle by listening to the engine sound.

4. Proposed Method

The proposed model, illustrated in Figure 1, uses an artificial neural network (ANN).

Step 1: Extract the features from recorded audio using the MFCC technique.

Step 2: Feed the extracted features to the model with 7 dense layers and ReLU activation.

Step 3: Classify the vehicle problem based on the recorded sound.

Step 4: Convert the .h5 model into a TFLite model.

Step 5: Load the TFLite model in the mobile app to predict the problem.

The Mel spectrum is calculated by passing the Fourier-transformed signal through a bank of bandpass filters collectively referred to as the Mel-filter bank. The Mel is used as the unit of measurement because it reflects human pitch perception: the human auditory system does not perceive tones on a linear frequency scale. The Mel scale is approximately linear for frequencies below 1 kHz and logarithmic for frequencies above 1 kHz. The Mel-scale approximation of the frequency perceived by the human ear is expressed in Eq. (1).

$f_m = 2595 \cdot \log_{10}\left(1 + \frac{f}{700}\right)$
(1)

where $f$ represents the frequency measured in Hz, and $f_m$ denotes the corresponding frequency perceived by the human ear on the Mel scale.
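As a minimal illustration of Eq. (1) and of the MFCC-based feature extraction used in Step 1, the Python sketch below converts a frequency to the Mel scale and turns an engine-sound clip into a fixed-length MFCC vector. The use of librosa, the choice of 40 coefficients, and the mean pooling over time frames are illustrative assumptions rather than the exact settings used in this work.

```python
# Sketch of Eq. (1) and MFCC feature extraction (assumed settings, not the
# exact configuration reported in this paper).
import numpy as np
import librosa


def hz_to_mel(f_hz: float) -> float:
    """Perceived frequency on the Mel scale, per Eq. (1)."""
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)


def extract_mfcc_features(audio_path: str, n_mfcc: int = 40) -> np.ndarray:
    """Load an audio clip and return a fixed-length MFCC feature vector."""
    signal, sr = librosa.load(audio_path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Average over time frames so every clip maps to an n_mfcc-dimensional vector.
    return mfcc.mean(axis=1)


print(hz_to_mel(1000.0))  # ~1000 Mel: the scale stays roughly linear up to 1 kHz
```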

Since the model's task is to classify audio, we built it using supervised classification algorithms. We constructed the model using a dataset of 600 audio samples, of which 480 were used for training and the remaining 120 for testing.

Figure 1. Architecture of the model

The various problems identified from engine sound are starter motor fault, diesel injector noise, brake switch fault, turbo problem, water leakage, injector/spare-part failure, wiring cuts, nozzle diesel flow stoppage, sticky steering noise, clipping loose, alternator output not coming, injector not working, air filter damage hose, wall tappet clearance, AC compressor and air filter assembler fault, timing chain noise, AC noise, nozzle noise, and injector noise. For training the model, we generated features from the collected audio using MFCCs and encoded the labels as integers. We then split the data into training and testing sets.
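As an illustration of this training pipeline, the sketch below encodes the string labels as integers, splits the MFCC feature matrix 80/20, and fits a network with seven ReLU dense layers followed by a softmax output. The layer widths, optimizer, epoch count, and feature/label file names are assumptions made for the example.

```python
# Hedged sketch of the training pipeline described above; layer widths,
# optimizer, epochs, and the .npy file names are illustrative assumptions.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder


def build_classifier(num_features: int, num_classes: int) -> tf.keras.Model:
    """Seven dense ReLU layers with a softmax output over the fault classes."""
    model = tf.keras.Sequential(
        [tf.keras.layers.Input(shape=(num_features,))]
        + [tf.keras.layers.Dense(units, activation="relu")
           for units in (256, 256, 128, 128, 64, 64, 32)]
        + [tf.keras.layers.Dense(num_classes, activation="softmax")]
    )
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Hypothetical files: a (600, 40) MFCC feature matrix and one fault label per clip.
features = np.load("engine_mfcc_features.npy")
labels = np.load("engine_labels.npy", allow_pickle=True)

y = LabelEncoder().fit_transform(labels)  # fault names -> integer labels
X_train, X_test, y_train, y_test = train_test_split(
    features, y, test_size=0.2, stratify=y, random_state=42)

model = build_classifier(features.shape[1], len(np.unique(y)))
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=50, batch_size=16)
model.save("vserv_model.h5")  # saved in HDF5 format for later TFLite conversion
```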

We built a web interface using Python and the Flask framework. Feature extraction is performed in Python code, and the extracted features are passed to the deployed model, which classifies them and outputs the class probabilities.
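One plausible shape of such an endpoint is sketched below: it accepts an uploaded engine recording, extracts MFCC features, and returns the class probabilities from the deployed model. The route name, form field, feature settings, and model path are hypothetical.

```python
# Minimal Flask inference endpoint (route name, field name, and paths assumed).
import librosa
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
model = tf.keras.models.load_model("vserv_model.h5")  # hypothetical model path


@app.route("/diagnose", methods=["POST"])
def diagnose():
    # Save the uploaded engine recording, extract the same MFCC features used
    # at training time, and return the per-class probabilities.
    audio_file = request.files["audio"]
    audio_file.save("uploaded_clip.wav")
    signal, sr = librosa.load("uploaded_clip.wav", sr=None)
    features = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40).mean(axis=1)
    probabilities = model.predict(features.reshape(1, -1))[0]
    return jsonify({"class_probabilities": probabilities.tolist()})


if __name__ == "__main__":
    app.run()
```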

The main goal is to build a mobile application, VSERV, in which the proposed model is constructed and deployed. The app was developed using the Flutter framework. Because a mobile device has fewer resources and less power, we convert the ML model (.h5 file) into a TensorFlow Lite (TFLite) file. A TFLite file is a compressed version of the ML model, which consumes fewer resources and is compatible with low-end devices. We connected the app to Firebase for the backend. The user first needs to log in using a mobile number; an OTP is sent to the registered number to verify the user. The user then provides the car details in the form within the app and can select the problem from the given list of concerns. If the user does not know the issue, they can choose the audio option: they simply switch it on and record the engine sound. The app automatically saves the audio clip and passes it on for processing, and the ML model predicts the vehicle problem.
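A minimal sketch of the .h5-to-TFLite conversion step is shown below; the file names and the use of the default size optimization are assumptions. The resulting .tflite file can then be bundled with the Flutter app and run on-device with the TensorFlow Lite interpreter.

```python
# Convert the trained Keras model (.h5) to TensorFlow Lite for on-device use;
# file names and the optimization flag are illustrative assumptions.
import tensorflow as tf

model = tf.keras.models.load_model("vserv_model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink the model for low-end phones
tflite_model = converter.convert()

with open("vserv_model.tflite", "wb") as f:
    f.write(tflite_model)
```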

VSERV is a mobile application developed mainly to help people whose vehicle breaks down in remote areas. Its novelty lies in recording the sound of the broken-down vehicle and predicting the problem; no comparable application is currently available in app stores. The app best suits mid-range cars and is cost-effective. The user only needs to install the app and register with details about the car. Users can either select their problem from a list of common issues or describe it in text if it is identifiable, such as a tire puncture or refueling. If the problem is unidentifiable, the user records the engine audio so that the problem can be identified by the ML classifier. The model takes the audio sample as input and outputs the probabilities of the various problems; it achieves an accuracy of around 95%. The main use of this model is to give a basic assessment of the vehicle's condition, which helps the mechanic solve the problem quickly. The app also provides solutions to particular problems, along with associated videos, which users can follow to resolve the issue themselves. Figure 2 illustrates the vehicle-servicing mobile app.

Figure 2. a: Home screen; b: Registration; c: OTP verification; d: User profile; e: Vehicle profile; f: Recording engine sound; g: Diagnosis report

5. Conclusion

The VSERV Mobile app's innovative approach to roadside assistance marks a significant departure from conventional solutions, setting a new standard in the industry. By harnessing the power of audio analysis, the app not only streamlines the assistance process but also enhances the accuracy and efficiency of identifying and resolving issues. This intelligent use of technology not only addresses immediate concerns but also contributes to a broader narrative of making travel safer and more secure. The seamless integration of cutting-edge features within the VSERV Mobile app speaks to a commitment to user-centric design and functionality. The user experience is elevated through intuitive interfaces, real-time communication capabilities, and a comprehensive suite of tools that empower users to navigate unexpected challenges with ease. This user-centric approach not only improves the overall satisfaction of the service but also fosters a sense of trust and confidence among users, reinforcing VSERV's position as an industry leader. Moreover, the VSERV Mobile app's contribution to minimizing travel disruptions is not just a matter of convenience but a strategic investment in optimizing road safety. By promptly addressing roadside issues and providing timely assistance, the app actively contributes to creating a safer environment for all road users. This commitment to safety underscores VSERV's dedication to not just resolving problems but actively preventing them, thereby creating a positive impact on the overall road ecosystem. As technology continues its relentless evolution, VSERV stands poised at the forefront, anticipating and adapting to emerging trends. The app's forward-thinking approach not only addresses current challenges but also lays the groundwork for a future where more intricate and sophisticated solutions become possible. Whether through the integration of artificial intelligence, further advancements in audio analysis, or other technological breakthroughs, VSERV is positioned to lead the charge in shaping the future of roadside assistance and road safety.

In conclusion, the VSERV Mobile app is not just a tool for addressing immediate roadside assistance needs; it is a catalyst for transformative change in how we perceive and manage road safety. Through its pioneering technologies, user-centric design, and unwavering commitment to innovation, VSERV is not merely keeping pace with the trajectory of technology but is actively steering it towards a safer and more efficient future for all travelers.

Data Availability

The data used to support the research findings are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References
1.
S. Helmstetter and H. Paulheim, “Collecting a large scale dataset for classifying fake news tweets using weak supervision,” Futur. Internet, vol. 13, no. 5, p. 114, 2021. [Google Scholar] [Crossref]
2.
A. Zakharchenko, T. Peráček, S. Fedushko, Y. Syerov, and O. Trach, “When fact-checking and ‘BBC standards’ are helpless: ‘Fake newsworthy event’ manipulation and the reaction of the ‘high-quality media’ on it,” Sustainability, vol. 13, no. 2, p. 573, 2021. [Google Scholar] [Crossref]
3.
J. Noureen and M. Asif, “Crowdsensing: Socio-technical challenges and opportunities,” Int. J. Adv. Comput. Sci. Appl., vol. 8, no. 3, pp. 363–369, 2017. [Google Scholar] [Crossref]
4.
H. Reddy, N. Raj, M. Gala, and A. Basava, “Text-mining-based fake news detection using ensemble methods,” Int. J. Autom. Comput., vol. 17, no. 2, pp. 210–221, 2020. [Google Scholar] [Crossref]
5.
T. Bian, X. Xiao, T. Y. Xu, P. L. Zhao, W. B. Huang, Y. Rong, and J. Z. Huang, “Rumor detection on social media with bi-directional graph convolutional networks,” in Proceedings of the AAAI Conference on Artificial Intelligence, New York, USA, 2020, pp. 549–556. [Google Scholar] [Crossref]
6.
K. R. Cao, B. H. Wang, H. Y. Ding, L. Lv, J. W. Tian, H. Hu, and F. K. Gong, “Achieving reliable and secure communications in wireless-powered NOMA systems,” IEEE Trans. Veh. Technol., vol. 70, no. 2, pp. 1978–1983, 2021. [Google Scholar] [Crossref]
7.
P. Chen, H. Y. Liu, R. Y. Xin, T. Carval, J. L. Zhao, Y. N. Xia, and Z. M. Zhao, “Effectively detecting operational anomalies in large-scale IoT data infrastructures by using a GAN-based predictive model,” Comput. J., vol. 65, no. 11, pp. 2909–2925, 2022. [Google Scholar] [Crossref]
8.
J. Chen, Q. C. Wang, W. M. Peng, H. T. Xu, X. D. Li, and W. Q. Xu, “Disparity-based multiscale fusion network for transportation detection,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 10, pp. 18855–18863, 2022. [Google Scholar] [Crossref]
9.
B. Cheng, D. Zhu, S. Zhao, and J. L. Chen, “Situation-aware IoT service coordination using the event-driven SOA paradigm,” IEEE Trans. Netw. Serv. Manage., vol. 13, no. 2, pp. 349–361, 2016. [Google Scholar] [Crossref]
10.
F. P. Guo, W. Zhou, Q. B. Lu, and C. Zhang, “Path extension similarity link prediction method based on matrix algebra in directed networks,” Comput. Commun., vol. 187, no. 2, pp. 83–92, 2022. [Google Scholar] [Crossref]
11.
B. Li, X. L. Zhou, Z. K. Ning, X. Y. Guan, and K. F. Yiu, “Dynamic event-triggered security control for networked control systems with cyber-attacks: A model predictive control approach,” vol. 612, no. 3, pp. 384–398, 2022. [Google Scholar] [Crossref]
12.
A. Nasiri, “Deep learning based sound event detection and classification,” Ph.D. dissertation, University of South Carolina, Columbia, 2021. [Online]. Available: http://scholarcommons.sc.edu/ [Google Scholar]
13.
H. H. C. Nguyen, V. K. Solanki, D. Van Thang, and T. T. Nguyen, “Resource allocation for heterogeneous cloud computing,” Resource, vol. 9, no. 1–2, pp. 1–15, 2017. [Google Scholar] [Crossref]
14.
P. Mahesh Kumar, “A new human voice recognition system,” Asian J. Sci. Appl. Technol., vol. 5, no. 2, pp. 23–30, 2016. [Google Scholar] [Crossref]
15.
S. Dua, S. S. Kumar, Y. Albagory, R. Ramalingam, A. Dumka, R. Singh, M. Rashid, A. Gehlot, S. S. Alshamrani, and A. S. AlGhamdi, “Developing a speech recognition system for recognizing tonal speech signals using a convolutional neural network,” Appl. Sci., vol. 12, no. 12, p. 6223, 2022. [Google Scholar] [Crossref]
16.
M. A. Anusuya and S. K. Katti, “Speech recognition by machine, a review,” Int. J. Comput. Sci. Inf. Sec., vol. 6, no. 3, pp. 181–205, 2009. [Google Scholar] [Crossref]
17.
J. G. Aleixandre, M. Elgendi, and C. Menon, “The use of audio signals for detecting COVID-19: A systematic review,” Sensors, vol. 22, no. 21, p. 8114, 2022. [Google Scholar] [Crossref]
18.
J. Xie, K. Hu, M. Y. Zhu, J. H. Yu, and Q. B. Zhu, “Investigation of different CNN-based models for improved bird sound classification,” vol. 7, pp. 175353–175361, 2019. [Google Scholar] [Crossref]
19.
H. H. C. Nguyen, H. V. Dang, N. M. N. Pham, V. S. Le, and T. T. Nguyen, “Deadlock detection for resource allocation in heterogeneous distributed platforms,” in Recent Advances in Information and Communication Technology, Cham: Springer, 2015. [Google Scholar] [Crossref]
20.
V. N. Thatha, S. Donepudi, M. A. Safali, S. P. Praveen, T. T. Nguyen, and H. H. C. Nguyen, “Security and risk analysis in the cloud with software defined networking architecture,” Int. J. Electr. Comput. Eng., vol. 13, no. 5, pp. 5550–5559, 2023. [Google Scholar] [Crossref]
21.
J. Ma and W. Gao, “Debunking rumors on twitter with tree transformer,” in Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 2020, pp. 5455–5466. [Google Scholar] [Crossref]

©2024 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.