Volume 2, Issue 4, 2023

Abstract


Diabetic retinopathy, a severe ocular disease associated with elevated blood glucose levels in diabetic patients, carries a significant risk of visual impairment, and its timely and precise severity classification is essential for effective therapeutic intervention. Deep learning methodologies have been shown to yield encouraging results in detecting and categorising diabetic retinopathy severity levels. This study proposes a dual-level approach in which the MobileNetV2 model is first modified for a regression task that predicts retinopathy severity levels and fine-tuned on fundus images; the refined MobileNetV2 model is then used to learn feature embeddings, on which a Support Vector Machine (SVM) classifier is trained to grade retinopathy severity. Upon implementation, this dual-level approach demonstrated remarkable performance, achieving an accuracy rate of 87% and a kappa value of 93.76% when evaluated on the APTOS19 benchmark dataset. Additionally, the efficacy of data augmentation and the handling of class imbalance issues were explored. These findings suggest that the novel dual-level approach provides an efficient and highly effective solution for the detection and classification of diabetic retinopathy severity levels.
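
To make the dual-level pipeline concrete, the sketch below outlines one plausible realisation in Keras/TensorFlow: MobileNetV2 is given a regression head and fine-tuned on fundus images, after which its pooled embeddings feed an SVM grader. The input size, layer sizes, optimiser settings, and SVM kernel are illustrative assumptions, not the configuration reported in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2
from sklearn.svm import SVC

# Stage 1: MobileNetV2 backbone with a single-output regression head that
# predicts a continuous severity score, fine-tuned on fundus images.
base = MobileNetV2(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
pooled = layers.GlobalAveragePooling2D(name="embedding")(base.output)
x = layers.Dropout(0.3)(pooled)
severity = layers.Dense(1, activation="linear", name="severity")(x)
regressor = Model(base.input, severity)
regressor.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# regressor.fit(train_images, train_grades, validation_data=(val_images, val_grades))

# Stage 2: reuse the fine-tuned backbone as a feature extractor and train an
# SVM on the pooled embeddings to grade the severity levels.
embedder = Model(regressor.input, regressor.get_layer("embedding").output)
# train_feats = embedder.predict(train_images)
# svm = SVC(kernel="rbf", C=1.0).fit(train_feats, train_labels)
# test_preds = svm.predict(embedder.predict(test_images))
```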

Abstract


In the realm of agriculture, crop yields of fundamental cereals such as rice, wheat, maize, soybeans, and sugarcane are adversely impacted by insect pest invasions, leading to significant reductions in agricultural output. Traditional manual identification of these pests is labor-intensive and time-consuming, underscoring the necessity for an automated early detection and classification system. Recent advancements in machine learning, particularly deep learning, have provided robust methodologies for the classification and detection of a diverse array of insect infestations in crop fields. However, inaccuracies in pest classification could inadvertently precipitate the use of inappropriate pesticides, further endangering both agricultural yields and the surrounding ecosystems. In light of this, the efficacy of nine distinct pre-trained deep learning algorithms was evaluated to discern their capability in the accurate detection and classification of insect pests. This assessment utilized two prevalent datasets, comprising ten pest classes of varied sizes. Among the transfer learning techniques scrutinized, adaptations of ResNet-50 and ResNet-101 were deployed. It was observed that ResNet-50, when employed in a transfer learning paradigm, achieved an exemplary classification accuracy of 99.40% in the detection of agricultural pests. Such a high level of precision represents a significant advancement in the field of precision agriculture.
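
As an illustration of the kind of transfer-learning adaptation evaluated here, the sketch below freezes an ImageNet-pretrained ResNet-50 backbone in Keras/TensorFlow and trains a small classification head for ten pest classes. The head architecture and hyperparameters are placeholder assumptions, not the configuration that produced the reported 99.40% accuracy.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 10  # ten pest classes, as in the datasets described above

# Frozen pre-trained backbone for the initial transfer-learning phase.
base = ResNet50(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.4)(x)
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)
# Optionally unfreeze the top ResNet blocks and fine-tune at a lower learning rate.
```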

Abstract


In Sub-Saharan Africa, particularly in Nigeria, Lassa fever poses a significant infectious disease threat. This investigation employed count regression and machine learning techniques to model mortality rates associated with confirmed Lassa fever cases. Utilizing weekly data from January 7, 2018, to April 2, 2023, provided by the Nigeria Centre for Disease Control (NCDC), an analytical comparison between these methods was conducted. Overdispersion was indicated (p<0.01), prompting the exclusive use of negative binomial and generalized negative binomial regression models. Machine learning algorithms, specifically medium Gaussian support vector machine (MGSVM), ensemble boosted trees, ensemble bagged trees, and exponential Gaussian process regression (GPR), were applied, with 80% of the data allocated for training and the remaining 20% for testing. The efficacy of these methods was evaluated using the coefficient of determination (R²) and the root mean square error (RMSE). Descriptive statistics revealed a total of 30,461 confirmed cases, 4,745 suspected cases, and 772 confirmed fatalities attributable to Lassa fever during the study period. The negative binomial regression model demonstrated superior performance relative to the generalized negative binomial model, owing to its markedly lower RMSE (R²=0.1864, RMSE=4.33 versus R²=0.1915, RMSE=18.2425). However, machine learning algorithms surpassed the count regression models in predictive capability, with ensemble boosted trees emerging as the most effective (R²=0.85, RMSE=1.5994). Analysis also identified the number of confirmed cases as having a significant positive correlation with mortality rates (r=0.885, p<0.01). The findings underscore the importance of promoting community hygiene practices, such as preventing rodent intrusion and securing food storage, to mitigate the transmission and consequent fatalities of Lassa fever.
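
A minimal sketch of the modelling comparison is given below, using a synthetic stand-in for the weekly NCDC series: a negative binomial GLM (statsmodels) represents the count-regression side, and scikit-learn's GradientBoostingRegressor stands in for the ensemble boosted trees, with both assessed by R² and RMSE on an 80/20 split. The variable names and synthetic data are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# Synthetic weekly series standing in for the NCDC data: confirmed cases as
# predictor, deaths as an overdispersed count response.
confirmed = rng.poisson(30, size=274).astype(float)
deaths = rng.negative_binomial(5, 5 / (6 + 0.15 * confirmed))
X = confirmed.reshape(-1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, deaths, test_size=0.2, random_state=0)

# Count regression: negative binomial GLM to accommodate overdispersion.
nb = sm.GLM(y_tr, sm.add_constant(X_tr), family=sm.families.NegativeBinomial()).fit()
nb_pred = nb.predict(sm.add_constant(X_te))

# Machine-learning comparator: boosted regression trees on the same 80/20 split.
gbt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
gbt_pred = gbt.predict(X_te)

# Compare with the same metrics used in the study: R² and RMSE.
for name, pred in [("Negative binomial", nb_pred), ("Boosted trees", gbt_pred)]:
    print(name, r2_score(y_te, pred), np.sqrt(mean_squared_error(y_te, pred)))
```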

Abstract


This investigation delineates an optimised predictive model for employee attrition within a substantial workforce, identifying pertinent models tailored to the specific context of employee and organisational variables. The selection and refinement of the appropriate predictive model serve as cornerstones for enhancements and updates, which are integral to honing the model's precision in prognosticating potential departures. Through meticulous optimisation, the model demonstrates proficiency in pinpointing the pivotal factors contributing to employee turnover and elucidating the interdependencies among salient variables. A suite of 27 general and 8 critical variables was scrutinised. Pertinent correlations were unearthed, notably between monthly income and job satisfaction, between home-to-work distance and job satisfaction, and between age and both job satisfaction and performance metrics. Drawing from prior studies in analogous domains, a three-stage analytical methodology encompassing data exploration, model selection, and implementation was employed. The rigorous training of the optimised model encompassed both attrition factors and variable correlations, culminating in predictive outcomes with a precision of 90% and an accuracy of 87%. Implementing the refined model projected that 113 out of 709 employees, equating to 15.93%, were at a heightened risk of exiting the organisation. This quantitative foresight equips stakeholders with a strategic tool for preemptive interventions to mitigate turnover and sustain organisational vitality.
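
The three-stage methodology can be illustrated with the hypothetical scikit-learn pipeline below: exploratory correlations among the key variables, model selection by cross-validation, and an implementation step that flags at-risk employees. The file name, column names, and the random-forest model are assumptions for illustration, not the study's dataset or optimised model.

```python
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, accuracy_score

df = pd.read_csv("employee_attrition.csv")  # hypothetical attrition dataset

# Stage 1: data exploration, e.g. correlations among the salient variables.
print(df[["MonthlyIncome", "JobSatisfaction", "DistanceFromHome", "Age"]].corr())

X = pd.get_dummies(df.drop(columns=["Attrition"]))
y = (df["Attrition"] == "Yes").astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# Stage 2: model selection via cross-validation on a candidate model.
model = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
print("CV accuracy:", cross_val_score(model, X_tr, y_tr, cv=5).mean())

# Stage 3: implementation - fit, evaluate, and flag employees at risk of leaving.
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("precision:", precision_score(y_te, pred), "accuracy:", accuracy_score(y_te, pred))
at_risk = X[model.predict(X) == 1]
print(f"{len(at_risk)} of {len(X)} employees flagged as at heightened risk")
```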

Abstract


In the domain of intellectual property protection, the embedding of digital watermarks has emerged as a pivotal technique for the assertion of copyright, the conveyance of confidential messages, and the endorsement of authenticity within digital media. This research delineates the implementation of a non-blind watermarking algorithm, utilizing alpha blending facilitated by the discrete wavelet transform (DWT) to embed watermarks into host images. Thereafter, an extraction process, constituting the inverse of embedding, retrieves these watermarks. The robustness of the embedded watermark against prevalent manipulative attacks, specifically median filtering, salt and pepper (SAP) noise, Gaussian noise, speckle noise, and rotation, is rigorously evaluated. The performance of the DWT-based watermarking is quantified using the peak signal-to-noise ratio (PSNR), an objective metric reflecting fidelity. It is ascertained that the watermark remains tenaciously intact under such adversarial conditions, underscoring the proposed method's suitability for applications in digital image security and copyright verification.
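
The sketch below shows one simplified, single-level realisation of DWT alpha-blending embedding, non-blind extraction, and PSNR measurement using PyWavelets. The Haar wavelet, the blending factor, and the random stand-in images are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
import pywt

def embed(host, watermark, alpha=0.05):
    """Blend the watermark's approximation band into the host's LL band."""
    hLL, (hLH, hHL, hHH) = pywt.dwt2(host, "haar")
    wLL, _ = pywt.dwt2(watermark, "haar")
    mixed_LL = hLL + alpha * wLL
    return pywt.idwt2((mixed_LL, (hLH, hHL, hHH)), "haar")

def extract(watermarked, host, alpha=0.05):
    """Non-blind extraction: recover the watermark using the original host."""
    sLL, _ = pywt.dwt2(watermarked, "haar")
    hLL, _ = pywt.dwt2(host, "haar")
    rec_LL = (sLL - hLL) / alpha
    zeros = np.zeros_like(rec_LL)
    return pywt.idwt2((rec_LL, (zeros, zeros, zeros)), "haar")

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB, the fidelity metric used above."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Example with random stand-in images (replace with real host and watermark).
host = np.random.rand(256, 256) * 255
mark = np.random.rand(256, 256) * 255
stego = embed(host, mark)
print("PSNR of watermarked image:", psnr(host, stego))
recovered = extract(stego, host)
```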
