Enhanced Sarcasm Detection in Telugu Dialogue Systems Using Self Attention-Based RNN and Gated Recurrent Unit Models
Abstract:
Sarcasm detection is a challenging task in sentiment analysis, especially for morphologically complex languages like Telugu. Sarcastic statements often use positive words to convey negative sentiments, complicating automated interpretation. Existing sarcasm detection systems predominantly cater to English, leaving a gap for low-resource languages such as Hindi, Telugu, Tamil, and Arabic. This study addresses that gap by creating and annotating a Telugu conversational dataset containing both literal and sarcastic responses. We employed two deep learning models, a Self Attention-based Recurrent Neural Network (SA-RNN) and a Gated Recurrent Unit (GRU) network, to analyze this dataset. Results showed that the SA-RNN model outperformed the GRU, achieving 96% accuracy compared to the GRU's 94%. Both models used GloVe word embeddings together with specific linguistic features, such as interjections and punctuation marks, to improve sarcasm detection. This research advances sarcasm detection for low-resource languages, particularly Telugu.
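
The abstract does not include implementation details, so the following is only a minimal sketch of the two model families it names, assuming Keras/TensorFlow, 100-dimensional GloVe vectors, padded token sequences, and binary labels (0 = literal, 1 = sarcastic); all layer sizes, names, and hyperparameters are hypothetical, not the authors' configuration.

```python
# Hedged sketch (not the paper's code): GRU baseline and a self-attention RNN
# classifier over GloVe-initialized embeddings for binary sarcasm detection.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20000   # assumed vocabulary size after tokenization
MAX_LEN = 64         # assumed maximum utterance length in tokens
EMBED_DIM = 100      # assumed GloVe vector dimensionality

def build_gru_classifier(embedding_matrix: np.ndarray) -> Model:
    """GRU over frozen GloVe embeddings; final state feeds a sigmoid head."""
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM,
                         weights=[embedding_matrix],
                         trainable=False, mask_zero=True)(inputs)
    x = layers.GRU(128)(x)                 # last hidden state of the sequence
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # P(sarcastic)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def build_sa_rnn_classifier(embedding_matrix: np.ndarray) -> Model:
    """Self-attention over per-token RNN states, pooled for classification."""
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM,
                         weights=[embedding_matrix],
                         trainable=False)(inputs)
    h = layers.SimpleRNN(128, return_sequences=True)(x)  # one state per token
    attn = layers.Attention(use_scale=True)([h, h])      # self-attention (q=v=h)
    pooled = layers.GlobalAveragePooling1D()(attn)
    outputs = layers.Dense(1, activation="sigmoid")(pooled)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

In this sketch the GloVe matrix would be built by looking up each vocabulary token in a pretrained embedding file; interjection and punctuation cues mentioned in the abstract could be appended as extra token features or as a parallel input branch, but how the authors encoded them is not specified here.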
