Facial expression recognition (FER) concerns the identification and classification of seven emotional states: surprise, disgust, fear, happiness, neutrality, anger, and contempt. This research applies convolutional neural networks (CNNs) to the extraction and categorization of these expressions. Over the past decade, CNNs have become a central topic in human-computer interaction research, surpassing earlier methodologies through their superior feature-learning capabilities. While current models achieve high accuracy on controlled laboratory datasets, their performance drops markedly on real-time, uncontrolled data. Images sourced from unstructured environments such as the internet commonly suffer from degraded image quality, occlusions, variable lighting, and changes in head pose.

This study aims to improve FER accuracy on images captured in real time, particularly those of lower resolution, using deep learning techniques. Real-world datasets are inherently more complex than laboratory-collected data because they are gathered under far less controlled conditions. The effectiveness of a deep learning-based approach to emotion detection in photographs is evaluated in this work, and the proposed method is compared with manual techniques and other existing approaches to assess its efficacy. This comparison underpins a subjective evaluation methodology focused on validation and end-user satisfaction. The findings demonstrate that the method recognizes emotions accurately in both laboratory and real-world scenarios, underscoring the potential of deep learning for facial emotion identification.
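To make the CNN-based classification setup concrete, the following is a minimal sketch of a seven-class FER network. It assumes 48x48 grayscale face crops in the style of common FER benchmarks; the architecture, layer sizes, and the `SimpleFERNet` name are illustrative assumptions, not the model proposed in this work.

```python
# Illustrative sketch only: a minimal CNN for seven-class facial expression
# recognition. The 48x48 grayscale input size and all layer dimensions are
# assumptions typical of FER benchmarks, not the paper's actual architecture.
import torch
import torch.nn as nn

EMOTIONS = ["surprise", "disgust", "fear", "happiness",
            "neutrality", "anger", "contempt"]

class SimpleFERNet(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        # Two convolutional blocks learn local facial features
        # (edges, parts), replacing hand-crafted feature extraction.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),              # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),              # 24x24 -> 12x12
        )
        # A fully connected head maps pooled features to the seven classes.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),              # regularization for small datasets
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SimpleFERNet()
    batch = torch.randn(4, 1, 48, 48)     # four grayscale 48x48 face crops
    logits = model(batch)                  # shape: (4, 7)
    print(EMOTIONS[logits[0].argmax().item()])
```

In practice, a network like this is trained with cross-entropy loss on labeled face crops; the challenges noted above (occlusion, lighting, pose) are typically addressed through data augmentation and training on in-the-wild datasets rather than architectural changes alone.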