Analyzing Emotional Responses to Music
Authors: BHARATHVAJ G, SAKTHI GANISHKA M, SASHITHRA R and DHANUSHKA N

Abstract

Music plays a vital role in evoking emotional responses, making it a valuable tool in therapeutic settings, particularly for individuals with mental health challenges. This paper presents a system designed to detect and analyze users' emotional responses to music using machine learning and facial expression recognition techniques. The system classifies emotions such as happiness, sadness, anger, and relaxation by integrating audio feature extraction and real-time facial expression analysis. By combining both modalities, the system provides more accurate and objective emotional insights during music therapy sessions.

The primary goal is to assist therapists by offering real-time feedback on patients' emotional states, allowing for personalized music therapy interventions. Audio features such as Mel Frequency Cepstral Coefficients (MFCC), chroma, and tempo are extracted from music, while facial expressions are analyzed using Convolutional Neural Networks (CNNs). The system provides therapists with continuous emotional insights, helping to tailor treatment plans based on the emotional impact of various music genres and styles.

The system demonstrates high accuracy in emotion detection, with experimental results achieving an overall accuracy of 87%. This approach not only enhances therapeutic outcomes but also offers broader applications in mental health care, stress management, and personalized music recommendations. By objectively analyzing emotional responses to music, this system contributes to the growing field of affective computing and its role in mental health and wellness.
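The abstract does not specify how the audio-based and face-based predictions are combined. A common approach for this kind of multimodal setup is late fusion, where each modality produces a probability distribution over the emotion classes and the two distributions are averaged. The sketch below illustrates that idea; the four emotion labels come from the abstract, but the function names, the equal 0.5/0.5 weighting, and the example probabilities are illustrative assumptions, not details from the paper.

```python
# Hypothetical late-fusion of audio- and face-based emotion scores.
# The equal weighting below is an assumption for illustration only.

EMOTIONS = ["happiness", "sadness", "anger", "relaxation"]

def fuse_emotions(audio_probs, face_probs, audio_weight=0.5):
    """Weighted average of the two per-modality probability distributions."""
    face_weight = 1.0 - audio_weight
    fused = {
        emotion: audio_weight * audio_probs[emotion] + face_weight * face_probs[emotion]
        for emotion in EMOTIONS
    }
    # Renormalize so the fused scores again form a valid distribution.
    total = sum(fused.values())
    return {emotion: score / total for emotion, score in fused.items()}

def dominant_emotion(probs):
    """Return the emotion label with the highest fused probability."""
    return max(probs, key=probs.get)

# Example: audio analysis leans toward relaxation, facial analysis toward
# happiness; the fused distribution arbitrates between the two modalities.
audio = {"happiness": 0.2, "sadness": 0.1, "anger": 0.1, "relaxation": 0.6}
face = {"happiness": 0.5, "sadness": 0.2, "anger": 0.1, "relaxation": 0.2}
fused = fuse_emotions(audio, face)
```

In practice the weighting could be tuned per session, e.g. down-weighting the facial channel when the face is partially occluded, but any such policy is beyond what the abstract states.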
Keywords— Emotion recognition, facial expressions, music emotion analysis, machine learning, deep learning.

Published On: 2024-12-07