Music Emotion Recognition (MER) has been an active research topic for decades. This article proposes a multi-modal approach to MER that combines three feature sets:
➊ Audio features extracted from the song,
➋ Lyrics of the song, and
➌ Electrodermal Activity (EDA) of humans while listening to the song.
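The article does not specify the exact audio descriptors used, but as an illustrative sketch, two classic low-level audio features often used in MER pipelines — zero-crossing rate (a rough rhythm/noisiness cue) and spectral centroid (a rough brightness cue) — can be computed with NumPy alone. All names and the test signal below are hypothetical, not taken from the paper:

```python
import numpy as np

def zero_crossing_rate(signal: np.ndarray) -> float:
    """Fraction of consecutive samples whose sign differs."""
    return float(np.mean(np.abs(np.diff(np.signbit(signal).astype(int)))))

def spectral_centroid(signal: np.ndarray, sr: int) -> float:
    """Magnitude-weighted mean frequency of the spectrum, in Hz."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))

# Illustrative input: one second of a 440 Hz sine at a 22.05 kHz sample rate.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

features = {"zcr": zero_crossing_rate(tone), "centroid_hz": spectral_centroid(tone, sr)}
print(features)
```

For a pure 440 Hz tone, the centroid lands near 440 Hz, and the zero-crossing rate is roughly 2 × 440 crossings per second divided by the sample rate. In practice, libraries such as librosa provide richer feature sets (MFCCs, chroma, tempo) computed frame-by-frame rather than over the whole signal.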
A song's audio and lyrical features strongly complement each other. While audio features such as rhythm and pitch capture a song's mood and genre, a semantic gap remains between these features and the music itself. Lyrics capture the meaning and specificity of the language, filling…