Assignment Task
Emotions play a significant role in human lives. Our minds unconsciously notice the emotions of the people we meet every day, interpret the signals those emotions convey, and help us decide how to respond to the people around us. Facial expressions can reveal emotions such as happiness, anger, sadness, surprise, fear, excitement, desire, contempt, disgust, confusion, and many more. This paper proposes an emotion-based movie and song recommender system that uses deep learning techniques to detect the user's emotion and then recommends movies and songs that match the user's emotional state at that moment.

The emotion-detection-based movie and song recommender system aims to generate personalized suggestions based on the user's current emotional state. It applies artificial intelligence techniques to detect the user's emotion and then suggests movies and songs that align with their mood. It can also help users discover new items they might not have found on their own, leading to a more enjoyable and fulfilling entertainment experience. Additionally, the system has the potential to improve mental health outcomes by providing users with content that can positively affect their emotional state. Overall, a movie and song recommender system using emotion detection is a valuable tool for anyone seeking personalized entertainment recommendations. Our research work focuses on using facial expressions for movie and song recommendation: depending on the expression the user reveals at a particular moment, we can recommend movies that best suit their current mood.
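To make the pipeline concrete, the following is a minimal sketch of how a pretrained facial-expression CNN could drive mood-based recommendations. The model file "emotion_cnn.h5", the emotion label set, and the emotion-to-genre mapping are all hypothetical illustrations, not components of a specific published implementation.

```python
# Minimal sketch of the facial-expression-to-recommendation pipeline.
# "emotion_cnn.h5" is a hypothetical pretrained model producing
# probabilities over seven common expression classes; the genre mapping
# below is likewise illustrative.
import cv2
import numpy as np
import tensorflow as tf

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
MOOD_TO_GENRES = {
    "happy": {"comedy", "musical"},
    "sad": {"drama", "feel-good"},
    "angry": {"action", "thriller"},
    "fear": {"family", "adventure"},
    "surprise": {"mystery", "sci-fi"},
    "disgust": {"documentary"},
    "neutral": {"comedy", "drama"},
}

def detect_emotion(frame, model, cascade):
    """Classify the expression of the largest detected face in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]

def recommend(emotion, catalogue):
    """Filter a catalogue of {"title", "genre"} dicts by the detected mood."""
    wanted = MOOD_TO_GENRES.get(emotion, set())
    return [item for item in catalogue if item["genre"] in wanted]

model = tf.keras.models.load_model("emotion_cnn.h5")   # hypothetical weights
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
```

In practice the webcam frame, the emotion classifier, and the catalogue filter would run in a loop, re-ranking suggestions as the detected mood changes.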
Sentiment Analysis of Science Fiction Movie Reviews Based on Deep Learning
In recent years, information technology and the film industry have developed rapidly. Online film reviews largely reflect the opinions of movie audiences, and advanced information processing techniques can extract valuable information from this online data. In this article, we applied TextCNN to sentiment analysis of audience comments on two science fiction movies, Inception and The Wandering Earth. The reviews were collected from MTime. Experimental results show that TextCNN achieves high accuracy in the sentiment analysis of film reviews.
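The abstract names TextCNN but gives no architecture details. Below is a minimal sketch of a standard TextCNN-style classifier in the spirit of Kim (2014): parallel 1-D convolutions over token embeddings with max-over-time pooling. The vocabulary size, embedding dimension, and class count are assumed values for illustration.

```python
# Minimal TextCNN sentiment classifier (Kim, 2014 style).
# Vocabulary size, embedding dimension, and class count are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128,
                 num_classes=2, kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per kernel size, applied along the token axis.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):               # (batch, seq_len)
        x = self.embedding(token_ids)           # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                   # (batch, embed_dim, seq_len)
        # Max-over-time pooling on each convolution's feature maps.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(features)                # (batch, num_classes) logits

logits = TextCNN()(torch.randint(0, 20000, (8, 60)))  # 8 reviews, 60 tokens
```

The parallel kernel sizes let the model pick up sentiment-bearing phrases of different lengths, which is why this architecture works well on short review text.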
Deep Learning-Based Recognizing and Visualizing Emotions Through Acoustic Signals
This study proposes a new technique for analyzing and visualizing emotions in speech by integrating color representation and intonation information to enhance text expression. A sentiment analysis system was developed using deep learning methods, particularly an IDCNN model, to integrate emotion analysis and text mapping. Through this, emotions were visually represented in text by size and color, providing a more effective means of expressing emotions than conventional methods. This approach enables sentiment-oriented language analysis applicable to various domains such as movies, dramas, and computer games. The combination of speech and color in emotion representation plays a universal role in conveying emotions, transcending language barriers. Additionally, emotion color cards derived from this approach can be highly beneficial in educational environments, facilitating communication for students with hearing impairments or emotional developmental disorders. Moreover, it can assist in promptly identifying and blocking content harmful to children and adolescents.
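The paper's exact color and size mapping is not given in the abstract; the following is a minimal illustrative sketch of the visualization idea, rendering each word with a color for its predicted emotion and a font size for its intensity. The palette and intensity scaling are assumptions, not the study's actual scheme.

```python
# Minimal sketch of emotion-to-text visualization: color encodes the
# predicted emotion, font size encodes its intensity. The palette and
# the 12-24 px scaling below are hypothetical choices.
EMOTION_COLORS = {
    "happy": "#f4b400", "sad": "#4285f4",
    "angry": "#db4437", "neutral": "#9e9e9e",
}

def render_html(words):
    """words: list of (token, emotion_label, intensity in [0, 1])."""
    spans = []
    for token, emotion, intensity in words:
        color = EMOTION_COLORS.get(emotion, "#9e9e9e")
        size = 12 + int(12 * intensity)          # scale 12-24 px by intensity
        spans.append(
            f'<span style="color:{color};font-size:{size}px">{token}</span>'
        )
    return " ".join(spans)

html = render_html([("I", "neutral", 0.2), ("love", "happy", 0.9),
                    ("this", "neutral", 0.1), ("movie", "happy", 0.7)])
```

In the full system, the per-word emotion labels and intensities would come from the IDCNN model applied to the speech signal, with the renderer as the final presentation layer.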
Recognizing Induced Emotions of Movie Audiences from Multimodal Information
Recognizing the emotional reactions of movie audiences to affective movie content is a challenging task in affective computing. Previous research on induced emotion recognition has mainly focused on audio-visual movie content; however, the relationship between perceptions of affective movie content (perceived emotions) and the emotions evoked in audiences (induced emotions) remains unexplored. In this work, we studied the relationship between the perceived and induced emotions of movie audiences. Moreover, we investigated multimodal modelling approaches for predicting movie-induced emotions from movie-content-based features as well as the physiological and behavioral reactions of movie audiences. To analyze induced and perceived emotions, we first extended an existing database for movie affect analysis by annotating perceived emotions in a crowd-sourced manner. We found that perceived and induced emotions are not always consistent with each other. In addition, we showed that perceived emotions, movie dialogues, and aesthetic highlights are discriminative for movie-induced emotion recognition, alongside spectators' physiological and behavioral reactions. Our experiments also revealed that induced emotion recognition benefits from including temporal information and performing multimodal fusion. Finally, our work investigated the gap between affective content analysis and induced emotion recognition by gaining insight into the relationships among aesthetic highlights, induced emotions, and perceived emotions.
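The abstract reports that temporal information and multimodal fusion help, but does not specify an architecture. Below is a minimal sketch of one plausible setup: per-segment features from three modalities are concatenated (feature-level fusion) and passed through a GRU to capture temporal structure. The feature dimensions, the GRU choice, and concatenation fusion are all assumptions, not the paper's method.

```python
# Minimal sketch of multimodal fusion with temporal modelling. Dimensions,
# the GRU, and simple concatenation fusion are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalFusionGRU(nn.Module):
    def __init__(self, content_dim=128, physio_dim=32, behav_dim=16,
                 hidden=64, num_emotions=7):
        super().__init__()
        fused_dim = content_dim + physio_dim + behav_dim
        # GRU captures temporal structure across consecutive movie segments.
        self.gru = nn.GRU(fused_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_emotions)

    def forward(self, content, physio, behav):
        # Each input: (batch, time_steps, modality_dim). Feature-level
        # fusion by concatenating per-segment features from all modalities.
        fused = torch.cat([content, physio, behav], dim=-1)
        out, _ = self.gru(fused)
        return self.classifier(out[:, -1])      # logits from the final step

model = MultimodalFusionGRU()
logits = model(torch.randn(4, 10, 128), torch.randn(4, 10, 32),
               torch.randn(4, 10, 16))          # 4 clips, 10 segments each
```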
