So far, little attention has been paid to physiological signals for emotion recognition compared with audiovisual channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to feature-based multiclass classification. To collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method that spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra and multiscale entropy, is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is demonstrated by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95% and 70% for subject-dependent and subject-independent classification, respectively, is achieved with the EMDC scheme.
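To make the EMDC idea more concrete, the following is a minimal sketch of a two-level dichotomous cascade over the 2D arousal/valence model: a first binary classifier separates high from low arousal, and a branch-specific valence classifier is then trained only on samples of that arousal branch. This is an illustrative assumption-laden reconstruction, not the authors' implementation; scikit-learn's LinearDiscriminantAnalysis stands in for the paper's extended pLDA, and the class name EMDCSketch and the feature matrix X are hypothetical.

```python
# Sketch of emotion-specific multilevel dichotomous classification (EMDC)
# over the four quadrants named in the abstract. LinearDiscriminantAnalysis
# is used here as a stand-in for the paper's extended pLDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

CLASSES = ["pos/high", "neg/high", "neg/low", "pos/low"]
AROUSAL = {"pos/high": 1, "neg/high": 1, "neg/low": 0, "pos/low": 0}
VALENCE = {"pos/high": 1, "neg/high": 0, "neg/low": 0, "pos/low": 1}


class EMDCSketch:
    """Level 1: high vs. low arousal. Level 2: one valence classifier
    per arousal branch, trained only on samples from that branch."""

    def fit(self, X, y):
        y = np.asarray(y)
        arousal = np.array([AROUSAL[label] for label in y])
        self.arousal_clf = LinearDiscriminantAnalysis().fit(X, arousal)

        self.valence_clf = {}
        for branch in (0, 1):  # 0 = low arousal, 1 = high arousal
            mask = arousal == branch
            valence = np.array([VALENCE[label] for label in y[mask]])
            self.valence_clf[branch] = LinearDiscriminantAnalysis().fit(X[mask], valence)
        return self

    def predict(self, X):
        branches = self.arousal_clf.predict(X)
        labels = []
        for x, b in zip(X, branches):
            v = self.valence_clf[int(b)].predict(x.reshape(1, -1))[0]
            # Map the (arousal, valence) pair back to a quadrant label.
            labels.append(next(c for c in CLASSES if AROUSAL[c] == b and VALENCE[c] == v))
        return np.array(labels)
```

The design choice behind such a cascade is that each binary decision (arousal, then valence within a branch) is typically easier than the direct four-class problem, which is how the abstract motivates the reported accuracy gain of EMDC over direct multiclass pLDA.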
Research indicates that people value music primarily because of the emotions it evokes. Yet, the notion of musical emotions remains controversial, and researchers have so far been unable to offer a satisfactory account of such emotions. We argue that the study of musical emotions has suffered from a neglect of underlying mechanisms. Specifically, researchers have studied musical emotions without regard to how they were evoked, or have assumed that the emotions must be based on the "default" mechanism for emotion induction, a cognitive appraisal. Here, we present a novel theoretical framework featuring six additional mechanisms through which music listening may induce emotions: (1) brain stem reflexes, (2) evaluative conditioning, (3) emotional contagion, (4) visual imagery, (5) episodic memory, and (6) musical expectancy. We propose that these mechanisms differ regarding such characteristics as their information focus, ontogenetic development, key brain regions, cultural impact, induction speed, degree of volitional influence, modularity, and dependence on musical structure. By synthesizing theory and findings from different domains, we are able to provide the first set of hypotheses that can help researchers to distinguish among the mechanisms. We show that failure to control for the underlying mechanism may lead to inconsistent or non-interpretable findings. Thus, we argue that the new framework may guide future research and help to resolve previous disagreements in the field. We conclude that music evokes emotions through mechanisms that are not unique to music, and that the study of musical emotions could benefit the emotion field as a whole by providing novel paradigms for emotion induction.