Salud mental

Print version ISSN 0185-3325

Abstract

FLORES-GUTIERREZ, Enrique and DIAZ, José Luis. The emotional response to music: attribution of emotion words to musical segments. Salud Ment [online]. 2009, vol.32, n.1, pp.21-34. ISSN 0185-3325.

Although music is usually considered a source of intense, diverse, and specific affective states, at present there is no standardized scientific procedure that reveals with reliable confidence the emotional processes and events evoked by music. Progress in understanding musical emotion depends crucially on the development of reasonably secure methods to record and analyze such a peculiar and universally sought affective process. In 1936, Kate Hevner published a pioneering study in which she used a list of 66 adjectives commonly applied to musical compositions, arranged in a circle of eight groups of similar emotions. Volunteers selected the terms that seemed appropriate to categorize their emotional experience while they listened to masterpieces by Debussy, Mendelssohn, Paganini, Tchaikovsky, and Wagner, and the results were presented as histograms showing a different profile for each piece. Subsequent studies have advanced the methods and techniques for assessing the emotions produced by music, but many difficulties remain unresolved concerning the criteria for choosing the musical pieces, the emotion terms, the design of the experiment, the proper controls, and the statistical tools relevant to analyzing the results.

The present study was undertaken to test and advance an experimental technique designed to evaluate and study the human emotions evoked by music. Specifically, the study tests whether different musical excerpts evoke significant agreement in the selection of previously organized emotion terms within a relatively homogeneous population of human subjects. Since music constitutes a form of acoustic language that has been selected and developed through millennia of human cultural evolution for the expression and communication of emotional states, it was hypothesized that there would be significant agreement in the attribution of emotion terms to musical segments among evaluators belonging to a relatively homogeneous population. The attribution system made it possible both to obtain objective responses derived from introspection and to analyze the data through appropriate statistical processing of responses from groups of subjects exposed to carefully selected musical stimuli.

The volunteer subjects were 108 college-level students of both sexes, with a mean age of 22 years, from schools and universities located in central Mexico. The audition and attribution sessions lasted 90 minutes and were conducted in a specially adapted classroom in each institution. Four criteria were established for the selection of the musical excerpts: instrumental music, homogeneous melody and musical theme, clear and distinct affective tone, and samples from different cultures. The ten selected pieces were:

1. Mozart's Piano Concerto no. 17, K. 453, third movement;
2. a sound derived from the magnetic spectra of an aurora borealis, a natural event;
3. Mussorgsky's Gnome, from Pictures at an Exhibition, orchestrated by Ravel;
4. Andean folk music;
5. Tchaikovsky's Fifth Symphony, second movement;
6. «Through the Never», heavy metal music by Metallica;
7. Japanese Usagi folk music played on koto and shakuhachi;
8. Mahler's Fifth Symphony, second movement;
9. Taqsim Sigah, Arab folk music played on kamandja; and
10. Bach's three-part Invention for piano, BWV 797.

The selected fragments and their replicas were divided into two to five musically homogeneous segments (mean segment duration: 24 seconds) and were played in a different order on each occasion.
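To make the design concrete, the following minimal sketch shows one plausible layout for the data such a task produces; the array names, the binary coding, and the total of 34 segments (inferred from the 33 degrees of freedom reported below) are illustrative assumptions, not the authors' materials.

```python
# Illustrative data layout for the attribution task (names, shapes, and the
# binary coding are assumptions, not the authors' materials).
import numpy as np

rng = np.random.default_rng(42)

N_SUBJECTS = 108      # college-level volunteers
N_SEGMENTS = 34       # inferred from the 33 degrees of freedom reported below
N_CATEGORIES = 28     # sets of semantically related emotion terms

# attributions[i, j, c] is True when subject i attributed emotion set c to
# musical segment j (random placeholder data stands in for real judgments).
attributions = rng.random((N_SUBJECTS, N_SEGMENTS, N_CATEGORIES)) < 0.2

# One 28-bin "emotion profile" per segment, analogous to the per-piece
# histograms described in the text: counts of subjects endorsing each set.
profiles = attributions.sum(axis=0)   # shape: (N_SEGMENTS, N_CATEGORIES)
print(profiles[0])                    # raw profile of the first segment
```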
The segments were played twice during the test. During the first audition, the complete piece was played so that the subjects could become familiar with the composition and freely express their reaction in writing. During the second audition, the same piece was played in its separate selected segments, and the volunteers were asked to choose, from an adjunct chart, the emotion-referring terms that most accurately identified their music-evoked feelings. The chart was obtained and arranged from an original list of 328 Spanish words designating particular emotions; the terms had previously been arranged into 28 sets of semantically related terms located on 14 bipolar axes of opposing affective polarity in a circumplex model of the affective system. The recorded attributions from all the subjects were captured and transformed into ranks. The non-parametric Friedman two-way analysis of variance by ranks for k related samples was selected for the statistical analysis of agreement. All the data were gathered into the 28 categories or sets of emotion obtained in the previous taxonomy of emotion terms, and the difference among the musical segments was tested. The difference was significant for 24 of the 28 emotional categories at α = 0.05 with 33 degrees of freedom (Fr ≥ 43.88). In order to establish in which segments the main significant differences occurred, the extension of the Friedman test for comparing groups against a control was applied; after the appropriate formula, the critical value of the difference |R1 − Ru| was established at ≥ 18.59. In this way it was possible to plot the significance level of all 28 emotion categories for each musical segment and thereby obtain the emotion profile of each selected fragment.

The differences among the musical pieces were established for individual emotions, for groups of emotions, and for the global response profile. In every piece used, one or more terms reached significance. Sometimes as many as seven terms were predominant (Mahler, Mozart); in contrast, other segments produced only one or two responses (aurora borealis, Arab music). Most musical segments also elicited null responses, implying agreement not only about the emotions that were present but also about those that did not occur. Concerning the global response, several profiles were recognizable across the pieces. The histogram was skewed to the left when positive and vigorous emotions were reported (Tchaikovsky, Bach). A predominance of emotions in the center-right sector corresponded to negative and quiet emotions (Arab music), and a predominance in the fourth sector to negative and agitated emotions (Mahler). A «U»-shaped profile was sometimes obtained when vigorous emotions predominated (Mahler, Metallica), and a bell-shaped response when calm emotions, both pleasant and unpleasant, were reported (Japanese music). There was also music that globally stimulated one of the four quadrants defined in the affective circle: pleasant (Mozart), unpleasant (Mussorgsky), exciting (Metallica), or relaxing emotions (Japanese music). The only segment that produced responses scattered across all four sectors of emotion was the aurora borealis. Very similar profiles were obtained from very different pieces, such as the identical responses to Mozart and Andean music; the individual emotion terms must be analyzed to distinguish them.
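As a hedged reconstruction of the analysis just described (not the authors' code), the sketch below ranks one emotion category's scores within each subject, applies the Friedman test, and then compares every segment's rank sum against a control segment's using the Siegel and Castellan critical difference; scipy is assumed in place of whatever software was actually used, and all data are placeholders.

```python
# Hedged sketch of the reported agreement analysis (not the authors' code).
import numpy as np
from scipy.stats import friedmanchisquare, norm, rankdata

rng = np.random.default_rng(0)
n, k, alpha = 108, 34, 0.05            # 34 segments -> k - 1 = 33 deg. of freedom

scores = rng.integers(0, 5, size=(n, k))   # placeholder attribution scores

# Friedman two-way ANOVA by ranks: one argument per related sample (segment).
fr, p = friedmanchisquare(*scores.T)
print(f"Fr = {fr:.2f}, p = {p:.4f}")       # the abstract's threshold: Fr >= 43.88

# Post-hoc comparison of each segment against a control segment, using the
# Siegel & Castellan critical difference z_{alpha/(k-1)} * sqrt(n*k*(k+1)/6).
ranks = rankdata(scores, axis=1)           # within-subject ranks, ties averaged
rank_sums = ranks.sum(axis=0)
crit = norm.ppf(1 - alpha / (k - 1)) * np.sqrt(n * k * (k + 1) / 6)
control = 0                                # assumed index of the control segment
diffs = np.abs(rank_sums - rank_sums[control])
print("segments differing from the control:", np.flatnonzero(diffs >= crit))
```

Whether this reproduces the reported critical value of 18.59 depends on details the abstract does not give (for instance, whether rank sums or rank means were compared), so the formula above should be read as one standard formulation rather than the authors' exact procedure.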
Several common characteristics can be detected in these two pieces, such as a fast allegro tempo, binary rhythm, counterpoint figures, and an ascending melody, all well-known features of musical composition. In contrast, other segments evoked unpleasant responses (Mussorgsky), in which fear, tension, doubt, or pain was reported. The listener probably concedes a high value to a piece that evokes, within the context of a controlled artistic experience, emotions that he or she would normally avoid.

Keywords: Music; emotion; emotion terms; attribution; inter-evaluator agreement.


 
