Computación y Sistemas

On-line version ISSN 2007-9737 / Print version ISSN 1405-5546

Comp. y Sist. vol. 18 no. 3, Ciudad de México, Jul./Sep. 2014

https://doi.org/10.13053/CyS-18-3-2048 

Regular Articles

 

Towards the Automatic Recommendation of Musical Parameters based on Algorithm for Extraction of Linguistic Rules

 

Félix Castro Espinoza, Omar López-Ortega, and Anilú Franco-Árcega

 

Universidad Autónoma del Estado de Hidalgo, Área Académica de Sistemas Computacionales, Pachuca, México. fcastroe@gmail.com, lopezo@uaeh.edu.mx, afranco@uaeh.edu.mx.

 

Article received on 02/07/2014.
Accepted on 23/09/2014.

 

Abstract

In this article, the authors describe an analysis of data associated with emotional responses to fractal-generated music. The analysis is performed through rule discovery, and it constitutes the basis for advancing computer-assisted creativity: our ultimate goal is to create musical pieces by retrieving the right set of parameters associated with a target emotion. This paper describes (i) the variables associated with fractal music and emotions; (ii) the data gathering method used to obtain the tuples relating input parameters to emotional responses; and (iii) the rules that were discovered using the LR-FIR algorithm. Even though similar experiments intended to elucidate emotional responses to music have been reported, this study stands out because it establishes a connection between fractal-generated music and emotional responses, with the purpose of advancing computer-assisted creativity.

Keywords: Recommender systems, knowledge discovery, rule extraction, fractal music.
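
As a rough illustration of the recommendation step described in the abstract (retrieving the parameter set that discovered rules associate with a target emotion), here is a minimal Python sketch over a toy rule base. The rule variables (fractal, tempo_bpm, scale), their values, and the support scores are invented placeholders; the actual variables and LR-FIR rules are given in the full paper.

from dataclasses import dataclass

@dataclass
class Rule:
    emotion: str        # emotional response (rule consequent)
    parameters: dict    # fractal-music input parameters (rule antecedent)
    support: float      # fraction of gathered tuples the rule covered

# Hypothetical rule base; names and values are illustrative only.
RULES = [
    Rule("calm",    {"fractal": "lorenz",     "tempo_bpm": 70,  "scale": "major"}, 0.62),
    Rule("tension", {"fractal": "lorenz",     "tempo_bpm": 140, "scale": "minor"}, 0.55),
    Rule("joy",     {"fractal": "mandelbrot", "tempo_bpm": 120, "scale": "major"}, 0.48),
]

def recommend(target_emotion: str) -> dict | None:
    """Return the best-supported parameter set for a target emotion."""
    matches = [r for r in RULES if r.emotion == target_emotion]
    if not matches:
        return None
    return max(matches, key=lambda r: r.support).parameters

print(recommend("calm"))  # {'fractal': 'lorenz', 'tempo_bpm': 70, 'scale': 'major'}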

 


 

Acknowledgements

The authors are thankful to Karla Lopez de la Cruz for coordinating the data gathering process.

 

References

1. Ashlock, D., Bryden, K., Meinert, K., & Bryden, K. (2003). Transforming data into music using fractal algorithms. Intelligent Engineering Systems through Artificial Neural Networks, 13, 665-670.

2. Biles, J. A. (2007). Evolutionary computer music, chapter Evolutionary computation for musical tasks. Volume 1 of Miranda & Biles [22], 28-51.

3. Bilotta, E., Pantano, P., Cupellini, E., & Rizzuti, C. (2007). Evolutionary methods for melodic sequences generation from non-linear dynamic systems. Lecture Notes in Computer Science, 4448, 585-592.

4. Blackwell, T. (2007). Evolutionary computer music, chapter Swarming and music. Volume 1 of Miranda & Biles [22], 194-217.

5. Blackwell, T. (2008). The art of artificial evolution. A handbook on evolutionary art and music, chapter Swarm granulation. Volume 1 of Romero & Machado [25], 103-122.

6. Castro, F., Nebot, A., & Mugica, F. (2011). On the extraction of decision support rules from fuzzy predictive models. Applied Soft Computing, 11, 3463-3475.

7. den Brinker, B., van Dinther, R., & Skowronek, J. (2012). Expressed music mood classification compared with valence and arousal ratings. EURASIP Journal on Audio, Speech, and Music Processing, 24, 1-14.

8. Eerola, T. (2011). Are the emotions expressed in music genre-specific? An audio-based evaluation of datasets spanning classical, film, pop and mixed genres. Journal of New Music Research, 40(4), 349-366.

9. Hu, Y., Chen, X., & Yang, D. (2009). Lyric-based song emotion detection with affective lexicon and fuzzy clustering method. In Proceedings of the International Conference for Music Information Retrieval (ISMIR), 123-128.

10. Janssen, J. H., van den Broek, E. L., & Westerink, J. H. D. M. (2012). Tune in to your emotions: A robust personalized affective music player. User Modeling and User-Adapted Interaction, 22, 255-279.

11. Jäncke, L. (2008). Music, memory and emotion. Journal of Biology, 7(6), 1-5.

12. Krueger, J. W. (2011). Doing things with music. Phenomenology and the Cognitive Sciences, 10(1), 1-22. ISSN 1568-7759.

13. Kumamoto, T. (2010). A natural language dialogue system for impression-based music retrieval. Polibits, 41, 19-24.

14. Laurier, C., Meyers, O., Serra, J., Blech, M., Herrera, P., & Serra, X. (2010). Indexing music by mood: Design and integration of an automatic content-based annotator. Multimedia Tools and Applications, 48(1), 161-184. ISSN 1380-7501.

15. Li, H.-F. (2011). MEMSA: Mining emerging melody structures from music query data. Multimedia Systems, 17, 237-245.

16. Li, T. & Ogihara, M. (2003). Detecting emotion in music. In Proceedings of the International Conference for Music Information Retrieval (ISMIR), volume 3, Baltimore, USA, 239-240.

17. López-Ortega, O. (2013). Computer-assisted creativity: Emulation of cognitive processes on a multiagent system. Expert Systems with Applications, 40(9), 3459-3470.

18. López-Ortega, O. & Lopez-Popa, S. I. (2012). Fractals, fuzzy logic and expert systems to assist in the construction of musical pieces. Expert Systems with Applications, 39, 11911-11923.

19. Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20, 130-141.

20. Lu, L., Liu, D., & Zhang, H.-J. (2006). Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech and Language Processing, 14(1), 5-18.

21. Miranda, E. R. (2007). Evolutionary computer music, chapter Cellular automata music: From sound synthesis to musical forms. Volume 1 of Miranda & Biles [22], 170-193.

22. Miranda, E. R. & Biles, J. A. (2007). Evolutionary computer music, volume 1. Springer, London.

23. Mitra, S. & Acharya, T. (2003). Data mining: Multimedia, soft computing and bioinformatics. Wiley, USA, 1st edition.

24. Poria, S., Gelbukh, A., Hussain, A., Bandyopadhyay, S., & Howard, N. (2013). Music genre classification: A semi-supervised approach. Lecture Notes in Computer Science, 7914, 254-263.

25. Romero, J. & Machado, P. (2008). The art of artificial evolution. A handbook on evolutionary art and music, volume 1. Springer, London.

26. Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178.

27. Russell, J. A. & Barrett, L. F. (1999). Core affect, prototypical emotional episodes, and other things called emotion: Dissecting the elephant. Journal of Personality and Social Psychology, 76, 805-819.

28. Salas, H. A. G. & Gelbukh, A. (2008). Musical composer based on detection of typical patterns in a human composer's style. In XXIV Simposio Internacional de Computación en Educación, Sociedad Mexicana de Computación en Educación (SOMECE), Xalapa, Mexico, 1-6.

29. Salas, H. A. G., Gelbukh, A., & Calvo, H. (2010). Music composition based on linguistic approach. Lecture Notes in Artificial Intelligence, 6437, 117-128.

30. Salas, H. A. G., Gelbukh, A., Calvo, H., & Soria, F. G. (2011). Automatic music composition with simple probabilistic generative grammars. Polibits, 44, 89-95.

31. Sánchez, L. E. G., Azuela, H. S., Barrón, R., Cuevas, F., & Vielma, J. F. J. (2013). Redes neuronales dinámicas aplicadas a la recomendación musical optimizada. Polibits, 47, 89-95.

32. Schmidt, E. M., Migneco, R., Morton, B. G., Richardson, P., Scott, J., Speck, J. A., & Turnbull, D. (2010). Music emotion recognition: A state of the art review. In Proceedings of the International Conference for Music Information Retrieval (ISMIR), 255-266.

33. Shan, M.-K. & Chiu, S.-C. (2010). Algorithmic compositions based on discovered musical patterns. Multimedia Tools and Applications, 46, 1-23.

34. Smith, E. E. & Kosslyn, S. M. (2007). Cognitive psychology: Mind and brain. Prentice Hall, USA, 1st edition.

35. Sukumaran, S. & Dheepa, G. (2003). Generation of fractal music with Mandelbrot set. Global Journal of Computer Science and Technology, 1(1), 127-130.

36. Terhardt, E. (1978). Psychoacoustic evaluation of musical sounds. Perception and Psychophysics, 23, 483-492.

37. Trohidis, K., Tsoumakas, G., Kalliris, G., & Vlahavas, I. (2011). Multi-label classification of music by emotion. EURASIP Journal on Audio, Speech, and Music Processing, 4.

38. Unehara, M. & Onisawa, T. (2003). Music composition system with human evaluation as human centered system. Soft Computing, 167-178.

39. Webster, G. D. & Weir, C. G. (2005). Emotional responses to music: Interactive effects of mode, texture and tempo. Motivation and Emotion, 29(1), 19-39.

40. Yang, Y., Lin, Y., Cheng, H., Liao, I., Ho, Y., & Chen, H. (2008). Toward multimodal music emotion classification. In Huang, Y., Xu, C., Cheng, K., Yang, J., Swamy, M. N. S., Li, S., & Ding, J., editors, Lecture Notes in Computer Science, volume 5353. Springer, Germany, 70-79.

41. Zhang, Q. & Miranda, E. R. (2007). Evolutionary computer music, chapter Experiments in generative musical performance with a genetic algorithm. Volume 1 of Miranda & Biles [22], 100-116.
