Polibits

Online version ISSN 1870-9044

Polibits no. 39, México, Jan./Jun. 2009

 

Articles

 

Modeling Multimodal Multitasking in a Smart House

 

Pilar Manchón, Carmen del Solar, Gabriel Amores, and Guillermo Pérez

 

University of Seville, Seville, Spain (e-mail: pmanchon@us.es, carsolval@alum.us.es, jgabriel@us.es, gperez@us.es).

 

Manuscript received November 30, 2008.
Manuscript accepted for publication March 3, 2009.

 

Abstract

This paper belongs to an ongoing series, presented at different conferences, illustrating the results obtained from the analysis of the MIMUS corpus. The corpus is the product of a number of Wizard-of-Oz (WoZ) experiments conducted at the University of Seville as part of the TALK Project. The main objective of the MIMUS corpus was to gather information about different users and their performance, preferences, and usage of a multimodal multilingual natural dialogue system in the Smart Home scenario. The focus group is composed of wheelchair-bound users. In previous papers, the corpus and all relevant information related to it have been analyzed in depth. In this paper, we focus on multimodal multitasking during the experiments, that is, on modeling how users may perform more than one task in parallel. These results may help us gauge the importance of discriminating complementary vs. independent simultaneous events in multimodal systems. This becomes all the more relevant when we take into account the likelihood of the co-occurrence of such events, and the fact that humans tend to multitask when they are sufficiently comfortable with the tools they are handling.

Key words: Multimodal corpus, HCI, multimodal experiments, multimodal entries, multimodal multitasking.

 

DOWNLOAD ARTICLE IN PDF FORMAT

 


Creative Commons License. All content of this journal, except where otherwise noted, is licensed under a Creative Commons License.