Polibits

On-line version ISSN 1870-9044

Polibits no. 44, México, July/Dec. 2011

 

User Preference Model for Conscious Services in Smart Environments

 

Andrey Ronzhin1, Jesus Savage2, and Sergey Glazkov3

 

1 St. Petersburg State University, 11, Universitetskaya nab., St. Petersburg, 199034, Russia.

2 Universidad Nacional Autónoma de México, Mexico City, Mexico (e-mail: savage@servidor.unam.mx).

3 St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS), 39, 14th Line, St. Petersburg, 199178, Russia (e-mail: glazkov@iias.spb.su).

 

Manuscript received June 29, 2011.
Manuscript accepted for publication August 25, 2011.

 

Abstract

Awareness of user preferences, combined with analysis of the current situation, makes it possible to provide the user with non-invasive services in various smart environment applications. In smart meeting rooms, context-aware systems analyze user behavior based on multimodal sensor data and provide proactive services for meeting support, including active control of PTZ (pan-tilt-zoom) cameras and microphone arrays, context-dependent automatic archiving, and web transmission of meeting data during the interaction. The history of interaction sessions between a user and a service is used to accumulate knowledge in order to forecast the user's behavior during the next visit. The user preference model, based on audiovisual data recorded during interaction and on statistics of the user's speech activity, requests, movement trajectories, and other parameters, was implemented for the developed mobile information robot and smart meeting room.

Keywords: user preferences, context awareness, action recognition, mobile robot, smart meeting room.
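
The abstract describes a preference model that accumulates per-session statistics (speech activity, requests, movement trajectories) and uses them to forecast behavior on the next visit. The following Python fragment is a minimal illustrative sketch of that idea, not the authors' implementation; all names (SessionRecord, UserPreferenceModel, the request and zone labels) are hypothetical, and the forecast here is a simple frequency prior over past requests.

    from collections import Counter
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SessionRecord:
        """Observations from one interaction session (hypothetical schema)."""
        speech_seconds: float      # total detected speech activity
        requests: List[str]        # service requests issued by the user
        zones_visited: List[str]   # coarse movement trajectory, as room zones

    @dataclass
    class UserPreferenceModel:
        """Accumulates per-user statistics across sessions to forecast behavior."""
        sessions: List[SessionRecord] = field(default_factory=list)
        request_counts: Counter = field(default_factory=Counter)
        zone_counts: Counter = field(default_factory=Counter)

        def update(self, session: SessionRecord) -> None:
            """Fold a finished session into the accumulated statistics."""
            self.sessions.append(session)
            self.request_counts.update(session.requests)
            self.zone_counts.update(session.zones_visited)

        def forecast_request(self) -> Optional[str]:
            """Predict the most likely request on the next visit (frequency prior)."""
            top = self.request_counts.most_common(1)
            return top[0][0] if top else None

    # Example: two recorded sessions, then a forecast for the next visit.
    model = UserPreferenceModel()
    model.update(SessionRecord(120.0, ["schedule", "projector"], ["entrance", "table"]))
    model.update(SessionRecord(95.0, ["projector"], ["entrance", "whiteboard"]))
    print(model.forecast_request())  # -> "projector"

A real system of the kind the abstract describes would replace the frequency prior with a predictor conditioned on context (time of day, room occupancy, recognized activity), but the accumulate-then-forecast structure is the same.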

 


 

NOTE

This work is supported by St. Petersburg State University (project # 31.37.103.2011) and the Russian Federal Targeted Program (contracts #P876 and #14.740.11.0357) of the Ministry of Science and Education of Russia.

 

All the content of this journal, except where otherwise identified, is licensed under a Creative Commons License.