SciELO - Scientific Electronic Library Online

 

Journal of Applied Research and Technology

On-line version ISSN 2448-6736 · Print version ISSN 1665-6423

Abstract

FUENTES-PACHECO, J.; RUIZ-ASCENCIO, J. and RENDON-MANCHA, J. M. Binocular visual tracking and grasping of a moving object with a 3D trajectory predictor. J. appl. res. technol [online]. 2009, vol.7, n.3, pp.259-273. ISSN 2448-6736.

This paper presents a binocular eye-to-hand visual servoing system that is able to track and grasp a moving object in real time. Linear predictors are employed to estimate the object trajectory in three dimensions and are capable of predicting future positions even if the object is temporarily occluded. For its development we have used a CRS T475 manipulator robot with six degrees of freedom and two fixed cameras in a stereo pair configuration. The system has a client-server architecture and is composed of two main parts: the vision system and the control system. The vision system uses color detection to extract the object from the background and a tracking technique based on search windows and object moments. The control system uses the RobWork library to generate the movement instructions and to send them to a C550 controller by means of the serial port. Experimental results are presented to verify the validity and the efficacy of the proposed visual servoing system.
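The abstract states that linear predictors estimate the object's 3D trajectory and can keep predicting positions while the object is temporarily occluded. The sketch below illustrates one common way such a predictor can work: fit an independent least-squares line to each coordinate over a sliding window of recent observations and extrapolate it forward. The window size and the per-axis line fit are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def predict_position(times, positions, t_future, window=8):
    """Illustrative linear 3D trajectory predictor (assumed formulation).

    Fits a line p_k(t) = a_k * t + b_k to each coordinate k over the
    most recent `window` observations and extrapolates to t_future.
    During a temporary occlusion no new observations arrive, but the
    fitted lines still yield a predicted position.
    """
    t = np.asarray(times[-window:], dtype=float)
    p = np.asarray(positions[-window:], dtype=float)  # shape (n, 3)
    pred = np.empty(3)
    for k in range(3):
        a, b = np.polyfit(t, p[:, k], deg=1)  # least-squares line fit
        pred[k] = a * t_future + b
    return pred

# Object moving at constant velocity (1, 2, 0) units/s from the origin:
ts = [0.0, 0.1, 0.2, 0.3, 0.4]
ps = [[t, 2.0 * t, 0.0] for t in ts]
print(predict_position(ts, ps, 0.5))  # ≈ [0.5, 1.0, 0.0]
```

For a constant-velocity target this extrapolation is exact; for curved trajectories a short window keeps the local linear approximation reasonable at the cost of more sensitivity to measurement noise.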

Keywords: linear prediction; visual servoing; tracking; grasping; stereo vision; camera calibration.

        · abstract in Spanish     · full text in English     · English (pdf)

 

Creative Commons License: All content of this journal, except where otherwise noted, is under a Creative Commons License.