Exemplar Based Pose Correction Method For Microsoft Kinect Based System
International Journal of Recent Engineering Science (IJRES)
© 2015 by IJRES Journal
Volume 2, Issue 3
Year of Publication: 2015
Authors: Priya Ashtankar, Vaidehi Baporikar
DOI: 10.14445/23497157/IJRES-V2I3P104
How to Cite?
Priya Ashtankar, Vaidehi Baporikar, "Exemplar Based Pose Correction Method For Microsoft Kinect Based System," International Journal of Recent Engineering Science, vol. 2, no. 3, pp. 21-22, 2015. Crossref, https://doi.org/10.14445/23497157/IJRES-V2I3P104
Abstract
With the introduction of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The launch of the Xbox Kinect created a highly successful computer vision product and made a significant impact on the gaming industry. We propose an exemplar-based method that learns to correct initially estimated poses, even in the presence of an inhomogeneous systematic bias, by leveraging exemplar information within a specific human action domain to increase the accuracy of pose correction. Our approach primarily addresses facial landmark correction and color controls, and we also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
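The core idea of exemplar-based correction can be sketched as a nearest-neighbour lookup: given an estimated skeleton pose, find the most similar exemplar poses in a reference database and apply their known correction offsets. The function and data below are purely illustrative assumptions, not the authors' implementation; poses are represented as flattened lists of joint coordinates.

```python
import math

def correct_pose(estimated, exemplar_poses, exemplar_offsets, k=3):
    """Shift an estimated pose by the mean correction offset of its
    k nearest exemplars (nearest-neighbour lookup in joint space)."""
    # Rank exemplars by Euclidean distance to the estimated pose.
    order = sorted(range(len(exemplar_poses)),
                   key=lambda i: math.dist(estimated, exemplar_poses[i]))
    nearest = order[:k]
    # Average the stored offsets of the k nearest exemplars and add them.
    return [coord + sum(exemplar_offsets[i][j] for i in nearest) / k
            for j, coord in enumerate(estimated)]

# Toy exemplar database: each pose is paired with the offset that maps
# the (biased) estimate to its ground-truth position.
exemplar_poses = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [10.0, 10.0]]
exemplar_offsets = [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [5.0, 5.0]]

corrected = correct_pose([0.5, 0.5], exemplar_poses, exemplar_offsets)
```

In practice the exemplar database would be built per action domain, which is what lets the method compensate for a bias that varies systematically across the pose space.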
Keywords
Kinect, background removal, pose correction, pose tag, skeleton
Reference
[1] W. Shen and K. Deng, "Exemplar-based human action pose correction and tagging," Microsoft Research Asia & UCLA.
[2] Microsoft Corporation, Kinect for Xbox 360, Redmond, WA, USA.
[3] L. Liu and L. Shao, "Learning discriminative representations from RGB-D video data," in Proc. IJCAI, 2013.
[4] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, "Real-time human pose recognition in parts from a single depth image," in Proc. CVPR, 2011, pp. 116–124.
[5] R. Girshick, J. Shotton, P. Kohli, A. Criminisi, and A. Fitzgibbon, "Efficient regression of general-activity human poses from depth images," in Proc. ICCV, 2011, pp. 415–422.