Chalearn gesture challenge code by Jun Wan 2.0
http://mloss.org/software/view/499/
Author: Jun Wan — Tue, 29 Sep 2015

This code is provided by Jun Wan. It was used in the ChaLearn one-shot learning gesture challenge (round 2). The code includes: bag of features, 3D MoSIFT, 3D EMoSIFT, and 3D SMoSIFT features.

If you use this code, please cite the following three papers:

1) Jun Wan, Qiuqi Ruan, Shuang Deng, and Wei Li, "One-shot learning gesture recognition from RGB-D data using bag of features", Journal of Machine Learning Research, Vol. 14, pp. 2549-2582, 2013. [link: http://jmlr.org/papers/volume14/wan13a/wan13a.pdf]
   This paper introduces the 3D EMoSIFT features and also uses SOMP in place of VQ for coding descriptors.

2) Jun Wan, Qiuqi Ruan, Wei Li, Gaoyun An, and Ruizhen Zhao, "3D SMoSIFT: 3D Sparse Motion Scale Invariant Feature Transform for Activity Recognition from RGB-D Videos", Journal of Electronic Imaging, 23(2), 023017, 2014.
   This paper introduces the 3D SMoSIFT features. Their main merit is near real-time extraction, and 3D SMoSIFT achieves higher accuracy than the 3D MoSIFT and 3D EMoSIFT features.

3) Jun Wan, Guodong Guo, and Stan Z. Li, "Explore Efficient Local Features from RGB-D Data for One-shot Learning Gesture Recognition", under review (submitted to IEEE TPAMI, round 2), 2015.
   This paper proposes mixed features around sparse keypoints (MFSK). The MFSK feature combines several popular descriptors, such as 3D SMoSIFT, HOG, HOF, and MBH, and outperforms all published approaches on the challenging subsets of the CGD data, such as the translated, scaled, and occluded subsets.

Tags: bag of features, gesture recognition, one-shot learning, 3D EMoSIFT, 3D MoSIFT, 3D SMoSIFT
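The JMLR paper above replaces hard vector quantization (VQ) with simultaneous orthogonal matching pursuit (SOMP) when coding local descriptors against a learned codebook. As a rough illustration of that idea only (not the authors' released code), below is a minimal NumPy sketch of SOMP coding plus a hypothetical pooling step into a fixed-length video-level vector; the function names, the pooling scheme, and the default sparsity are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def somp(D, X, sparsity):
    """Simultaneous Orthogonal Matching Pursuit (illustrative sketch).

    D : (d, K) dictionary with unit-norm columns (e.g. learned by k-means).
    X : (d, N) local descriptors extracted from one gesture sample.
    sparsity : number of atoms shared by all descriptors.

    Returns the selected atom indices S and coefficients C (|S|, N)
    such that D[:, S] @ C approximates X.
    """
    residual = X.copy()
    support = []
    C = np.zeros((0, X.shape[1]))
    for _ in range(sparsity):
        # Pick the atom most correlated with the residual, summed over
        # all descriptors jointly (the "simultaneous" part of SOMP).
        scores = np.abs(D.T @ residual).sum(axis=1)
        scores[support] = -np.inf            # do not reselect atoms
        support.append(int(np.argmax(scores)))
        # Least-squares fit on the selected atoms, then update the residual.
        Ds = D[:, support]
        C, *_ = np.linalg.lstsq(Ds, X, rcond=None)
        residual = X - Ds @ C
    return support, C

def somp_bof_vector(D, X, sparsity=10):
    """Hypothetical pooling of SOMP coefficients into a K-dim histogram,
    playing the role of the hard-VQ histogram in a standard bag of features."""
    K = D.shape[1]
    support, C = somp(D, X, sparsity)
    h = np.zeros(K)
    h[support] = np.abs(C).sum(axis=1)       # coefficient energy per atom
    return h / (np.linalg.norm(h) + 1e-12)   # L2-normalised representation
```

The key design point, as described in the paper summary above, is that each descriptor is reconstructed from a small shared set of codebook atoms rather than being assigned to a single nearest atom, which gives a softer, sparser encoding than VQ.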