International Journal of Advanced and Applied Sciences

EISSN: 2313-3724, Print ISSN: 2313-626X

Frequency: 12 issues per year


 Volume 5, Issue 6 (June 2018), Pages: 11-18

----------------------------------------------

 Original Research Paper

 Title: Recognition of static gestures using correlation and cross-correlation

 Author(s): Shazia Saqib *, Syed Asad Raza Kazmi

 Affiliation(s):

 Department of Computer Science, GC University, Lahore, Pakistan

 https://doi.org/10.21833/ijaas.2018.06.002


 Abstract:

Sign language recognition has been an active area of research for around two decades, and numerous sign languages have been studied extensively in order to design reliable sign language recognition systems. Pakistan Sign Language (PSL) is used as a case study here. A comprehensive database of static images depicting the signs for different Urdu alphabets serves as a reference, and input images are compared against it to perform PSL alphabet recognition. The normalized correlation technique is used for image registration between the input image and images from the database to find the closest match. The purpose of this research is to identify static gestures of any sign language, which can ultimately lead to the identification of words and sentences. The process starts with image acquisition and image preprocessing, followed by correlation and labeling of the identified symbol. Normalized correlation is used to find the nearest match. This paper includes experiments on 37 static hand gestures corresponding to PSL alphabets. The training dataset consists of 10 samples of each PSL symbol, captured under different lighting conditions and with different sizes and shapes of hand by 5 different signers. The gesture recognition system can identify one-hand static gestures in any complex background with a "minimum-possible constraints" approach. A comparison is also drawn between normalized correlation and normalized cross-correlation. Compared to other techniques, this technique can work with a small dataset. The technique is based on unsupervised learning.
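The matching step described in the abstract — scoring an input image against each reference template with normalized correlation and taking the closest match — can be sketched as below. This is a minimal illustration, not the authors' implementation: the function and label names are hypothetical, the images are assumed to be preprocessed to equal-sized grayscale arrays, and a real system would add the acquisition and preprocessing stages the paper describes.

```python
import numpy as np

def normalized_correlation(a, b):
    # Zero-mean normalized correlation coefficient between two
    # equally sized grayscale images, flattened to vectors.
    # Result is in [-1, 1]; 1 means a perfect (linear) match.
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recognize(input_img, templates):
    # templates: dict mapping a gesture label to its reference image.
    # Returns the label whose template correlates best with the input.
    scores = {label: normalized_correlation(input_img, tmpl)
              for label, tmpl in templates.items()}
    return max(scores, key=scores.get)
```

Because the score subtracts the mean and divides by the norms, it is invariant to uniform brightness and contrast changes, which is one reason template matching by normalized correlation can tolerate the varied lighting conditions mentioned above.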

 © 2018 The Authors. Published by IASE.

 This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

 Keywords: Sign language, Correlation, Cross correlation, Invariance, Norm correlation, Normalized cross correlation, RGB

 Article History: Received 22 December 2017, Received in revised form 16 March 2018, Accepted 24 March 2018


 Citation:

 Saqib S and  Kazmi SAR (2018). Recognition of static gestures using correlation and cross-correlation. International Journal of Advanced and Applied Sciences, 5(6): 11-18

 Permanent Link:

 http://www.science-gate.com/IJAAS/2018/V5I6/Saqib.html
