Towards a framework for recognising Sign language alphabets captured under arbitrary illumination
- Subject Areas
- Human-Computer Interaction, Computer Vision
- Keywords
- sign language recognition
- Copyright
- © 2018 Zhao
- Licence
- This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
- Cite this article
- Zhao. 2018. Towards a framework for recognising Sign language alphabets captured under arbitrary illumination. PeerJ Preprints 6:e26725v1 https://doi.org/10.7287/peerj.preprints.26725v1
Abstract
Our work addresses the problem of automatically recognising the letters of a Sign Language alphabet from a given still image obtained under arbitrary illumination. To solve this problem, we designed a computational framework founded on the notion that shape features are robust to illumination changes. The statistical classifier at the core of the framework uses a set of weighted, self-learned features, i.e., binary relationships between pairs of pixels. There are two possible pairings: an edge pixel with another edge pixel, and an edge pixel with a non-edge pixel. This two-pairing arrangement yields a consistent 2D image representation for all letters of the Sign Language alphabets, even when they are captured under varying illumination settings. Our framework, which is modular and extensible, paves the way for a system that performs recognition of Sign Language alphabets robust to illumination changes. We also provide arguments justifying our framework design in terms of its fitness for real-world application.
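To make the feature design concrete, the following is a minimal sketch of the two pixel-pair feature types described above (edge pixel with edge pixel, edge pixel with non-edge pixel). The edge detector, thresholds, and function names here are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the pixel-pair features described in the abstract.
# The gradient-threshold edge detector and all names are assumptions.

def edge_map(image, threshold=1):
    """Crude edge detector: mark a pixel as an edge if its horizontal or
    vertical intensity difference exceeds `threshold`."""
    h, w = len(image), len(image[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = abs(image[y][x] - image[y][x - 1]) if x > 0 else 0
            gy = abs(image[y][x] - image[y - 1][x]) if y > 0 else 0
            edges[y][x] = max(gx, gy) > threshold
    return edges

def pair_features(edges, pairs):
    """Count the two pair types the abstract describes:
    edge-edge pairs and edge/non-edge pairs."""
    counts = {"edge-edge": 0, "edge-nonedge": 0}
    for (y1, x1), (y2, x2) in pairs:
        a, b = edges[y1][x1], edges[y2][x2]
        if a and b:
            counts["edge-edge"] += 1
        elif a != b:  # exactly one of the two pixels is an edge
            counts["edge-nonedge"] += 1
    return counts
```

Because such pair relations depend on edge structure (shape) rather than absolute intensity, they are comparatively stable under illumination changes, which is the premise of the framework.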
Author Comment
Submitted for review (MVIPPA2018). This is a preprint submission to PeerJ Preprints.