
Design and Implementation of a Continuous Sign Language Recognition System Based on Deep Learning

DOI: 10.23977/autml.2026.070106

Author(s)

Chuwei Wang 1, Wenhui Zeng 1, Zicheng Wang 1, Bing Wang 1

Affiliation(s)

1 School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, China

Corresponding Author

Bing Wang

ABSTRACT

Continuous sign language recognition (CSLR) is crucial for bridging communication gaps for hearing-impaired people. Traditional methods that rely on manually engineered features suffer from poor adaptability and low accuracy. To address this, this paper designs a high-performance CSLR system based on a CNN-BiLSTM-Attention architecture, integrated with comprehensive regularization strategies to suppress overfitting. Using the RWTH-PHOENIX-Weather 2014 dataset, the system performs standard data preprocessing and constructs a hybrid model: the CNN extracts spatial features, the BiLSTM captures bidirectional temporal dependencies, and the attention mechanism emphasizes key frames. Regularization measures include L2 regularization (λ = 0.001), dropout (rate = 0.3), model pruning, and early stopping. Trained with the Adam optimizer and learning-rate decay, the model achieves 93.2% test accuracy, 15.8 and 12.0 percentage points higher than single CNN and BiLSTM baselines respectively, and 4.1 percentage points higher than a CNN-BiLSTM model without regularization or attention. The system balances accuracy, robustness, and inference efficiency, providing a feasible solution for practical CSLR applications.
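The attention step described in the abstract can be sketched as additive attention over per-frame features produced by the CNN-BiLSTM front end: each frame's feature vector receives a scalar score, scores are normalized with a softmax, and the weighted sum yields a context vector that emphasizes key frames. The shapes and the parameters `W` and `v` below are illustrative assumptions, not the paper's exact parameterization; the final line shows how the stated L2 penalty (λ = 0.001) would be computed over the attention weights.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(H, W, v):
    """Additive attention over T frame features H of shape (T, d):
    score_t = v . tanh(W @ h_t), weights = softmax(scores),
    context = sum_t weights_t * h_t."""
    scores = np.tanh(H @ W.T) @ v      # (T,) one scalar score per frame
    weights = softmax(scores)          # (T,) sums to 1 across frames
    context = weights @ H              # (d,) attention-weighted feature
    return context, weights

rng = np.random.default_rng(0)
T, d, a = 6, 8, 4                      # frames, feature dim, attention dim
H = rng.normal(size=(T, d))            # stand-in for BiLSTM outputs
W = rng.normal(size=(a, d))
v = rng.normal(size=a)

context, weights = temporal_attention(H, W, v)

# L2 penalty on the attention parameters, with the lambda from the abstract.
l2_penalty = 0.001 * (np.sum(W**2) + np.sum(v**2))
```

The context vector would then feed the classification head; frames with higher weights contribute more, which is the "key frame enhancement" effect the abstract attributes to the attention mechanism.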

KEYWORDS

Continuous Sign Language Recognition; Deep Learning; CNN; BiLSTM; Attention Mechanism; RWTH-PHOENIX-Weather 2014 Dataset; L2 Regularization; Dropout; Model Pruning

CITE THIS PAPER

Chuwei Wang, Wenhui Zeng, Zicheng Wang, Bing Wang. Design and Implementation of a Continuous Sign Language Recognition System Based on Deep Learning. Automation and Machine Learning (2026). Vol. 7, No. 1, 48-54. DOI: http://dx.doi.org/10.23977/autml.2026.070106.


All published work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2016 - 2031 Clausius Scientific Press Inc. All Rights Reserved.