
Research on Optimization of Deep Learning in Handwritten Digit Recognition


DOI: 10.23977/cpcs.2025.090108

Author(s)

Xueju Hao 1

Affiliation(s)

1 School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, China

Corresponding Author

Xueju Hao

ABSTRACT

To address the trade-off between accuracy and generalization in handwritten digit recognition, this study uses the MNIST dataset to systematically compare the recognition performance of multilayer perceptrons (MLPs) and lightweight convolutional neural networks (CNNs). The model structures are optimized by adjusting the number of network layers, neurons, and convolution kernels; Dropout regularization is introduced to suppress overfitting; and the impact of hyperparameters such as learning rate and batch size on model performance is analyzed. Experimental results show that the lightweight CNN, owing to its advantage in spatial feature extraction, achieves a baseline recognition accuracy of 97.2%, clearly outperforming the MLP's 95.8%. After structural optimization and Dropout regularization, the test accuracy of the lightweight CNN rises to 98.6%, and overfitting is effectively alleviated. Among the hyperparameters, the learning rate has the greatest effect on convergence speed; at the optimal value of 0.001, the model quickly reaches stable accuracy. This work provides an efficient lightweight model for handwritten digit recognition and offers a reference for image recognition applications in low-resource scenarios.
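
The abstract describes the lightweight CNN pipeline only at a high level. The following is a minimal sketch of such a model in TensorFlow/Keras, not the author's actual implementation: the two convolution blocks, filter counts, 0.5 Dropout rate, 128-sample batch size, and 5 training epochs are illustrative assumptions, while the use of Dropout regularization and the reported optimal learning rate of 0.001 come from the abstract.

import tensorflow as tf
from tensorflow.keras import layers, models

# Load and normalize MNIST (28x28 grayscale images, 10 digit classes).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# Lightweight CNN: two small convolution blocks followed by a dense head.
# Layer counts, kernel sizes, and the 0.5 Dropout rate are assumptions for
# illustration; only Dropout itself and the 0.001 learning rate are reported
# in the abstract.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                      # suppress overfitting
    layers.Dense(10, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # reported optimal learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Batch size and epoch count are placeholders for experimentation.
model.fit(x_train, y_train, epochs=5, batch_size=128,
          validation_data=(x_test, y_test))

Swapping the convolutional blocks for fully connected layers of comparable size gives the MLP baseline described in the abstract; varying the learning rate and batch size in the compile and fit calls reproduces the kind of hyperparameter study the paper reports.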

KEYWORDS

Handwritten Digit Recognition; MNIST Dataset; Multilayer Perceptron; Lightweight Convolutional Neural Network; Dropout; Hyperparameter Optimization

CITE THIS PAPER

Xueju Hao, Research on Optimization of Deep Learning in Handwritten Digit Recognition. Computing, Performance and Communication Systems (2025) Vol. 9: 59-64. DOI: http://dx.doi.org/10.23977/cpcs.2025.090108.




