
Efficient Hierarchical Federated Learning for Unlabeled Edge Devices


DOI: 10.23977/autml.2024.050103

Author(s)

Zhipeng Sun 1,2

Affiliation(s)

1 School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
2 Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China

Corresponding Author

Zhipeng Sun

ABSTRACT

Federated Learning (FL) has emerged as a critical technology for training deep learning models on-device across massive decentralized IoT data. While FL preserves data privacy, it faces challenges such as synchronization latency during model aggregation and single points of failure. In response, Hierarchical Federated Learning (HFL), which employs edge servers near edge devices to reduce synchronization latency and improve resilience against single points of failure, has been proposed. However, the assumption of labeled edge devices, i.e., labeled data residing on edge devices, often proves impractical. Recent research on semi-supervised FL enables model training with unlabeled edge devices, yet integrating these techniques into HFL raises the challenge of balancing model accuracy against training efficiency. This paper introduces FLAGS, a novel semi-supervised HFL system with adaptive global aggregation intervals. Building on the HFL architecture, FLAGS alternates training between labeled cloud data and unlabeled edge devices. Through an adaptive global aggregation interval control algorithm, FLAGS navigates the trade-off between model performance and training efficiency. Evaluation on CIFAR-10 demonstrates that FLAGS outperforms baselines within designated time budgets.
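
To make the control flow described above concrete, the following is a minimal, illustrative Python simulation of a FLAGS-style loop: edge servers aggregate unlabeled-client updates every edge round, the cloud aggregates edge models and fine-tunes on labeled data every tau edge rounds, and tau is chosen adaptively from a candidate set. The scalar "models", the placeholder update rules, the Boltzmann-style interval selector, and all names (local_unsupervised_step, cloud_supervised_step, choose_interval, CANDIDATE_INTERVALS) are assumptions made for illustration, not the paper's actual algorithm or API.

# Hypothetical sketch of a FLAGS-style alternate-training loop.
# Scalars stand in for neural-network parameters; the selector rule
# is an assumption, not the paper's interval control algorithm.
import math
import random

random.seed(0)

NUM_EDGE_SERVERS = 2
CLIENTS_PER_EDGE = 3
CANDIDATE_INTERVALS = [1, 2, 4, 8]  # candidate global aggregation intervals (in edge rounds)
TIME_BUDGET = 40                    # total edge rounds allowed

def local_unsupervised_step(model):
    # Placeholder for an unlabeled-client update (e.g., pseudo-label training).
    return model + random.gauss(0.05, 0.02)

def cloud_supervised_step(model):
    # Placeholder for fine-tuning on the labeled cloud dataset.
    return model + random.gauss(0.1, 0.02)

def fedavg(models):
    # Uniform averaging of (scalar) model parameters.
    return sum(models) / len(models)

def utility(model):
    # Placeholder reward; in practice this would be validation accuracy.
    return -abs(10.0 - model)

# Preference values for each candidate interval (assumed selector state).
q = {tau: 0.0 for tau in CANDIDATE_INTERVALS}

def choose_interval(temperature=0.5):
    # Boltzmann-style (softmax) sampling over candidate intervals.
    prefs = [math.exp(q[tau] / temperature) for tau in CANDIDATE_INTERVALS]
    z = sum(prefs)
    r, acc = random.random() * z, 0.0
    for tau, p in zip(CANDIDATE_INTERVALS, prefs):
        acc += p
        if r <= acc:
            return tau
    return CANDIDATE_INTERVALS[-1]

global_model = 0.0
elapsed = 0
while elapsed < TIME_BUDGET:
    tau = choose_interval()
    edge_models = [global_model] * NUM_EDGE_SERVERS
    for _ in range(tau):                                   # tau edge rounds per global round
        for e in range(NUM_EDGE_SERVERS):
            clients = [local_unsupervised_step(edge_models[e])
                       for _ in range(CLIENTS_PER_EDGE)]
            edge_models[e] = fedavg(clients)               # edge aggregation every round
        elapsed += 1
    global_model = fedavg(edge_models)                     # global aggregation every tau rounds
    global_model = cloud_supervised_step(global_model)     # alternate: labeled cloud training
    q[tau] = 0.9 * q[tau] + 0.1 * utility(global_model)    # update interval preference

print(f"final model parameter: {global_model:.3f}")

The sketch captures only the structure the abstract describes: unlabeled training at the edge with frequent edge aggregation, supervised training on labeled cloud data after each global aggregation, and an aggregation interval that adapts to observed reward rather than staying fixed.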

KEYWORDS

Edge Computing, Hierarchical Federated Learning, Semi-supervised Learning

CITE THIS PAPER

Zhipeng Sun. Efficient Hierarchical Federated Learning for Unlabeled Edge Devices. Automation and Machine Learning (2024) Vol. 5: 17-22. DOI: http://dx.doi.org/10.23977/autml.2024.050103.


