
A Robust Combinatorial Defensive Method Based on GCN


DOI: 10.23977/acss.2023.070310

Author(s)

Xiaozhao Qian 1, Peng Wang 1, Yuelan Meng 1, Zhihong Xu 1

Affiliation(s)

1 Research Center for Complexity Sciences, Hangzhou Normal University, Hangzhou, 311121, China

Corresponding Author

Peng Wang

ABSTRACT

Graph Convolutional Networks (GCNs) often show poor robustness against adversarial attacks, which can be crafted with malicious intent. Several heuristic defense methods have been proposed to mitigate this issue, but they often remain vulnerable to stronger adaptive attacks. Recently, researchers have shown that the non-robust aggregation functions used in GCNs are responsible for much of this vulnerability, and that adversarial training in the manifold space can improve both the model's accuracy and its robustness. Building on this prior work, this paper analyzes the robustness of the winsorised mean aggregation function and the plain mean aggregation function from the perspective of model interpretability, based on breakdown-point theory and influence-function robustness. We propose an improved robust combinatorial defensive method, WLGCN, which replaces the mean aggregation function in the GCN operator with the more robust winsorised mean aggregation function, and incorporates a robust adversarial regularizer on the manifold-space hidden layer H(1) of the GCN. Finally, we evaluate the robustness of the proposed model under different adversarial perturbation budgets, using accuracy and classification margin as evaluation metrics. The experimental results demonstrate that, compared with other baselines, the proposed defensive approach effectively enhances the model's robustness against adversarial attacks while maintaining model accuracy.
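The key statistical idea behind WLGCN, the winsorised mean, can be sketched in a few lines. The following minimal Python example is illustrative only (the helper name and the scalar setting are ours; the paper applies this aggregation to neighbor feature vectors inside the GCN operator): the plain mean has a breakdown point of zero, so one adversarially injected neighbor value can move it arbitrarily, whereas the winsorised mean clips the k most extreme values on each side before averaging.

```python
def winsorised_mean(xs, k):
    """Winsorised mean with trimming parameter k: replace the k smallest
    values by the (k+1)-th smallest and the k largest by the (k+1)-th
    largest, then average. A bounded number of outliers can shift the
    result only up to the clipping bounds (positive breakdown point),
    unlike the plain mean, which a single outlier can move arbitrarily."""
    s = sorted(xs)
    n = len(s)
    assert 0 <= 2 * k < n, "k must leave at least one untrimmed value"
    clipped = [s[k]] * k + s[k:n - k] + [s[n - k - 1]] * k
    return sum(clipped) / n

# Four clean neighbor values near 1.0 plus one adversarially injected one:
attacked = [0.9, 1.0, 1.1, 1.0, 50.0]

plain = sum(attacked) / len(attacked)    # 10.8 -- dragged far from 1.0
robust = winsorised_mean(attacked, k=1)  # 1.04 -- outlier clipped to 1.1
```

In the aggregation setting, the same operation is applied coordinate-wise over the feature vectors of a node's neighbors, which is what bounds the influence a few perturbed or injected neighbors can exert on the aggregated representation.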

KEYWORDS

Graph convolutional networks, graph adversarial attack, robustness, defense

CITE THIS PAPER

Xiaozhao Qian, Peng Wang, Yuelan Meng, Zhihong Xu. A Robust Combinatorial Defensive Method Based on GCN. Advances in Computer, Signals and Systems (2023) Vol. 7: 77-91. DOI: http://dx.doi.org/10.23977/acss.2023.070310.


All published work is licensed under a Creative Commons Attribution 4.0 International License.
