Entity Relation Extraction for Table Filling Based on Dynamic Convolution

DOI: 10.23977/jnca.2025.100106

Author(s)

Jin Chen 1

Affiliation(s)

1 School of Information, North China University of Technology, Beijing 100144, China

Corresponding Author

Jin Chen

ABSTRACT

Joint entity-relation extraction must capture long-distance relationships under complex contextual dependencies, and existing approaches often do so at high computational cost. To address these problems, a model based on dynamic convolution is proposed. The joint extraction task is reformulated as table filling: the table is treated as an image, with each cell corresponding to a pixel, which yields a structured framework for analysis. Dynamic convolution is then applied to strengthen the modeling of local dependencies between cells and to improve the representation of relationships between entities. In addition, the model incorporates an efficient feature extraction strategy that reduces computational resource usage without sacrificing performance. Experiments on the CoNLL04, ACE05, and ADE benchmark datasets show that the model improves the accuracy of entity and relation extraction while reducing resource consumption.
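The paper does not publish implementation details here, but the core idea in the abstract can be sketched. The sketch below assumes the common "attention over kernel bases" formulation of dynamic convolution: the table of cell features is treated as an image, a pooled summary of the table produces mixture weights over K base kernels, and the resulting input-conditioned kernel is applied as a depthwise 3x3 convolution. All function names, shapes, and parameters (`bases`, `w_gen`, the 3x3 kernel size) are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_conv2d(table, bases, w_gen):
    """Apply a dynamic 3x3 convolution to an n x n x d table of cell features.

    table:  (n, n, d) cell representations (one 'pixel' per token pair)
    bases:  (K, 3, 3) base kernels, shared across channels (depthwise)
    w_gen:  (K, d) projection mapping pooled table features to kernel logits
    The effective kernel is an input-conditioned mixture of the K bases.
    """
    n, _, d = table.shape
    # Global average pooling summarizes the whole table into one vector.
    ctx = table.mean(axis=(0, 1))              # (d,)
    # Attention over kernel bases: the mixture depends on the input table.
    pi = softmax(w_gen @ ctx)                  # (K,)
    kernel = np.tensordot(pi, bases, axes=1)   # (3, 3) mixed kernel
    # Depthwise 3x3 convolution with zero padding (stride 1).
    padded = np.pad(table, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(table)
    for i in range(n):
        for j in range(n):
            patch = padded[i:i + 3, j:j + 3, :]   # (3, 3, d)
            out[i, j] = np.einsum('xy,xyd->d', kernel, patch)
    return out
```

Because the mixture weights are recomputed per input, each table is convolved with a kernel adapted to its own content, which is what lets a lightweight convolution model input-dependent local structure at low cost.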

KEYWORDS

Joint Relation Extraction, Named Entity Recognition, Dynamic Convolution, Natural Language Processing

CITE THIS PAPER

Jin Chen, Entity Relation Extraction for Table Filling Based on Dynamic Convolution. Journal of Network Computing and Applications (2025) Vol. 10: 39-48. DOI: http://dx.doi.org/10.23977/jnca.2025.100106.


All published work is licensed under a Creative Commons Attribution 4.0 International License.
