
The Application of Multimodal Learning to Enhance Language Proficiency in Oral English Teaching


DOI: 10.23977/aduhe.2023.051806

Author(s)

Yangxue Zeng 1

Affiliation(s)

1 Hubei Preschool Teachers College, Wuhan, Hubei, 430000, China

Corresponding Author

Yangxue Zeng

ABSTRACT

Enhancing language proficiency through multimodal learning in oral English teaching is a promising approach: it caters to diverse learning styles, strengthens language skills, and fosters cultural understanding. Although it poses challenges, such as resource limitations and the need for teacher training, its benefits make it a valuable addition to language education. As technology continues to advance, educators should embrace the opportunities that multimodal learning presents to equip their students with effective oral communication skills in a globalized world.

KEYWORDS

Multimodal learning, oral English teaching, language proficiency

CITE THIS PAPER

Yangxue Zeng, The Application of Multimodal Learning to Enhance Language Proficiency in Oral English Teaching. Adult and Higher Education (2023) Vol. 5: 34-38. DOI: http://dx.doi.org/10.23977/aduhe.2023.051806.


All published work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2016 - 2031 Clausius Scientific Press Inc. All Rights Reserved.