[1] Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al. (2023) ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. Learning and Individual Differences, 103, Article ID: 102274. https://doi.org/10.1016/j.lindif.2023.102274
[2] Eloundou, T., Manning, S., Mishkin, P. and Rock, D. (2023) GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. arXiv: 2303.10130.
[3] Yuzbashyan, N., Banar, N., Markov, I. and Daelemans, W. (2023) An Exploration of Zero-Shot Natural Language Inference-Based Hate Speech Detection. In: Chakravarthi, B.R., Bharathi, B., Griffith, J., Bali, K. and Buitelaar, P., Eds., Proceedings of the Third Workshop on Language Technology for Equality, Diversity and Inclusion, INCOMA Ltd., 1-9. https://aclanthology.org/2023.ltedi-1.1
[4] Goldzycher, J., Preisig, M., Amrhein, C. and Schneider, G. (2023) Evaluating the Effectiveness of Natural Language Inference for Hate Speech Detection in Languages with Limited Labeled Data. The 7th Workshop on Online Abuse and Harms (WOAH), Toronto, 13 July 2023, 187-201. https://doi.org/10.18653/v1/2023.woah-1.19
[5] Naveed, H., Khan, A.U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Akhtar, N., Barnes, N. and Mian, A. (2024) A Comprehensive Overview of Large Language Models. arXiv: 2307.06435.
[6] Abzianidze, L., Zwarts, J. and Winter, Y. (2023) SpaceNLI: Evaluating the Consistency of Predicting Inferences in Space. Proceedings of the 4th Natural Logic Meets Machine Learning Workshop, Nancy, June 2023, 12-24. https://aclanthology.org/2023.naloma-1.2
[7] Han, X., Zeng, G., Zhao, W., Liu, Z., Zhang, Z., Zhou, J., et al. (2022) BMInf: An Efficient Toolkit for Big Model Inference and Tuning. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Dublin, 22-27 May 2022, 224-230. https://doi.org/10.18653/v1/2022.acl-demo.22
[8] Liu, S., Wen, T., Pattamatta, A.S.L.S. and Srolovitz, D.J. (2024) A Prompt-Engineered Large Language Model, Deep Learning Workflow for Materials Classification. Materials Today. https://doi.org/10.1016/j.mattod.2024.08.028
[9] Winata, G., Xie, L., Radhakrishnan, K., Gao, Y. and Preotiuc-Pietro, D. (2023) Efficient Zero-Shot Cross-Lingual Inference via Retrieval. Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 2: Short Papers), Nusa Dua, November 2023, 93-104. https://doi.org/10.18653/v1/2023.ijcnlp-short.11
[10] Conceição, S.I.R., Sousa, D.F., Silvestre, P. and Couto, F.M. (2023) LasigeBioTM at SemEval-2023 Task 7: Improving Natural Language Inference Baseline Systems with Domain Ontologies. Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Toronto, 13-14 July 2023, 10-15. https://doi.org/10.18653/v1/2023.semeval-1.2
[11] Jin, R.R., Du, J.C., Huang, W.W., Liu, W., Luan, J., Wang, B. and Xiong, D.Y. (2024) A Comprehensive Evaluation of Quantization Strategies for Large Language Models. arXiv: 2402.16775.
[12] Chavan, A., Magazine, R., Kushwaha, S., Debbah, M. and Gupta, D. (2024) Faster and Lighter LLMs: A Survey on Current Challenges and Way Forward. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, Jeju, 3-9 August 2024, 7980-7988. https://doi.org/10.24963/ijcai.2024/883
[13] Li, L., Jiang, B., Wang, P., Ren, K., Yan, H. and Qiu, X. (2023) Watermarking LLMs with Weight Quantization. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 3368-3378. https://doi.org/10.18653/v1/2023.findings-emnlp.220
[14] Gong, Z., Liu, J., Wang, Q., Yang, Y., Wang, J., Wu, W., et al. (2023) PreQuant: A Task-Agnostic Quantization Approach for Pre-Trained Language Models. Findings of the Association for Computational Linguistics: ACL 2023, Toronto, 9-14 July 2023, 8065-8079. https://doi.org/10.18653/v1/2023.findings-acl.511
[15] Smith, A., Hachen, S., Schleifer, R., Bhugra, D., Buadze, A. and Liebrenz, M. (2023) Old Dog, New Tricks? Exploring the Potential Functionalities of ChatGPT in Supporting Educational Methods in Social Psychiatry. International Journal of Social Psychiatry, 69, 1882-1889. https://doi.org/10.1177/00207640231178451
[16] Kolagar, Z. and Zarcone, A. (2024) HumSum: A Personalized Lecture Summarization Tool for Humanities Students Using LLMs. Proceedings of the 1st Workshop on Personalization of Generative AI Systems (PERSONALIZE 2024), St. Julian’s, March 2024, 36-70. https://aclanthology.org/2024.personalize-1.4
[17] Jawahar, G., Mukherjee, S., Liu, X., Kim, Y.J., Abdul-Mageed, M., Lakshmanan, L.V.S., et al. (2023) AutoMoE: Heterogeneous Mixture-Of-Experts with Adaptive Computation for Efficient Neural Machine Translation. Findings of the Association for Computational Linguistics: ACL 2023, Toronto, 9-14 July 2023, 9116-9132. https://doi.org/10.18653/v1/2023.findings-acl.580
[18] Yen, A. and Hsu, W. (2023) Three Questions Concerning the Use of Large Language Models to Facilitate Mathematics Learning. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 3055-3069. https://doi.org/10.18653/v1/2023.findings-emnlp.201
[19] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I. (2017) Attention Is All You Need. arXiv: 1706.03762.
[20] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., et al. (2020) Language Models Are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877-1901. https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
[21] Wang, P., Zhang, N., Tian, B., Xi, Z., Yao, Y., Xu, Z., et al. (2024) EasyEdit: An Easy-To-Use Knowledge Editing Framework for Large Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, 11-16 August 2024, 82-93. https://doi.org/10.18653/v1/2024.acl-demos.9
[22] Zhong, Z., Wu, Z., Manning, C., Potts, C. and Chen, D. (2023) MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6-10 December 2023, 15686-15702. https://doi.org/10.18653/v1/2023.emnlp-main.971
[23] Chan, C., Jiayang, C., Wang, W.Q., Jiang, Y.X., Fang, T., Liu, X. and Song, Y.Q. (2024) Exploring the Potential of ChatGPT on Sentence Level Relations: A Focus on Temporal, Causal, and Discourse Relations. Findings of the Association for Computational Linguistics: EACL 2024, St. Julian’s, 17-22 March 2024, 684-721. https://aclanthology.org/2024.findings-eacl.47
[24] Das, S.S.S., Zhang, H., Shi, P., Yin, W. and Zhang, R. (2023) Unified Low-Resource Sequence Labeling by Sample-Aware Dynamic Sparse Finetuning. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6-10 December 2023, 6998-7010. https://doi.org/10.18653/v1/2023.emnlp-main.433
[25] Bowen, C., Sætre, R. and Miyao, Y. (2024) A Comprehensive Evaluation of Inductive Reasoning Capabilities and Problem Solving in Large Language Models. Findings of the Association for Computational Linguistics: EACL 2024, St. Julian’s, 17-22 March 2024, 323-339. https://aclanthology.org/2024.findings-eacl.22
[26] Li, J., Su, Q., Yang, Y., Jiang, Y., Wang, C. and Xu, H. (2023) Adaptive Gating in Mixture-Of-Experts Based Language Models. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6-10 December 2023, 3577-3587. https://doi.org/10.18653/v1/2023.emnlp-main.217
[27] Moon, H., Lee, J., Eo, S., Park, C., Seo, J. and Lim, H. (2024) Generative Interpretation: Toward Human-Like Evaluation for Educational Question-Answer Pair Generation. Findings of the Association for Computational Linguistics: EACL 2024, St. Julian’s, 17-22 March 2024, 2185-2196. https://aclanthology.org/2024.findings-eacl.145
[28] Campos, D., Marques, A., Kurtz, M. and Zhai, C.X. (2023) oBERTa: Improving Sparse Transfer Learning via Improved Initialization, Distillation, and Pruning Regimes. Proceedings of the Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP), Toronto, 13 July 2023, 39-58. https://doi.org/10.18653/v1/2023.sustainlp-1.3
[29] Men, X., Xu, M.Y., Zhang, Q.Y., Wang, B.N., Lin, H.Y., Lu, Y.J., Han, X.P. and Chen, W.P. (2024) ShortGPT: Layers in Large Language Models Are More Redundant than You Expect. arXiv: 2403.03853.
[30] Azeemi, A., Qazi, I. and Raza, A. (2023) Data Pruning for Efficient Model Pruning in Neural Machine Translation. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 1127-1149. https://doi.org/10.18653/v1/2023.findings-emnlp.18
[31] Lewis, A. and White, M. (2023) Mitigating Harms of LLMs via Knowledge Distillation for a Virtual Museum Tour Guide. Proceedings of the 1st Workshop on Taming Large Language Models: Controllability in the Era of Interactive Assistants, Prague, 12 September 2023, 31-45. https://aclanthology.org/2023.tllm-1.4
[32] West, P., Le Bras, R., Sorensen, T., Lin, B., Jiang, L., Lu, X., et al. (2023) NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 1127-1148. https://doi.org/10.18653/v1/2023.findings-emnlp.80
[33] Hubert, R., Sokolov, A. and Riezler, S. (2023) Improving End-to-End Speech Translation by Imitation-Based Knowledge Distillation with Synthetic Transcripts. Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023), Toronto, July 2023, 89-101. https://doi.org/10.18653/v1/2023.iwslt-1.4
[34] Faysse, M., Viaud, G., Hudelot, C. and Colombo, P. (2023) Revisiting Instruction Fine-Tuned Model Evaluation to Guide Industrial Applications. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6-10 December 2023, 9033-9048. https://doi.org/10.18653/v1/2023.emnlp-main.559
[35] Zhou, W., Tahmasebi, N. and Dubossarsky, H. (2023) The Finer They Get: Combining Fine-Tuned Models for Better Semantic Change Detection. Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), Tórshavn, 22-24 May 2023, 518-528. https://aclanthology.org/2023.nodalida-1.52
[36] Qi, Z., Tan, X., Shi, S., Qu, C., Xu, Y. and Qi, Y. (2023) PILLOW: Enhancing Efficient Instruction Fine-Tuning via Prompt Matching. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, Singapore, 6-10 December 2023, 471-482. https://doi.org/10.18653/v1/2023.emnlp-industry.45
[37] Anuranjana, K. (2023) DiscoFlan: Instruction Fine-Tuning and Refined Text Generation for Discourse Relation Label Classification. Proceedings of the 3rd Shared Task on Discourse Relation Parsing and Treebanking (DISRPT 2023), Toronto, 14 July 2023, 22-28. https://doi.org/10.18653/v1/2023.disrpt-1.2
[38] Arriola, J.M., Iruskieta, M., Arrieta, E. and Alkorta, J. (2023) Towards Automatic Essay Scoring of Basque Language Texts from a Rule-Based Approach Based on Curriculum-Aware Systems. Proceedings of the NoDaLiDa 2023 Workshop on Constraint Grammar—Methods, Tools and Applications, Tórshavn, 22 May 2023, 20-28. https://aclanthology.org/2023.nodalida-cgmta.4
[39] Ranaldi, L., Pucci, G. and Zanzotto, F.M. (2023) Modeling Easiness for Training Transformers with Curriculum Learning. Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, Varna, 4-6 September 2023, 937-948. https://doi.org/10.26615/978-954-452-092-2_101
[40] Vakil, N. and Amiri, H. (2023) Complexity-Guided Curriculum Learning for Text Graphs. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 2610-2626. https://doi.org/10.18653/v1/2023.findings-emnlp.172
[41] Zhou, J., Zeng, Z., Gong, H. and Bhat, S. (2023) Non-Compositional Expression Generation Based on Curriculum Learning and Continual Learning. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 4320-4335. https://doi.org/10.18653/v1/2023.findings-emnlp.286
[42] Zhang, X., Ju, T.J., Liang, H.J., Fu, Y. and Zhang, Q. (2024) LLMs Instruct LLMs: An Extraction and Editing Method. arXiv: 2403.15736.
[43] Heitmann, M. (2020) More than a Feeling: Benchmarks for Sentiment Analysis Accuracy. Elsevier.
[44] Vansh, R., Rank, D., Dasgupta, S. and Chakraborty, T. (2023) Accuracy Is Not Enough: Evaluating Personalization in Summarizers. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 2582-2595. https://doi.org/10.18653/v1/2023.findings-emnlp.169
[45] Schmidtova, P. (2023) Semantic Accuracy in Natural Language Generation: A Thesis Proposal. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), Toronto, 9-14 July 2023, 352-361. https://doi.org/10.18653/v1/2023.acl-srw.48
[46] Lee, K., Han, W., Hwang, S., Lee, H., Park, J. and Lee, S. (2022) Plug-and-Play Adaptation for Continuously-Updated QA. Findings of the Association for Computational Linguistics: ACL 2022, Dublin, 22-27 May 2022, 438-447. https://doi.org/10.18653/v1/2022.findings-acl.37
[47] Lan, W., Qiu, S., He, H. and Xu, W. (2017) A Continuously Growing Dataset of Sentential Paraphrases. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, 9-11 September 2017, 1224-1234. https://doi.org/10.18653/v1/d17-1126
[48] Wang, S., Zhu, Y.C., Liu, H.C., Zheng, Z.Y., Chen, C. and Li, J.D. (2023) Knowledge Editing for Large Language Models: A Survey. arXiv: 2310.16218.
[49] Bittermann, A. and Rieger, J. (2022) Finding Scientific Topics in Continuously Growing Text Corpora. Proceedings of the Third Workshop on Scholarly Document Processing, Gyeongju, 12-17 October 2022, 7-18. https://aclanthology.org/2022.sdp-1.2
[50] Zhang, N., Tian, B., Cheng, S., Liang, X., Hu, Y., Xue, K., et al. (2024) InstructEdit: Instruction-Based Knowledge Editing for Large Language Models. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, Jeju, 3-9 August 2024, 6633-6641. https://doi.org/10.24963/ijcai.2024/733
[51] Onoe, Y., Zhang, M., Padmanabhan, S., Durrett, G. and Choi, E. (2023) Can LMs Learn New Entities from Descriptions? Challenges in Propagating Injected Knowledge. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, 9-14 July 2023, 5469-5485. https://doi.org/10.18653/v1/2023.acl-long.300
[52] Pandya, H.A. and Bhatt, B.S. (2021) Question Answering Survey: Directions, Challenges, Datasets, Evaluation Matrices. arXiv: 2112.03572.
[53] Li, C.X., Huang, D., Lu, Z.Y., Xiao, Y., Pei, Q.Q. and Bai, L. (2024) A Survey on Long Video Generation: Challenges, Methods, and Prospects. arXiv: 2403.16407.
[54] Yang, D.S., Hu, L.H., Tian, Y., Li, Z.H., Kelly, C., Yang, B., Yang, C. and Zou, Y. (2024) WorldGPT: A Sora-Inspired Video AI Agent as Rich World Models from Text and Image Inputs. arXiv: 2403.07944.
[55] Zhou, P., Wang, L., Liu, Z., Hao, Y.B., Hui, P., Tarkoma, S. and Kangasharju, J. (2024) A Survey on Generative AI and LLM for Video Generation, Understanding, and Streaming. arXiv: 2404.16038.
[56] Renella, N. and Eger, M. (2023) Towards Automated Video Game Commentary Using Generative AI. CEUR Workshop Proceedings: AIIDE Workshop on Experimental Artificial Intelligence in Games, Utah, 8 October 2023, 341-350. https://ceur-ws.org/Vol-3626/paper7.pdf
[57] Bhagwatkar, R., Bachu, S., Fitter, K., Kulkarni, A. and Chiddarwar, S. (2020) A Review of Video Generation Approaches. 2020 International Conference on Power, Instrumentation, Control and Computing (PICC), Thrissur, 17-19 December 2020, 1-5. https://doi.org/10.1109/picc51425.2020.9362485
[58] Sreekanth, D. and Dehbozorgi, N. (2023) Enhancing Engineering Education through LLM-Driven Adaptive Quiz Generation. https://digitalcommons.kennesaw.edu/cgi/viewcontent.cgi?article=1399&context=cday
[59] Lamsiyah, S., El Mahdaouy, A., Nourbakhsh, A. and Schommer, C. (2024) Fine-Tuning a Large Language Model with Reinforcement Learning for Educational Question Generation. In: Olney, A.M., Chounta, I.A., Liu, Z., Santos, O.C. and Bittencourt, I.I., Eds., Artificial Intelligence in Education, Springer Nature Switzerland, 424-438. https://doi.org/10.1007/978-3-031-64302-6_30
[60] Agrawal, G., Pal, K., Deng, Y., Liu, H. and Chen, Y. (2024) CyberQ: Generating Questions and Answers for Cybersecurity Education Using Knowledge Graph-Augmented LLMs. Proceedings of the AAAI Conference on Artificial Intelligence, 38, 23164-23172. https://doi.org/10.1609/aaai.v38i21.30362
[61] Hu, B., Zheng, L., Zhu, J., Ding, L., Wang, Y. and Gu, X. (2024) Teaching Plan Generation and Evaluation with GPT-4: Unleashing the Potential of LLM in Instructional Design. IEEE Transactions on Learning Technologies, 17, 1471-1485. https://doi.org/10.1109/tlt.2024.3384765
[62] Annie Micheal, A., Prasanth, A., Aswin, T.S., et al. (2024) Advancing Educational Accessibility: The LangChain LLM Chatbot’s Impact on Multimedia Syllabus-Based Learning. https://doi.org/10.21203/rs.3.rs-4399670/v1
[63] Goslen, A., Kim, Y.J., Rowe, J. and Lester, J. (2024) LLM-Based Student Plan Generation for Adaptive Scaffolding in Game-Based Learning Environments. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-024-00421-1
[64] Stahl, M., Biermann, L., Nehring, A. and Wachsmuth, H. (2024) Exploring LLM Prompting Strategies for Joint Essay Scoring and Feedback Generation. arXiv: 2404.15845. https://arxiv.org/abs/2404.15845
[65] Stamper, J., Xiao, R. and Hou, X. (2024) Enhancing LLM-Based Feedback: Insights from Intelligent Tutoring Systems and the Learning Sciences. Communications in Computer and Information Science, 2150, 32-43. https://doi.org/10.1007/978-3-031-64315-6_3
[66] Nie, A., Cheng, C.A., Kolobov, A. and Swaminathan, A. (2024) The Importance of Directional Feedback for LLM-Based Optimizers. arXiv: 2405.16434. https://arxiv.org/abs/2405.16434
[67] Gabbay, H. and Cohen, A. (2024) Combining LLM-Generated and Test-Based Feedback in a MOOC for Programming. Proceedings of the Eleventh ACM Conference on Learning @ Scale, Atlanta, 18-20 July 2024, 177-187. https://doi.org/10.1145/3657604.3662040
[68] Tanwar, H., Shrivastva, K., Singh, R. and Kumar, D. (2024) OpineBot: Class Feedback Reimagined Using a Conversational LLM. arXiv: 2401.15589. https://arxiv.org/abs/2401.15589
[69] Estévez-Ayres, I., Callejo, P., Hombrados-Herrera, M.Á., Alario-Hoyos, C. and Delgado Kloos, C. (2024) Evaluation of LLM Tools for Feedback Generation in a Course on Concurrent Programming. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-024-00406-0
[70] Liu, Y., Cao, J.H., Liu, C.Y., Ding, K. and Jin, L.W. (2024) Datasets for Large Language Models: A Comprehensive Survey. https://doi.org/10.21203/rs.3.rs-3996137/v1
[71] Xu, F.Y., Lo, K.L., Soldaini, L.C., Kuehl, B., Choi, E. and Wadden, D. (2024) KIWI: A Dataset of Knowledge-Intensive Writing Instructions for Answering Research Questions. arXiv: 2403.03866.
[72] Wang, S., Xu, T.L., Li, H., Zhang, C.L., Liang, J., Tang, J.L., Yu, P.S. and Wen, Q.S. (2024) Large Language Models for Education: A Survey and Outlook. arXiv: 2403.18105.
[73] Lu, R., Tang, Z., Hu, G., Liu, D. and Li, J. (2023) NetEase.AI at SemEval-2023 Task 2: Enhancing Complex Named Entities Recognition in Noisy Scenarios via Text Error Correction and External Knowledge. Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Toronto, 13-14 July 2023, 897-904. https://doi.org/10.18653/v1/2023.semeval-1.124
[74] Li, Q., Yang, X.Y., et al. (2024) From Beginner to Expert: Modeling Medical Knowledge into General LLMs. arXiv: 2312.01040.
[75] Chen, W., Verga, P., de Jong, M., Wieting, J. and Cohen, W.W. (2023) Augmenting Pre-Trained Language Models with QA-Memory for Open-Domain Question Answering. Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, Dubrovnik, 2-6 May 2023, 1597-1610. https://doi.org/10.18653/v1/2023.eacl-main.117
[76] Talmor, A., Herzig, J., Lourie, N. and Berant, J. (2018) CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge. arXiv: 1811.00937.
[77] Marini, T. and Brant-Ribeiro, T. (2024) Comparative Analysis of Intentional Grammatical Error Correction Techniques on Twitter/X. Proceedings of the 16th International Conference on Computational Processing of Portuguese, Santiago de Compostela, 14-15 March 2024, 527-531. https://aclanthology.org/2024.propor-1.55
[78] Luhtaru, A., Korotkova, E. and Fishel, M. (2024) No Error Left Behind: Multilingual Grammatical Error Correction with Pretrained Translation Models. Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), St. Julian’s, 17-22 March 2024, 1209-1222. https://aclanthology.org/2024.eacl-long.73
[79] Ponce, A.D.H., Jadie, J.S.A., Espiritu, P.E.A. and Cheng, C. (2023) Balarila: Deep Learning for Semantic Grammar Error Correction in Low-Resource Settings. Proceedings of the First Workshop in South East Asian Language Processing, Nusa Dua, November 2023, 21-29. https://doi.org/10.18653/v1/2023.sealp-1.3
[80] Veerubhotla, A.S., Poddar, L., Yin, J., Szarvas, G. and Eswaran, S. (2023) Few Shot Rationale Generation Using Self-Training with Dual Teachers. Findings of the Association for Computational Linguistics: ACL 2023, Toronto, 9-14 July 2023, 4825-4838. https://doi.org/10.18653/v1/2023.findings-acl.297
[81] Ho, N., Schmid, L. and Yun, S. (2023) Large Language Models Are Reasoning Teachers. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, 9-14 July 2023, 14852-14882. https://doi.org/10.18653/v1/2023.acl-long.830
[82] Warstadt, A., Mueller, A., et al. (2023) Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning. Association for Computational Linguistics. https://aclanthology.org/2023.conll-babylm
[83] Huang, P.W. (2022) Domain Specific Augmentations as Low Cost Teachers for Large Students. Proceedings of the First Workshop on Information Extraction from Scientific Publications, November 2022, 84-90. https://aclanthology.org/2022.wiesp-1.10
[84] Chen, Q.L., Liu, T. and Guo, J. (2024) LLM-DA: Data Augmentation via Large Language Models for Few-Shot Named Entity Recognition. arXiv: 2402.14568.
[85] Zheng, H., Zhong, Q., Ding, L., Tian, Z., Niu, X., Wang, C., et al. (2023) Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6-10 December 2023, 8964-8974. https://doi.org/10.18653/v1/2023.emnlp-main.555
[86] Li, Z., Haroutunian, L., Tumuluri, R., Cohen, P. and Haf, R. (2024) Improving Cross-Domain Low-Resource Text Generation through LLM Post-Editing: A Programmer-Interpreter Approach. Findings of the Association for Computational Linguistics: EACL 2024, St. Julian’s, 17-22 March 2024, 347-354. https://aclanthology.org/2024.findings-eacl.24
[87] Zhu, Y., Si, J., Zhao, Y., Zhu, H., Zhou, D. and He, Y. (2023) EXPLAIN, EDIT, GENERATE: Rationale-Sensitive Counterfactual Data Augmentation for Multi-Hop Fact Verification. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6-10 December 2023, 13377-13392. https://doi.org/10.18653/v1/2023.emnlp-main.826
[88] Krzymiński, M. (2023) Take the Most Out of Text Data Augmentation Strategies for Intent Clustering and Induction Based on DSTC 11 Track 2. Proceedings of the 19th Annual Meeting of the Young Researchers’ Roundtable on Spoken Dialogue Systems, Prague, 11-12 September 2023, 47-48. https://aclanthology.org/2023.yrrsds-1.17
[89] Lai, V., Nguyen, C., Ngo, N., Nguyen, T., Dernoncourt, F., Rossi, R., et al. (2023) Okapi: Instruction-Tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Singapore, 6-10 December 2023, 318-327. https://doi.org/10.18653/v1/2023.emnlp-demo.28
[90] Bansal, R., Samanta, B., Dalmia, S., Gupta, N., Vashishth, S., et al. (2024) LLM Augmented LLMs: Expanding Capabilities through Composition. arXiv: 2401.02412.
[91] Cheng, Y.H., Zhang, C.Y., et al. (2024) Exploring Large Language Model Based Intelligent Agents: Definitions, Methods, and Prospects. arXiv: 2401.03428.
[92] Li, Q.Y., Fu, L.Y., et al. (2024) Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges. arXiv: 2401.08664.
[93] Sun, Z.H., Lyu, C., Li, B.L., Wan, Y., Zhang, H.Y., Li, G. and Jin, Z. (2024) Enhancing Code Generation Performance of Smaller Models by Distilling the Reasoning Ability of LLMs. arXiv: 2403.13271.
[94] Zheng, C., Sun, K., Wu, H., Xi, C.G. and Zhou, X. (2024) Balancing Enhancement, Harmlessness, and General Capabilities: Enhancing Conversational LLMs with Direct RLHF. arXiv: 2403.02513.
[95] Hu, S.J., Zhou, L., et al. (2024) WavLLM: Towards Robust and Adaptive Speech Large Language Model. arXiv: 2404.00656.
[96] Lee, C., Xia, C.S., Huang, J., Zhu, Z., Zhang, L. and Lyu, M.R. (2024) A Unified Debugging Approach via LLM-Based Multi-Agent Synergy. arXiv: 2404.17153.
[97] Guo, S.Y., Deng, C., Wen, Y., Chen, H.C., Chang, Y. and Wang, J. (2024) DS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning. arXiv: 2402.17453.
[98] Ma, K., Cheng, H., Zhang, Y., Liu, X., Nyberg, E. and Gao, J. (2023) Chain-of-Skills: A Configurable Model for Open-Domain Question Answering. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, 9-14 July 2023, 1599-1618. https://doi.org/10.18653/v1/2023.acl-long.89
[99] Huang, L., et al. (2023) A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. arXiv: 2311.05232. https://arxiv.org/pdf/2311.05232
[100] O’Neill, L., Anantharama, N., Borgohain, S. and Angus, S.D. (2023) Models Teaching Models: Improving Model Accuracy with Slingshot Learning. Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, Dubrovnik, 2-6 May 2023, 3233-3247. https://doi.org/10.18653/v1/2023.eacl-main.236
[101] Roller, S., Dinan, E., Goyal, N., Ju, D., Williamson, M., Liu, Y., et al. (2021) Recipes for Building an Open-Domain Chatbot. Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 19-23 April 2021, 300-325. https://doi.org/10.18653/v1/2021.eacl-main.24
[102] Lassner, D., Brandl, S., Baillot, A. and Nakajima, S. (2023) Domain-Specific Word Embeddings with Structure Prediction. Transactions of the Association for Computational Linguistics, 11, 320-335. https://doi.org/10.1162/tacl_a_00538
[103] Arefeen, M.A., Debnath, B. and Chakradhar, S. (2024) LeanContext: Cost-Efficient Domain-Specific Question Answering Using LLMs. Natural Language Processing Journal, 7, Article ID: 100065. https://doi.org/10.1016/j.nlp.2024.100065
[104] Ahn, J., Verma, R., Lou, R., Liu, D., Zhang, R. and Yin, W.P. (2024) Large Language Models for Mathematical Reasoning: Progresses and Challenges. Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, St. Julian’s, 21-22 March 2024, 225-237. https://aclanthology.org/2024.eacl-srw.17
[105] Handa, K., Clapper, M., Boyle, J., Wang, R., Yang, D., Yeager, D., et al. (2023) “Mistakes Help Us Grow”: Facilitating and Evaluating Growth Mindset Supportive Language in Classrooms. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6-10 December 2023, 8877-8897. https://doi.org/10.18653/v1/2023.emnlp-main.549
[106] Lundberg, S.M. and Lee, S.I. (2017) A Unified Approach to Interpreting Model Predictions. arXiv: 1705.07874.
[107] Yu, W., Zhu, C., Zhang, Z., Wang, S., Zhang, Z., Fang, Y., et al. (2022) Retrieval Augmentation for Commonsense Reasoning: A Unified Approach. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, 7-11 December 2022, 4364-4377. https://doi.org/10.18653/v1/2022.emnlp-main.294
[108] Sultana, A., Chowdhury, N.K. and Chy, A.N. (2022) CSECU-DSG@SMM4H’22: Transformer Based Unified Approach for Classification of Changes in Medication Treatments in Tweets and WebMD Reviews. Proceedings of the Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task, Gyeongju, 12-17 October 2022, 118-122. https://aclanthology.org/2022.smm4h-1.33
[109] Si, C., Shi, W., Zhao, C., Zettlemoyer, L. and Boyd-Graber, J. (2023) Getting More Out of Mixture of Language Model Reasoning Experts. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 8234-8249. https://doi.org/10.18653/v1/2023.findings-emnlp.552
[110] Shen, S., Yao, Z., Li, C., Darrell, T., Keutzer, K. and He, Y. (2023) Scaling Vision-Language Models with Sparse Mixture of Experts. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 11329-11344. https://doi.org/10.18653/v1/2023.findings-emnlp.758
[111] Li, R., Murray, G. and Carenini, G. (2023) Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-Trained Language Models. Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6-10 December 2023, 9456-9469. https://doi.org/10.18653/v1/2023.findings-emnlp.634
[112] Artetxe, M., Bhosale, S., Goyal, N., Mihaylov, T., Ott, M., Shleifer, S., et al. (2022) Efficient Large Scale Language Modeling with Mixtures of Experts. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, 7-11 December 2022, 11699-11732. https://doi.org/10.18653/v1/2022.emnlp-main.804
[113] Jiang, Z., Peng, H., Feng, S., Li, F. and Li, D. (2024) LLMs Can Find Mathematical Reasoning Mistakes by Pedagogical Chain-of-Thought. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, Jeju, 3-9 August 2024, 3439-3447. https://doi.org/10.24963/ijcai.2024/381
[114] Kong, X.H., Chen, J.Y., Wang, W.G., Su, H., Hu, X.L., Yang, Y. and Liu, S. (2024) Controllable Navigation Instruction Generation with Chain of Thought Prompting. arXiv: 2407.07433. https://arxiv.org/abs/2407.07433
[115] Cohn, C., Hutchins, N., Le, T. and Biswas, G. (2024) A Chain-of-Thought Prompting Approach with LLMs for Evaluating Students’ Formative Assessment Responses in Science. Proceedings of the AAAI Conference on Artificial Intelligence, 38, 23182-23190. https://doi.org/10.1609/aaai.v38i21.30364
[116] Parker, M.J., Anderson, C., Stone, C., et al. (2024) A Large Language Model Approach to Educational Survey Feedback Analysis. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-024-00414-0
[117] Pozdniakov, S., Brazil, J., Abdi, S., Bakharia, A., Sadiq, S., Gašević, D., et al. (2024) Large Language Models Meet User Interfaces: The Case of Provisioning Feedback. Computers and Education: Artificial Intelligence, 7, Article ID: 100289. https://doi.org/10.1016/j.caeai.2024.100289
[118] Dimbisoa, W.G., Mahatody, T. and Razafimandimby, J.P. (2018) Creating a Metamodel of UI Components in Form of Model Independent of the Platform. International Journal of Conceptions on Computing and Information Technology, 6, 48-52. http://wairco.org/IJCCIT/November2018Paper12.pdf
[119] Logaprakash, M., Manjunath, N., Rubanraaj, K. and Srinivas, V. (2024) Personalised Learning System Using LLM. International Journal of Creative Research Thoughts (IJCRT), 12, c24-c26. https://www.ijcrt.org/papers/IJCRT2405220.pdf
[120] Abu-Rasheed, H., Weber, C. and Fathi, M. (2024) Knowledge Graphs as Context Sources for LLM-Based Explanations of Learning Recommendations. arXiv: 2403.03008.
[121] Fahl, W. (2024) GraphWiseLearn: Personalized Learning through Semantified TEL, Leveraging QA-Enhanced LLM-Generated Content. https://2024.eswc-conferences.org/wp-content/uploads/2024/05/77770405.pdf
[122] Park, M., Kim, S., Lee, S., Kwon, S. and Kim, K. (2024) Empowering Personalized Learning through a Conversation-Based Tutoring System with Student Modeling. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Honolulu, 11-16 May 2024, 1-10. https://doi.org/10.1145/3613905.3651122
[123] Shoeibi, N. (2023) Cross-Lingual Transfer in Generative AI-Based Educational Platforms for Equitable and Personalized Learning. Learning Analytics Summer Institute (LASI), Madrid, 29-30 June 2023, 524-540. https://ceur-ws.org/Vol-3542/paper8.pdf
[124] Shi, Y.X., Zi, X., Shi, Z.J., Zhang, H.M., Wu, Q. and Xu, M. (2024) ERAGent: Enhancing Retrieval-Augmented Language Models with Improved Accuracy, Efficiency, and Personalization. arXiv: 2405.06683.
[125] Hang, C.N., Tan, C.W. and Yu, P. (2024) MCQGen: A Large Language Model-Driven MCQ Generator for Personalized Learning. IEEE Access, 12, 102261-102273. https://doi.org/10.1109/access.2024.3420709
[126] Teresa, L.A., Sunil, N.M., Andrews, S.R., Thengumpallil, T.T., Thomas, S. and V A, B. (2023) Enhancing Children’s Learning Experience: Interactive and Personalized Video Learning with AI Technology. 2023 IEEE International Conference on Recent Advances in Systems Science and Engineering (RASSE), Kerala, 8-11 November 2023, 1-5. https://doi.org/10.1109/rasse60029.2023.10363506
[127] Mo, C., Wang, C., Dai, J. and Jin, P. (2022) Video Playback Speed Influence on Learning Effect from the Perspective of Personalized Adaptive Learning: A Study Based on Cognitive Load Theory. Frontiers in Psychology, 13, Article 839982. https://doi.org/10.3389/fpsyg.2022.839982
[128] Cui, Y. and Hu, Y. (2024) Personalized Recommendation Method for the Video Teaching Resources of Folk Sports Shehuo Based on Mobile Learning. In: Wang, B., Hu, Z., Jiang, X. and Zhang, Y.D., Eds., Multimedia Technology and Enhanced Learning, Springer Nature Switzerland, 254-267. https://doi.org/10.1007/978-3-031-50574-4_18
[129] Xu, Y., Li, X., Yang, Y., Lin, Z., Wang, L. and Li, W. (2023) FedABR: A Personalized Federated Reinforcement Learning Approach for Adaptive Video Streaming. 2023 IFIP Networking Conference (IFIP Networking), Barcelona, 12-15 June 2023, 1-9. https://doi.org/10.23919/ifipnetworking57963.2023.10186404
[130] Gorban, A.N., Mirkes, E.M. and Zinovyev, A.Y. (2023) Exploring the Impact of Adaptive Video on Personalized Learning Experiences. Proceedings of the Workshop on the Influence of Adaptive Video Learning, Plovdiv, 13-14 October 2022, 9-16. https://ceur-ws.org/Vol-3372/paper01.pdf
[131] Lu, Y., Zhu, Y. and Wang, Z. (2022) Personalized 360-Degree Video Streaming. Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, 10-14 October 2022, 3143-3151. https://doi.org/10.1145/3503161.3548047
[132] Liu, X.D. and Xue, X.W. (2023) Research on Learning Video Recommendation System Based on DBSCAN Clustering Algorithm. International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2023), Yinchuan, 18-19 August 2023, 129-137.
[133] Bontchev, B., Antonova, A. and Dankov, Y. (2020) Educational Video Game Design Using Personalized Learning Scenarios. In: Gervasi, O., et al., Eds., Computational Science and Its Applications—ICCSA 2020, Springer, 829-845.
[134] Yi, R., Ye, Z.P., Zhang, J.Y., Bao, H.J. and Liu, Y.J. (2020) Audio-Driven Talking Face Video Generation with Learning-Based Personalized Head Pose. arXiv: 2002.10137. https://arxiv.org/abs/2002.10137
[135] Qu, Z.Y., Yin, L., Yu, Z.T., Wang, W.B. and Zhang, X. (2024) CourseGPT-zh: An Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization. arXiv: 2405.04781.
[136] Zastudil, C., Rogalska, M., Kapp, C., Vaughn, J. and MacNeil, S. (2023) Generative AI in Computing Education: Perspectives of Students and Instructors. 2023 IEEE Frontiers in Education Conference (FIE), College Station, 18-21 October 2023, 1-9. https://doi.org/10.1109/fie58773.2023.10343467
[137] Wang, H., Dang, A., Wu, Z. and Mac, S. (2024) Generative AI in Higher Education: Seeing ChatGPT through Universities’ Policies, Resources, and Guidelines. Computers and Education: Artificial Intelligence, 7, Article ID: 100326. https://doi.org/10.1016/j.caeai.2024.100326
[138] Heston, T. and Khun, C. (2023) Prompt Engineering in Medical Education. International Medical Education, 2, 198-205. https://doi.org/10.3390/ime2030019
[139] Wang, T.Y., Zhou, N.J. and Chen, Z.X. (2024) Enhancing Computer Programming Education with LLMs: A Study on Effective Prompt Engineering for Python Code Generation. arXiv: 2407.05437. https://arxiv.org/abs/2407.05437
[140] Taylor Gonzalez, D.J., Djulbegovic, M.B. and Bair, H. (2024) We Need to Add Prompt Engineering Education to Optimize Generative Artificial Intelligence in Medicine. Academic Medicine, 99, 1050-1051.
[141] Devlin, J., Chang, M.W., Lee, K. and Toutanova, K. (2018) BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. arXiv: 1810.04805.
[142] Zhang, H.J., Xu, Y.M. and Perez-Beltrachini, L. (2024) Fine-Grained Natural Language Inference Based Faithfulness Evaluation for Diverse Summarisation Tasks. Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), St. Julian’s, 17-22 March 2024, 1701-1722. https://aclanthology.org/2024.eacl-long.102
[143] Akoju, S.A., Vacareanu, R., Blanco, E., Riaz, H. and Surdeanu, M. (2023) Synthetic Dataset for Evaluating Complex Compositional Knowledge for Natural Language Inference. Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE), Toronto, 13 June 2023, 157-168. https://doi.org/10.18653/v1/2023.nlrse-1.12
[144] Tian, R., Zhao, Z., Liu, W., Liu, H., Mao, W., Zhao, Z., et al. (2023) SAMP: A Model Inference Toolkit of Post-Training Quantization for Text Processing via Self-Adaptive Mixed-Precision. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, Singapore, 6-10 December 2023, 123-130. https://doi.org/10.18653/v1/2023.emnlp-industry.13
[145] Austin, E., Zaïane, O.R. and Largeron, C. (2022) Community Topic: Topic Model Inference by Consecutive Word Community Discovery. Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, 12-17 October 2022, 971-983. https://aclanthology.org/2022.coling-1.81
[146] Pletenev, S., Chekalina, V., Moskovskiy, D., Seleznev, M., Zagoruyko, S. and Panchenko, A. (2023) A Computational Study of Matrix Decomposition Methods for Compression of Pre-Trained Transformers. Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation, Hong Kong, 2-5 December 2023, 723-742. https://aclanthology.org/2023.paclic-1.73
[147] Volosincu, M., Lupu, C., Trandabat, D. and Gifu, D. (2023) FII SMART at SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data. Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Toronto, 13-14 July 2023, 212-220. https://doi.org/10.18653/v1/2023.semeval-1.30
[148] Kotitsas, S., Kounoudis, P., Koutli, E. and Papageorgiou, H. (2024) Leveraging Fine-Tuned Large Language Models with LoRA for Effective Claim, Claimer, and Claim Object Detection. Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), St. Julian’s, 17-22 March 2024, 2540-2554. https://aclanthology.org/2024.eacl-long.156
[149] Power, R. and Scott, D. (1998) WYSIWYM: Knowledge Editing with Natural Language Feedback. Association for Computational Linguistics. https://aclanthology.org/W98-1437
[150] Yehudai, A., Carmeli, B., Mass, Y., Arviv, O., Mills, N., Toledo, A., Shnarch, E. and Choshen, L. (2024) Genie: Achieving Human Parity in Content-Grounded Datasets Generation. arXiv: 2401.14367.
[151] Nayak, N., Nan, Y., Trost, A. and Bach, S. (2024) Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation. Findings of the Association for Computational Linguistics ACL 2024, Bangkok, 11-16 August 2024, 12585-12611. https://doi.org/10.18653/v1/2024.findings-acl.748
[152] Xu, X.H., Li, M., Tao, C.Y., Shen, T., Cheng, R., Li, J.Y., Xu, C., Tao, D.C. and Zhou, T.Y. (2024) A Survey on Knowledge Distillation of Large Language Models. arXiv: 2402.13116.
[153] Li, Q.Y., Fu, L.Y., Zhang, W.M., Chen, X.Y., Yu, J.W., Xia, W., Zhang, W.N., Tang, R.M. and Yu, Y. (2023) Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges. arXiv: 2401.08664.
[154] Rai, D. and Yao, Z.Y. (2024) An Investigation of Neuron Activation as a Unified Lens to Explain Chain-of-Thought Eliciting Arithmetic Reasoning of LLMs. arXiv: 2406.12288. https://arxiv.org/abs/2406.12288
[155] Chang, W. and Chen, Y. (2024) Injecting Salesperson’s Dialogue Strategies in Large Language Models with Chain-of-Thought Reasoning. Findings of the Association for Computational Linguistics ACL 2024, Bangkok, 11-16 August 2024, 3798-3812. https://doi.org/10.18653/v1/2024.findings-acl.228
[156] Tutunov, R., Grosnit, A., Ziomek, J., Wang, J. and Bou-Ammar, H. (2024) Why Can Large Language Models Generate Correct Chain-of-Thoughts? arXiv: 2310.13571. https://arxiv.org/abs/2310.13571
[157] Zou, A., Zhang, Z.S. and Zhao, H. (2024) AuRoRA: A One-for-All Platform for Augmented Reasoning and Refining with Task-Adaptive Chain-of-Thought Prompting. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, 20-25 May 2024, 1801-1807. https://aclanthology.org/2024.lrec-main.160
[158] Sultan, A., Ganhotra, J. and Astudillo, R.F. (2024) Structured Chain-of-Thought Prompting for Few-Shot Generation of Content-Grounded QA Conversations. arXiv: 2402.11770. https://arxiv.org/abs/2402.11770
[159] Chu, Z., Chen, J.C., et al. (2024) Navigate through Enigmatic Labyrinth: A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future. arXiv: 2309.15402. https://arxiv.org/abs/2309.15402
[160] Lee, U., Jung, H., Jeon, Y., Sohn, Y., Hwang, W., Moon, J., et al. (2023) Few-Shot Is Enough: Exploring ChatGPT Prompt Engineering Method for Automatic Question Generation in English Education. Education and Information Technologies, 29, 11483-11515. https://doi.org/10.1007/s10639-023-12249-8