[1]
Bohus, D., & Horvitz, E. (2009). Models for Multiparty Engagement in Open-World Dialog. In Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue (pp. 225-234). Stroudsburg, PA: Association for Computational Linguistics. https://doi.org/10.3115/1708376.1708409
[2]
Bosch, N., D’Mello, S. K., Baker, R. S., Ocumpaugh, J., Shute, V., Ventura, M., Wang, L., & Zhao, W. (2016). Detecting Student Emotions in Computer-Enabled Classrooms. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (pp. 4125-4129). Palo Alto, CA: AAAI Press.
[3]
Bosch, N., D’Mello, S., Baker, R., Ocumpaugh, J., Shute, V., Ventura, M., Wang, L., & Zhao, W. (2015). Automatic Detection of Learning-Centered Affective States in the Wild. In Proceedings of the 20th International Conference on Intelligent User Interfaces (pp. 379-388). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/2678025.2701397
[4]
Burt, K. B., & Obradovic, J. (2013). The Construct of Psychophysiological Reactivity: Statistical and Psychometric Issues. Developmental Review, 33, 29-57. https://doi.org/10.1016/j.dr.2012.10.002
[5]
Calvo, R. A., & D’Mello, S. (2010). Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications. IEEE Transactions on Affective Computing, 1, 18-37. https://doi.org/10.1109/T-AFFC.2010.1
[6]
Calvo, R., & D’Mello, S. K. (Eds.) (2011). New Perspectives on Affect and Learning Technologies. New York, NY: Springer. https://doi.org/10.1007/978-1-4419-9625-1
[7]
Castellano, G., Kessous, L., & Caridakis, G. (2008). Emotion Recognition through Multiple Modalities: Face, Body Gesture, Speech. In C. Peter, & R. Beale (Eds.), Affect and Emotion in Human-Computer Interaction (pp. 92-103). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-540-85099-1_8
[8]
Chen, L., Li, X., Xia, Z., Song, Z., Morency, L. P., & Dubrawski, A. (2016). Riding an Emotional Roller-Coaster: A Multimodal Study of Young Child’s Math Problem Solving Activities. Proceedings of the 9th International Conference on Educational Data Mining, Raleigh, NC, 29 June-2 July 2016, 38-45.
[9]
Cowley, B., Ravaja, N., & Heikura, T. (2013). Cardiovascular Physiology Predicts Learning Effects in a Serious Game Activity. Computers & Education, 60, 299-309. https://doi.org/10.1016/j.compedu.2012.07.014
[10]
Craig, S., Graesser, A., Sullins, J., & Gholson, B. (2004). Affect and Learning: An Exploratory Look into the Role of Affect in Learning with AutoTutor. Journal of Educational Media, 29, 241-250. https://doi.org/10.1080/1358165042000283101
[11]
D’Mello, S., & Graesser, A. (2012). Dynamics of Affective States during Complex Learning. Learning and Instruction, 22, 145-157. https://doi.org/10.1016/j.learninstruc.2011.10.001
[12]
D’Mello, S., Craig, S., Fike, K., & Graesser, A. (2009). Responding to Learners’ Cognitive-Affective States with Supportive and Shakeup Dialogues. In J. Jacko (Ed.), Human-Computer Interaction. Ambient, Ubiquitous and Intelligent Interaction (pp. 595-604). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-02580-8_65
[13]
De Koning, B. B., Tabbers, H. K., Rikers, R. M., & Paas, F. (2010). Attention Guidance in Learning from a Complex Animation: Seeing Is Understanding? Learning and Instruction, 20, 111-122. https://doi.org/10.1016/j.learninstruc.2009.02.010
[14]
Devillers, L., & Vidrascu, L. (2007). Real-Life Emotion Recognition in Speech. In C. Müller (Ed.), Speaker Classification II (pp. 34-42). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-540-74122-0_4
[15]
Du, P., Kibbe, W. A., & Lin, S. M. (2006). Improved Peak Detection in Mass Spectrum by Incorporating Continuous Wavelet Transform-Based Pattern Matching. Bioinformatics, 22, 2059-2065. https://doi.org/10.1093/bioinformatics/btl355
[16]
Eyben, F., Wollmer, M., & Schuller, B. (2010). Opensmile: The Munich Versatile and Fast Open-Source Audio Feature Extractor. In Proceedings of the 18th ACM International Conference on Multimedia (pp. 1459-1462). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/1873951.1874246
[17]
Forbes-Riley, K., & Litman, D. (2011). When Does Disengagement Correlate with Learning in Spoken Dialog Computer Tutoring? In G. Biswas, S. Bull, J. Kay, & A. Mitrovic (Eds.), International Conference on Artificial Intelligence in Education (pp. 81-89). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-21869-9_13
[18]
Gomes, J., Yassine, M., Worsley, M., & Blikstein, P. (2013). Analysing Engineering Expertise of High School Students Using Eye Tracking and Multimodal Learning Analytics. In S. K. D’Mello, R. A. Calvo, & A. Olney (Eds.), Proceedings of the 6th International Conference on Educational Data Mining. International Educational Data Mining Society.
[19]
Graesser, A., Chipman, P., King, B., McDaniel, B., & D’Mello, S. (2007). Emotions and Learning with AutoTutor. Frontiers in Artificial Intelligence and Applications, 158, 569.
[20]
Graesser, A., Ozuru, Y., & Sullins, J. (2010). What Is a Good Question? In M. McKeown, & G. Kucan (Eds.), Bringing Reading Research to Life (pp. 112-141). New York, NY: Guilford Press.
[21]
Grafsgaard, J., Wiggins, J. B., Boyer, K. E., Wiebe, E. N., & Lester, J. (2013). Automatically Recognizing Facial Expression: Predicting Engagement and Frustration. In Proceedings of the 6th International Conference on Educational Data Mining. International Educational Data Mining Society.
[22]
Hoque, M. E., McDuff, D. J., & Picard, R. W. (2012). Exploring Temporal Patterns in Classifying Frustrated and Delighted Smiles. IEEE Transactions on Affective Computing, 3, 323-334. https://doi.org/10.1109/T-AFFC.2012.11
[23]
Hussain, M. S., AlZoubi, O., Calvo, R. A., & D’Mello, S. K. (2011). Affect Detection from Multichannel Physiology during Learning Sessions with AutoTutor. In G. Biswas, S. Bull, J. Kay, & A. Mitrovic (Eds.), International Conference on Artificial Intelligence in Education (pp. 131-138). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-21869-9_19
[24]
Kapoor, A., Burleson, W., & Picard, R. W. (2007). Automatic Prediction of Frustration. International Journal of Human-Computer Studies, 65, 724-736. https://doi.org/10.1016/j.ijhcs.2007.02.003
[25]
Kononenko, I. (1994). Estimating Attributes: Analysis and Extensions of RELIEF. In F. Bergadano, & L. De Raedt (Eds.), European Conference on Machine Learning (pp. 171-182). Berlin, Heidelberg: Springer. https://doi.org/10.1007/3-540-57868-4_57
[26]
Landis, J. R., & Koch, G. G. (1977). The Measurement of Observer Agreement for Categorical Data. Biometrics, 33, 159-174. https://doi.org/10.2307/2529310
[27]
Lin, J., Keogh, E., Lonardi, S., & Chiu, B. (2003). A Symbolic Representation of Time Series, with Implications for Streaming Algorithms. In Proceedings of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (pp. 2-11). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/882082.882086
[28]
Lin, J., Keogh, E., Wei, L., & Lonardi, S. (2007). Experiencing SAX: A Novel Symbolic Representation of Time Series. Data Mining and Knowledge Discovery, 15, 107-144. https://doi.org/10.1007/s10618-007-0064-z
[29]
Luft, C. D. B., Nolte, G., & Bhattacharya, J. (2013). High-Learners Present Larger Mid-Frontal Theta Power and Connectivity in Response to Incorrect Performance Feedback. Journal of Neuroscience, 33, 2029-2038. https://doi.org/10.1523/JNEUROSCI.2565-12.2013
[30]
Matthews, G., Campbell, S. E., Falconer, S., Joyner, L. A., Huggins, J., Gilliland, K., & Warm, J. S. (2002). Fundamental Dimensions of Subjective State in Performance Settings: Task Engagement, Distress, and Worry. Emotion, 2, 315-340. https://doi.org/10.1037/1528-3542.2.4.315
[31]
Monkaresi, H., Bosch, N., Calvo, R. A., & D’Mello, S. K. (2016). Automated Detection of Engagement Using Video-Based Estimation of Facial Expressions and Heart Rate. IEEE Transactions on Affective Computing, 8, 15-28. https://doi.org/10.1109/TAFFC.2016.2515084
[32]
Pardos, Z. A., Baker, R. S., San Pedro, M. O., Gowda, S. M., & Gowda, S. M. (2014). Affective States and State Tests: Investigating How Affect and Engagement during the School Year Predict End-of-Year Learning Outcomes. Journal of Learning Analytics, 1, 107-128. https://doi.org/10.18608/jla.2014.11.6
[33]
Peng, S., Chen, L., Gao, C., & Tong, R. J. (2020a). Predicting Students’ Attention Level with Interpretable Facial and Head Dynamic Features in an Online Tutoring System (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34, 13895-13896. https://doi.org/10.1609/aaai.v34i10.7220
[34]
Peng, S., Ohira, S., & Nagao, K. (2018). Automatic Evaluation of Students’ Discussion Skill Based on Their Heart Rate. In B. McLaren, R. Reilly, S. Zvacek, & J. Uhomoibhi (Eds.), International Conference on Computer Supported Education (pp. 572-585). Berlin: Springer. https://doi.org/10.1007/978-3-030-21151-6_27
[35]
Peng, S., Ohira, S., & Nagao, K. (2019). Prediction of Students’ Answer Relevance in Discussion Based on Their Heart-Rate Data. International Journal of Innovation and Research in Educational Sciences (IJIRES), 6, 414-424.
[36]
Peng, S., Ohira, S., & Nagao, K. (2020b). Reading Students’ Multiple Mental States in Conversation from Facial and Heart Rate Cues. Proceedings of the 12th International Conference on Computer Supported Education, 1, 68-76. https://doi.org/10.5220/0009564000680076
[37]
Robison, J., McQuiggan, S., & Lester, J. (2009). Evaluating the Consequences of Affective Feedback in Intelligent Tutoring Systems. In C. Muhl, D. Heylen, & A. Nijholt (Eds.), Proceedings of International Conference on Affective Computing & Intelligent Interaction (pp. 37-42). Los Alamitos, CA: IEEE Computer Society Press. https://doi.org/10.1109/ACII.2009.5349555
[38]
Rodrigo, M. M. T., & Baker, R. S. J. d. (2011a). Comparing the Incidence and Persistence of Learners’ Affect during Interactions with Different Educational Software Packages. In R. Calvo, & S. D’Mello (Eds.), New Perspectives on Affect and Learning Technologies (pp. 183-200). New York, NY: Springer. https://doi.org/10.1007/978-1-4419-9625-1_14
[39]
Rodrigo, M. M. T., & Baker, R. S. J. d. (2011b). Comparing Learners’ Affect While Using an Intelligent Tutor and an Educational Game. Research and Practice in Technology Enhanced Learning, 6, 43-66.
[40]
Rodrigo, M. M. T., Baker, R. S., Agapito, J., Nabos, J., Repalam, M. C., Reyes, S. S., & San Pedro, M. O. C. (2012). The Effects of an Interactive Software Agent on Student Affective Dynamics While Using an Intelligent Tutoring System. IEEE Transactions on Affective Computing, 3, 224-236. https://doi.org/10.1109/T-AFFC.2011.41
[41]
Schuller, B., Steidl, S., & Batliner, A. (2009). The INTERSPEECH 2009 Emotion Challenge. 10th Annual Conference of the International Speech Communication Association, Brighton, UK, 6-10 September 2009, 312-315.
[42]
Sikka, K., Dykstra, K., Sathyanarayana, S., Littlewort, G., & Bartlett, M. (2013). Multiple Kernel Learning for Emotion Recognition in the Wild. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction (pp. 517-524). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/2522848.2531741
[43]
Stevens, R. H., Galloway, T., & Berka, C. (2007). EEG-Related Changes in Cognitive Workload, Engagement and Distraction as Students Acquire Problem Solving Skills. In International Conference on User Modeling (pp. 187-196). Berlin, Heidelberg: Springer.
[44]
Urbanowicz, R. J., Olson, R. S., Schmitt, P., Meeker, M., & Moore, J. H. (2018). Benchmarking Relief-Based Feature Selection Methods for Bioinformatics Data Mining. Journal of Biomedical Informatics, 85, 168-188. https://doi.org/10.1016/j.jbi.2018.07.015
[45]
Yoon, S., Byun, S., Dey, S., & Jung, K. (2019). Speech Emotion Recognition Using Multi-Hop Attention Mechanism. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2822-2826). Piscataway, NJ: IEEE. https://doi.org/10.1109/ICASSP.2019.8683483
[46]
Zaletelj, J., & Kosir, A. (2017). Predicting Students’ Attention in the Classroom from Kinect Facial and Body Features. EURASIP Journal on Image and Video Processing, 2017, Article No. 80. https://doi.org/10.1186/s13640-017-0228-8