Publications
(2023) Impact of Non-Cognitive Interventions on Student Learning Behaviors and Outcomes: An Analysis of Seven Large-Scale Experimental Interventions
Authors
Kirk Vanacore, Ashish Gurung, Andrew McReynolds, Allison Liu, Stacy Shaw, Neil Heffernan
Abstract
As evidence grows supporting the importance of non-cognitive factors in learning, computer-assisted learning platforms increasingly incorporate non-academic interventions to influence student learning and learning-related behaviors. Non-cognitive interventions often attempt to influence students’ mindset, motivation, or metacognitive reflection to impact learning behaviors and outcomes. In the current paper, we analyze data from five experiments, involving seven treatment conditions, embedded in mastery-based learning activities hosted on a computer-assisted learning platform focused on middle school mathematics. Each treatment condition embodied a specific non-cognitive theoretical perspective. Over seven school years, 20,472 students participated in the experiments. We estimated the effects of each treatment condition on students’ response time, hint usage, likelihood of mastering knowledge components, learning efficiency, and post-test performance. Our analyses reveal a mix of both positive and negative treatment effects on student learning behaviors and performance. Few interventions impacted learning as assessed by the post-tests. These findings highlight the difficulty of positively influencing student learning behaviors and outcomes with non-cognitive interventions.
Full Article at https://doi.org/10.1145/3576050.3576073
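To give a concrete sense of the treatment-effect estimation this abstract describes, here is a minimal sketch. All file and column names (experiment_logs.csv, condition, prior_knowledge, post_test, mastered) are hypothetical placeholders; the paper's actual data schema and model specifications may differ.

```python
# Minimal sketch: estimating per-condition treatment effects on continuous
# and binary outcomes. Column names are assumptions, not the paper's schema.
import pandas as pd
import statsmodels.formula.api as smf

logs = pd.read_csv("experiment_logs.csv")  # hypothetical: one row per student

# Continuous outcome (e.g., post-test score): OLS with treatment dummies,
# adjusting for a prior-knowledge covariate.
ols = smf.ols("post_test ~ C(condition) + prior_knowledge", data=logs).fit()
print(ols.summary())

# Binary outcome (e.g., mastering the knowledge components): logistic regression.
logit = smf.logit("mastered ~ C(condition) + prior_knowledge", data=logs).fit()
print(logit.summary())
```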
(2023) Identification, Exploration, and Remediation: Can Teachers Predict Common Wrong Answers?
Authors
Ashish Gurung, Sami Baral, Kirk P. Vanacore, Andrew A. McReynolds, Hilary Kreisberg, Anthony F. Botelho, Stacy T. Shaw, Neil T. Heffernan
Abstract
Prior work analyzing tutoring sessions has provided evidence that highly effective tutors, through their interactions with students and their accumulated experience, can perceptively recognize the incorrect processes, or “bugs,” behind students’ wrong answers. Researchers studying these tutoring interactions have examined instructional approaches to addressing incorrect processes and observed that the format of the feedback can influence learning outcomes. In this work, we refer to the incorrect answers caused by these buggy processes as Common Wrong Answers (CWAs). Because teachers and instructional designers deeply understand the common approaches and mistakes students make when solving mathematical problems, we examine the feasibility of having them proactively identify CWAs and generate Common Wrong Answer Feedback (CWAFs) as a formative feedback intervention for addressing student learning needs. We analyze the CWAFs in three sets of analyses. We first report the accuracy of the CWAs predicted by the teachers and instructional designers on problems across two activities. We then measure the effectiveness of the CWAFs using an intent-to-treat analysis. Finally, we explore personalization effects of the CWAFs for the students working on the two mathematics activities.
Full Article at https://doi.org/10.1145/3576050.3576109
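As a rough illustration of the first two analyses summarized above, the sketch below computes how often teacher-predicted CWAs cover students' observed wrong answers, then fits a simple intent-to-treat model. Every file name, column name, and the outcome variable here is an assumption for illustration, not the authors' actual pipeline.

```python
# Sketch of (1) CWA prediction coverage and (2) an intent-to-treat (ITT)
# estimate. All names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

answers = pd.read_csv("student_answers.csv")  # hypothetical answer log
cwas = pd.read_csv("predicted_cwas.csv")      # hypothetical teacher predictions
cwas = cwas.drop_duplicates(["problem_id", "answer"])

# (1) Coverage: the share of observed wrong answers that match a
# predicted CWA for the same problem.
wrong = answers[answers["correct"] == 0]
hits = wrong.merge(cwas, on=["problem_id", "answer"], how="left", indicator=True)
print("CWA coverage:", (hits["_merge"] == "both").mean())

# (2) ITT: model the outcome on the *assigned* condition, regardless of
# whether a student actually triggered a CWA and saw the feedback.
students = pd.read_csv("cwaf_assignments.csv")  # hypothetical: one row per student
itt = smf.logit("completed_activity ~ C(assigned_condition)", data=students).fit()
print(itt.summary())
```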
(2020) Towards Learning at Scale in Developing Countries: Lessons from the Global Learning XPRIZE Field Study
Authors
Andrew McReynolds, Sheba Naderzad, Mononito Goswami, Jack Mostow
Abstract
Advances in education technology are enabling tremendous progress in learning at scale. However, they typically assume resources taken for granted in developed countries, including reliable electricity, high-bandwidth Internet access, fast WiFi, powerful computers, sophisticated sensors, and expert technical support to keep it all working. This paper examines these assumptions in the context of a massive test of learning at scale in a developing country. We examine each assumption, how it broke down, and some of the workarounds used in a 15-month-long independent controlled evaluation of pre- to posttest learning and social-emotional gains by over 2,000 children in 168 villages in Tanzania. We analyze those gains to characterize who gained how much, using test score data, social-emotional measures, and detailed logs from RoboTutor. We quantify the relative impact of pretest scores, literate aspirations, treatment, and usage on learning gains.
Full Article at https://dl.acm.org/doi/abs/10.1145/3386527.3405920
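A minimal sketch of the final analysis the abstract mentions: regressing pre-to-posttest gains on the named predictors, standardized so their relative impacts are comparable. The data file, column names, and model form are assumptions, not the paper's actual specification.

```python
# Sketch: relative impact of pretest scores, aspirations, treatment, and
# usage on learning gains. All names are assumed for illustration.
import pandas as pd
import statsmodels.formula.api as smf

kids = pd.read_csv("xprize_children.csv")  # hypothetical: one row per child
kids["gain"] = kids["posttest"] - kids["pretest"]

# Standardize continuous predictors so coefficient magnitudes can be
# compared as relative impacts.
for col in ["pretest", "aspiration", "usage_hours"]:
    kids[col] = (kids[col] - kids[col].mean()) / kids[col].std()

model = smf.ols("gain ~ pretest + aspiration + C(treatment) + usage_hours",
                data=kids).fit()
print(model.summary())
```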