A Multicriteria Evaluation for Data-Driven Programming Feedback Systems: Accuracy, Effectiveness, Fallibility, and Students' Response
Data-driven programming feedback systems can help novices learn to program in the absence of a human tutor. Prior evaluations showed that these systems improve learning in terms of test scores or task completion efficiency. However, these evaluations ignore crucial aspects that can impact learning or reveal insights important for the future improvement of such systems: the inherent fallibility of the current state of the art, students' programming behavior in response to correct and incorrect feedback, and which system components are effective or ineffective. Consequently, a great deal of knowledge about such systems remains to be discovered. In this paper, we apply a multi-criteria evaluation with five criteria to a data-driven feedback system integrated into a block-based novice programming environment. Each criterion reveals a distinct, pivotal aspect of the system: 1) how accurate the feedback system is; 2) how it guides students throughout programming tasks; 3) how it helps students complete tasks; 4) what happens when it goes wrong; and 5) how students respond to the system overall. Our results showed that, despite being fallible, the system was helpful to students because of its effective design and feedback representation. However, this fallibility can negatively impact novices because of their heavy reliance on the feedback and their lack of self-evaluation; these negative impacts include increased working time and the implementation or submission of incorrect or partially correct solutions. The evaluation results reinforce the necessity of multi-criteria system evaluations while revealing insights that can help ensure proper usage of data-driven feedback systems, inform the design of fallibility mitigation steps, and drive research toward future improvement.