New Publication: "Evaluating Problem Difficulty Rankings Using Sparse Student Response Data"

Filed under: Publications
Bader-Natal, A. and Pollack, J. Evaluating Problem Difficulty Rankings Using Sparse Student Response Data. Supplementary Proceedings of the 13th International Conference on Artificial Intelligence in Education, 2007.
Abstract | Draft PDF | Full proceedings

Problem difficulty estimates play important roles in a wide variety of educational systems, including determining the sequence of problems presented to students and the interpretation of the resulting responses. The accuracy of these metrics is therefore important, as it can determine the relevance of an educational experience. For systems that record large quantities of raw response data, those observations can be used to test the predictive accuracy of an existing difficulty metric. In this paper, we examine how well one rigorously developed, but potentially outdated, difficulty scale for American-English spelling fits the data collected from seventeen thousand students using our SpellBEE peer-tutoring system. We then attempt to construct alternate metrics that use the collected data to achieve a better fit. The domain-independent techniques presented here are applicable when the matrix of available student-response data is sparsely populated or non-randomly sampled. We find that while the original metric fits the data relatively well, the data-driven metrics provide an approximately 10% improvement in predictive accuracy. Using these techniques, a difficulty metric can be periodically or continuously recalibrated to ensure the relevance of the educational experience for the student.
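To make the evaluation idea concrete: one way to score how well a fixed difficulty ranking predicts observed outcomes when the student-by-problem matrix is sparse and non-randomly sampled is to compare problems pairwise within each student, so that no assumption is needed about which problems a student happened to attempt. The Python sketch below is a hypothetical illustration of that approach (the function and variable names are my own, and the paper's actual method may differ): for each student, every (solved, missed) pair of attempted problems counts as one prediction, which the ranking gets right when it ranks the solved problem as easier.

```python
from collections import defaultdict

def pairwise_fit(responses, difficulty):
    """Score how well a difficulty ranking fits sparse response data.

    responses: iterable of (student, problem, correct) observations.
    difficulty: dict mapping each problem to a rank (lower = easier).

    For each student, every pair of one solved and one missed problem
    is a prediction; the ranking is credited when it ranks the solved
    problem as easier. Comparing only within a student sidesteps the
    sparse, non-random sampling of the student-problem matrix.
    Repeated attempts at a problem keep only the latest outcome.
    """
    by_student = defaultdict(dict)
    for student, problem, correct in responses:
        by_student[student][problem] = correct

    agree = total = 0
    for attempts in by_student.values():
        solved = [p for p, c in attempts.items() if c]
        missed = [p for p, c in attempts.items() if not c]
        for s in solved:
            for m in missed:
                if difficulty[s] == difficulty[m]:
                    continue  # tied ranks carry no prediction
                total += 1
                if difficulty[s] < difficulty[m]:
                    agree += 1
    return agree / total if total else float("nan")

# Toy usage with invented data: both students' outcomes agree
# with the ranking, so the fit score is 1.0.
responses = [
    ("alice", "cat", True), ("alice", "rhythm", False),
    ("bob", "cat", True), ("bob", "necessary", False),
]
ranking = {"cat": 1, "necessary": 5, "rhythm": 9}
print(pairwise_fit(responses, ranking))  # -> 1.0
```

A recalibration loop of the kind the abstract describes could then propose adjusted rankings and keep whichever scores highest on held-out responses, though the specific data-driven metrics are detailed in the paper itself.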