Missing in Measurement: Why Identifying Learning in Integrated Domains Is So Hard
Published in Journal of Science Education and Technology, 2020
Recommended citation: Bortz, W. W., Gautam, A., Tatar, D., & Lipscomb, K. (2020). Missing in Measurement: Why Identifying Learning in Integrated Domains Is So Hard. Journal of Science Education and Technology, 29(1), 121-136. http://aakash.xyz/files/jost2019.pdf
Integrating computational thinking (CT) and science education is complex, and assessing the resulting learning gains is even more so. Arguments that assessment should match the learning lead to a performance-oriented approach to assessment, using tasks that mirror the integrated instruction. This approach reaps benefits but also poses challenges. Integrated CT is a new approach to learning. Progress is being made toward understanding what it means to operate successfully in this context, but consensus is neither general nor time-tested. Progress is also being made toward developing methods for assessing CT. Despite the benefits of matching assessment with pedagogy, there may be intrinsic losses. One problem is that interactions between the two domains may invalidate the results, either because gains in one domain may be easier to measure at certain times than gains in the other, or because interactions between the domains may cause measurement interference. Our examination draws on both theory and existing practice, particularly from our own work integrating CT and secondary science. We present a mixed-methods analysis of student assessment results and consider potential issues with moving too quickly toward relying on a rubric-based approach to evaluating this student learning. Centrally, we emphasize the importance of assessment approaches that reflect one of the most important affordances of computational environments: the expression of multiple ways of knowing and doing.