By Professor Mark Brown
This brief opinion paper raises a number of conceptual and methodological issues associated with attempts to evaluate institutional initiatives in the area of learning analytics. It frames the discussion around three recent works that invite a more critical reading of learning analytics research and the potential of interventions and data-driven decisions for successful, sustainable and scalable impact on an institution-wide basis.
Firstly, the emerging field of learning analytics would benefit from more critical engagement with the points raised by Paul Kirschner (2016) in his keynote at the 6th International Conference on Learning Analytics and Knowledge (LAK16). More specifically, Kirschner warns that naïve understandings of learning and narrow conceptions of learning analytics may do considerable harm.
More recently, Kirschner and Neelen (2017) argue that many so-called learning analytics initiatives: (i) view education as a simple process that is easily modelled; (ii) base decisions and interventions on abundant data but weak theory; (iii) base decisions and interventions on wrong or even invalid variables; (iv) make interpretations and draw conclusions that confuse correlation with causation; and (v) produce unintended and unwanted effects that pigeonhole and stereotype learners, which may be counterproductive to enhancing student engagement and learner success. Arguably, to date there has been no serious or comprehensive response to these justifiable concerns.