By Dr Nina Hood
Measurement has been on my mind of late. Yesterday we hosted a seminar by Dr Aaron Wilson on how schools can use data and evidence to improve practice (more to come on this in another article); we are currently working with our bright spots teams to establish how they will be evaluating – measuring – the impact of their new ways of working; and I recently had coffee with an experienced educator working in the school improvement space, who described the challenge she faces in helping schools make better use of data and evidence to understand how their students are progressing.
Measurement, and its associated terms data and evidence, currently hold quite a controversial place in New Zealand education. This in part reflects their association with high stakes accountability measures and the politicisation of education measurement (I have explored this in an earlier article). However, measurement in education is not only for accountability. Measurement is also critical for improvement. As Tony Bryk from the Carnegie Foundation for the Advancement of Teaching suggests, “we cannot improve at scale what we cannot measure”. If we want our schools to be continually improving their offerings to students, measurement must be a central component of their work. Without some form of measurement it is very difficult to accurately assess whether particular actions or practices have led to change, or to determine next steps and plan future actions.
This requires a new mindset in educators (and accompanying knowledge and expertise) to recognise the value of measurement for improvement, and new systems to support this work. The practical application of measurement, of the type needed in schools, is particularly fraught. There is a gap in education between what we can easily (and rigorously) measure – things such as progress in mathematics, reading and writing – and the full range of valued outcomes that schools, and our education system more generally, aspire to develop in students. At times, education has fallen into the trap that educational philosopher Gert Biesta describes in terms of normative validity. That is, rather than measuring what we value, ‘we are just measuring what we can easily measure and thus end up valuing what we (can) measure’.
While education is slowly getting better at developing more robust ways of measuring a wider range of outcomes (work by NZCER in this country, and US organisation Panorama’s work on measuring socio-emotional learning, are notable here), we are still limited by a lack of established measures.
This is something The Education Hub is currently grappling with in our work with our bright spots teams. For example, we are engaging with our team from Mt Aspiring College, who are developing a micro-credentialing programme, around how to measure student agency (read more about this work here). And our team at Burnside Primary, who are developing an oracy curriculum, are developing a way of measuring students’ oracy development, which draws on the work of Voice 21 in the UK and of a group of scholars from the University of Cambridge, and links it to the New Zealand Curriculum.
There are a number of parallels between the work being done with our bright spots teams and the concept of practical measurement developed by Tony Bryk and colleagues. Practical measurement is measurement designed specifically to support the iterative improvement of practice. It enables educators to understand whether a change they have made is actually leading to improvement and, critically, to see variability in that improvement – that is, whether the change is working better for some students than for others. Practical measurement is designed to be rigorous but also easily undertaken within a school or lesson context, allowing for regular measurement without substantially adding to the workload of teachers.
While practical measurement does present an intriguing way forward for measurement in education, it still faces several hurdles (as we are experiencing first hand with our bright spots teams). The first is the development of the practical measures themselves. The construction of robust measures requires researchers and teachers to work together to develop and test them – a time-consuming and demanding, but also incredibly rewarding, process. The second is that applying these measures to collect data is only the first step. For ongoing improvement, teachers also need to analyse the data and then use this analysis to make changes to their practice. To engage deeply in this process of analysis to spur continuous improvement, teachers require not only time but also external support – both of which are in short supply in our schools.
If the continual strengthening and improvement of our schools is something that we seek, we need to think carefully about how to embed measurement more consistently into the work of schools. This will likely be a multipronged process, requiring not only the upskilling of teachers and school leaders but also the provision of resources, in the form of time and external expertise, to schools to facilitate this work.