The National Assessment Program – Literacy and Numeracy (NAPLAN) has, since its inception, been the bane of teachers’ existence and a problematic measure of student achievement. Nevertheless, since 2008 it has been a fact of life for Australian schools, with results published to the public and used internally to measure school performance. This was a hugely problematic idea in the first place, as students at both primary and secondary level do not start with the same literacy or numeracy. The problem is especially acute for secondary schools, where the year 7 results are tied entirely to the primary setting because testing occurs in March. Even so, schools and teachers have had to develop methods of using the data for leadership and teaching decisions. It now seems that these efforts, undermined from the start, are subject to a ridiculous level of visual rebranding and propaganda designed to hide the extent of the problems within the education system.
The old bands
The primary realignment in 2023 has been to remove the previous band system, which placed students’ scores into a particular band or level. This was always confusing for casual observers, as the bands, distinguished by numbers, did not actually correlate to year-level standards. Within education this was never really a problem, since the complication was understood; a lot of parents, however, never understood it, despite the lengthy explanations included on NAPLAN reports. The easiest means of communicating student progress and level would seem to be measuring them in the same way as the curriculum, which is based on year level. Still, we got used to the bands, and the separation of results into top, middle and bottom did provide some clarity about students’ position in comparison with others and with accepted standards. Not ideal, but workable.
Progression levels
In contrast, the idea of implementing a system of progression levels with a clear explanation of students’ capabilities at each point is a better educational principle. In this way, results can theoretically be used more effectively to differentiate and target students’ learning needs. One wonders why this was not implemented earlier, although NAPLAN’s use as an educational tool has always been a secondary consideration. The change does create issues for schools, as the two systems of measurement are not compatible or linked in any real way. For a school leader attempting to measure the potential impact of new programs, track student growth, or assess the accuracy of other data sets, this creates problems. These can be overcome by looking at the scaled scores, which have remained the same; a comparison with year-level averages still provides valuable insights if the time is available. The change does, however, make a quick assessment difficult.
Manipulating the figures

In general, the change to progression levels is worth it. However, the way it has been implemented is not only flawed enough to affect the usability of the 2023 data for analysis, but an obvious attempt at data manipulation. This is apparent from a comparison of the graphs found on Panorama (one of the services used by schools to access data): the decision to use the same colour scheme (black, grey and teal) for the new progression levels immediately invites comparison with the old bands. Schools will find the percentage of students in the black (bottom group) is significantly smaller. This gives the false impression that students have improved and school initiatives have worked. The truth is that students who would previously have sat in the bottom two bands are now in the developing (grey) category. Similarly, at the top end, the decision to no longer compare the top two bands with similar and network schools, but instead to include two different progression levels, also serves to inflate the numbers. Therefore, despite the bands and the new levels clearly being two different measures, the presentation of the data draws a comparison that appears favourable to schools and the education system.

Drilling down further into the data, the level of manipulation becomes even more apparent: while the bands and levels aren’t comparable, the scaled scores remain similar, and a comparison of scores from one year to the next shows the extent of the fraud. A student could have improved very little, remaining in the range of students significantly below their own age level, yet still move between the new categories (from black into grey). For instance, a student achieving a result comparable to a grade 4 average could sit in the black during year 7, then a couple of years later, having moved up only to a grade 5 average, appear in the grey section. Apparently, things aren’t in such dire straits after all.
Euphemisms
Even the names for these progression levels demonstrate a clear desire for political spin. While “additional support” is a reasonable label for students who are well below expected capabilities, “developing” seems generous for students who could still be several years behind. The most ridiculous is the “strong” category, which is centred on students performing at the expected level or slightly above. Personally, it doesn’t sit right to call students strong in a particular area when they are simply achieving at the average or standard level. Not only is this a fraud in terms of data representation, it also robs those who are genuinely above the level of their achievement, while potentially giving others unfair expectations of their abilities. As an English teacher, I recognise how language can be used to manipulate, and the names of these new levels are a clear example of not-so-subtle spin.

The Good and the Bad
As I said, the new progression levels themselves, with their clear indication of student ability, are a positive step that teachers can use. However, the implementation seems not only a haphazard attempt to fix a flawed system but a deliberate effort to misrepresent data. Hopefully, people aren’t being fooled.
Read more of my thoughts on education and why people are leaving the profession.