Measuring ‘Progress’ in Year 11

So this is my first proper blog post. I thought it was important to share some of the work we have done to support HODs in this difficult period. As well as running Methodmaths, I spend a significant amount of my time supporting schools in Essex and within a large multi-academy trust (AET). Like many of yours, most of these schools completed the Edexcel secure mocks in December. We looked at these results collectively and are now looking closely at the Pearson report which was released this week.

The difficulty with any data analysis is how to act upon it. In my view, the two key purposes of conducting a mock exam are a) to identify specific gaps in knowledge and b) to identify the key students who need additional intervention. This is where we took a slightly different approach to most.

It seems that the new headline measure for schools (Progress 8) is slowly trickling down to department level, but not quickly enough. The number of students who achieve a grade 4/5 or above in this summer's exam is of course important, but there needs to be a culture shift away from focusing only on the traditional borderline students and towards the potential of every student. Mathematics is double weighted and contributes 20% to the overall P8 score. So how is it calculated?

Apologies if you already know this, but 3 levels of progress is a dead and buried measure. What happens now is that students are placed into one of 34 prior attainment groups (PAGs) based upon their combined English and Maths Key Stage 2 scores. Once students have completed their exams in the summer they will be awarded a point score from 1 to 9 in line with the new grades. An average is taken from all students in the same PAG and 34 ‘national estimates’ are generated. At this point a progress score for each student can be calculated: the difference between their point score and the national estimate for their PAG. You can find some technical guidance on P8 here and the national estimates for Maths in 2016 here.
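For anyone who likes to see the mechanics spelled out, here is a minimal sketch of that calculation in Python. The PAG labels and point scores below are invented purely for illustration; the real national estimates come from the full national cohort.

```python
from collections import defaultdict

# Hypothetical (PAG, point score) pairs standing in for a national cohort
national_results = [
    ("PAG 12", 4), ("PAG 12", 5), ("PAG 12", 4),
    ("PAG 20", 6), ("PAG 20", 7), ("PAG 20", 7),
]

# Step 1: average the point scores within each PAG to get the 'national estimates'
by_pag = defaultdict(list)
for pag, points in national_results:
    by_pag[pag].append(points)
national_estimates = {pag: sum(p) / len(p) for pag, p in by_pag.items()}

# Step 2: a student's progress score is their own points minus the estimate for their PAG
def progress_score(pag, points):
    return points - national_estimates[pag]

print(progress_score("PAG 12", 6))   # above the estimate -> positive score
print(progress_score("PAG 20", 5))   # below the estimate -> negative score
```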

So how could we simulate this approach with regard to the recent mock? To generate a point score for each student we still needed grades, and here we were stuck with guesswork and uncertainty. After some discussion we decided to use a much simpler but very effective approach. By looking purely at raw scores we could completely bypass grades and still get a strong sense of whether a student was on track relative to their peers.

We collected three key pieces of information for each student: their KS2 average fine grade, their raw score out of 240 for the higher or foundation tier, and their raw score out of 75 on the crossover questions. We also collected Maths progress and attainment scores from 2016 RAISEonline reports to get a sense of how the sample compared to the national population last year.

So this is what we discovered. Due to data protection, I can only share with you the summary data from a mix of the higher-performing Maths departments that I work with. I can tell you that collectively they were slightly above the national average for both progress and attainment in 2016. The fine grade PAGs have been banded into broader groups due to smaller sample sizes. In this sample there were over 1000 students:

Due to some very small subgroups, a few of the averages looked out of place. The 5+ students scored just over 50% on the higher tier, while the 4 Mid students scored just over 16% on higher and 35% on foundation. In general, you can see the sorts of scores you might have expected from your own students in December compared to other students with the same KS2 starting points. The student counts also give you an idea of the tiering decisions made.
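As a rough idea of how banded averages like these could be produced from a pooled spreadsheet, here is a small pandas sketch. The band boundaries, marks and column names are all made up for illustration; the real figures came from the pooled sample of over 1000 students.

```python
import pandas as pd

# A handful of made-up students standing in for the pooled sample
sample = pd.DataFrame({
    "ks2_fine": [3.8, 4.2, 4.5, 4.8, 5.1, 5.6],
    "tier":     ["Foundation", "Foundation", "Higher", "Higher", "Higher", "Higher"],
    "raw_mark": [80, 95, 40, 70, 118, 150],   # out of 240
})

# Band the fine grades into broader groups so small PAGs do not distort the averages
# (the band edges here are purely illustrative)
sample["band"] = pd.cut(
    sample["ks2_fine"],
    bins=[0, 4.0, 4.33, 4.67, 5.0, 6.0],
    labels=["<4", "4 Low", "4 Mid", "4 High", "5+"],
)

# Average raw mark and student count for each band and tier
summary = sample.groupby(["band", "tier"], observed=True)["raw_mark"].agg(["mean", "count"])
print(summary)
```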

To support the analysis within each individual school we then compared each student's raw score against these averages to generate a raw mark residual for every learner. This enabled us to quickly identify any student who was way off track from a raw mark perspective. This was provided in a spreadsheet format with filters so that the data could be easily interrogated in different ways.
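The residual step itself is simple. Here is a sketch of the mechanics, assuming the pooled band averages have already been worked out; the band names are placeholders and the two students' marks are the examples described below.

```python
import pandas as pd

# Pooled average raw marks for each band on the higher tier - illustrative values
band_avg = pd.DataFrame({
    "band":     ["Band X", "Band Y"],
    "band_avg": [70, 44],
})

# Two students from one school, with their higher tier raw marks out of 240
school = pd.DataFrame({
    "name":     ["Student 1", "Student 2"],
    "band":     ["Band X", "Band Y"],
    "raw_mark": [7, 72],
})

# Attach the pooled average for each student's band, then take the difference
school = school.merge(band_avg, on="band", how="left")
school["residual"] = school["raw_mark"] - school["band_avg"]

# Large negative residuals flag students who are well off track for their starting point
print(school.sort_values("residual"))
```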

Here you can see the first student only scored 7 marks on the higher tier test and the average for his PAG was 70 marks, so clearly a cause for concern (-63 residual). The second student, on the other hand, scored 72 marks on higher against a PAG average of 44, so is doing better than other students with the same starting point (+28 residual).

The beauty of this model was that we could interpret the numbers without fear that the grade boundaries were incorrect. It is a real-time measure, since everyone took the same exam at roughly the same time, and tangible targets could be set in terms of the number of topics needed to get back on track. The sample sizes also gave us an indication of tiering decisions relative to PAGs.

So what next? Pearson will not be summarising the next set of mock papers, but we have offered to support any schools who are interested in this approach. The next batch of mock papers will hopefully be released the week beginning 20th Feb. All of the academies I work with will be completing the next round of secure mock papers at the end of February / early March. We will be collecting results from schools by 18th March and will return the outcomes to all schools by 27th March, following a data analysis meeting with Graham Cummings from Edexcel. It's a tight turnaround, but this will then give everyone enough time to make final adjustments to tiering decisions if necessary. Our sample will have at least 5,000 students in it. We are also inviting a number of large MATs to share their data with us. We want to collect as many data sets as possible, so if you would like to join our sample, complete the attached spreadsheet here by 18th March and send it back to us at methodmaths@live.co.uk. If you can't make this deadline we will still be collating the information and updating the data averages until the end of 17th April. We hope you can join in.

Many thanks to the central Maths team @AETmaths / www.AETmathematics.org for allowing us to share their methodology.