Procedures are not the enemy

So yesterday I took the trip to Manchester to see what the La Salle Education Maths CPD conferences were all about. It was astonishing to see the number of Maths teachers freely giving up a Saturday to invest in their own CPD. The range of sessions that delegates could choose from was incredibly varied and came from the perspectives of teachers at different points in their careers. What a great model.

A few weeks back I decided to submit a workshop proposal to share some of my insights from a 20-year period in Maths education. I was delighted by the response, with over 100 delegates signing up for my session. I was slightly daunted to be following Jo Morgan (who delivered an excellent session on indices in depth) and the best-selling author Simon Singh, but this was a great opportunity for me to give something back to a community of maths tweachers whose ideas have helped me support the large network of schools that I work with within AET.

My session was entitled 'Procedures are not the enemy' and its purpose was to address the idea that efficient algorithms in Maths are nothing to be ashamed of. Once any conceptual journey has been completed, students should be encouraged to develop quick, efficient methods so that they can focus their attention on bigger ideas or the problem-solving aspect of a question.

The workshop was well received and I had lots of interesting discussions with individuals during session intervals. Twitter has already started doing its thing and some great conversations are happening around the ideas I presented. Lattice multiplication versus long multiplication in particular seems to be causing a stir. I presented a 3-digit by 3-digit multiplication calculation and invited delegates to generate a solution using both methods. Looking objectively at computational steps, the lattice method was more efficient (if you can get the students to draw the grid in the first place). More importantly, however, the lattice method can easily be connected back to the box method, and the interchange in computational steps (multiply, exchange, add) is far less problematic.

All of the ideas that we explored in the session can be downloaded here, and as promised I have also shared the amazing calculation policy here and the proportion revision grids here that the AET Maths Team have kindly provided.

It is really important that we remain open to different methods in maths and question everything. Over the coming year, I plan to provide lots more examples that will hopefully keep these types of discussions going.




We have added 9-1 Grade Boundaries

One of the cool features (in our opinion) of methodmaths is its ability to build confidence around the quantity of correct work required to achieve each grade in the GCSE Maths qualification. Despite loads of requests, we were reluctant to predict what the grade boundaries would be for the first set of live exams, so our gradeometers were only reporting a percentage.

Now that the June 2017 test series is over, we have something to work with. We have applied the published grade boundaries to all of our practice test materials and added some new colours to the dashboard to cover the new grades. Visually it paints quite an interesting picture, particularly when you compare it to the legacy content. We must stress that we anticipate grade boundaries will be higher next year as schools become more familiar with the new specification. That said, we hope this is useful to teachers and students.

GCSE Foundation Dashboard

This visual suggests you need about 6 correct questions to secure a grade 1, 9 questions for a grade 2, 13 questions for a grade 3, 16 questions for a grade 4 and 20 questions for a grade 5.

GCSE Higher Tier Dashboard

This visual suggests that 3 correct questions will secure a grade 3, 5 questions a grade 4, 7 questions a grade 5, 10 questions a grade 6, 13 questions a grade 7, 16 questions a grade 8 and 19 questions a grade 9.


How hard do year 11 really work?

Every year I look back at my own experience as a year 11 Maths teacher, and more recently at the teams of year 11 teachers that I currently support, and my observations always bring me back to the same question. Teachers are breaking their backs to ensure that their students reach their potential, and I am constantly amazed by the commitment and passion that I see. But it has to be a two-way process. How hard are these kids really working at a collective level? How much practice are they really doing outside of lessons and revision sessions? How do HODs keep an overview of this? This was the point of creating methodmaths in the first place. I wanted my students to have access to a resource that was based on real exam questions, rich in interaction, that would keep them motivated and hold every student to account.

So it’s been 5 years now since we took methodmaths online, and we are very proud to have supported over 1000 Edexcel schools with their GCSE revision. We’ve been tinkering away at this for a long time and feel we have perfected the art of creating dynamic, interactive exam content that engages learners. In our humble opinion, no one else comes close. Today, we finally finished and released every 9-1 test paper Pearson have produced. This now gives students access to 30 self-marking tests in addition to 32 legacy papers and the 150 topic workbooks, with content dating back to 2003!

Once we get some real grade boundaries in the summer we will update the dashboard and the colour coding (we are thinking of something sparkly for grade 9!). The topic filter is proving very popular at the moment because students and teachers can explore a concept across specifications without losing sight of how it sits within the bigger picture. Here is an example for Histograms.

If you are not already on board with us, it might be too late for your current year 11. But we always need to keep an eye on year 10, and with the long summer holiday almost upon us this could be the perfect tool to keep their minds ticking over. The menu of test papers is set to grow again in September and January, and we are also working on a topic-specific test menu to support personalised learning.

Measuring ‘Progress’ in year 11

So this is my first proper blog. I thought it was important to share some of the work we have done to support HODs in this difficult period. As well as running Methodmaths, I spend a significant amount of my time supporting schools in Essex and within a large multi academy trust (AET). Like many of you, most of these schools completed the Edexcel secure mocks in December. We looked at these results collectively and are now looking closely at the Pearson report which was released this week.

The difficulty with any data analysis is how to act upon it. In my view, the two key purposes of conducting a mock exam are to a) identify specific gaps in knowledge and b) identify the key students that need additional intervention. This is where we took a slightly different approach to most.

It seems that the new headline measure for schools (Progress 8) is slowly trickling down to department level, but not quickly enough. The number of students who achieve a grade 4/5 or above in this summer's exam is of course important, but there needs to be a culture shift away from just the traditional borderline students, with greater emphasis on the potential of every student. Mathematics is double weighted and contributes 20% to the overall P8 score. So how is it calculated?

Apologies if you already know this, but 3 levels of progress is a dead and buried measure. What happens now is that students are placed into one of 34 prior attainment groups (PAGs) based upon their combined English and Maths Key Stage 2 scores. Once students have completed their exams in the summer, they will be awarded a point score ranging from 1-9 in line with the new grades. An average is taken from all students in the same PAG, generating 34 'national estimates'. At this point a progress score for a student can be calculated based on whether they are above or below the national estimate for their PAG. You can find some technical guidance on P8 here and the national estimates for Maths in 2016 here.
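For readers who like to see the arithmetic, the subject-level calculation described above can be sketched in a few lines. Note that the PAG labels and national estimates below are invented for illustration only; the real estimates are the ones published by the DfE and linked above.

```python
# Minimal sketch of the subject-level Progress 8 idea described above.
# The PAG labels and national estimates are invented numbers, not DfE data.

def progress_score(point_score, pag, national_estimates):
    """Progress = pupil's 1-9 point score minus the national estimate for their PAG."""
    return point_score - national_estimates[pag]

# Hypothetical national estimates for three prior attainment groups
national_estimates = {"PAG 20": 4.1, "PAG 21": 4.4, "PAG 22": 4.7}

# A pupil in PAG 21 who achieves a grade 5 sits above the estimate for that group
print(round(progress_score(5, "PAG 21", national_estimates), 2))  # 0.6
```

A positive score means the student is above the national estimate for their PAG; averaging these scores across a cohort (with Maths double weighted) is what feeds the school-level P8 figure.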

So how could we simulate this approach with regard to the recent mock? To generate a point score for each student we would still need grades, and here we were stuck with guesswork and uncertainty. After some discussion we decided to use a much simpler but very effective approach: by looking purely at raw scores we could completely bypass grades and still get a strong sense of whether a student was on track relative to their peers.

We collected three key pieces of information for each student: their KS2 average fine grade, their raw score out of 240 for the higher or foundation tier, and their raw score out of 75 from the crossover questions. We also collected Maths progress and attainment scores from 2016 RAISEonline reports to get a sense of how the sample compared to the national population last year.

So this is what we discovered. Due to data protection, I can only share the summary data from a mix of the higher-performing Maths departments that I work with. I can tell you that collectively they were slightly above the national average for both progress and attainment in 2016. The fine grade PAGs have been banded into broader groups due to small sample sizes. In this sample there were over 1000 students:

Due to some very small sub-groups, a few of the averages looked out of place. The 5+ students scored just over 50% on the higher tier. The 4 Mid students scored just over 16% on higher and 35% on foundation. In general, you can see the sorts of scores you might have expected from your own students in December compared to other students with the same KS2 starting points. The student counts also give you an idea of the tiering decisions made.

To support the analysis within each individual school we compared these averages against each student to generate a raw mark residual for every learner. This enabled us to quickly identify any student who was way off track from a raw mark perspective. This was provided in a spreadsheet format with filters so that the data could be easily interrogated in different ways.

Here you can see that the first student scored only 7 marks on the higher tier test, while the average for his PAG was 70 marks, so he is clearly a cause for concern (-63 residual). The second student, on the other hand, scored 72 marks on higher against a PAG average of 44, so is doing better than other students with the same starting point (+28 residual).
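The residual itself is deliberately simple arithmetic, which is the whole appeal. A quick sketch, using the two worked examples above (the function name is ours, not part of any spreadsheet):

```python
# Sketch of the raw-mark residual approach described above. The two students
# and PAG averages are the worked examples from the text.

def raw_residual(student_mark, pag_average):
    """Positive residual: ahead of peers with the same KS2 starting point."""
    return student_mark - pag_average

print(raw_residual(7, 70))   # -63: well off track on the higher tier
print(raw_residual(72, 44))  # 28: ahead of the PAG average
```

In the spreadsheet this is one extra column per student, which is why the filters make it so quick to surface the students who are furthest off track.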

The beauty of this model was that we could interpret the numbers without fear that the grade boundaries were incorrect. It is a real-time measure, since everyone took the same exam at roughly the same time, and tangible targets could be set in terms of the number of topics needed to get back on track. The sample sizes also gave us an indication of tiering decisions relative to PAGs.

So what next? Pearson will not be summarising the next set of mock papers, but we have offered to support any schools who are interested in this approach. The next batch of mock papers will hopefully be released in the week beginning 20th Feb. All of the academies I work with will be completing the next round of secure mock papers at the end of February / early March. We will be collecting results from schools by 18th March and will return the outcomes to all schools by 27th March, following a data analysis meeting with Graham Cummings from Edexcel. It's a tight turnaround, but this will give everyone enough time to make final adjustments to tiering decisions if necessary. Our sample will have at least 5,000 students in it. We are also inviting a number of large MATs to share their data with us. We want to collect as many data sets as possible, so if you would like to join our sample, complete the attached spreadsheet here by 18th March and send it back to us at . If you can't make this deadline we will still be collating the information and updating the data averages until 17th April. We hope you can join in.

Many thanks to the central Maths team @AETmaths / for allowing us to share their methodology.