Do you work in a key worker/public sector role? Want to share your experiences anonymously? Please contact our editor, Tom Parkin, at email@example.com.
In an anonymous submission to Liberal Base, a secondary maths teacher explains their Covid experience of teaching and preparing predicted grades for their GCSE and A-Level students.
The chaos, frustration and anger surrounding the release of exam results in Scotland last week will be nothing in comparison to what is to come in the next fortnight, with GCSE and A-level results due for release across the rest of the UK. Prepare for some stark headlines on news programmes and for the front of newspapers to depict students in floods of tears. All this because the government trusts statistical modelling, and not the judgment of teachers, to decide the future of a whole generation of students.
Thanks to Covid, public examinations did not take place this summer. Many outside the school system won’t know the process teaching staff and school leadership have had to go through since then. What most people don’t realise (at least at the time of writing) is that much of this work served no real purpose at all: the future was already pre-determined by a computer model created by Ofqual and signed off by the Department for Education. The mechanics of this model will not be released until at least the day the results are announced, and even then we may not fully understand how it operates. Ofqual’s reasoning for withholding the details is that “early publication of this information could also lead to some students unfairly finding out their results early, or cause unhelpful anxiety if they are incorrectly calculated”. There is a word I could use to describe this reasoning, but I won’t use it on this blog.
In June, teachers made individual assessments for their GCSE and A-level students. Understandably, teaching experience varies, and there would have been unease that some teachers and some schools would inflate their predictions. As a result, internal and external processes had to be in place to ensure the robustness of the examination system. As a teacher of many years in a core subject at an inner-city secondary school, I’ve become fairly experienced at predicting what my students will achieve in their examinations. I use mock exam and past paper performance, as well as my understanding of the students in front of me, trying to factor in what I know happens between sitting a mock (which doesn’t, or at least didn’t, matter) and sitting the real thing in the summer. I have a good track record of preparing the students I teach for their final exams and can usually predict, to a good degree of accuracy, what they will achieve. Using the same process I’ve always used, I made my initial predictions of what I believed my students would have achieved. Not only this, I then had to rank my students in order of how likely they would be to achieve the grade I had predicted. The whole cohort for a particular qualification is then ranked within each grade: at GCSE, for example, all those predicted to get a level 5 in that subject at that school are ranked from most likely to achieve it to least likely, with the same process repeated for every other grade available in that qualification.
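The two-stage submission described above, a predicted grade per student followed by a rank order within each grade, can be sketched as follows. This is purely illustrative: the student names, grades and ranks are invented, and this is not Ofqual's actual data format.

```python
# Illustrative sketch of the grade-then-rank submission process described
# above. All names, grades and ranks are invented examples.
from collections import defaultdict

# (student, predicted GCSE grade, teacher's rank: 1 = most likely to achieve it)
predictions = [
    ("Student A", 5, 2),
    ("Student B", 5, 1),
    ("Student C", 4, 1),
    ("Student D", 5, 3),
]

def rank_within_grades(preds):
    """Group students by predicted grade, ordered most to least likely."""
    by_grade = defaultdict(list)
    for name, grade, rank in preds:
        by_grade[grade].append((rank, name))
    # Sort each grade's students by the teacher-assigned rank.
    return {g: [name for _, name in sorted(rows)] for g, rows in by_grade.items()}

ranking = rank_within_grades(predictions)
print(ranking[5])  # ['Student B', 'Student A', 'Student D']
```

The point of the sketch is that the rank order carries information beyond the grades themselves, which, as it turned out, is exactly the part the model kept.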
Days after doing this, I was asked to look again (it happened more than once) at what I had predicted. Our predictions were too positive, and we had to reduce the number of students predicted to get certain grades; this meant revising my predictions downwards. The exam board’s model uses the last three years’ results of each school to indicate what the grade distribution should look like for that particular school in that particular subject. The leadership of the school (and this would be no different in any other school) were worried that the teachers’ predictions would not be accepted by the exam board’s model, and that all our students’ grades would then be downgraded. I had to revisit my predictions a number of times before they were deemed acceptable, that is, before they no longer implied too great an improvement on our past performance.
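The pressure described above comes down to a simple comparison: this year's proposed grade distribution against the school's average distribution over the previous three years. A rough sketch, with entirely invented numbers and no claim to match Ofqual's actual calculation:

```python
# Illustrative only: invented cohorts, and a simplification of whatever
# comparison the real model performs.

def historical_share(last_three_years, grade):
    """Mean fraction of the cohort awarded `grade` over the past three years."""
    shares = [year.count(grade) / len(year) for year in last_three_years]
    return sum(shares) / len(shares)

# Past three cohorts' grades in one subject at one school (invented):
history = [
    [7, 6, 5, 5, 4, 4],
    [6, 6, 5, 4, 4, 3],
    [7, 5, 5, 4, 3, 3],
]
proposed = [7, 7, 6, 5, 5, 4]  # this year's teacher predictions (invented)

past = historical_share(history, 7)            # roughly 11% got a grade 7
this_year = proposed.count(7) / len(proposed)  # about 33% predicted a 7
print(this_year > past)  # True: predictions like these get pushed back down
```

With cohorts this small, one or two strong students swing the percentages dramatically, which is precisely why anchoring a school to three years of history is so crude.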
Using the last three years as the basis for a model has many pitfalls. For English and Mathematics, this is only the third run-through of the tougher GCSE specifications, and for all other subjects, the second; a similar picture of specification change applies at A-level. Early in the life of a specification, before schools become familiar with it, you would normally expect much greater variability in performance from individual schools in individual subjects than you would see later on. This will not be considered by the model. Schools also see significant swings (up and down) in performance for many other reasons, such as changes in leadership or different cohorts. These will not be considered by the model either. Schools in more disadvantaged areas, where historical results are usually weaker and often more variable, will be hit the hardest.
Since the fiasco in Scotland, Ofqual has said that it will allow schools (not individuals) to appeal if they can “evidence grades are lower than expected because previous cohorts are not sufficiently representative of this year’s students”. Believe me, this is a mere fig leaf. The evidence base required to overturn the model’s results will be extremely demanding, but it at least allows the government to say that an appeals process is in place for schools (not students) who fear they’ve been disadvantaged, and to kick that particular can down the road to a time when tempers are not as frayed and memories not as sharp.
The galling fact for me as a teacher, however, is that the work I did to marry various sources of evidence in order to make what I believe to be valid predictions about my students was a total waste of time. The TES (Times Educational Supplement) has discovered that where there is an entry of more than 15 students for a particular qualification, teacher predictions are not being used at all; only the ranking is. In short, the results have already been pre-determined by the Ofqual model.
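The rule TES reported amounts to a hard cut-off on cohort size. A hedged sketch of that logic, where the function name and the exact behaviour at the boundary are my assumptions for illustration, not Ofqual's published specification:

```python
# Sketch of the reported rule: for entries above 15 students, teacher
# predictions are discarded and only the rank order feeds the model.
# Function name and boundary behaviour are illustrative assumptions.

def inputs_used_by_model(cohort_size, threshold=15):
    """Which school-submitted inputs the model reportedly retains."""
    if cohort_size > threshold:
        return {"rank_order"}                # predicted grades ignored
    return {"rank_order", "teacher_grades"}  # small entries keep both

print(inputs_used_by_model(30))          # {'rank_order'}
print(sorted(inputs_used_by_model(10)))  # ['rank_order', 'teacher_grades']
```

For a core subject in a typical secondary school, almost every cohort is well above 15 students, so in practice the predicted grades we laboured over were discarded wholesale.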
Everyone understands the particular difficulties and unique challenges this year has presented in trying to give GCSE and A-level students the results they deserve while ensuring the exam system remains as fair and rigorous as possible. But for a school to be tied so tightly to its historical performance is a total disgrace. The government and Ofqual, in their efforts to avoid headlines of grade inflation, will let down the current generation of young people and will further embed the disadvantage many areas already face. At the same time, they have spat in the face of the professionalism of the thousands of teachers who have worked so hard to deliver fair results for the students they have taught this year.