Explanation of a Senior MATHS RESULTS sheet – PART B

This kind of profile sheet was rolled out in many state schools as recently as 2010 and given to maths students in all high school grades. Even more recently, private schools have also been pressured to mark this way in all subjects, because moderation panels demand results that match the QSA requirements – otherwise students’ grades will not be taken seriously. For example, more physics tests are now being ‘marked’ this way. In short, this is a new assessment model still in development.

Your children in the higher grades now get no numbers and no percentage, just this kind of profile sheet stapled to the back of their paper.

It is alarming that a system still in development – one described as “a radically different approach” marked by “much confusion” (Matters & Wyatt-Smith, 2008) compared with other states and countries – is now possibly expanding in an adapted form (the LASD sheets) into the entire school system, Years P-10, to overlay the Australian Curriculum… just when teachers want to get on with teaching and testing the Australian Curriculum, which already has a succinct list of achievement standards that could be marked directly.

For the purpose of discussing the difference between the time-honoured adding up of marks on test papers and this new, questionable letter-standard method, a conversion back from letters into numbers has been done, and a total of 82 possible marks is written on the side. The maximum possible ‘letter’ for each question is given by the yellow highlighted shading. In the past, a simple question would normally be weighted as 1 mark; it is instead now called ‘D’ standard. A ‘C’ question was about 2 marks, a ‘B’ was 3 marks, and an ‘A’ was 4 marks. Most teachers agree on weighted marks because all of the child’s correct results get added up and contribute fairly to the final grade.

This is a simplified version of the results sheet. Additionally, teachers are required to use an even more complicated rubric, whereby some students’ ‘affective’ or ‘behavioural’ responses are listed in wordy, waffly criteria and standards cut-offs and are tick-boxed (see the example Criteria Maths Sheet).

THE CRITICAL EXPLANATION:

This marking sheet is an example of how a poor kid who should have been told they clearly got 76% of the test material correct instead gets a C grade on a hunch – due to the confounded arrangement of ‘letters’ as marks, and the QSA directive not to make any attempt to numerically average out the letters.

Consider the letters on the attached profile (marking) sheet. The curriculum body made Qld teachers use letters to pre-ordain the worth of each question in an exam (usually teachers gave weighted marks). In the past, this superfluous ‘Profile Sheet’ was not even needed, because each ‘D’ standard question was simply marked 1/1, and a ‘C’ question was marked 2/2, or 1 out of 2 marks if partly correct, directly on the child’s exam paper answers. The overall total was then given on the front cover of the exam paper.

Until this year, a senior maths student could get their test back and say: “Hmmm,” looking at just the fully-correct answers, “I got 1/1, 1/1, 4/4, 2/2, 1/1, 2/2, etc. WOW, just as I thought – I got about 20 of the 31 test questions perfectly correct. I knew I studied. I thought I got about two-thirds of the test questions right – better than last term, when I only got about half the questions correct.” In fact, the teacher would say: “Actually, kid, you got 62/82 marks! That’s 76% this term – a huge improvement – you knew a lot more topics – congratulations!”

Indeed, the student could also look at the part-marks on the numerically weighted questions on his/her exam paper and double-check that the teacher totalled the marking correctly – also a good thing. In the past, what would now be deemed ‘D’ standard questions were simply weighted as 1 mark; an ‘A’ standard question would have been weighted at about 4 marks, and so on. But this year, teachers were told in workshops to give an A/A for a question with (roughly) 4 steps, a C out of A for what used to be, say, a 2/4-mark question – and to not add up marks.

(Most teachers – looking at the same exam paper – are surprisingly consistent in their agreement on such things, so the prior number weighting was easily agreed on. Now, at least, teachers do award a ‘C’ for an ‘A’ standard question that is about half correct, and so on – they are naturally applying weighting to the alphabet letters. A student’s A–E report card is, after all, a spectrum, with an A obviously worth more than an E – a mathematical sliding scale anyway – yet teachers are told NOT to convert back, aggregate or average the letters!) It is far better to use weighted numbers, because each unit mark is worth the same no matter which question it comes from: children can get an assortment of things right and they all aggregate up to better achievement.

So, let’s look at this profile sheet in reverse and convert back to how teachers used to add up these questions. Take the 31 questions and weight each by its top possible score – question 1a) is only ‘D’ standard, so its top possible mark is 1, and so on – giving 82 possible marks in total. Then add up this kid’s marks again in the same reliable way (D = 1 mark, C = 2 marks, B = 3 marks, A = 4 marks): they would have got 62 marks out of 82 – 76%. This is a very good result: the bulk (about three-quarters) of the subject has been learnt by the child. In fact, it is more than the child’s initial two-thirds glance suggested, because of the weighted difficulty of some of the questions he or she did get correct. Also, 76% is only 4% under the 80% cut-off at which many teachers award an A. Therefore, in the past, this kid got a deserved B+ grading on this exam. Of course, different exams in different schools are designed differently, but this gave fair feedback on his or her coverage of this class’ term learning.
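For anyone who wants to check the arithmetic, the whole conversion fits in a few lines. The sketch below is purely illustrative – nothing like it is supplied by the QSA – and it simply assumes the weighting described above (E = 0, D = 1, C = 2, B = 3, A = 4) applied to pairs of (standard of the question, letter the child was awarded):

    # Weighted-mark value of each letter, per the scheme described above.
    WEIGHTS = {"E": 0, "D": 1, "C": 2, "B": 3, "A": 4}

    def convert_profile(results):
        """results: list of (question_standard, letter_awarded) pairs,
        e.g. ("A", "C") is an A-standard question answered about half
        correctly. Returns (marks earned, marks possible, percentage)."""
        earned = sum(WEIGHTS.get(awarded, 0) for _, awarded in results)
        possible = sum(WEIGHTS[standard] for standard, _ in results)
        return earned, possible, round(100 * earned / possible)

    # Sanity check against the worked example above: 62 marks out of 82.
    print(round(100 * 62 / 82))   # prints 76

The point is not the code itself but the reproducibility: feed in the same profile sheet and the same grade comes out, every time, for every teacher.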

However, looking at this profile sheet of letters, there is no intuitive way to gauge where the child stands. In fact, many teachers would wrongly give this child a ‘C’ based on looking at all the Cs! The child is getting most questions correct, so this profile sheet is very counter-intuitive!! OR, if the teacher is thinking a little more carefully, they look at it a little more closely and recognise that every single C question is answered correctly, so the child deserves to have their Bs and As looked at. However, then it gets tricky again. Only one ‘A’ question has been answered correctly and only 6 out of 9 ‘B’ questions are answered correctly, and then a teacher often looks at all the Ds and Es and may be tempted to think that they balance the others out, thus bringing the child back down to a C again. The only way this poor kid is going to get his deserved B is if the teacher counts that 6 of the 9 ‘B’ questions are answered correctly and decides (based on ‘aggregating’ the Bs) that he does deserve a B. But… hang on… that is only possible by using NUMERICAL methods.
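That informal ‘aggregating the Bs’ is itself just counting – numbers again, by the back door. A sketch of the tally done explicitly (the pairs would come from the actual sheet; the format follows the hypothetical convert_profile above):

    from collections import Counter

    def fully_correct_by_standard(results):
        """Count the questions answered fully correctly at each standard:
        a question is fully correct when the letter awarded matches the
        standard of the question itself."""
        return Counter(std for std, awarded in results if awarded == std)

    # e.g. with 9 B-standard questions of which 6 were awarded a B,
    # fully_correct_by_standard(results)["B"] would be 6 -- the same
    # 6-out-of-9 count the careful teacher does in his or her head.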

BUT, even worse still, many teachers look at the average-to-poor judgments in the Communication skills box – which is NOT a box of questions; it merely lists, in great unnecessary detail, how to mark the communication of the various exam questions (EVEN if the child got the bulk of the answers correct!).

So then, this kid has here been awarded only Cs and a D for communication. This hits hardest for Indigenous kids, or any kids (especially boys), who typically do NOT produce refined English essay-type answers in the written ‘Explain’ or ‘Connect’ components unfairly mandated in maths exams nowadays.

This is not fair to them, because the allocation of a third ‘criterion’ box on just ‘communication’ makes some teachers think that it must be weighed equally with the Knowledge box and the Modelling & Problem-solving box (the other ‘criteria’). All this does is double-dip, and it now makes some teachers think that the badly-judged C student should stay a C, if not worse! Yet we know that this student has a VERY GOOD grip on maths and deserves more. A C usually equates with a PASS, which usually means “about half correct”. Yet this kid – with a solid handle on all foundational skills and most of the exam topics – is ideal for university or a top job that involves maths, and should be given a B for this exam.

It has been the experience of many teachers in the staff room this year, using the new lettering system, that every teacher, and their supervisors, are handing out WILDLY DEVIATING grades – not just a little bit off, but heaps, compared to each other. There is no reproducible way to get the same objective grade from different teachers UNLESS you convert the letters back to weighted numbers and do the maths – which is what should be done in the first place! Caring teachers are doing just that, secretly, and it takes hours. But it still doesn’t fix the problem of accountability OR transparency for students AND parents, because the teachers all use different methods. (And this brings home the point again: aggregating alphabetical letters in any way defeats the – inherently stupid – purpose of disposing of the numerical marking in the first instance!)

When teachers have pointed this out, people in authority ignore the logic and repeat the mantra of the Education Department/QSA curriculum body, i.e., that we should “not be judging children with numbers – it’s all about the quality”! Some of us have had barbs about whether we can really cope in our jobs next year if we cannot cope with the new system. Others have been patronisingly told by QSA reps that we just need more ‘conversations’ to sort it out, OR QSA reps ‘generously’ tell us that we can sort it out at the school level. Some maths, accounting and science teachers have used maths in other careers and at university, e.g. in medical research, where exactly this kind of basic mathematical analysis is routinely done to make fair judgments about many important questions of quality – which undermines everything the Ed Dept is saying. They are treating us as utter fools.