Concerned teacher and review panellist

I am writing in relation to the recent correspondence that has been occurring between yourself and Professor Peter Ridd of James Cook University, and in particular with regard to recent changes to the Mathematics A, B and C syllabi in Queensland. Details of these discussions have been forwarded to me as one of a group of teachers who have genuine concerns about recent changes to assessment practices in these subject areas. I would be grateful if you could take the time to read my concerns as outlined in the following email.
I have been teaching Mathematics across all secondary levels in both state and independent schools in NSW and Queensland since 1975, and I am a member of the Mathematics B Review Panel. I believe that my opinions are based on well over 30 years' experience, and that my concerns are genuine and should be considered as such, not simply a knee-jerk reaction to something that is new.
I should say from the outset that I gave lengthy consideration to even writing this letter, not because of any concern about some form of victimisation of myself or my school for expressing an opinion, but rather over whether my concerns would be considered seriously by the Queensland Studies Authority, or whether we are indeed simply dealing with a ‘fait accompli’.
In an article published in The Australian newspaper (11/12/2009) it was reported that ‘QSA was aware that a small minority of teachers had concerns about the assessment requirements, but the vast majority were satisfied’. My only response to this claim is that I consider myself to be a fairly ‘typical’ Mathematics teacher. I know many other teachers of Mathematics, and none of them has expressed satisfaction with this proposed new system. Many feel that the consultation with teachers by QSA has been inadequate. At a recent meeting I attended in Rockhampton, a representative from QSA indicated that the revised syllabus and assessment procedures had been implemented after gaining teacher support from a survey. The representative also indicated that less than 30% of mathematics teachers had responded to the survey, but that QSA thought this was a reasonable response, and that a ‘majority’ of those who responded supported the changes. Given that a bare majority of a response rate below 30% could represent as few as 15% of the Mathematics teachers in Queensland, I question whether this amounts to widespread support. Very few teachers to whom I spoke at the meeting in Rockhampton had any knowledge of this survey.
As Learning Area Coordinator in my school, it has been part of my role to implement the new syllabi in Mathematics B and C. I can say that I have approached this task with an open mind, and genuinely tried to put the syllabus changes and recommended assessment practices into practice in Year 11 during 2009.
I would also have to say that my major concerns relate to the recommended changes to assessment practices, and to what is in essence a requirement that Mathematics teachers move away from the more traditional methods of using marks when ‘scoring’ assessment items, particularly in the area of Knowledge and Procedures. I note that a recent article in The Australian acknowledged that ‘the syllabus is “silent” on the use of marks and teachers may use them if they wish’. While this suggests that QSA is not entirely opposed to the use of marks, I can only say that this is not the message we are receiving about what is required for our assessment items to be acceptable to QSA.
Support from QSA in terms of expectations and teacher professional development has been minimal in relation to these proposed changes. In a recent response to Professor Ridd you indicated that ‘if, for example, workshops on these matters are required, then QSA is well placed to provide that professional development’. In reality this is not what has occurred. The first training I became aware of that was offered by QSA to Mathematics teachers in the design of assessment items under the new syllabus was around September 2009, almost a year after teachers in schools were meant to have been developing such assessment items for implementation in their schools. The number of places available was limited, and for my own part I had to travel four hours to Rockhampton even to get into a training session. Quite simply, the training provided to teachers by QSA to implement such a radical change in approach was far too late and wholly inadequate.
In an attempt to be as fair as possible to my students I decided to implement the changes gradually. I persevered with the use of marks in Knowledge and Procedures until the end of Term 3 2009, while at the same time moving to a marking grid that classified individual parts of questions as being of different standards (essentially A–D). Students were then given an overall rating at each standard, recorded as having attained that standard or not. I took this approach so that I could maintain some comparability with the previous system, and hopefully ensure that my students were not being disadvantaged under a new approach. In Term 4 of 2009 I removed marks entirely from assessment items in Mathematics B and Mathematics C, to the dismay of my students.
I have not at this stage introduced overall instrument specific criteria sheets because, after 12 months of concerted effort, I have genuine concerns about the validity of the recommended new approach to the design and implementation of assessment instruments. My major concerns are outlined below.
To begin, the suggested approaches from QSA seem to rely inherently on students performing in Mathematics in a ‘linear’ fashion: that is, an A-level student will cope readily with D, C and B level questions, with success tapering only gradually as the questions increase in complexity. My experience across numerous assessment items in 2009 is that this simply does not always occur. It is not uncommon for an A-level student to make simple errors in a C or D level question, for a whole variety of reasons, and a C-level student can also gain occasional success in A-level questions. The real confusion arises when, for example, an A-level student succeeds at A and C level questions, but does not achieve the required standard in B and D level tasks on the same assessment item. At this point the instrument specific criteria sheets are, in my experience, essentially useless in determining an overall view of a student’s performance. My staff and I have deliberated long and hard in trying to decide on any given student’s level of attainment in a particular assessment item. We did not have such difficulties under the previous system.
The trite response to this would be that the problem lies with our assessment items, but my staff and I are confident that we can set items of a good standard, and this has been affirmed frequently in the past by Review Panels in the acceptance, without objection, of our submissions for Monitoring and Certification. My own belief is that individual students succeed or fail on assessment items of differing standards for a wide variety of reasons, ranging from ‘they were absent when an item of content was covered and never really caught up’ to ‘they simply misread the question’, or ‘that topic just doesn’t appeal to them’. Whatever the reason, it seems to me that different students perform in unexpected ways from one question to another, across a whole range of types of assessment items. Rightly or wrongly, the traditional system of assigning marks to responses, despite some shortcomings, did give a general indication of a student’s overall level of performance and consistency on Knowledge and Procedures items in an assessment instrument. I am not convinced that the same level of detail is achievable under the proposed new system.
A further concern is that the recommended method of instrument specific criteria sheets conveys little in the way of meaningful feedback to students. Most teachers would agree that assessment is part of the learning process, and that feedback to students is essential if they are to gain anything meaningful from an assessment item. The criteria sheets seem to me intended to convey only an overall ‘holistic’ indication of attainment. The language used in the sheets, as recommended by QSA in your own training materials, is confusing to teachers, and in my experience conveys little meaning to students and their parents.
Under traditional marking procedures, giving a student 3 out of 4 on a Knowledge and Procedures question, where two ½ marks had been lost for errors in technique or logic, told the student exactly what mistakes they had made, and the severity of each mistake in relation to successfully solving the problem. Now they, and their parents who are partners in their education, have to try to interpret their performance on any given item by referring to a criteria sheet designed to give a holistic summary of overall achievement rather than any specific detail. To me, this is simply contrary to the primary purpose of assessment, which is to provide useful feedback to students so that they can best profit from the assessment instrument. It seems fashionable at the moment to dismiss traditional marking, a system that has been used worldwide in Mathematics teaching for generations, as irrelevant and incapable of adequately assessing a student’s performance on particular types of assessment items. I wonder if, by trying to be progressive in this area, we are really ‘throwing the baby out with the bath water’.
A third major concern is the excessive time required to develop an assessment item under the proposed new approach. I was told at the meeting I attended in Rockhampton that the new system is simpler, but my attempts to put it into practice do not bear this out.
This also seems to me to be borne out by the material provided to teachers at the recent training sessions given by Wayne Stevens. The documentation for the ‘Mathematics ABC Assessment Workshop 2009’ explains how to develop an assessment item, starting with the General Objectives and progressing through to the Task Specific Criteria Sheets. The documented detail involved in this process is considerable, and the examples given cover only a small number of questions on a specific topic within one assessment item. Any actual examination or written report is vastly more comprehensive than the minimal examples provided by QSA, and without doubt requires many additional hours of teacher preparation.
Further on this point, and given the simplicity for teachers that has been claimed for the new system, I would like to make a simple request of the Subject Advisory Officer for Mathematics, and of QSA in general. I have attached to this email a copy of a Year 11 examination and a Year 12 research task that my school has used in the past, both of which have been acceptable to our subject area Review Panel and considered a good standard of assessment. My request is simply this: can the Subject Advisory Officer develop for me complete Task Specific Criteria Sheets for each item that will clearly demonstrate that developing the same items is simpler and easier under the new recommended approaches to assessment, and that will also clearly indicate what QSA regards as a reasonable method of determining a student’s level of performance on these assessment items? I have attached the marking scheme for the research task to give some indication of the level of detail expected, if this is of use. I would like to point out that I am not trying to be difficult here, but am simply trying to indicate the complexity of what is being asked of practising teachers. In schools this task would translate, in any given year, to probably two examinations and one research task each semester in both Years 11 and 12, in all Mathematics subjects at senior level. This represents a significant increase in workload.
As a Review Panellist I also have serious reservations about the proposed new system of assessment. At the above-mentioned training workshop it was also indicated that schools need to develop some form of summative system that indicates at the end of a course of study that the assessment items developed by the school do in fact address all of the objectives required by the syllabus. It was suggested that some form of colour-coding of objectives addressed by items within each assessment instrument might achieve the aim of indicating overall adequate coverage.
As panellists we are now facing different approaches to assessment items and different methods of indicating overall coverage from each school. To expect that panellists will be able to adequately interpret what each school has been doing, let alone review the performance of individual students and document this review, in the two hours allowed by QSA for each school is almost beyond belief. If we are going to use such a system, then my own view is that the entire panel system for Mathematics needs review if those panels are to serve any worthwhile purpose at all.
As a Head of Department trying to determine individual levels of achievement for students in Mathematics B and C, and then to develop SAIs for these students indicating their relativity to the performance of other students, I can only describe the proposed new method of ‘holistic’ assessment as totally inadequate. In the previously mentioned article in The Australian it was stated that ‘The Queensland Studies Authority says marks encourage “a quantitative notion of grading” that may not reflect quality or have any reference to the syllabus standards, and that it encourages comparisons between students’. Unless I am seriously mistaken, the whole Queensland system of determining LOAs, SAIs and OAIs leading to the determination of an OP score, let alone the QCS exam itself, is largely based on ‘a quantitative notion of grading’ and ‘encouraging comparisons between students’. To expect teachers to use systems within subject areas that discourage any form of comparison and are entirely unhelpful in making such comparisons, but then to require teachers to somehow produce these end-of-course summary statistics, is contradictory in the extreme.
Finally, may I express my thanks for taking the time to read my concerns. As I indicated at the start, I feel that these concerns are genuine, and that a significant number of Mathematics teachers in Queensland share similar concerns. My basis for this is purely anecdotal, but I genuinely believe it to be the case based on my interactions with other teachers of Mathematics. My primary motivation for raising these issues is not to be in conflict with QSA policies, but to ensure that my students over the coming years are not disadvantaged by a new system that has been introduced without adequate testing and without adequate training for the teachers facing its implementation. I am of the opinion that the proposed new system can only be successful if it has the overwhelming support of the majority of Mathematics teachers in this state. I am contributing to this discussion with the view that we have a joint aim: the development of successful and effective syllabi and assessment practices for senior high school Mathematics in Queensland.