Mathematics assessment in Queensland schools urgently needs improvement.

1)  A central authority could outline the general requirements for a series of exam papers, such as a suitable range of difficulty levels, a mix of theoretical and practical questions, a spread from familiar to unfamiliar, and a range from short-response to extended.  There would be no need for the massive amount of criteria and verbal descriptors of standards that we have now.  After that, it should be up to the schools to set exams and use marks to assess and grade their students.  Many schools believe that they must avoid using marks and use criteria sheets instead, which are matrices of cells containing verbal statements of standards.

QSA’s position is that marks can be used, but decisions on ratings cannot be made simply by applying cutoff marks; the standards matrices must be applied.  They insist that a mark of, say, 90% doesn’t guarantee an “A” standard of work, because the student may not have fulfilled all of the “A” descriptors in the syllabus.  However, the following extract from p. 32 of the Qld MAB syllabus, produced by QSA, states that a standard can be obtained without necessarily ticking every descriptor.  I can’t see the difference:
 
“When teachers are determining a standard for each criterion, it is not always necessary for the student to have met each descriptor for a particular standard; the standard awarded should be informed by how the qualities of the work match the descriptors overall”.

If we were simply able to award marks when assessing a well-set exam paper, it would be the job of review panels to check that suitable standards and balances have been maintained.  This is what happened before the QSA became such a powerful influence and advocate for non-marks assessment.  It worked well.  The best students got the best ratings.  We don’t need a cumbersome system in order to achieve that.  Mathematics teachers are good at setting suitably balanced assessment and also at awarding part marks for imperfect solutions and giving full credit for correct alternative solution methods.

The advantage of this would be greater simplicity in devising assessment tasks, in marking them, and in grading students.  There would be no loss of validity in the grades produced.  In classrooms, the change would produce better teaching and learning.  At the moment, assessment is driving the agenda, because of the many requirements to be satisfied by an “assessment package”.  That’s wrong – our emphasis should be on teaching and learning.  Young teachers, including pre-service teachers, and experienced teachers alike have to spend large amounts of professional development time trying to understand QSA’s onerous assessment requirements.  But what we all need to be doing instead is learning more about our subject and its applications, and about the best ways of introducing topics to students and inspiring them in the subject.  We are prevented from using our time to the best pedagogical advantage because of the time taken to embed a host of little detailed requirements into an “assessment package”.

The marks system is working well in NSW, Victoria, England, and other places, and in most universities.  Some of QSA’s own QCS tests are assessed with a marks system.  It’s crazy that students suffer under QSA’s cumbersome system for school assessment, and then enter university to find that they are assessed simply and validly by using marks only.

Using marks allows teachers to differentiate between students more readily than by placing ticks in the cells of criteria sheets.  Advocates of the latter system place ticks toward the right or left edges of the cells in the criterion sheet, or in the centre, depending on how well they judge the student to have met the particular descriptor.  That’s all very subjective and unreliable.  Awarding marks according to a marking scheme, and giving credit for alternative solutions which are different from the adopted marking scheme, produces fair and defensible judgments and allows teachers to rank students in order of merit when required to do so.

2) At present there is a plethora of criteria and descriptors of standards in the syllabuses.  There are dot-points such as “comment on the strengths and limitations of models, both given and developed” or “identification of assumptions and their associated effects, parameters and/or variables” which require understandings that are sometimes more appropriate to tertiary-level studies.  It is a more urgent priority for high school students to gain and apply mathematical skills to make predictions or solve problems, but these idealistic dot-point requirements take away from the time available in school for students to develop a basic appreciation of their subject.  The tragedy of the current situation is that if a school should happen to omit even one of these micro-specifications from their assessment “package”, their proposed ratings will be reduced by review panels, even though the level of challenge in the school’s questions may be higher than that of schools which have diligently ticked all the boxes by attending to the many little dot-points.  When exam questions are designed with lots of tick-boxes in mind, the assessment can become stilted and artificial.  Questions need only be graded in level of challenge, and broadly divided into a basic level, suitable for students to obtain a basic “Pass”, and a higher level that would examine problem-solving skills in a range of situations and allow students to achieve the highest ratings.

3) At the website below this paragraph, you will see the newest incarnation of the standards to be applied in Years 1-10, in this case Year 1.  They are called “Standards Elaborations” and they are the means by which teachers are expected to assess their Year 1 students.  By changing the “1” where it says “yr1” in the URL, you can get to the other year levels:

http://www.qsa.qld.edu.au/downloads/p_10/ac_math_yr1_se.pdf

If you compare the descriptors in, say, columns 1 and 2 for Year 1, you might agree with me that it would be very hard for a Year 1 teacher, or anyone else, to decide which column is most appropriate for a particular student’s responses.  Who wants their children or grandchildren to bring home a results sheet like this, or a single letter rating derived from this matrix?  It would be far superior and more useful for the child to bring home a spelling or other test or project marked “18/20” with the mistakes highlighted, etc.  Marking would be much easier and more reliable, and families could easily see how the child is progressing.  Yet this criterion sheet system is now being used throughout Years 1-10 in our State.

4) External exams would be better than the patchwork quilt system that Queensland has now, where schools all set different exams from each other, and no one is really sure whether questions marked “unfamiliar” are really so.  The NSW system combining school assessment with the HSC works very well, and it should be easy for Qld to move to that kind of system.  It ensures more reliable comparability of students from different schools.  As the National Curriculum is implemented throughout Qld schools, it makes sense for us to adopt a national assessment approach.  QSA have been setting good external Senior exams in Qld, but have announced that they will discontinue them.  It seems entirely possible that they could continue to set them for full-scale use in schools.  Money saved on review panels could be used to pay for teams of markers.

5) QSA claims that Queensland’s assessment system is “world’s best practice”.  There is no objective evidence to support this.  It is the belief of academic theorists.  No other education system has conferred this compliment on QSA – it is merely a wistful statement by insiders, and therefore of little value.  If it were world’s best practice, why haven’t NSW, Victoria, the UK, etc, converted to it?  Why don’t our University Maths departments adopt it?

Let’s improve mathematics teaching in the schools by freeing up teachers’ time so they can concentrate on improving pedagogy and inspiring students, instead of suffocating in a straitjacket of unnecessary assessment requirements.  Let parents once again receive clear simple statements of marks earned by their children.

Education assessment in QLD schools

As a psychologist who has worked with both adolescents and teachers, I have found that students are often discouraged from achieving and may instead experience feelings of failure.  At the same time, many teachers are feeling burnt out and stressed, and are leaving the profession because of the current assessment system, which appears to be deeply flawed and riddled with inconsistencies.  In many ways the Queensland methods are in complete contrast to promoting an emotionally and psychologically healthy learning environment.  As I understand it, commonly established assessment systems – approved by many jurisdictions around the world – were already in place in Queensland before this latest system was developed.  Commonly accepted school systems employ analytical methods that involve fair, weighted marking of tests, so that students receive direct feedback on achievement, and percentage combinations (compositions) of multiple assessment results – some of which should include common statewide tests – towards a predictable grade.

Why QSA’s assessment system is wrong

Currently all assessment in Queensland schools is school-based, meaning each teacher sets the exam papers, marks them and grades their own students. In my opinion this form of assessment lends itself to many abuses and makes comparisons of performance between schools meaningless. Listed below are 12 of my concerns. I must stress that none of these concerns arises from any incident at the school where I am currently teaching.

  1. Schools have the opportunity to offer revision sheets that have direct similarities to the exam students sit a few days later. Also, some teachers, during revision, work through questions similar to those that appear in the exam.
  2. Most schools recycle a substantial percentage of an exam paper year after year. Students are able to access their exam papers after term 1 of the following year, raising issues of future student familiarity with the assessment items provided by the school.
  3. All members of the teaching staff have electronic access to assessment items stored on the network. It is impossible to know when an unauthorised copy has been made. A similar situation occurs when paper copies of exams are stored in insecure areas accessible to all staff and, in some cases, students. Some of these staff have relatives studying at the school, and some tutor students from the school.
  4. There have also been instances where students have managed to gain access to staff drive on the school computer system and copy supposedly secure files.
  5. Old exam papers and draft versions of upcoming exam papers are often not disposed of securely. It is not uncommon to see students rummaging through school rubbish bins prior to an exam period.
  6. Inconsistency in teacher-marking and in levels of exam difficulty between schools within a district and between districts is impossible to monitor or moderate.  A student’s grade depends more on which school they attend or who their teacher is and not on what they know.
  7. It is difficult for a panellist to determine whether challenging/complex problems included in an assessment package have been partially or fully rehearsed.
  8. The level of assistance provided to students in an exam is often not disclosed. ‘Assistance’ changes the conditions of the exam and is difficult for the panellists to gauge. Some teachers help students during exams. Some schools allow their students to bring with them one or two pages of ‘cheat sheets’ containing formulae, examples, definitions, graphs and diagrams. These sheets are not attached to the students’ answer sheets when they are sent to the panel. In one case I know of (not at the current school), two pages containing differentiation and integration formulae with worked examples were handed to the students during a Maths B exam and were not attached to the answer sheets when sent to the panel.
  9. Teachers know what questions are in the exam and, when talking to students, are placed in the situation of reassuring students without jeopardising the ‘unseen’ nature of an exam.
  10.  Sometimes students doing the same exam are allowed to sit next to each other. A quick glance at the neighbour’s graph could mean the difference between getting an A and an E in that question.
  11.  Students who are unable to sit an exam on the scheduled day, because of illness or holidays, sit it on their return. These students have the opportunity to find out from their fellow students what questions appeared in the exam.
  12.  Students are made to feel they mustn’t upset their teacher in any way, in case they are penalised for their behaviour when their papers are marked.

The current system is unfair to students, teachers and schools. It is naive to assume that breaches are easily detected at moderation/verification. The panel does not have the time, resources or, sometimes, the expertise to investigate all possible breaches. Even if breaches are detected, verification occurs in October, making it difficult for schools to fix any problems and limiting opportunities for students to demonstrate their full potential.

It is reasonable to assume that most teachers will conduct themselves in a professional and ethical manner most of the time. But it isn’t reasonable to assume that all teachers will conduct themselves in a professional and ethical manner all of the time, especially when their performance as a teacher is often judged by the number of VHA and HA students they produce.

I also keep asking myself why it is that after 40+ years of all the ‘educational benefits’ the moderated school-based assessment system has brought to Queensland, the overwhelming majority of school authorities around the world, including those in the other five states in Australia, are continually refusing to adopt it.

A totally school-based assessment system is unreliable and makes comparison of student achievement meaningless. Without some form of external assessment, we do not have a full picture of the current standard of student performance or school performance. The National Curriculum must include clear statements on assessment and not leave it entirely to organisations such as the QSA, which favour a totally school-based system.

Concerns on Assessment

My major concerns about assessment centre on the following:

1. Despite stated word limits for tasks, the limits refer only to a relatively small proportion of the total assignment – this is particularly true of the EEI.

2. The ability to communicate clearly and precisely underpins success, instead of the concepts, principles and applications taking primacy. That is, of two students with the same chemistry ability, the one with the greater facility with communication will be assessed at a higher level.

3. Criterion-based assessment is still a blunt-edged tool. Despite increased consistency among panellists, differing interpretations still exist on the features and characteristics of work required to meet specific standards.

4. The EEI is a flawed piece of assessment – it is not possible, within the time and resource constraints imposed, to design and conduct an investigation of the sophistication originally intended by the syllabus documents.

5. The learning is the assessment. With the extended tasks, students have no time to incorporate complex concepts into their cognitive framework before they are assessed on them. The time for reflection and practising until processes are ingrained has gone.

6. The workload on both teachers and students is overwhelming. I know that, even with my experience, it takes approximately one hour to assess a single student’s EEI to the best of my ability.

All of this is not to suggest that our students do not have valuable skill sets when they exit secondary school; it is purely that the skill set is different. Knowledge of details and the ability to undertake routine applications have been diminished, but the ability to analyse and evaluate has been enhanced.

QSA system encourages teacher cheating

I have been teaching science in Brisbane for 15 years, and I have worked in the state and independent sectors. A very important issue that I think needs to be raised about the current QSA assessment system is that it allows, and in fact fosters, teacher cheating.

There is tremendous pressure on teachers to ensure student work passes the moderation process without being lowered. Parents love good grades, and nothing looks worse for a school than when A students are lowered to A- or B level. Most teachers are very honest, but some I have observed manipulate the system to make sure their work gets through the lottery of the moderation process without being savaged. These teachers are motivated not by self-interest but by self-preservation and fear of a moderation process that is byzantine and arbitrary.

There are several strategies I have seen colleagues use:

1. Coaching
In Queensland, teachers write and grade the tests. A common criticism fed back from review panels is that questions are not hard enough, which is used as a reason to move students down. What teachers do is put in very hard questions, but then coach the students in how to answer them. I have even seen teachers give students the test questions in advance to study. Obviously no mention of this is made to the panel. The panel sees really hard questions answered very well – what a great job that teacher must be doing! Unfortunately, coaching the answers to these really tough questions does not necessarily build broad understanding of the important concepts.

2. Bait and switch
Another way teachers make their submissions look better is to manipulate the test conditions. This can involve giving more time to the students, or setting the test as “open book” without mentioning this in the submission to panel. I once worked with a teacher who would let the students work on the test, with the help of their textbook and notebook, for as long as they needed to finish it. I even overheard him telling students how to answer the questions. That teacher was the review panel chair for our region. At first I was amazed at the results he was getting out of his students, until I realised how. When I challenged him over the issue he just laughed it off, telling me everybody did it and I was disadvantaging my students by not doing the same. 

3. Practice makes perfect
In Queensland, assignments (ERTs and EEIs) play a very important part in determining a student’s grade. It is impossible to determine how much help students have received from teachers and parents in completing these tasks. I know many teachers heavily edit student drafts in order to improve their standard. This is usually done quite openly, and in fact is usually encouraged by the school. As a result, how much of the final draft is the student’s own work?

4. Panel magic
For years I could never understand why my submissions would be moved at panel. The advice from year to year would often be contradictory, and I would spend hours trying to figure out how to do it properly. Finally I joined the panel, and suddenly my submissions sailed through without problems! The review process is not anonymous, and when the other panellists know you they are reluctant to move your students. The review process is very subjective, and schools often get judged more on their reputation than on the student work.

5. The Trojan
A panel submission does not contain a random sample of students. In fact, the samples sent off are selected by the teacher, with the intention that they represent the other students at the same level. What teachers will do is send off a really good example of a VHA, while giving much weaker students the same grade. These less deserving VHAs are never seen by the panel. Teachers, especially at private schools, are under pressure from parents for good grades. In this way, borderline students are secretly given “the benefit of the doubt”.

For the first 10 years of my career I worked in a fancy private school. The pressure for good grades was intense, and teachers and the school used every trick in the book to get the best for their students. Five years ago I made the switch to the state system, and I am often amazed at how honest (to the point of being naive) the teachers are. There is still assessment fraud, but it is much more subtle and ad hoc. The reason for this is primarily that there is less pressure from parents for good grades.

Assessment fraud is real, and there is very little the QSA assessment system does to prevent it. In fact, the combination of fear, pressure, confusion, and lack of oversight means the current system encourages teacher cheating. 

There is a very simple, in fact blindingly obvious, solution to the problem: external assessment.