Why EEIs and ERTs are not good assessment methods in chemistry

The school year is crowded. Public holidays, sports days, reporting deadlines, co-curricular activities, QCS practice, and pastoral care initiatives, together with shortened lessons for school assemblies and ceremonies, all cut into the time available for teaching and learning. It often seems that there is barely time for the minimum 55 hours per semester required by the Senior Science syllabi. This is especially true of the second semester of Year 12, when Seniors finish early in order to fit in with state reporting deadlines. Emphasis on long-duration non-test assessment such as EEIs (extended experimental investigations) and ERTs (extended response tasks) means there is less time available for basic concepts and skills. There is a danger that students will not develop a good grounding in writing formulas, balancing equations, stoichiometry, periodicity, and structure and bonding.

If the recommended minimum time of four weeks for an EEI is adhered to, the time available for teaching other topics in Chemistry is significantly reduced. ERTs have a similar effect on student learning, although their minimum requirement of two weeks of class time is smaller. Having tutored students from schools where the assessment is divided evenly between test and non-test, I can say that their grasp of fundamental concepts definitely seems weaker than that of students from schools where the emphasis is on exams. For this reason I have always advocated a minimalist approach to non-test assessment in schools where I have taught Chemistry, and I believe this leads to enhanced student outcomes. In my experience students suffer less stress and learn more Chemistry when they are not asked to spend many hours constructing EEI reports and ERT essays. Evidence that supports my belief is the high proportion of my Chemistry students who achieve VHAs (usually over 20% of the cohort) and the comments from former students that their high-school Chemistry gave them an excellent grounding for first-year university Chemistry.

As Science teachers we will always face the dilemma of which aspects of our subject to keep teaching as 'fundamental' to its understanding and which aspects have to be left for later specialised studies. I do not believe emphasising EEIs and ERTs in a Chemistry teaching program is a solution to this. Too often students have to investigate concepts in Chemistry or Physics that are beyond the scope of high-school Science, or try to collect data of a detail or precision beyond the limits of school laboratory equipment. Good design of EEI tasks will avoid this; however, I have seen students from other schools struggling with concepts such as the detailed Chemistry of marine artefact restoration or the Physics of the resonance of wine glasses. The danger of poorly designed ERT or EEI tasks is that students face the predicament of either dealing with a topic in a superficial way or going into depth beyond their understanding, quoting formulas and research articles that neither they nor their teacher can understand. The net result of such tasks is that students spend a lot of time learning about a very narrow aspect of Chemistry or Physics, and they may not even understand what they are investigating. They would gain a far better understanding of Chemistry or Physics by spending the time on broader, more guided studies.

Lastly, an ironic consequence of EEIs is that, even though they are 'experimental' investigations, they can limit the practical laboratory manipulative skills a student experiences during the two-year course. A term spent on spectroscopy or wine-making means less time for titration or gravimetric analysis. In contrast, a regular program of small experiments means students can incrementally build their laboratory skills over the two years.

In summary, I believe that many Chemistry work programs have too much emphasis on non-test assessment. Many EEIs and ERTs are poorly designed and cause undue stress to students while limiting their overall understanding of Chemistry. While formulating hypotheses, designing experiments, researching literature, and learning practical skills are essential aspects of a high-school Chemistry course, there are much better ways to cover these than by the current QSA emphasis on EEIs and ERTs.

Fizz-iks

I am a Curriculum Coordinator and Physics/Maths teacher at a private secondary school. I happened to hear your interview on 612AM Brisbane radio today and I could not agree with you more about the unfortunate direction that physics education in Queensland has taken in the last decade. In fact, after listening intently to your opinions, I spent this morning marking physics ERTs (extended response tasks), and I totally agree with your assessment of such tasks as being difficult, confusing and inaccurate to score, while also seriously lacking in mathematical rigour.

At the risk of sounding very jaded, I will give you some specifics. After fifteen years of teaching senior physics I temporarily abandoned it in disgust five years ago, in the hope that it might turn a corner. As I return to my first love of teaching I find it still involves far too much 'sandpit science' (let's get in and play!!), although I have seen some marginal improvements on the standard I had left. Early in our first term this year I taught a Cosmology/Space unit for Year 12 that required me to teach no mathematics at all, and I was actually directed to give the students no specific guidance throughout the entire task. I was supposed to sit back and witness students struggling to understand the mystery of dark energy as explained in a YouTube clip, and then watch on while they sought the guidance of Google for higher understanding. Some years ago my cosmology unit involved hands-on work with telescopes during night sessions and mathematical processes for measuring stellar distances based on light spectra. Now we don't even look at a picture of a telescope! This type of laptop research activity has literally replaced weeks of teaching time in the classroom over the two senior years of science. I have a brother at another local state high school who is also dismayed with senior school physics. It would seem that the mystery of dark energy pales into insignificance alongside the mystery of how one gains approval from panel members for samples of Year 11 and 12 student assessment pieces. Knowing how to jump through the hoops for panel reviews is all so ill-defined.

I have some regular contact with a state-level QSA official for Physics, and when I hear this new direction being explained and justified I can only think of the story of 'The Emperor's New Clothes', as some others seem to voice approval. Are there any significant numbers of experienced physics teachers actually looking on with approval at this new "Fizz-iks"? As we lose experienced hands, I am worried that younger teachers will know no better and end up assuming physics is just a 'research and critique' subject, with the minimal levels of mathematical application present being considered a reasonable standard. I could say more, but I'm sure you have heard much the same from other disgruntled academics from around our great state.

As was mentioned on the radio, I would welcome attending an evening meeting on this important topic in Brisbane once you have decided on a venue and a time. Thank you for your service to the future of education in Queensland in challenging this worrying issue.

QSA Issues

As a result of my experience of last year's verification, which I describe below, I would like to suggest that the only certain way to ensure fair and accurate assessment is to have external assessment, either in whole or in part. I say this coming from a UK background, where I taught A-level physics for ten years under entirely external assessment and enjoyed the freedom to concentrate on teaching to a high standard, without the constant stress and demand on my time of the incessant setting and marking of assessments.

Let me share the nightmare I experienced over verification in 2011. I awarded my top two physics students VHA 8 and VHA 7 respectively. These were not inflated grades; I had considered placing them higher. At verification both students were moved down to HA 3! Yes, 'HA' 3, a drop of 14 rungs!! I was in total shock, stunned, could not believe it! As you can imagine, I then spent considerable time thoroughly reviewing the panel's comments, my assessment instruments, my marking, everything – how could I have got it so wrong? My confidence as a competent teacher was severely shaken, even with thirty years' experience. Without going into all the details, after my review of the material I was confident (as confident as one can be in this vague, confused, contradictory system) that I was right in my original placement of these students. After a lengthy and detailed discussion of the instruments and the students' scripts, the panel chair agreed to reinstate them to VHA 6 and VHA 5 – back up 13 rungs. At this point I decided discretion was the better part of valour, accepted my gains and did not point out that the QSA instructs panel members not to drop students fewer than three rungs if they are going to be moved at all. So there we were – from VHA 8 to HA 3 and back to VHA 6! Incredible!

However, this was not the end of the nightmare! I teach the same two students in Maths B and awarded them VHA 8 and VHA 6. At verification the panel moved them down to HA 10 and HA 9 – on appeal they were reinstated to VHA 7 and VHA 5!! It was the same dreadful, emotion-draining mess all over again.

SECTION REMOVED TO PROTECT PRIVACY BUT THE AUTHOR STATES THAT THESE WERE BRILLIANT STUDENTS WHO GOT LOTS OF AWARDS AND SCHOLARSHIPS

The point of bringing all this to your attention is to illustrate the gross failings of this cumbersome, time-wasting system. These two examples are not isolated cases, judging by what I hear from other colleagues. What would have happened to these two students had I not been able to successfully contest the panel's decisions? How could the system have got it so monumentally wrong? Every year one waits with apprehension for what the lottery of verification will return.

It would appear that the system of internal assessment, panel moderation and verification, much vaunted by the QSA, is at best muddled, poorly managed, variable from region to region, and open to subjective interpretation. At worst, it is highly wasteful of teachers' valuable time, prone to gross inaccuracies, and leads to a lowering of standards.

So, are there solutions? The notion of using criteria has some merit and is not the problem in and of itself. It is a good thing to assess criteria that are central to being able 'to do' physics, chemistry, Maths B and so on. As I see it, there are two central problems and a number of peripheral issues.

The first central problem is that of writing assessment instruments: the difficulty of writing quality instruments that adequately assess the breadth and depth of the required criteria to the satisfaction of the QSA, and the inordinate amount of time this takes, make the job daunting, to say the least.

The second problem is that of marking: of judging whether or not a student's response meets such and such a criterion to such and such a standard. This is primarily because, by their very nature, criteria statements are at best imprecise and open to subjective interpretation by teachers and panels alike. At worst they are unclear, confusing and very difficult to apply accurately. Again, a huge amount of time is taken in trying to do the job as accurately as possible.

Peripheral issues include situations such as:

If a student fails to attempt a question on a certain criterion altogether, what grade should he be awarded? It cannot be an E, as he has not met even that standard.

Or what if a student has a set of marks for, let us say, the Knowledge and conceptual understanding criterion, such as:

Out of four C-grade questions, he got one completely correct; the other three were nonsense or not attempted and were awarded no grade.

Out of three B-grade questions, two were completely correct; the other was not attempted.

Out of two A-grade questions, one was done poorly and awarded a C; the other was not attempted.

How do you award an overall grade for that combination? In the old days the marks would simply have been totalled, but what does one do with that variety of grades?
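To make the contrast concrete, here is a rough sketch with invented numbers. The mark values and question weightings below are purely my own assumptions for the sake of argument, not taken from any actual QSA instrument; the point is only that a marks-based scheme aggregates by simple arithmetic, whereas the same responses recorded as criteria letters have no defined arithmetic at all.

```python
# Invented marks, purely for the sake of argument -- not from any real instrument.
marks_awarded   = [5, 0, 0, 0,   # four C-grade questions: one fully correct
                   8, 8, 0,      # three B-grade questions: two fully correct
                   4, 0]         # two A-grade questions: one done poorly, one not attempted
marks_available = [5, 5, 5, 5,
                   8, 8, 8,
                   10, 10]

# Old-style aggregation: one line of arithmetic, one unambiguous number.
total, possible = sum(marks_awarded), sum(marks_available)
print(f"{total}/{possible} = {100 * total / possible:.0f}%")   # 25/64 = 39%

# The criteria-style record of the very same responses: a bag of letters and blanks.
criteria_results = ["C", None, None, None, "B", "B", None, "C", None]
# There is no agreed arithmetic for turning this combination into an overall
# A/B/C judgement, let alone into a rung such as VHA 6 rather than VHA 3.
```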

Another issue with using criteria/letter grading is how the final verification grade is arrived at.

If criteria grades are awarded on an A, B, C basis, i.e. the student either has or has not met the standard for an A-grade criterion, how can one student be judged a VHA 6 and another a VHA 3, say? Again we are back to vague, subjective, uncertain decisions.

In the light of all this, I propose that the only way to overcome these uncertainties is to use external assessment, where instruments are written by people who have the necessary time, and who have had adequate training and experience in writing instruments that properly assess the required criteria. Similarly, marking should be done by people who have had adequate training in understanding and interpreting criteria, and who will bring consistency and fairness to all students' results.

In this way we could hope to avoid the huge range of interpretations of criteria and marking procedures that one hears about regularly from colleagues in different regions. The constant reinventing of the wheel from school to school, with its associated waste of time, would be avoided; and, most of all, teachers would be freed up to do what they do best – teach.