It’s not exactly a secret that many IELTS candidates are unpleasantly surprised when they receive their Writing scores; it’s not uncommon for marks in this area of the test to be a full band, or even a band-and-a-half, lower than in the other three sections. Very often, they wonder whether there has been some kind of mistake, and one of their first questions is usually whether it’s worth it for them to request an Enquiry on Results (EOR) and have their essay re-marked.
As I’ve written about before, one of the overlooked challenges of the IELTS Writing test is that it is always administered third, after Listening and Reading. By that point, most test-takers are already starting to get tired from the intense concentration required in the previous sections, and shifting into writing mode can be very difficult. If a normally strong writer does take a little while to warm up, it is entirely possible that the beginning of their Task 1 response will not in fact be representative of their overall skill level.
In other cases, a test-taker may get through Task 1 without a problem and then crash at the beginning of Task 2, only to recover partway through their essay. By that time, however, the damage may have already been done.
Examiners are only human: no matter how much training they’ve received, their judgment is inevitably going to be influenced by their first impression. That’s how the human mind works. On a purely subconscious level, an examiner may be less swayed by strong writing later in an essay than they would have been at the beginning, and award a slightly lower score as a result.
While there are exceptions, the reality is that the level of most candidates’ writing does not change significantly throughout an essay. (And I say this having read hundreds of responses.) In the vast, vast majority of cases, what you see in the introduction is basically what you get.
Moreover, if someone’s essay is on the longer side (300+ words), the examiner may just be skimming by the time they reach the end. Examiners have a lot of essays to get through, and they can’t afford to spend too much time parsing details.
Given that, it’s understandable how the occasional outlier might end up with a slightly lower score than they should have received. It isn’t fair, but I don’t think there’s a way to make the system entirely foolproof.
So if you have received a lower Writing score than you expected, and if you really struggled initially because of fatigue but feel that your writing got stronger as you went on, then it may be worthwhile to have your score double-checked.
This is purely anecdotal, but I do know of a case in which exactly this happened, and in which the Writing score—and in fact the overall score—was raised by half a band after an EOR. And, for the record, notification was given in only a few days. While there is obviously no way of proving that the original score was influenced by a comparatively weak (Task 2) introduction, it seems possible that an examiner’s initial impression may have played a role.
To be clear, I don’t want to give anyone false hope here; most of the time, scoring is quite accurate. But given all the uncertainty around when exactly an EOR is worth going for, I think it’s worth 1) looking at some of the specifics involved so that candidates can make more informed choices; and 2) reiterating that yes, it is actually possible for scores to get changed.
Hi Erica. I absolutely agree. I used to advise students not to waste their money on an appeal, but recently I’ve noticed a lot more EORs resulting in higher scores. I think this is related to the average number of scripts that examiners mark these days. I used to grade a maximum of 8 in one sitting, and now some examiners are marking 50-100 a day (how?!), so there are more likely to be fatigue-related errors made by the examiner.
Unfortunately, I’m not at all surprised by that number, although I don’t see how anyone’s mind could not start to melt after about the twentieth response. I’m assuming that cost-cutting measures on IDP’s part are playing a big role here, as seems to be the case in the testing industry as a whole. At least they’re not relying on AI to mark essays… Speaking from my own experience, I know that there are certain errors that my brain has now been primed to expect, and sometimes I have to really force myself to slow down to notice that a writer has produced a given construction correctly. Given that, it makes total sense that some scores would end up being artificially deflated.