"Can we really trust AI to mark GCSE English Literature essays?" It's one of the most common questions we receive from schools.
As such, we've conducted extensive testing to show exactly how accurate Top Marks' GCSE English AI marking tools really are. The outcomes may surprise you!
We're examining performance on AQA English Language -- specifically, Paper 1, Question 2.
For questions with a limited marking range like this 8-mark question, we focus on a particularly important metric: Mean Absolute Error (MAE). MAE tells us, on average, how many marks our AI differs from the exam board's marks. A low MAE means high accuracy.
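To make that concrete, here's a minimal sketch of the calculation in Python, using the first five score pairs from the table further down as illustrative input (the real calculation runs over all 40 essays):

```python
# Minimal sketch of the MAE calculation.
# Illustrative input: the first five score pairs from the table below, not the full dataset.
board_marks = [2.0, 3.0, 4.0, 6.0, 7.0]   # marks awarded by the exam board
ai_marks    = [1.7, 2.9, 4.1, 5.9, 7.0]   # marks awarded by Top Marks AI

# MAE: the average of the absolute differences between the two sets of marks
mae = sum(abs(b - a) for b, a in zip(board_marks, ai_marks)) / len(board_marks)
print(f"MAE: {mae:.2f} marks")   # differences 0.3, 0.1, 0.1, 0.1, 0.0 over 5 essays -> 0.12
```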
AQA makes available numerous exemplar essays for its exam papers, and we've put our tool to the test using 40 of these exam-board-approved standardisation materials. The exemplars span a broad spectrum of answer quality and are used for standardisation, showing teachers what responses at various levels look like.
We ran all 40 essays through our dedicated marking tool, then measured the difference between the official mark the board awarded each essay and the mark Top Marks AI assigned to the same essay.
What level of accuracy do experienced human markers achieve when marking essays already marked by a chief examiner?
Cambridge Assessment conducted a rigorous study to measure precisely this. 200 GCSE English scripts - which had already been marked by a chief examiner - were sent to a team of experienced human markers. These markers were not told what marks the chief examiner had awarded, nor were they shown any of the chief examiner's annotations.
The Mean Absolute Error (average difference) between the experienced markers and the chief examiner was 5.64 marks on a 40-mark question -- that's an average difference of 14.1%. If you are interested, you can find the study here.
Across the same 40 essays, Top Marks AI achieved a Mean Absolute Error of 0.59 marks. On average, the AI differed from the board by just 0.59 marks on this 8-mark question. As a percentage, that's an average difference of 7.4% -- significantly better than the 14.1% human marker difference in the Cambridge study.
Moreover, 85% of the marks we gave were within 1 mark of the mark awarded by the exam board.
As an additional measure of accuracy, we also calculated the Pearson correlation coefficient, which was 0.94. This indicates a strong positive relationship between our marks and the exam board's marks, showing that when the board assigns higher marks, Top Marks AI does too, and vice versa.
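For readers who want to see how that coefficient is computed, here's a minimal, self-contained sketch (again using only the first few score pairs from the table below as illustrative input):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists of marks."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Illustrative score pairs only; run over all 40 essays, the coefficient comes out at 0.94.
board_marks = [2.0, 3.0, 4.0, 6.0, 7.0, 6.0, 2.0, 1.0]
ai_marks    = [1.7, 2.9, 4.1, 5.9, 7.0, 5.4, 1.7, 1.7]
print(f"Pearson r: {pearson(board_marks, ai_marks):.2f}")
```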
We don't claim that Top Marks is infallible, but when it does get things wrong, just how bad is it? To find out, let's turn to the Root Mean Square Error (RMSE), a measure of the severity of large errors. Square 1 and you still get 1; square 2 and you only make a small jump to 4; but square 5 and you're suddenly all the way up at 25. That's how RMSE works - it highlights large errors by squaring them before averaging, then takes the square root to bring the result back onto the original mark scale.
Top Marks AI's Root Mean Square Error was 0.72, meaning even when larger errors occur, they remain remarkably small relative to the 8-mark scale.
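To see why RMSE is a useful complement to MAE, here's a small sketch with made-up error values: the two sets of errors below share the same MAE, but the set containing one large error is flagged by a higher RMSE.

```python
import math

def mae(errors):
    # average absolute error
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    # square each error, average, then take the square root
    return math.sqrt(sum(e ** 2 for e in errors) / len(errors))

consistent_errors = [0.5, 0.5, 0.5, 0.5]   # small, even errors
spiky_errors      = [0.0, 0.0, 0.0, 2.0]   # same MAE, but one large error

print(mae(consistent_errors), rmse(consistent_errors))   # 0.5 0.5
print(mae(spiky_errors), rmse(spiky_errors))             # 0.5 1.0
```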
You can see the full side-by-side human and AI scores below.
| Essay ID | Board Score (out of 8) | Top Marks AI Score (out of 8) | Difference (marks) |
|---|---|---|---|
| Exem AQ-EL1-2 S 32 (-) (2).docx | 2.0 | 1.7 | -0.3 |
| Exem AQ-EL1-2 S 33 (-) (3).docx | 3.0 | 2.9 | -0.1 |
| Exem AQ-EL1-2 S 34 (-) (4).docx | 4.0 | 4.1 | +0.1 |
| Exem AQ-EL1-2 S 35 (-) (6).docx | 6.0 | 5.9 | -0.1 |
| Exem AQ-EL1-2 S 36 (-) (7).docx | 7.0 | 7.0 | +0.0 |
| Exem AQ-EL1-2 S 1 (-) (6).docx | 6.0 | 5.4 | -0.6 |
| Exem AQ-EL1-2 S 2 (-) (2).docx | 2.0 | 1.7 | -0.3 |
| Exem AQ-EL1-2 S 9 (-) (1).docx | 1.0 | 1.7 | +0.7 |
| Exem AQ-EL1-2 S 37 (-) (8).docx | 8.0 | 7.0 | -1.0 |
| Exem AQ-EL1-2 S 10 (-) (2).docx | 2.0 | 0.9 | -1.1 |
| Exem AQ-EL1-2 S 3 (-) (3).docx | 3.0 | 4.1 | +1.1 |
| Exem AQ-EL1-2 S 38 (-) (7).docx | 7.0 | 7.0 | +0.0 |
| Exem AQ-EL1-2 S 17 (-) (8).docx | 8.0 | 7.0 | -1.0 |
| Exem AQ-EL1-2 S 39 (-) (5).docx | 5.0 | 5.4 | +0.4 |
| Exem AQ-EL1-2 S 18 (-) (5).docx | 5.0 | 5.4 | +0.4 |
| Exem AQ-EL1-2 S 11 (-) (3).docx | 3.0 | 2.9 | -0.1 |
| Exem AQ-EL1-2 S 4 (-) (4).docx | 4.0 | 5.4 | +1.4 |
| Exem AQ-EL1-2 S 21 (-) (1).docx | 1.0 | 0.5 | -0.5 |
| Exem AQ-EL1-2 S 40 (-) (7.5).docx | 7.5 | 8.0 | +0.5 |
| Exem AQ-EL1-2 S 19 (-) (6).docx | 6.0 | 7.0 | +1.0 |
| Exem AQ-EL1-2 S 12 (-) (4).docx | 4.0 | 4.7 | +0.7 |
| Exem AQ-EL1-2 S 29 (-) (4).docx | 4.0 | 4.8 | +0.8 |
| Exem AQ-EL1-2 S 22 (-) (2).docx | 2.0 | 1.7 | -0.3 |
| Exem AQ-EL1-2 S 5 (-) (5).docx | 5.0 | 5.4 | +0.4 |
| Exem AQ-EL1-2 S 20 (-) (8).docx | 8.0 | 7.0 | -1.0 |
| Exem AQ-EL1-2 S 13 (-) (5).docx | 5.0 | 5.4 | +0.4 |
| Exem AQ-EL1-2 S 6 (-) (6).docx | 6.0 | 5.4 | -0.6 |
| Exem AQ-EL1-2 S 30 (-) (6).docx | 6.0 | 5.4 | -0.6 |
| Exem AQ-EL1-2 S 23 (-) (3).docx | 3.0 | 4.1 | +1.1 |
| Exem AQ-EL1-2 S 31 (-) (8).docx | 8.0 | 7.0 | -1.0 |
| Exem AQ-EL1-2 S 14 (-) (6).docx | 6.0 | 5.4 | -0.6 |
| Exem AQ-EL1-2 S 7 (-) (8).docx | 8.0 | 7.0 | -1.0 |
| Exem AQ-EL1-2 S 24 (-) (4).docx | 4.0 | 2.9 | -1.1 |
| Exem AQ-EL1-2 S 15 (-) (7).docx | 7.0 | 5.4 | -1.6 |
| Exem AQ-EL1-2 S 8 (-) (7).docx | 7.0 | 6.6 | -0.4 |
| Exem AQ-EL1-2 S 25 (-) (5).docx | 5.0 | 5.4 | +0.4 |
| Exem AQ-EL1-2 S 16 (-) (5).docx | 5.0 | 5.4 | +0.4 |
| Exem AQ-EL1-2 S 26 (-) (6).docx | 6.0 | 5.4 | -0.6 |
| Exem AQ-EL1-2 S 27 (-) (7).docx | 7.0 | 7.0 | +0.0 |
| Exem AQ-EL1-2 S 28 (-) (8).docx | 8.0 | 8.0 | +0.0 |
Can we see this accuracy visually? Absolutely.
First, here's a scatter graph showing what a theoretical perfect correlation of 1 would look like:
Now, let's look at the real-life graph, drawn from the data above:
On the horizontal axis, we have the mark given by the exam board; on the vertical, the mark given by Top Marks AI. Each dot is an essay -- its position shows both the exam board's mark and Top Marks AI's mark. You can see how closely it resembles the theoretical graph depicting perfect correlation.
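If you'd like to draw a chart like this yourself, a minimal matplotlib sketch along these lines will do it (the two lists are placeholders -- substitute the full Board Score and Top Marks AI Score columns from the table above):

```python
import matplotlib.pyplot as plt

# Placeholder data -- substitute the full score columns from the table above.
board_marks = [2.0, 3.0, 4.0, 6.0, 7.0, 6.0, 2.0, 1.0, 8.0]
ai_marks    = [1.7, 2.9, 4.1, 5.9, 7.0, 5.4, 1.7, 1.7, 7.0]

plt.scatter(board_marks, ai_marks, label="Essays")
plt.plot([0, 8], [0, 8], linestyle="--", label="Perfect correlation (y = x)")
plt.xlabel("Mark given by the exam board")
plt.ylabel("Mark given by Top Marks AI")
plt.legend()
plt.show()
```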
Discover how Top Marks AI can revolutionise assessment in education. Contact us at info@topmarks.ai.