"How accurate are your AI GCSE English Literature marking tools?" It's the question that comes up time and again in our conversations with schools and teachers.
As such, we've performed comprehensive evaluations to demonstrate just how accurate Top Marks' GCSE English Literature AI marking tools really are. We hope you'll be as impressed by the results as we were!
Here, we examine performance on OCR GCSE English Literature -- specifically, the 40-mark Nineteenth Century Novel question.
OCR makes available numerous exemplar essays for its exam papers, and we've put our tool to the test using 64 of these exam-board-approved standardisation materials. The exemplars showcase a broad spectrum of answer quality: they are provided for standardisation purposes, so teachers can see what responses at different levels actually look like in practice.
We ran all 64 essays through our dedicated marking tool, then measured the correlation between the official marks the board awarded each essay and the marks Top Marks AI assigned to those same essays.
We employed the Pearson correlation coefficient. In short: it measures how closely two sets of scores move together, ranging from -1 (perfect negative correlation) through 0 (no correlation at all) to +1 (perfect positive correlation).
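To make the metric concrete, here's a minimal Python sketch of the Pearson calculation. For illustration it uses only the first five (board, AI) mark pairs from the results table further down, not the full 64-essay dataset:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists of marks."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Covariance of the two mark sets (unnormalised)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Spread of each mark set about its own mean
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# First five (board mark, AI mark) pairs from the results table -- illustrative only
board = [40, 29, 29, 36, 20]
ai = [34.6, 28.0, 27.0, 34.8, 16.7]
print(pearson_r(board, ai))
```

On this five-essay sample the coefficient comes out at roughly 0.97; the headline figure quoted in this article is computed over all 64 essays.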
What sort of correlation do experienced human markers achieve when marking essays already marked by a lead examiner?
Cambridge Assessment conducted a rigorous study to measure precisely this. 200 GCSE English scripts - which had already been marked by a chief examiner - were sent to a team of experienced human markers. These experienced markers were not told what the chief examiner had given these scripts. Nor were they shown any annotations.
The Pearson correlation coefficient between the scores these experienced examiners gave and the chief examiner was just below 0.7. This indicated a positive correlation, though far from perfect. If you are interested, you can find the study here.
Our system demonstrated a correlation of 0.90 -- an incredibly strong positive correlation that far outperforms the experienced human markers in the Cambridge study. (Top Marks AI was also not privy to the "correct marks" or any annotations).
Moreover, 67.19% of the marks we gave were within 4 marks of the mark awarded by the chief examiner.
Another interesting metric is the Mean Absolute Error, for which our system scored 3.20. On average, the AI differed from the board by 3.20 marks, which is comfortably within 4 marks. As a percentage, that's an average of 8.0% difference.
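The Mean Absolute Error calculation itself is simple: average the absolute gaps between the board's marks and the AI's marks. A quick Python sketch, again using only the first five (board, AI) pairs from the results table for illustration:

```python
def mean_absolute_error(board, ai):
    """Average absolute gap (in marks) between board marks and AI marks."""
    return sum(abs(b - a) for b, a in zip(board, ai)) / len(board)

# First five (board mark, AI mark) pairs from the results table -- illustrative only
board = [40, 29, 29, 36, 20]
ai = [34.6, 28.0, 27.0, 34.8, 16.7]
mae = mean_absolute_error(board, ai)
print(mae)                  # average gap in marks
print(100 * mae / 40)       # the same gap as a percentage of the 40-mark scale
```

Dividing the MAE by the 40 marks available is how the percentage figures in this article are derived (3.20 / 40 = 8.0%).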
In contrast, in that same Cambridge study, experienced examiners marking a 40-mark question showed a Mean Absolute Error of 5.64 marks: a difference of 14.1%. These results highlight the exceptional accuracy of Top Marks AI compared to traditional marking practices.
We don't claim that Top Marks is infallible. But when it does get things wrong, just how bad is it? To find out, let's turn to the Root Mean Square Error (RMSE), a measure of the severity of large errors. Square the number 1 and you still get 1; square 2 and you make only a small jump to 4. But square 5, and you're suddenly all the way up at 25. That's the idea behind RMSE: it squares each error (amplifying the large ones), averages the squared errors, and takes the square root of the result.
Top Marks AI's Root Mean Square Error was 4.02, meaning even when larger errors occur, they remain remarkably small relative to the 40-mark scale.
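In code, RMSE is only one step more involved than MAE. Here's a short Python sketch using the same five illustrative (board, AI) pairs from the results table:

```python
from math import sqrt

def rmse(board, ai):
    """Root Mean Square Error: square each gap (penalising big misses more
    heavily), average the squares, then take the square root."""
    return sqrt(sum((b - a) ** 2 for b, a in zip(board, ai)) / len(board))

# First five (board mark, AI mark) pairs from the results table -- illustrative only
board = [40, 29, 29, 36, 20]
ai = [34.6, 28.0, 27.0, 34.8, 16.7]
print(rmse(board, ai))
```

Because the errors are squared before averaging, the RMSE is always at least as large as the MAE; a small gap between the two (here roughly 3.05 versus 2.58 on the sample, or 4.02 versus 3.20 over the full dataset) indicates there are few severe outliers.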
You can see the full side-by-side human and AI scores below.
| Essay ID | Board Score | Top Marks AI Score | Difference |
|---|---|---|---|
| Summer 2017 Q1 -40 Marks 1 (-) (40).docx | 40.0 | 34.6 | -5.4 |
| Summer 2017 Q1 -40 Marks 2 (-) (29).pdf | 29.0 | 28.0 | -1.0 |
| Summer 2017 Q1 -40 Marks 3 (-) (29).pdf | 29.0 | 27.0 | -2.0 |
| Summer 2017 Q1 -40 Marks 5 (-) (36).pdf | 36.0 | 34.8 | -1.2 |
| Summer 2017 Q1 -40 Marks 8 (-) (20).pdf | 20.0 | 16.7 | -3.3 |
| Summer 2017 Q1 -40 Marks 4 (-) (12).pdf | 12.0 | 12.6 | +0.6 |
| Summer 2017 Q1 -40 Marks 6 (-) (33).pdf | 33.0 | 27.4 | -5.6 |
| Summer 2017 Q1 -40 Marks 11 (-) (19).pdf | 19.0 | 17.3 | -1.7 |
| Summer 2017 Q1 -40 Marks 7 (-) (26).pdf | 26.0 | 27.7 | +1.7 |
| Summer 2017 Q1 -40 Marks 9 (-) (36).pdf | 36.0 | 32.6 | -3.4 |
| Summer 2017 Q1 -40 Marks 12 (-) (38).pdf | 38.0 | 38.5 | +0.5 |
| Summer 2017 Q1 -40 Marks 18 (-) (28).pdf | 28.0 | 29.9 | +1.9 |
| Summer 2017 Q1 -40 Marks 10 (-) (20).pdf | 20.0 | 22.7 | +2.7 |
| Summer 2017 Q1 -40 Marks 17 (-) (18).pdf | 18.0 | 15.9 | -2.1 |
| Summer 2017 Q1 -40 Marks 20 (-) (22).pdf | 22.0 | 24.9 | +2.9 |
| Summer 2017 Q1 -40 Marks 19 (-) (5).pdf | 5.0 | 15.0 | +10.0 |
| Summer 2017 Q1 -40 Marks 13 (-) (34).pdf | 34.0 | 37.1 | +3.1 |
| Summer 2017 Q1 -40 Marks 22 (-) (16).pdf | 16.0 | 17.4 | +1.4 |
| Summer 2017 Q1 -40 Marks 21 (-) (21).docx | 21.0 | 15.6 | -5.4 |
| Summer 2017 Q1 -40 Marks 14 (-) (22).pdf | 22.0 | 21.0 | -1.0 |
| Summer 2017 Q1 -40 Marks 15 (-) (17).pdf | 17.0 | 22.3 | +5.3 |
| Summer 2018 Q1 -40 Marks 1 (-) (38).pdf | 38.0 | 33.1 | -4.9 |
| Summer 2017 Q1 -40 Marks 16 (-) (14).pdf | 14.0 | 20.8 | +6.8 |
| Summer 2018 Q1 -40 Marks 3 (-) (24).pdf | 24.0 | 28.1 | +4.1 |
| Summer 2018 Q1 -40 Marks 2 (-) (30).pdf | 30.0 | 26.1 | -3.9 |
| Summer 2018 Q1 -40 Marks 4 (-) (18).pdf | 18.0 | 13.8 | -4.2 |
| Summer 2018 Q1 -40 Marks 6 (-) (23).pdf | 23.0 | 25.3 | +2.3 |
| Summer 2018 Q1 -40 Marks 9 (-) (40).pdf | 40.0 | 37.4 | -2.6 |
| Summer 2018 Q1 -40 Marks 11 (-) (40).docx | 40.0 | 38.5 | -1.5 |
| Summer 2018 Q1 -40 Marks 7 (-) (21).pdf | 21.0 | 20.5 | -0.5 |
| Summer 2018 Q1 -40 Marks 5 (-) (14).pdf | 14.0 | 17.2 | +3.2 |
| Summer 2018 Q1 -40 Marks 10 (-) (34).pdf | 34.0 | 35.0 | +1.0 |
| Summer 2018 Q1 -40 Marks 8 (-) (12).pdf | 12.0 | 13.6 | +1.6 |
| Summer 2018 Q1 -40 Marks 12 (-) (26).pdf | 26.0 | 35.8 | +9.8 |
| Summer 2018 Q1 -40 Marks 14 (-) (32).pdf | 32.0 | 30.6 | -1.4 |
| Summer 2018 Q1 -40 Marks 19 (-) (38).pdf | 38.0 | 37.4 | -0.6 |
| Summer 2018 Q1 -40 Marks 16 (-) (40).pdf | 40.0 | 34.4 | -5.6 |
| Summer 2018 Q1 -40 Marks 21 (-) (33).pdf | 33.0 | 32.3 | -0.7 |
| Summer 2018 Q1 -40 Marks 13 (-) (19).pdf | 19.0 | 14.6 | -4.4 |
| Summer 2018 Q1 -40 Marks 15 (-) (18).pdf | 18.0 | 19.1 | +1.1 |
| Summer 2018 Q1 -40 Marks 17 (-) (28).pdf | 28.0 | 36.2 | +8.2 |
| Summer 2018 Q1 -40 Marks 20 (-) (20).pdf | 20.0 | 29.6 | +9.6 |
| Summer 2018 Q1 -40 Marks 22 (-) (27).pdf | 27.0 | 28.6 | +1.6 |
| Summer 2018 Q1 -40 Marks 18 (-) (15).pdf | 15.0 | 15.0 | +0.0 |
| Summer 2018 Q1 -40 Marks 23 (-) (8).pdf | 8.0 | 13.7 | +5.7 |
| Summer 2019 Q1 -40 Marks 1 (-) (37).docx | 37.0 | 34.2 | -2.8 |
| Summer 2019 Q1 -40 Marks 2 (-) (23).pdf | 23.0 | 21.3 | -1.7 |
| Summer 2019 Q1 -40 Marks 5 (-) (36).docx | 36.0 | 37.8 | +1.8 |
| Summer 2019 Q1 -40 Marks 3 (-) (38).pdf | 38.0 | 35.9 | -2.1 |
| Summer 2019 Q1 -40 Marks 7 (-) (38).pdf | 38.0 | 36.9 | -1.1 |
| Summer 2019 Q1 -40 Marks 6 (-) (25).pdf | 25.0 | 21.9 | -3.1 |
| Summer 2019 Q1 -40 Marks 4 (-) (22).pdf | 22.0 | 22.5 | +0.5 |
| Summer 2019 Q1 -40 Marks 10 (-) (28).pdf | 28.0 | 27.8 | -0.2 |
| Summer 2019 Q1 -40 Marks 8 (-) (31).pdf | 31.0 | 30.8 | -0.2 |
| Summer 2019 Q1 -40 Marks 12 (-) (40).pdf | 40.0 | 32.4 | -7.6 |
| Summer 2019 Q1 -40 Marks 11 (-) (16).pdf | 16.0 | 10.8 | -5.2 |
| Summer 2019 Q1 -40 Marks 14 (-) (38).pdf | 38.0 | 40.0 | +2.0 |
| Summer 2019 Q1 -40 Marks 16 (-) (40).pdf | 40.0 | 37.0 | -3.0 |
| Summer 2019 Q1 -40 Marks 18 (-) (38).pdf | 38.0 | 33.1 | -4.9 |
| Summer 2019 Q1 -40 Marks 13 (-) (25).pdf | 25.0 | 18.9 | -6.1 |
| Summer 2019 Q1 -40 Marks 9 (-) (21).pdf | 21.0 | 25.6 | +4.6 |
| Summer 2019 Q1 -40 Marks 15 (-) (33).pdf | 33.0 | 34.6 | +1.6 |
| Summer 2019 Q1 -40 Marks 17 (-) (22).pdf | 22.0 | 18.2 | -3.8 |
| Summer 2019 Q1 -40 Marks 19 (-) (28).pdf | 28.0 | 22.8 | -5.2 |
Can we show this visually? Absolutely.
First, here's a scatter graph showing what a theoretical perfect correlation of 1 would look like:
Now, let's look at the real-life graph, drawn from the data above:
On the horizontal axis, we have the mark awarded by the exam board; on the vertical, the mark given by Top Marks AI. Each dot is an essay: its position shows both the board's mark and Top Marks AI's mark. You can see how closely it resembles the theoretical graph depicting perfect correlation.
Discover how Top Marks AI can revolutionise assessment in education. Contact us at info@topmarks.ai.