"How accurate are your AI GCSE marking tools?" It's the question that comes up time and again in our conversations with schools and teachers.
As such, we've been running a series of tests to demonstrate just how accurate Top Marks' GCSE English AI marking tools really are. We hope you'll be as impressed by the results as we were!
In this experiment, we examined AQA English Literature, specifically the 34-mark Shakespeare question.
AQA publishes numerous exemplar essays for its exam papers as standardisation materials, so teachers can see what responses at different levels actually look like in practice. These board-approved exemplars span a broad spectrum of answer quality.
We took 24 of these essays and ran them through our dedicated marking tool, then measured the correlation between the official marks the board awarded each essay and the marks Top Marks AI assigned to those same essays.
We used a measurement called the Pearson correlation coefficient. In short: it measures how closely two sets of scores move together, on a scale from -1 (perfect negative correlation) through 0 (no relationship at all) to +1 (perfect positive correlation).
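To make that concrete, here is a minimal Python sketch of how a Pearson correlation is computed. The marks below are made up purely for illustration; they are not taken from our dataset.

```python
import numpy as np

# Hypothetical marks for five essays (illustrative numbers only)
official = np.array([12, 18, 22, 27, 31])  # marks awarded by the exam board
ai       = np.array([13, 16, 25, 26, 32])  # marks awarded by the AI

# np.corrcoef returns the correlation matrix; the [0, 1] entry is the
# Pearson correlation between the two sets of marks
r = np.corrcoef(official, ai)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```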
What sort of correlation do experienced human markers achieve when marking essays already marked by a chief examiner?
Cambridge Assessment conducted a rigorous study to measure precisely this. Two hundred GCSE English scripts, which had already been marked by a chief examiner, were sent to a team of experienced human markers. These markers were not told what marks the chief examiner had awarded, nor were they shown any annotations.
The Pearson correlation coefficient between the scores these experienced examiners gave and the chief examiner's scores was just below 0.7. This indicated a positive correlation, though one far from perfect. If you are interested, you can find the study here.
Across the same 24 essays, Top Marks achieved a correlation of 0.94: an exceptionally strong positive correlation that far outperforms the experienced human markers in the Cambridge study. (Top Marks AI was likewise not shown the official marks or any annotations.)
Moreover, 75% of the marks we awarded were within 4 marks of the official exam board mark.
Another interesting metric is the Mean Absolute Error, for which our system scored 2.47. On average, the AI differed from the board by 2.47 marks, which is comfortably within a 3.4-mark tolerance; as a percentage, that's an average difference of 7.26%.
In contrast, in that same Cambridge study, experienced examiners marking a 40-mark question showed a Mean Absolute Error of 5.64 marks, a difference of 14.1%. These results highlight the exceptional accuracy of Top Marks AI compared to traditional marking practices.
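If you'd like to see how these figures are calculated, here is a short sketch in the same style as before, again using made-up marks rather than our actual data. It computes the Mean Absolute Error, expresses it as a percentage of the marks available, and also checks what share of essays fall within 4 marks of the official mark.

```python
import numpy as np

# Hypothetical marks (illustrative only, not our actual data)
official = np.array([12, 18, 22, 27, 31])
ai       = np.array([13, 16, 25, 26, 32])

max_mark = 34  # total marks available for the Shakespeare question

# Mean Absolute Error: the average size of the gap between the two sets of marks
mae = np.mean(np.abs(ai - official))
print(f"MAE: {mae:.2f} marks ({mae / max_mark:.2%} of the available marks)")

# Share of essays where the AI mark is within 4 marks of the official mark
within_4 = np.mean(np.abs(ai - official) <= 4)
print(f"Within 4 marks of the board: {within_4:.0%}")
```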
We don't claim that Top Marks is infallible, but when it does get things wrong, just how bad is it? Well, let's turn to the Root Mean Square Error to find out. Root Mean Square Error (RMSE) is a measure of the severity of large errors: it squares each error, averages the squares, then takes the square root to bring the result back onto the mark scale. When you square the number 1, you still get 1, and when you square 2, you only make a small jump to 4. But square 5, and you're suddenly all the way up at 25. That's how RMSE works: squaring makes the occasional large error stand out far more than the small ones.
Top Marks AI's Root Mean Square Error was 3.38, meaning even when larger errors occur, they remain remarkably small relative to the 34-mark scale.
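For completeness, here is how RMSE is computed, in the same sketch style and with the same made-up marks as above.

```python
import numpy as np

# Hypothetical marks (illustrative only, not our actual data)
official = np.array([12, 18, 22, 27, 31])
ai       = np.array([13, 16, 25, 26, 32])

# RMSE: square each error, average the squares, then take the square root.
# Squaring means a single large error pushes RMSE up much more than it pushes MAE.
rmse = np.sqrt(np.mean((ai - official) ** 2))
print(f"RMSE: {rmse:.2f} marks")
```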
You can see the full side-by-side human and AI scores below.
Absolutely.
First, here's a scatter graph to show you what a theoretical perfect correlation of 1 would look like:
Now, let's look at the real-life graph, drawn from the data above:
On the horizontal axis, we have the mark given by the exam board; on the vertical, the mark given by Top Marks AI. Each dot is an essay: its position shows both the mark given by the exam board and the mark given by Top Marks AI. You can see how closely it resembles the theoretical graph depicting perfect correlation.
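If you'd like to reproduce this kind of chart yourself, here is a minimal matplotlib sketch, again using hypothetical marks rather than our actual data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical marks (illustrative only, not our actual data)
official = np.array([12, 18, 22, 27, 31])
ai       = np.array([13, 16, 25, 26, 32])

# One dot per essay: exam board mark on the x-axis, AI mark on the y-axis
plt.scatter(official, ai)

# Dashed reference line showing what a perfect correlation of 1 would look like
plt.plot([0, 34], [0, 34], linestyle="--")

plt.xlabel("Exam board mark")
plt.ylabel("Top Marks AI mark")
plt.title("Exam board marks vs Top Marks AI marks")
plt.show()
```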
Discover how Top Marks AI can revolutionise assessment in education. Contact us at info@topmarks.ai.