There's one key question that we get from teachers more than any other: "How accurate are your AI GCSE marking tools?"
We've been running a series of tests to show just how accurate Top Marks AI's GCSE English marking tools really are. We hope you'll be as impressed by the figures as we were!
In today's experiment, we will be looking at AQA English Language - specifically, the 16-mark 'Compare how the writers convey…' task that appears in Paper 2.
AQA publishes numerous exemplar essays for its exam papers, and we've put our tool to the test using the exam board's own approved standardisation materials. These exemplars span a broad range of quality and are made available for standardisation purposes - so teachers can see what responses at various levels actually look like in the wild.
We took 34 of these essays and put them through our dedicated marking tool, then measured the correlation between the official marks the board awarded and the marks Top Marks AI gave the same essays.
We used a measurement called the Pearson correlation coefficient. In short: a coefficient of 1 means two sets of scores agree perfectly, 0 means there is no relationship between them at all, and -1 means they are perfectly opposed.
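For readers who like to see the mechanics, here is a minimal sketch of how a Pearson correlation can be calculated from paired scores in Python. The marks below are made up purely for illustration; they are not our test data.

```python
from scipy.stats import pearsonr

# Hypothetical marks out of 16 for five essays (illustrative only)
board_marks = [4, 7, 10, 12, 15]   # official exam board marks
ai_marks    = [5, 7,  9, 13, 15]   # marks given by an AI marker

r, p_value = pearsonr(board_marks, ai_marks)
print(f"Pearson correlation: {r:.2f}")  # values close to 1 indicate strong agreement
```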
What sort of correlation do experienced human markers achieve when marking essays already marked by a chief examiner?
Cambridge Assessment conducted a rigorous study to measure precisely this. 200 GCSE English scripts - which had already been marked by a chief examiner - were sent to a team of experienced human markers. These markers were not told what marks the chief examiner had given the scripts, nor were they shown any annotations.
The Pearson correlation coefficient between the scores these experienced examiners gave and those awarded by the chief examiner was just below 0.7 - a positive correlation, though far from perfect. If you are interested, you can find the study here.
Top Marks AI, across the 34 essays, achieved a correlation of 0.94 - an incredibly strong positive correlation that far outperforms the experienced human markers in the Cambridge study. (Top Marks AI was likewise not privy to the "correct" marks or any annotations.)
Moreover, 85% of the marks we gave were within a 2-mark tolerance of the official mark awarded by the board.
Another interesting metric is the Mean Absolute Error (MAE), for which our system scored 1.24. On average, the AI differed from the board by 1.24 marks - comfortably within a 2-mark tolerance. As a percentage of the marks available, that's an average difference of 7.74%.
In contrast, in that same Cambridge study, experienced examiners marking a 40-mark question showed a Mean Absolute Error of 5.64 marks - a difference of 14.1%. These results highlight the exceptional accuracy of Top Marks AI compared with traditional marking practices.
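If you'd like to see the arithmetic behind MAE, here is a brief sketch using the same made-up marks as before (not our dataset): it is simply the average of the absolute differences between the two sets of marks.

```python
import numpy as np

# Hypothetical marks out of 16 for five essays (illustrative only)
board = np.array([4, 7, 10, 12, 15])
ai    = np.array([5, 7,  9, 13, 15])

mae = np.mean(np.abs(ai - board))            # average size of the disagreement
print(f"Mean Absolute Error: {mae:.2f} marks")
print(f"As a share of the 16 available marks: {mae / 16:.1%}")
```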
We don't claim that Top Marks AI is infallible, but when it does get things wrong, just how bad is it? To find out, let's turn to the Root Mean Square Error (RMSE), a measure that penalises large errors especially heavily. Square the number 1 and you still get 1; square 2 and you only make a small jump to 4. But square 5, and you're suddenly all the way up at 25. That's how RMSE works - it squares each error before averaging, so a few big misses stand out far more than lots of small ones.
Top Marks AI's Root Mean Square Error was 1.64 - still well within the 2-mark tolerance for a 16-mark question such as this.
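The calculation itself is short: square each difference, average the squares, then take the square root. Here's an illustrative sketch with the same made-up marks as above, not our real data.

```python
import numpy as np

# Hypothetical marks out of 16 for five essays (illustrative only)
board = np.array([4, 7, 10, 12, 15])
ai    = np.array([5, 7,  9, 13, 15])

rmse = np.sqrt(np.mean((ai - board) ** 2))   # squaring amplifies the largest errors
print(f"Root Mean Square Error: {rmse:.2f} marks")
```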
You can see the full side-by-side human and AI scores below. Please note that three of the board marks are shown as decimals. This is because, instead of assigning an exact mark, AQA occasionally places an essay within a 2-mark bracket; in those instances, we used the midpoint of the bracket (so an essay placed in a 9-10 bracket, for example, is recorded as 9.5).
Absolutely.
First, here's a scatter graph to show you what a theoretical perfect correlation of 1 would look like:
Now, let's look at the real-life graph, drawn from the data above:
On the horizontal axis, we have the mark given by the exam board; on the vertical, the mark given by Top Marks AI. Each dot is an essay - its position shows both the exam board's mark and Top Marks AI's mark. You can see how closely it resembles the theoretical graph depicting perfect correlation.
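If you'd like to draw a chart like this for your own marking data, a scatter graph with a "perfect correlation" line takes only a few lines of matplotlib. The marks below are placeholders rather than our dataset.

```python
import matplotlib.pyplot as plt

# Placeholder marks out of 16 - swap in your own paired scores
board = [4, 7, 10, 12, 15]
ai    = [5, 7,  9, 13, 15]

plt.scatter(board, ai, label="Essays")
plt.plot([0, 16], [0, 16], linestyle="--", label="Perfect correlation")
plt.xlabel("Exam board mark")
plt.ylabel("Top Marks AI mark")
plt.legend()
plt.show()
```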
Discover how Top Marks AI can revolutionise assessment in education. Contact us at info@topmarks.ai.