University examiners fail to spot ChatGPT answers in real-world test


ChatGPT-written exam submissions for a psychology degree mostly went undetected and tended to get better marks than real students’ work

The AI-generated answers were submitted alongside real students’ work, and accounted for, on average, 5 per cent of the total scripts marked by academics. “On average, the AI responses gained higher grades than our real student submissions,” says Scarfe, though there was some variability across modules. “We know that generative AI can produce reasonable-sounding responses to simple, constrained textual questions.” He points out that unsupervised assessments involving short answers have always been susceptible to cheating.
