University examiners fail to spot ChatGPT answers in real-world test
ChatGPT-written exam submissions for a psychology degree mostly went undetected and tended to get better marks than real students’ work
The AI-generated answers were submitted alongside real students' work and accounted for, on average, 5 per cent of the total scripts marked by academics. "On average, the AI responses gained higher grades than our real student submissions," says Scarfe, though there was some variability across modules. "We know that generative AI can produce reasonable-sounding responses to simple, constrained textual questions." He points out that unsupervised assessments involving short answers have always been susceptible to cheating.