
A study reveals who writes better, and it turns out it's not the AI.
Artificial intelligence has yet to match the quality of real student writing, according to new research from the University of East Anglia (UK). In a study published in the journal Written Communication, researchers compared essays written by real university students with essays generated by ChatGPT.
Although the AI-produced essays were generally well-structured and grammatically correct, they lacked a key element: the human perspective. The absence of personal insight and nuanced critical thinking distinguished them from authentic student submissions.
As AI tools become increasingly capable, the study emphasizes the need to promote critical literacy and ethical awareness in education. Researchers hope their findings will assist educators in identifying machine-generated content, helping to address academic dishonesty across schools, colleges, and universities worldwide.
Prof Ken Hyland, from UEA’s School of Education and Lifelong Learning, said: “Since its public release, ChatGPT has created considerable anxiety among teachers worried that students will use it to write their assignments.
“The fear is that ChatGPT and other AI writing tools potentially facilitate cheating and may weaken core literacy and critical thinking skills. This is especially the case as we don’t yet have tools to reliably detect AI-created texts.
“In response to these concerns, we wanted to see how closely AI can mimic human essay writing, particularly focusing on how writers engage with readers.”
The research team analysed 145 essays written by real university students and another 145 generated by ChatGPT.
Key Differences in Engagement
“We were particularly interested in looking at what we called ‘engagement markers’ like questions and personal commentary,” said Prof Hyland.
“We found that the essays written by real students consistently featured a rich array of engagement strategies, making them more interactive and persuasive.
“They were full of rhetorical questions, personal asides, and direct appeals to the reader – all techniques that enhance clarity, build a connection with the reader, and strengthen an argument.
“The ChatGPT essays, on the other hand, while linguistically fluent, were more impersonal. The AI essays mimicked academic writing conventions, but they were unable to inject the text with a personal touch or to demonstrate a clear stance.
“They tended to avoid questions and limited personal commentary. Overall, they were less engaging, less persuasive, and lacked a strong perspective on the topic.
“This reflects the nature of its training data and statistical learning methods, which prioritise coherence over conversational nuance,” he added.
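The kind of marker counting the researchers describe can be sketched in a few lines. The following is a toy illustration only, with made-up regular-expression patterns; it is not the authors' actual analysis pipeline, which is described in the Written Communication paper.

```python
import re

# Toy marker categories (hypothetical, for illustration): question marks,
# first-person pronouns, and simple reader-directed appeals.
ENGAGEMENT_PATTERNS = {
    "questions": re.compile(r"\?"),
    "personal_pronouns": re.compile(r"\b(I|we|my|our)\b", re.IGNORECASE),
    "reader_appeals": re.compile(r"\b(you|your|consider|imagine)\b", re.IGNORECASE),
}

def count_engagement_markers(text: str) -> dict:
    """Count how often each engagement-marker category appears in the text."""
    return {name: len(pattern.findall(text))
            for name, pattern in ENGAGEMENT_PATTERNS.items()}

sample = "Have you ever wondered why we write? I believe, as you might, that it matters."
print(count_engagement_markers(sample))
# {'questions': 1, 'personal_pronouns': 2, 'reader_appeals': 2}
```

Per-essay counts like these, aggregated across a corpus, would let the two sets of essays be compared on how interactive and reader-directed they are.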
Despite its shortcomings, the study does not dismiss the role of AI in the classroom.
Instead, the researchers say that tools like ChatGPT should be used as teaching aids rather than shortcuts.
“When students come to school, college, or university, we’re not just teaching them how to write, we’re teaching them how to think – and that’s something no algorithm can replicate,” added Prof Hyland.
Reference: “Does ChatGPT Write Like a Student? Engagement Markers in Argumentative Essays” by Feng (Kevin) Jiang and Ken Hyland, 30 April 2025, Written Communication.
DOI: 10.1177/07410883251328311
This study was led by UEA in collaboration with Prof Kevin Jiang of Jilin University, China.