EditLens: Quantifying the extent of AI editing in text (2025)


A significant proportion of queries to large language models ask them to edit user-provided text rather than generate new text from scratch. While previous work focuses on detecting fully AI-generated text, we demonstrate that AI-edited text is distinguishable from both human-written and AI-generated text. First, we propose using lightweight similarity metrics to quantify the magnitude of AI editing present in a text given the original human-written text, and we validate these metrics with human annotators. Using these similarity metrics as intermediate supervision, we then train EditLens, a regression model that predicts the amount of AI editing present within a text. Our model achieves state-of-the-art performance on both binary (F1=94.7%) and ternary (F1=90.4%) classification tasks in distinguishing human, AI, and mixed writing. Not only do we show that AI-edited text can be detected, but also that the degree of change made by AI to human writing can be measured, which has implications for authorship attribution, education, and policy. Finally, as a case study, we use our model to analyze the effects of AI edits applied by Grammarly, a popular writing assistance tool. To encourage further research, we commit to publicly releasing our models and dataset.
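The abstract does not specify which similarity metrics are used, so as a rough illustration only: a character-level similarity such as Python's standard-library difflib.SequenceMatcher ratio is one example of a lightweight measure of how far an edited text has drifted from the original. This is a minimal sketch under that assumption, not the authors' implementation.

# Illustrative sketch of a lightweight similarity metric for quantifying the
# magnitude of editing, given the original human-written text and an edited
# version. This is an assumed stand-in, not the metric EditLens actually uses.
from difflib import SequenceMatcher

def edit_magnitude(original: str, edited: str) -> float:
    """Return a score in [0, 1]: 0 means identical texts, 1 means fully rewritten."""
    similarity = SequenceMatcher(None, original, edited).ratio()
    return 1.0 - similarity

if __name__ == "__main__":
    human = "Our results suggest the model performs well on most benchmarks."
    edited = "Our findings indicate that the model performs strongly across most benchmarks."
    print(f"Estimated edit magnitude: {edit_magnitude(human, edited):.2f}")

A score like this, computed between the human-written source and its edited counterpart, is the kind of intermediate supervision signal the abstract describes the regression model being trained on.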
