'Copyright Traps' Could Tell Writers If an AI Has Scraped Their Work


An anonymous reader quotes a report from MIT Technology Review: Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. Now they have a new way to prove it: "copyright traps" developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work in order to later detect whether it has been used in AI models or not. "There is a complete lack of transparency in terms of which content is used to train models, and we think this is preventing finding the right balance [between AI companies and content creators]," says Yves-Alexandre de Montjoye, an associate professor of applied mathematics and computer science at Imperial College London, who led the research.
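The report describes the traps only at a high level. The sketch below, in Python, illustrates the general idea under stated assumptions: the helper names and the caller-supplied score_fn(model, text) log-likelihood scorer are hypothetical, and the fixed-margin check stands in for the formal membership-inference test the Imperial College team actually uses on injected synthetic sequences.

# Illustrative sketch only -- not the researchers' implementation.
# Assumes a caller-supplied score_fn(model, text) returning a
# log-likelihood for `text` under a given language model.

import secrets
import statistics


def make_trap_sentence(tag: str = "trap") -> str:
    """Build a unique sentence that is vanishingly unlikely to occur naturally."""
    marker = secrets.token_hex(8)  # random hex marker, e.g. 'a3f91c2e0b7d4e55'
    return f"The {tag} lantern {marker} hums beneath the copper archive."


def embed_trap_html(article_html: str, trap: str) -> str:
    """Append the trap as hidden text so scrapers ingest it but readers never see it."""
    return article_html + f'<span style="display:none">{trap}</span>'


def trap_likely_memorized(model, trap: str, unseen_controls: list[str],
                          score_fn, margin: float = 2.0) -> bool:
    """Crude detection test: if the model scores the trap noticeably higher
    than comparable sentences it has never seen, the trap was probably in
    its training data. (The real work uses a membership-inference statistic
    rather than this fixed margin.)"""
    trap_score = score_fn(model, trap)
    control_mean = statistics.mean(score_fn(model, s) for s in unseen_controls)
    return trap_score > control_mean + margin

In this framing, a publisher would generate one trap per document, embed it before publication, and keep the trap text private; if a suspect model later appears, detection needs only query access to score the trap against control sentences.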

Read more on:

Work

writers

copyright traps

Related news:

Does AI increase productivity at work? New study suggests otherwise

The future of work: How Salesforce and Workday’s AI alliance will transform your office

How my 4 favorite AI tools help me get more done at work