Stanford Professor Accused of Using AI to Write Expert Testimony Criticizing Deepfakes


Plaintiffs in a lawsuit challenging Minnesota's law criminalizing election deepfakes say an expert brought in by the state likely wrote his opinion with the help of AI.

Jeff Hancock, the founding director of Stanford’s Social Media Lab, submitted his expert opinion earlier this month in Kohls v. Ellison, a lawsuit filed by a YouTuber and a Minnesota state representative who claim the state’s new law criminalizing the use of deepfakes to influence elections violates their First Amendment right to free speech.

His opinion included a reference to a study that purportedly found “even when individuals are informed about the existence of deepfakes, they may still struggle to distinguish between real and manipulated content.” But according to the plaintiffs’ attorneys, the study Hancock cited, titled “The Influence of Deepfake Videos on Political Attitudes and Behavior” and purportedly published in the Journal of Information Technology & Politics, does not actually exist. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever,” the attorneys wrote.

Read more on: Stanford, deepfakes, expert testimony

Related news:

How generative AI is flooding the web with deepfakes and disinformation

ForceField helps detect deepfakes and digital deception by verifying source data

EU has an innovative new way of fighting against deepfakes