
Stanford study finds AI legal research tools prone to hallucinations


A study by researchers at Stanford University shows that AI-powered legal research tools produce hallucinations — fabricated information presented as fact — contrary to claims by their providers.

“Our team had conducted an earlier study that showed that general-purpose AI tools are prone to legal hallucinations — the propensity to make up bogus facts, cases, holdings, statutes, and regulations,” Daniel E. Ho, a law professor at Stanford and co-author of the paper, told VentureBeat.

However, the authors note that despite their current limitations, AI-assisted legal research tools can still provide value compared to traditional keyword search or general-purpose AI, especially when used as a starting point rather than the final word.

Pablo Arredondo, VP of CoCounsel at Thomson Reuters, told VentureBeat, “I applaud the conversation Stanford started with this study, and we look forward to diving into these findings and other potential benchmarks.”

