Stanford study finds AI legal research tools prone to hallucinations
A study by researchers at Stanford University shows that AI-powered legal research tools produce hallucinations, contrary to claims by their providers.
“Our team had conducted an earlier study that showed that general-purpose AI tools are prone to legal hallucinations — the propensity to make up bogus facts, cases, holdings, statutes, and regulations,” Daniel E. Ho, a law professor at Stanford and co-author of the paper, told VentureBeat.

However, the authors note that despite their current limitations, AI-assisted legal research tools can still provide value compared with traditional keyword search or general-purpose AI, especially when used as a starting point rather than the final word.

Pablo Arredondo, VP of CoCounsel at Thomson Reuters, told VentureBeat, “I applaud the conversation Stanford started with this study, and we look forward to diving into these findings and other potential benchmarks.”