How logic can help AI models tell more truth, according to AWS
Linking AI models to formal verification methods can correct LLM shortcomings such as false assertions. Amazon's Byron Cook explains the promise of automated reasoning.
Recently, Amazon AWS distinguished scientist Byron Cook made the case for what is called "automated reasoning," also known as "symbolic AI" or, more abstrusely, "formal verification." One motivation for pairing it with generative AI in a hybrid approach is to address the limitations of generative AI that have become apparent, especially so-called hallucinations or confabulations: the tendency of large language models (LLMs) to produce false assertions, sometimes wildly so.

"In the background, what we're doing is we're taking the natural language text, we're mapping it into mathematical logic, we're proving or disproving the correctness of the statements, and then we're providing witnesses so you can, as a customer, pull on that, the log of the argument, that the property is true, but in a way that could be audited," Cook explained.
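The workflow Cook describes can be illustrated in miniature: encode statements as logical formulas, check whether a claim follows from a set of known facts, and return an auditable witness when it does not. The sketch below is a toy propositional-logic checker, not AWS's actual system; the policy rules and variable names are invented for illustration.

```python
from itertools import product

# Formulas are nested tuples:
#   ("var", name), ("not", f), ("and", f, g), ("or", f, g), ("implies", f, g)

def eval_formula(f, assignment):
    """Evaluate a formula under a truth assignment (dict of var -> bool)."""
    op = f[0]
    if op == "var":
        return assignment[f[1]]
    if op == "not":
        return not eval_formula(f[1], assignment)
    if op == "and":
        return eval_formula(f[1], assignment) and eval_formula(f[2], assignment)
    if op == "or":
        return eval_formula(f[1], assignment) or eval_formula(f[2], assignment)
    if op == "implies":
        return (not eval_formula(f[1], assignment)) or eval_formula(f[2], assignment)
    raise ValueError(f"unknown operator {op!r}")

def variables(f, acc=None):
    """Collect the set of variable names appearing in a formula."""
    acc = set() if acc is None else acc
    if f[0] == "var":
        acc.add(f[1])
    else:
        for sub in f[1:]:
            variables(sub, acc)
    return acc

def entails(kb, claim):
    """Does the knowledge base entail the claim?

    Returns (True, None) if every model of kb satisfies claim, or
    (False, countermodel) where the countermodel is the auditable
    witness: an assignment making kb true but the claim false.
    """
    vs = sorted(variables(("and", kb, claim)))
    for values in product([False, True], repeat=len(vs)):
        assignment = dict(zip(vs, values))
        if eval_formula(kb, assignment) and not eval_formula(claim, assignment):
            return False, assignment
    return True, None

# Invented policy facts: employees may access logs; contractors are not employees.
kb = ("and",
      ("implies", ("var", "employee"), ("var", "can_access_logs")),
      ("implies", ("var", "contractor"), ("not", ("var", "employee"))))

# A claim an LLM might assert: "employees can access logs" -- entailed.
print(entails(kb, ("implies", ("var", "employee"), ("var", "can_access_logs"))))
# "contractors can access logs" -- not entailed; a countermodel is returned.
print(entails(kb, ("implies", ("var", "contractor"), ("var", "can_access_logs"))))
```

Real automated-reasoning systems use SMT solvers and proof logs rather than brute-force truth tables, but the shape is the same: a claim is either proved against the formalized facts or refuted with a concrete, checkable witness.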
Or read this on ZDNet