
A new research project is the first comprehensive effort to categorize the ways AI systems can go wrong, and many of those failure modes resemble human psychiatric disorders.

The goal is to reach what the researchers have termed a state of "artificial sanity" — AI that works reliably, stays steady, makes sense in its decisions, and is aligned in a safe, helpful way. "This framework is offered as an analogical instrument … providing a structured vocabulary to support the systematic analysis, anticipation, and mitigation of complex AI failure modes," the researchers said in the study. They think adopting the categorization and mitigation strategies they suggest will strengthen AI safety engineering, improve interpretability, and contribute to the design of what they call "more robust and reliable synthetic minds."
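As a rough illustration of what such a "structured vocabulary" for failure modes could look like in practice, here is a minimal Python sketch of a single taxonomy entry that pairs a named dysfunction with its human analogue and candidate mitigations. All names, fields, and mitigations below are invented for the example and are not taken from the study.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    """Rough impact tiers for a catalogued failure mode (hypothetical scale)."""
    LOW = "low"
    MODERATE = "moderate"
    CRITICAL = "critical"


@dataclass
class FailureMode:
    """One entry in a taxonomy of AI failure modes, by analogy with a diagnostic manual."""
    name: str            # analogical label for the dysfunction
    description: str     # how the behavior presents in a deployed system
    human_analogue: str  # the psychiatric condition it loosely resembles
    severity: Severity
    mitigations: list[str] = field(default_factory=list)  # candidate engineering responses


# Illustrative entry only; the label and mitigations are hypothetical.
confabulation = FailureMode(
    name="Synthetic confabulation",
    description="The model fabricates plausible but unsupported claims and states them confidently.",
    human_analogue="Confabulation",
    severity=Severity.MODERATE,
    mitigations=["retrieval grounding", "uncertainty calibration", "post-hoc fact checking"],
)

print(f"{confabulation.name}: {confabulation.severity.value} severity, "
      f"{len(confabulation.mitigations)} suggested mitigations")
```

Structuring entries this way is one plausible reading of the researchers' stated aim: a shared vocabulary that lets engineers analyze, anticipate, and mitigate failure modes systematically rather than ad hoc.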
