Things Get Strange When AI Starts Training Itself: What happens if AI becomes even less intelligible?

Generative AI already detects patterns and proposes theories that humans could not discover on their own, drawing on quantities of data far too massive for any person to comb through, via internal algorithms that are largely opaque even to their creators. Researchers have seen particular success automating quality control for narrow, well-defined tasks such as mathematical reasoning and games, in which correctness or victory provides a straightforward way to evaluate synthetic data. Whether self-training AI leads to catastrophic disaster, subtle imperfections and biases, or unintelligible breakthroughs, the response cannot be to entirely trust or scorn the technology. It must be to take these models seriously as agents that today can learn, and tomorrow might be able to teach us, or even one another.


Related news:

Sam Altman Says AI Could Make 'Things Go Horribly Wrong': OpenAI CEO warns that 'societal misalignments' could make artificial intelligence dangerous

ChatGPT will now remember — and forget — things you tell it to

US Patent Office: AI is all well and good, but only humans can patent things