Things Get Strange When AI Starts Training Itself
What happens if AI becomes even less intelligible?
Generative AI already detects patterns and proposes theories that humans could not discover on their own, drawing on quantities of data far too massive for any person to comb through, via internal algorithms that are largely opaque even to their creators. One way such systems can train themselves is by generating synthetic data and learning from whatever passes muster; researchers have seen particular success with automating that quality control for narrow, well-defined tasks, such as mathematical reasoning and games, in which correctness or victory provides a straightforward way to evaluate synthetic data.

Whether self-training AI leads to catastrophic disaster, subtle imperfections and biases, or unintelligible breakthroughs, the response cannot be to entirely trust or scorn the technology. It must be to take these models seriously as agents that today can learn, and tomorrow might be able to teach us, or even one another.
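Concretely, the quality control described above amounts to generating candidate data and keeping only what passes an objective check. The sketch below is illustrative rather than drawn from any system named here: the hypothetical propose_answer function stands in for a model sampling solutions, and an exact arithmetic check stands in for the "correctness or victory" signal that makes narrow tasks verifiable.

```python
import random

def propose_answer(a: int, b: int) -> int:
    """Hypothetical stand-in for a model proposing an answer.

    In real self-training this would be a language model sampling a
    full solution; here we simulate a model that is sometimes wrong.
    """
    guess = a + b
    if random.random() < 0.3:  # simulate occasional model errors
        guess += random.choice([-1, 1])
    return guess

def generate_verified_examples(n: int) -> list[tuple[str, int]]:
    """Keep only synthetic examples whose answers pass an exact check."""
    kept = []
    while len(kept) < n:
        a, b = random.randint(0, 99), random.randint(0, 99)
        answer = propose_answer(a, b)
        if answer == a + b:  # verifiable task: correctness is checkable
            kept.append((f"{a} + {b} = ?", answer))
    return kept

if __name__ == "__main__":
    for question, answer in generate_verified_examples(5):
        print(question, "->", answer)
```

The same filter-by-verifier pattern extends to proofs checked mechanically or game moves scored by wins, which is why tasks with clear success criteria have been the first to yield to self-training.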