MIT study finds that AI doesn’t, in fact, have values
A recent study out of MIT suggests that AI systems don't have discernible values or preferences, but instead mostly imitate and hallucinate.
The MIT paper pours cold water on the notion that AI systems develop their own value systems, concluding that AI doesn't, in fact, hold any coherent values to speak of. Its co-authors say the work suggests that "aligning" AI systems — that is, ensuring models behave in desirable, dependable ways — could be more challenging than is often assumed. "One thing that we can be certain about is that models don't obey [lots of] stability, extrapolability, and steerability assumptions," Stephen Casper, a doctoral student at MIT and a co-author of the study, told TechCrunch.