"Superhuman" Go AIs still have trouble defending against these simple exploits
Plugging up "worst-case" algorithmic holes is proving more difficult than expected.
LLMs that can succeed at some complex creative and reference tasks might still utterly fail when confronted with trivial math problems (or even get "poisoned" by malicious prompts). Visual AI models that can describe and analyze complex photos may nonetheless fail horribly when presented with basic geometric shapes.

Even so, "it may be possible to fully defend a Go AI by training against a large enough corpus of attacks," the researchers write, proposing future research that could make that happen.
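The "train against a large enough corpus of attacks" idea is iterated adversarial training: find inputs where the current model fails, fold them into the training data, retrain, and repeat. Below is a minimal, self-contained Python sketch of that loop, not the researchers' actual method (they trained a Go engine against Go-specific adversaries); here a toy linear classifier stands in for the Go AI, and a worst-case L-infinity perturbation stands in for a discovered exploit. All names and parameter values are illustrative assumptions.

```python
import random

EPS = 0.3     # assumed attack budget per input coordinate
ROUNDS = 5    # assumed number of adversarial-training rounds

def make_data(n, seed=0):
    """Linearly separable 2-D points with labels in {-1, +1}."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        data.append((x, 1 if x[0] + x[1] > 0 else -1))
    return data

def train(data, epochs=50, lr=0.1):
    """Plain perceptron training; returns weights (w1, w2, bias)."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            if y * (w1 * x1 + w2 * x2 + b) <= 0:  # misclassified
                w1 += lr * y * x1
                w2 += lr * y * x2
                b += lr * y
    return w1, w2, b

def sign(v):
    return 1.0 if v >= 0 else -1.0

def attack(model, x, y):
    """Worst-case L-inf attack on a linear model: push every input
    coordinate against the label, along the sign of the weights."""
    w1, w2, _ = model
    return (x[0] - y * EPS * sign(w1), x[1] - y * EPS * sign(w2))

def accuracy(model, data):
    w1, w2, b = model
    return sum(1 for (x1, x2), y in data
               if y * (w1 * x1 + w2 * x2 + b) > 0) / len(data)

corpus = make_data(200)
test = make_data(100, seed=1)

for rnd in range(ROUNDS):
    model = train(corpus)
    adv_test = [(attack(model, x, y), y) for x, y in test]
    print(f"round {rnd}: clean acc {accuracy(model, test):.2f}, "
          f"robust acc {accuracy(model, adv_test):.2f}")
    # Grow the training corpus with the attacks found against the
    # current model, keeping the original labels -- the "corpus of
    # attacks" idea in miniature.
    corpus += [(attack(model, x, y), y) for x, y in corpus]
```

Clean accuracy stays high while robust accuracy tends to climb across rounds, illustrating the hoped-for effect. The catch the article describes is the same one this loop hints at: each round only defends against the attacks already in the corpus, so a sufficiently novel exploit can still slip through.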