Researchers Propose a Better Way to Report Dangerous AI Flaws
After uncovering major flaws in popular AI models, researchers are pushing for a new system for flagging and reporting bugs.
In a proposal released today, more than 30 prominent AI researchers, including some who found the GPT-3.5 flaw, say that many other vulnerabilities affecting popular models are reported in problematic ways. These vulnerabilities include models that encourage vulnerable users to engage in harmful behavior or that help a bad actor develop cyber, chemical, or biological weapons. Ruth Appel, a postdoctoral fellow at Stanford University who worked on the proposal, says a formal process would allow faults in AI models to be flagged quickly and would hold companies publicly accountable.