New approach to agent reliability, AgentSpec, forces agents to follow rules
Researchers from Singapore Management University developed AgentSpec, a new domain-specific language for keeping agents reliable.
Agents would allow enterprises to automate more steps in their workflows, but they can take unintended actions while executing a task, are not very flexible, and are difficult to control. The first AgentSpec tests were integrated with the LangChain framework, but the researchers said they designed it to be framework-agnostic, meaning it can also run on ecosystems such as AutoGen and Apollo. Experiments using AgentSpec showed it prevented “over 90% of unsafe code executions, ensures full compliance in autonomous driving law-violation scenarios, eliminates hazardous actions in embodied agent tasks, and operates with millisecond-level overhead.” LLM-generated AgentSpec rules, produced with OpenAI’s o1, also performed strongly, preventing 87% of risky code executions and “law-breaking in 5 out of 8 scenarios.”
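The article describes AgentSpec as a rule language that intercepts unsafe agent actions at runtime. The following is a minimal sketch of that general idea, assuming a trigger/check/enforce rule shape; the `Rule` class, `guard` function, and all names here are illustrative, not the actual AgentSpec API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    # Hypothetical rule: fires on a trigger event, checks a predicate,
    # and enforces a remedy (here, blocking the action) when the check fails.
    trigger: str
    check: Callable[[dict], bool]
    enforce: Callable[[dict], Any]

def guard(event: str, action: dict, rules: list[Rule]) -> dict:
    """Run every rule whose trigger matches before an agent action executes."""
    for rule in rules:
        if rule.trigger == event and not rule.check(action):
            return rule.enforce(action)  # intercept the unsafe action
    return action  # action passes through unchanged

# Illustrative rule: block shell commands that recursively delete files.
no_rm = Rule(
    trigger="exec_shell",
    check=lambda a: "rm -rf" not in a.get("cmd", ""),
    enforce=lambda a: {"cmd": None, "blocked": True},
)

safe = guard("exec_shell", {"cmd": "ls -l"}, [no_rm])           # passes
unsafe = guard("exec_shell", {"cmd": "rm -rf /tmp/data"}, [no_rm])  # blocked
```

A framework-agnostic design like the one the researchers describe would hook a guard of this kind into whichever agent runtime (LangChain, AutoGen, Apollo) dispatches the action.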
Or read this on VentureBeat