Llama-agents: an async-first framework for building production-ready agents
When working in a notebook or iterating quickly, you can launch a llama-agents system in a single-run setting, where one message is propagated through the network and the result is returned. Once you are happy with your system, you can launch all of the services as independent processes, allowing for higher throughput and scalability. An orchestrator can be agentic (with an LLM making decisions), explicit (with a query pipeline defining a flow), a mix of both, or something completely custom.