Arch-Function LLMs promise lightning-fast agentic AI for complex enterprise workflows
Katanemo's new Arch-Function LLMs promise 12x faster function-calling capabilities, empowering enterprises to build ultra-fast, cost-effective agentic AI applications.
Katanemo's Arch prompt gateway handles tasks such as detecting and rejecting jailbreak attempts, intelligently calling "backend" APIs to fulfill the user's request, and managing the observability of prompts and LLM interactions in a centralized way. As founder Salman Paracha puts it, the new LLMs, built on top of Qwen 2.5 with 3B and 7B parameters, are designed to handle function calls, which essentially allows them to interact with external tools and systems to perform digital tasks and access up-to-date information.

"Arch-Function analyzes prompts, extracts critical information from them, engages in lightweight conversations to gather missing parameters from the user, and makes API calls so that you can focus on writing business logic," Paracha explained.
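To make the workflow concrete, here is a minimal sketch of a typical function-calling exchange, assuming the model is served behind an OpenAI-compatible endpoint; the endpoint URL, the model identifier, and the `get_weather` tool are illustrative placeholders, not Katanemo's actual API.

```python
# Sketch of an LLM function-calling round trip (placeholder endpoint and model).
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Describe a backend API as a tool the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Arch-Function-3B",  # placeholder model identifier
    messages=[{"role": "user", "content": "What's the weather in Seattle?"}],
    tools=tools,
)

# Instead of free text, the model returns a structured tool call; the caller
# (or a gateway sitting in front of it) executes the backend API with the
# parameters the model extracted from the prompt.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, json.loads(tool_call.function.arguments))
```

In this pattern, the application only supplies the tool schema and the business logic behind it; the model handles parameter extraction and, when information is missing, follow-up questions to the user.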
Or read this on VentureBeat