Arch-Function LLMs promise lightning-fast agentic AI for complex enterprise workflows


Katanemo's new Arch-Function LLMs promise 12x faster function-calling capabilities, empowering enterprises to build ultra-fast, cost-effective agentic AI applications.

According to Katanemo founder Salman Paracha, the new LLMs, built on top of Qwen 2.5 in 3B and 7B parameter sizes, are designed to handle function calls, which essentially allows them to interact with external tools and systems to perform digital tasks and access up-to-date information. They sit alongside the company's broader prompt-handling work, which includes detecting and rejecting jailbreak attempts, intelligently calling “backend” APIs to fulfill the user’s request, and managing the observability of prompts and LLM interactions in a centralized way. “Arch-Function analyzes prompts, extracts critical information from them, engages in lightweight conversations to gather missing parameters from the user, and makes API calls so that you can focus on writing business logic,” Paracha explained.
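As a rough illustration of the workflow Paracha describes, the sketch below shows what one function-calling round trip might look like against an OpenAI-compatible endpoint serving an Arch-Function model. The server URL, the model identifier, and the `get_weather` tool are illustrative assumptions, not details from the announcement.

```python
# Minimal function-calling sketch against an OpenAI-compatible endpoint.
# Assumptions: a local inference server (e.g. vLLM) is serving an Arch-Function
# model; the base_url, model name, and get_weather tool are illustrative only.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Describe a backend API the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Seattle right now?"}]

# 1) The model reads the prompt, decides a tool call is needed, and extracts
#    the parameters (here, the city) from the user's request.
response = client.chat.completions.create(
    model="katanemo/Arch-Function-3B",  # assumed identifier
    messages=messages,
    tools=tools,
)
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)

# 2) The application executes the actual backend API (business logic lives here).
weather = {"city": args["city"], "temp_c": 11, "conditions": "rain"}  # stub result

# 3) The tool result is passed back so the model can compose the final answer.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(weather)})
final = client.chat.completions.create(
    model="katanemo/Arch-Function-3B",  # assumed identifier
    messages=messages,
    tools=tools,
)
print(final.choices[0].message.content)
```

In the flow Paracha describes, a request that omits a required parameter would not simply fail: the model would ask a short clarifying question to gather the missing value before making the call.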

Read the full story on VentureBeat.
