Groq unveils lightning-fast LLM engine; developer base rockets past 280K in 4 months
Groq now allows you to make lightning-fast queries and perform other tasks with leading large language models (LLMs) directly on its website. In the tests I ran, Groq replied at around 1,256 tokens per second, a speed that appears almost instantaneous and one that GPU chips from companies like Nvidia are unable to match.
It was almost instantaneous at providing feedback, including suggesting clearer categorization, more detailed session descriptions, and better speaker profiles. So far Groq has offered its service to power LLM workloads for free, and it has seen massive uptake from developers, now numbering more than 282,000, Ross told VentureBeat.