Groq unveils lightning-fast LLM engine; developer base rockets past 280K in 4 months


Groq now lets you run lightning-fast queries and other tasks against leading large language models (LLMs) directly on its website. In my tests, Groq responded at around 1,256.54 tokens per second, a speed that feels almost instantaneous and that GPU-based chips from companies like Nvidia have not matched.

It delivered feedback almost instantly, including suggestions for clearer categorization, more detailed session descriptions, and better speaker profiles. So far Groq has offered its service to power LLM workloads for free, and uptake from developers has been massive, now topping 282,000, Groq CEO Jonathan Ross told VentureBeat.
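For developers, that uptake happens through Groq's API rather than the website demo. What follows is a minimal sketch, not Groq's documented quick-start: it assumes the official groq Python client is installed (pip install groq), that a GROQ_API_KEY is set in the environment, and that the model name shown is a placeholder for whatever Groq currently serves. It sends one chat request and estimates throughput in tokens per second from the response.

import os
import time

from groq import Groq  # official Python client; install with `pip install groq`

# The client authenticates with the key from the GROQ_API_KEY environment variable.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

start = time.perf_counter()
response = client.chat.completions.create(
    model="llama3-8b-8192",  # assumed model name; substitute a model Groq currently offers
    messages=[
        {"role": "user", "content": "In two sentences, why does low-latency LLM inference matter?"}
    ],
)
elapsed = time.perf_counter() - start

# The response follows an OpenAI-style schema, so token usage counts are available here.
tokens = response.usage.completion_tokens
print(response.choices[0].message.content)
print(f"{tokens} completion tokens in {elapsed:.2f}s, roughly {tokens / elapsed:.0f} tokens/sec")

Note that this wall-clock measurement includes network round-trip time, so the printed figure will typically be lower than server-side generation speeds such as the roughly 1,256 tokens per second quoted above.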
