Nvidia reveals its Eos supercomputer for AI processing, sporting 4,608 H100 GPUs | It's the ninth-fastest supercomputer in the world
Nvidia has given enthusiasts their first look at Eos, a data-center-scale supercomputer designed for AI applications. The company first introduced Eos at the Supercomputing Conference in November...
With a network architecture supporting data transfer speeds of up to 400Gb/s, Eos can train large language models, recommender systems, and quantum simulations, among other AI workloads. Eos ranks No. 9 in the Top500 list of the world's fastest supercomputers – a notable achievement, ServeTheHome points out, since Nvidia stopped chasing double-precision gains in favor of AI performance some time ago. Nvidia claims that because the benchmark uses only a portion of the complete GPT-3 data set, by extrapolation Eos could now train the full model in just eight days – 73 times faster than a system using 512 A100 GPUs, which represented the state of the art when GPT-3 came out in 2020.
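As a quick sanity check on the extrapolation above, a minimal sketch of the arithmetic: taking Nvidia's figures of eight days on Eos and a 73x speedup at face value, the implied training time on the older 512-A100 system follows directly. (The baseline figure is derived here, not quoted from Nvidia.)

```python
# Back-of-the-envelope check on Nvidia's claimed extrapolation:
# Eos trains the full GPT-3 data set in ~8 days, reportedly 73x
# faster than a 512-GPU A100 system from 2020.
eos_days = 8            # Nvidia's extrapolated Eos training time
speedup = 73            # claimed speedup over the A100 baseline
baseline_days = eos_days * speedup  # implied A100-system time
print(baseline_days)    # → 584, i.e. roughly a year and a half
```

In other words, a training run that would have taken the 2020-era system on the order of 584 days collapses to about a week on Eos, under Nvidia's stated assumptions.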