NVIDIA Blackwell Architecture and B200/B100 Accelerators Announced: Going Bigger With Smaller Data


by Ryan Smith on March 18, 2024 5:00 PM EST

Already solidly in the driver's seat of the generative AI accelerator market, NVIDIA has long made it clear that the company isn't about to slow down and check out the view. Instead, NVIDIA intends to continue iterating along its multi-generational product roadmap for GPUs and accelerators, leveraging its early advantage to stay ahead of its ever-growing coterie of competitors in the accelerator market.

Ahead of today's keynote (which, by the time you're reading this, should still be going on), NVIDIA offered the press a limited pre-briefing on the Blackwell architecture and the first chip to implement it. As we've noted in our previous HBM3E coverage, the memory is ultimately designed to run at 9.2Gbps/pin or better, but we often see NVIDIA play things a bit conservatively on clockspeeds for their server accelerators.
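For a sense of what those pin speeds translate to, here is a minimal back-of-the-envelope sketch in Python. It assumes the standard 1024-bit (1024-pin) data interface of an HBM3E stack; the stack count and the lower "conservative" pin speed are illustrative placeholders, not confirmed Blackwell specifications.

```python
# Back-of-the-envelope HBM3E bandwidth math.
# Assumption: each HBM3E stack uses the standard 1024-bit data interface.
PINS_PER_STACK = 1024

def stack_bandwidth_tbps(pin_speed_gbps: float) -> float:
    """Aggregate bandwidth of one HBM3E stack, in TB/s."""
    # Gbps/pin * pins = total Gbps; divide by 8 for GB/s, by 1000 for TB/s.
    return pin_speed_gbps * PINS_PER_STACK / 8 / 1000

# The memory itself is rated for 9.2 Gbps/pin or better...
print(f"9.2 Gbps/pin -> {stack_bandwidth_tbps(9.2):.2f} TB/s per stack")

# ...but NVIDIA often clocks server accelerators conservatively
# (8.0 Gbps/pin here is an illustrative figure, not a confirmed spec).
print(f"8.0 Gbps/pin -> {stack_bandwidth_tbps(8.0):.2f} TB/s per stack")

# Hypothetical multi-stack accelerator at the conservative speed:
stacks = 8  # illustrative stack count
print(f"{stacks} stacks -> {stacks * stack_bandwidth_tbps(8.0):.1f} TB/s total")
```

Even a modest step down from the rated pin speed costs over 0.15 TB/s per stack, which is why the gap between a memory vendor's headline rate and an accelerator's shipping clocks is worth watching.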

