The von Neumann bottleneck is impeding AI computing
The von Neumann architecture, which separates compute and memory, is perfect for conventional computing. But it creates a data traffic jam for AI.
For AI computing, whose operations are simple, numerous, and highly predictable, a conventional processor ends up working below its full capacity while it waits for model weights to be shuttled back and forth from memory. About a decade ago, the von Neumann bottleneck wasn't a significant issue, because processors and memory weren't so efficient, at least compared with the energy spent transferring data, said Le Gallo-Bourdeau. One remedy is an optics module that brings the speed and bandwidth density of fiber optics to the edge of chips, supercharging their connectivity and sharply reducing model training time and energy costs.
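The waiting described above can be made concrete with a back-of-the-envelope roofline-style estimate. The sketch below uses purely illustrative numbers (a hypothetical 7-billion-parameter model in fp16, an assumed 1 TB/s of memory bandwidth, and 300 TFLOP/s of peak compute, none of which come from the article) to compare how long it takes to stream the weights from memory against how long the arithmetic itself takes:

```python
# Illustrative estimate of the von Neumann bottleneck for AI inference.
# All hardware and model numbers below are assumptions for the sketch,
# not measurements.

params = 7e9            # hypothetical 7B-parameter model
bytes_per_param = 2     # fp16 weights: 2 bytes each
flops = 2 * params      # one multiply-add per weight for a batch-1 pass

mem_bw = 1e12           # assumed memory bandwidth: 1 TB/s
peak_flops = 300e12     # assumed peak compute: 300 TFLOP/s

t_mem = params * bytes_per_param / mem_bw  # time to stream the weights
t_compute = flops / peak_flops             # time to do the arithmetic

print(f"memory: {t_mem * 1e3:.1f} ms, compute: {t_compute * 1e3:.3f} ms")
```

Under these assumptions the arithmetic finishes hundreds of times faster than memory can deliver the weights, so the processor spends most of each pass idle, which is exactly the traffic jam the article describes.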