How much slower is random access, really?
by Sam Estep, 2025-06-23

You may know that, because your computer has several levels of cache (L1, L2, L3...) and memory operations work on cache lines of about 64 bytes each, you should write programs that exhibit locality to get maximum performance.

(Figure: the cache hierarchy. Disk not shown, of course.)

But how well do you understand this idea? For instance, let's say you have an array of floating-point numbers, and a second array holding all the indices of the first.
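The post's actual benchmark code isn't reproduced in this excerpt, but the setup suggests an experiment along these lines: sum the floats by walking the index array, once with the indices in first-to-last order and once after shuffling them. Here is a minimal sketch in Rust; the array size and the use of the rand crate are my assumptions, not the author's exact harness:

```rust
use rand::seq::SliceRandom; // assumption: using the `rand` crate for shuffling
use std::time::Instant;

fn main() {
    // An array of floats, and an array of all the indices of that array.
    // (The size here is arbitrary, chosen only for illustration.)
    let n: usize = 1 << 24;
    let values: Vec<f64> = (0..n).map(|i| i as f64).collect();
    let mut indices: Vec<usize> = (0..n).collect();

    // First-to-last order: accesses march through memory one cache line at a time.
    let start = Instant::now();
    let sum_seq: f64 = indices.iter().map(|&i| values[i]).sum();
    println!("sequential: {:?} (sum = {sum_seq})", start.elapsed());

    // Random order: after shuffling, most accesses land on a cold cache line.
    indices.shuffle(&mut rand::thread_rng());
    let start = Instant::now();
    let sum_rand: f64 = indices.iter().map(|&i| values[i]).sum();
    println!("shuffled:   {:?} (sum = {sum_rand})", start.elapsed());
}
```

The two sums are identical; only the order of the lookups changes, which isolates the cost of losing locality.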
A Linux desktop with an AMD Ryzen 5 3600X, 24 GiB of Corsair Vengeance LPX DDR4-3000 DRAM, and a Western Digital 1 TB 3D NAND SATA SSD. Just like with the MacBook, there's a huge spike when there's too much to fit everything in RAM, but the interesting difference here is that random-order performance starts to degrade sharply even before reaching that point, while first-to-last order stays relatively stable. Interestingly, while that effect persists when switching to a more direct approach on Linux, it seems to magically go away on macOS; perhaps this is due to a difference in how the two OSes handle memory-mapped files?
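The excerpt doesn't spell out what the "more direct approach" is; one plausible reading is replacing a memory-mapped view of the file with explicit positioned reads (pread). A rough sketch of the two access styles, assuming a file of little-endian f64 values and the memmap2 crate (the file name and element index are hypothetical):

```rust
use std::fs::File;
use std::os::unix::fs::FileExt; // for read_exact_at, i.e. pread

use memmap2::Mmap; // assumption: the memmap2 crate for the mmap-based version

// Memory-mapped style: index into the mapping and let the OS fault pages in
// (and decide what stays resident) behind the scenes.
fn read_mmap(mmap: &Mmap, i: usize) -> f64 {
    let bytes: [u8; 8] = mmap[i * 8..i * 8 + 8].try_into().unwrap();
    f64::from_le_bytes(bytes)
}

// "More direct" style: an explicit positioned read for each element instead of
// faulting pages through a mapping.
fn read_pread(file: &File, i: usize) -> std::io::Result<f64> {
    let mut buf = [0u8; 8];
    file.read_exact_at(&mut buf, (i as u64) * 8)?;
    Ok(f64::from_le_bytes(buf))
}

fn main() -> std::io::Result<()> {
    let file = File::open("floats.bin")?; // hypothetical file of little-endian f64s
    let mmap = unsafe { Mmap::map(&file)? };

    println!("via mmap:  {}", read_mmap(&mmap, 12_345));
    println!("via pread: {}", read_pread(&file, 12_345)?);
    Ok(())
}
```

Both styles ultimately go through the kernel's page cache, but the mmap version leaves readahead and eviction decisions entirely to the OS, which is where the two operating systems could plausibly diverge.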