Scheduling Model in LLVM – Part I


Instruction scheduling is essential to modern compilers. It tries to hide latencies and increase the throughput of straight-line code by reordering the enclosing instructions. To do that, the compiler needs a whole bunch of information, ranging from each individual instruction's latency to microarchitectural details. The system that describes all of this is called a scheduling model. In LLVM, the scheduling model is used not just by the instruction scheduler, but also by target-specific optimizations like the MachineCombiner and components like MCA (the Machine Code Analyzer).
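
To make this more concrete, here is a minimal TableGen sketch of what the top level of such a model can look like. The name ToyModel and the specific numbers are made up for illustration; real models in LLVM, such as the SiFive ones under llvm/lib/Target/RISCV/, follow the same shape.

```tablegen
// Hypothetical, heavily simplified scheduling model skeleton.
def ToyModel : SchedMachineModel {
  let IssueWidth        = 2;  // up to 2 instructions issued per cycle
  let LoadLatency       = 3;  // default latency assumed for loads
  let MispredictPenalty = 3;  // cycles lost on a branch mispredict
  let MicroOpBufferSize = 0;  // 0 = in-order core, no out-of-order buffering
  let CompleteModel     = 0;  // not every instruction is covered
}
```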

By default, LLVM’s scheduling model assumes that operand reads finish instantly, so a SchedRead cannot be assigned a latency property, nor can it consume cycles the same way a SchedWrite does.

Each processor resource can also buffer the instructions that are waiting to use it. Knowing the size of that buffer is crucial to our scheduling model, because then we can make a wiser decision about distributing instructions evenly across all the pipes rather than jamming them into a single one, just like freeways in LA. SiFive’s X280, whose scheduling model we have seen previously, and the X390 are good examples: they save area by adopting an in-order design while enjoying whopping 512-bit and 1024-bit vectors, respectively.
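
As a rough sketch of how these knobs show up in TableGen (ToyModel, ToyPipeA, and all the cycle counts here are hypothetical; WriteIALU and ReadIALU are the RISC-V integer-ALU SchedWrite/SchedRead classes), a per-pipe description with an explicit buffer size and a ReadAdvance for the read side could look like this:

```tablegen
let SchedModel = ToyModel in {
  // One integer pipe. BufferSize = 0 means instructions are not queued in
  // front of this resource, i.e. it behaves like an in-order pipeline stage.
  def ToyPipeA : ProcResource<1> { let BufferSize = 0; }

  // An integer ALU write occupies ToyPipeA and finishes after 1 cycle.
  def : WriteRes<WriteIALU, [ToyPipeA]> { let Latency = 1; }

  // Reads carry no latency of their own; ReadAdvance only says how many
  // cycles earlier an operand can be read relative to the producing write.
  def : ReadAdvance<ReadIALU, 0>;
}
```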
