lm.rs: Minimal CPU LLM inference in Rust with no dependencies


Minimal LLM inference in Rust, from the samuel-vitorino/lm.rs repository on GitHub.

Isn't it incredible that in a few years we could have AGI running in a few lines of poorly written Rust code? To get started, download the .safetensors, config.json, and tokenizer.model files from the original model's page on Hugging Face (so we don't have to clone the PyTorch repo). Then compile the Rust code with cargo, making sure to pass the target-cpu flag:
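A minimal sketch of those two steps as shell commands. The ORG/MODEL path is a placeholder for whichever model's Hugging Face page you use, and setting RUSTFLAGS is one common way to pass the target-cpu flag to cargo; the article doesn't show the exact invocation.

    # Fetch the model files from the model's Hugging Face page
    # (ORG/MODEL is a placeholder; substitute the actual repo path):
    curl -LO https://huggingface.co/ORG/MODEL/resolve/main/model.safetensors
    curl -LO https://huggingface.co/ORG/MODEL/resolve/main/config.json
    curl -LO https://huggingface.co/ORG/MODEL/resolve/main/tokenizer.model

    # Compile with code generation tuned for the host CPU:
    RUSTFLAGS="-C target-cpu=native" cargo build --release

Passing -C target-cpu=native lets rustc emit whatever SIMD instructions the local CPU supports (e.g. AVX2), which matters for CPU-bound inference.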


Related news:

Regrad Is Micrograd in Rust
My negative views on Rust (2023)
Rust is rolling off the Volvo assembly line