AMD GPU Inference



This project provides a Docker-based inference engine for running Large Language Models (LLMs) on AMD GPUs. Running inside the provided container ensures that all required ROCm drivers and libraries are available to the inference engine, so it can make effective use of the AMD GPU. Before starting, make sure the AMD GPU drivers and ROCm are correctly installed and configured on the host system.
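The container launch described above can be sketched as a single `docker run` invocation. This is a minimal sketch, not the project's documented command: the image name (`slashml/amd-inference`), the script name (`run_inference.py`), and the model identifier are placeholders. The `--device` flags, however, are the standard way to pass the ROCm device nodes through to a container.

```shell
# Hypothetical invocation; image name, script, and model are assumptions,
# not taken from the repository.
#
# /dev/kfd and /dev/dri are the ROCm device nodes the container needs in
# order to see the AMD GPU; --group-add video grants the container user
# access to them.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  slashml/amd-inference \
  python run_inference.py --model meta-llama/Llama-2-7b-hf
```

Passing the device nodes explicitly (rather than running with `--privileged`) keeps the container's access limited to the GPU, which is the usual recommendation for ROCm workloads.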
