
Fine-tune and deploy open LLMs as containers using AIKit - Part 1


A blog post by Sertaç Özercan on Hugging Face

In this series, we'll explore inference and fine-tuning, automating these processes using GitHub Actions and Kubernetes, and addressing the security implications of deploying LLMs in production environments.

🌈 AIKit supports air-gapped environments with self-hosted, local, or any remote container registries to store model images for inference on the edge.

Similar to the model image we ran earlier, we will call docker run, but this time with an NVIDIA GPU enabled via the --gpus all flag.
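As a minimal sketch of that command, assuming one of AIKit's pre-built model images (the ghcr.io/sozercan/llama3:8b tag below is illustrative; substitute the image you built or pulled, and note that --gpus all requires the NVIDIA Container Toolkit on the host):

    # expose all host GPUs to the container and serve the model on port 8080
    docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/llama3:8b

Because AIKit builds on LocalAI, the container exposes an OpenAI-compatible API, so a quick smoke test might look like the following (the model name here is a placeholder; use whatever name the image declares):

    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "llama-3-8b-instruct", "messages": [{"role": "user", "content": "Hello!"}]}'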
