Show HN: BrowserAI – Run LLMs directly in browser using WebGPU (open source)
Run local LLMs inside your browser.
- Privacy first: all processing happens locally, so your data never leaves the browser
- Cost effective: no server costs or complex infrastructure needed
- Offline capable: models work offline after the initial download
- Blazing fast: WebGPU acceleration for near-native inference performance
- Developer friendly: simple API, multiple engine support, ready-to-use models
- Seamless switching between MLC and Transformers engines
- Pre-configured popular models ready to use
- Easy-to-use API for text generation and more
- Simplified model initialization
- Basic monitoring and metrics
- Simple RAG implementation
- Developer tools integration
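The WebGPU acceleration above depends on the browser exposing the standard `navigator.gpu` entry point; not all browsers ship it yet. A minimal feature-detection sketch follows. It uses only the standard WebGPU API, not BrowserAI's own interface, and the backend names (`"webgpu"`, `"wasm"`) are illustrative assumptions, not BrowserAI identifiers:

```javascript
// Feature-detect WebGPU before attempting in-browser inference.
// `navigator.gpu` is the standard WebGPU entry point; in environments
// without WebGPU support (older browsers, plain Node.js) it is
// undefined, so a library can fall back or warn the user.
function hasWebGPU() {
  const nav = globalThis.navigator;
  return Boolean(nav && nav.gpu && typeof nav.gpu.requestAdapter === "function");
}

// Hypothetical backend selector: prefer WebGPU, otherwise fall back
// (the "wasm" fallback name is an assumption for illustration).
async function pickBackend() {
  if (!hasWebGPU()) return "wasm";
  // requestAdapter() resolves to null when no suitable GPU exists.
  const adapter = await navigator.gpu.requestAdapter();
  return adapter ? "webgpu" : "wasm";
}
```

A check like this is what lets such a library degrade gracefully instead of failing on browsers that have not enabled WebGPU.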