Show HN: I compressed 10k PDFs into a 1.4GB video for LLM memory


Video-based AI memory library. Store millions of text chunks in MP4 files with lightning-fast semantic search. No database needed. - Olow304/memvid

Unlike traditional vector databases that consume massive amounts of RAM and storage, Memvid compresses your knowledge base into compact video files while maintaining instant access to any piece of information.

Features:

🎥 Video-as-Database: Store millions of text chunks in a single MP4 file
🔍 Semantic Search: Find relevant content using natural language queries
💬 Built-in Chat: Conversational interface with context-aware responses
📚 PDF Support: Direct import and indexing of PDF documents
🚀 Fast Retrieval: Sub-second search across massive datasets
💾 Efficient Storage: 10x compression compared to traditional databases
🔌 Pluggable LLMs: Works with OpenAI, Anthropic, or local models
🌐 Offline-First: No internet required after video generation
🔧 Simple API: Get started with just 3 lines of code (see the quick-start sketch below)

Use cases:

📖 Digital Libraries: Index thousands of books in a single video file
🎓 Educational Content: Create searchable video memories of course materials
📰 News Archives: Compress years of articles into manageable video databases
💼 Corporate Knowledge: Build company-wide searchable knowledge bases
🔬 Research Papers: Quick semantic search across scientific literature
📝 Personal Notes: Transform your notes into a searchable AI assistant
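The "3 lines of code" claim corresponds to a quick start along these lines. This is a sketch based on the usage pattern shown in the repo's README: the names MemvidEncoder, MemvidChat, add_chunks, and build_video are taken from that README and may have changed since, so treat them as assumptions rather than a stable API.

```python
# Quick-start sketch. The API names below (MemvidEncoder, MemvidChat,
# add_chunks, build_video) come from the project's README at the time of
# this Show HN and may differ in current releases.
from memvid import MemvidEncoder, MemvidChat

# Encode text chunks into a searchable video memory plus a JSON index
encoder = MemvidEncoder()
encoder.add_chunks(["Important fact 1", "Important fact 2"])
encoder.build_video("memory.mp4", "memory_index.json")

# Chat against the video memory; retrieval runs over the local index
chat = MemvidChat("memory.mp4", "memory_index.json")
print(chat.chat("What do you know about fact 2?"))
```

To see why an MP4 can serve as the storage layer at all, consider one way to realize the "video as database" idea: render each text chunk as a QR-code frame, keep a small embedding index that maps queries to frame numbers, and decode only the frame you need at query time. The sketch below is a from-scratch illustration of that technique, not Memvid's actual code; it assumes the third-party qrcode, opencv-python, faiss-cpu, and sentence-transformers packages.

```python
# Conceptual "video as database" pipeline: text -> QR frames -> MP4,
# with an embedding index mapping semantic queries to frame numbers.
# Illustration only; not Memvid's implementation.
import cv2
import faiss
import numpy as np
import qrcode
from PIL import Image
from sentence_transformers import SentenceTransformer

chunks = [
    "The mitochondria is the powerhouse of the cell.",
    "HTTP/3 runs over QUIC instead of TCP.",
    "Rust's borrow checker enforces memory safety at compile time.",
]
SIZE = 512  # the video writer needs a fixed frame size

# --- Encode: one QR-code frame per chunk ---
writer = cv2.VideoWriter("memory.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                         1, (SIZE, SIZE))
for chunk in chunks:
    qr = qrcode.make(chunk).convert("RGB").resize((SIZE, SIZE), Image.NEAREST)
    writer.write(cv2.cvtColor(np.array(qr), cv2.COLOR_RGB2BGR))
writer.release()

# --- Index: embeddings map queries to frame numbers ---
model = SentenceTransformer("all-MiniLM-L6-v2")
vecs = model.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(vecs.shape[1])  # inner product = cosine on unit vectors
index.add(vecs)

# --- Retrieve: seek to the best-matching frame and decode its QR code ---
def search(query: str) -> str:
    qvec = model.encode([query], normalize_embeddings=True)
    _, ids = index.search(qvec, 1)
    cap = cv2.VideoCapture("memory.mp4")
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(ids[0][0]))  # jump straight to the hit
    _, frame = cap.read()
    cap.release()
    text, _, _ = cv2.QRCodeDetector().detectAndDecode(frame)
    return text

print(search("Which transport protocol does HTTP/3 use?"))
```

Two details the toy version glosses over: video codecs are lossy, so a real system leans on QR error correction and careful encoder settings to keep frames decodable; and the embedding index still has to live somewhere, so "no database needed" in practice means the index ships as a flat file next to the MP4 rather than as a running server.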

Related news:

When your LLM calls the cops: Claude 4’s whistle-blow and the new agentic AI risk stack

Which LLM should you use? Token Monster automatically combines multiple models and tools for you

White House releases health report written by LLM, with hallucinated citations