Finding thousands of exposed Ollama instances using Shodan
We uncovered 1,100+ exposed Ollama LLM servers—20% with open models—revealing critical security gaps and the need for better LLM threat monitoring.
Widely adopted platforms such as ChatGPT, Grok, and DeepSeek have brought LLMs into the mainstream, while open-source frameworks like Ollama and Hugging Face have significantly lowered the barrier to deploying these models in custom environments. This post covers:

- Development of a proof-of-concept tool, written in Python, that detects exposed Ollama servers through Shodan queries
- Analysis of the identified instances to evaluate authentication enforcement, endpoint exposure, and model accessibility
- Recommendations for mitigating common vulnerabilities in LLM deployments, with a focus on practical security improvements

By combining port-based filtering with banner analysis and keyword validation, our system aims to strike a balance between recall and precision in identifying genuinely exposed LLM servers, enabling accurate and responsible vulnerability assessment.
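The pipeline above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual tool: the function names and the exact Shodan query string are assumptions. It assumes Ollama's defaults, which are documented behavior: the server listens on port 11434, its root endpoint returns the banner "Ollama is running", and an unauthenticated `/api/tags` endpoint lists locally available models.

```python
import json

OLLAMA_PORT = 11434  # Ollama's default listening port
BANNER_KEYWORD = "ollama is running"  # banner served by Ollama's root endpoint


def looks_like_ollama(banner: str) -> bool:
    """Keyword validation: does a raw HTTP banner look like an Ollama server?"""
    return BANNER_KEYWORD in banner.lower()


def exposed_models(tags_body: str) -> list:
    """Parse a /api/tags response body; a non-empty result means models
    are openly accessible on the host without authentication."""
    try:
        data = json.loads(tags_body)
    except json.JSONDecodeError:
        return []
    return [m.get("name", "") for m in data.get("models", [])]


def find_exposed_ollama(api_key: str, limit: int = 100):
    """Port-based filtering via Shodan, then banner keyword validation.
    Requires a Shodan API key; query string is an illustrative choice."""
    import shodan  # pip install shodan

    api = shodan.Shodan(api_key)
    results = api.search(f'port:{OLLAMA_PORT} "Ollama is running"', limit=limit)
    return [
        (m["ip_str"], m["port"])
        for m in results["matches"]
        if looks_like_ollama(m.get("data", ""))
    ]
```

Keeping the banner check and the `/api/tags` parsing as pure functions makes the recall/precision trade-off easy to tune and test offline, while the Shodan call stays a thin wrapper around the API.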