
Ollama Release v0.1.45: Support for DeepSeek-Coder-V2



New models

- DeepSeek-Coder-V2: a 16B & 236B open-source Mixture-of-Experts code language model that achieves performance comparable to GPT-4 Turbo in code-specific tasks.

What's changed

- ollama show <model> will now show model details such as context window size, parameters, embedding size, license and more
- Model loading on Windows with CUDA GPUs is now faster
- Setting seed in the /v1/chat/completions OpenAI compatibility endpoint no longer changes temperature
- Enhanced GPU discovery and multi-GPU support with concurrency
- The Linux install script will now skip searching for network devices
- Introduced a workaround for AMD Vega RX 56 SDMA support on Linux
- Fixed memory prediction for the deepseek-v2 and deepseek-coder-v2 models
- The api/show endpoint now returns extensive model metadata
- GPU configuration variables are now reported in ollama serve
- Updated Linux ROCm to v6.1.1
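The expanded ollama show output can be tried from the command line. A minimal sketch, assuming Ollama v0.1.45+ is installed and the model tag matches the Ollama library listing:

```shell
# Pull the DeepSeek-Coder-V2 model from the Ollama library
# (the default tag is assumed here; larger variants use explicit tags)
ollama pull deepseek-coder-v2

# Print model details: context length, parameters, embedding size, license, etc.
ollama show deepseek-coder-v2
```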
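The seed fix means seed and temperature can now be set independently on the OpenAI-compatible endpoint. A hedged sketch of a request body for POST http://localhost:11434/v1/chat/completions (the model name and prompt are illustrative; a running local Ollama server is assumed for actually sending it):

```python
import json

# Request body for Ollama's OpenAI-compatible chat endpoint.
# With v0.1.45, setting `seed` no longer silently changes `temperature`,
# so both sampling controls are honored together.
payload = {
    "model": "deepseek-coder-v2",  # assumes the model was pulled locally
    "messages": [
        {"role": "user", "content": "Write a binary search in Python."}
    ],
    "seed": 42,          # fixed seed for more reproducible sampling
    "temperature": 0.7,  # still applied alongside the seed
}

body = json.dumps(payload)
```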
