
Show HN: Use Third Party LLM API in JetBrains AI Assistant


Proxy a remote LLM API as Ollama or LM Studio, so it can be used in JetBrains AI Assistant - Stream29/ProxyAsLocalModel

The official Java SDKs use too many dynamic features, which makes them hard to compile into a native image, even with a tracing agent. So I decided to implement a simple client for the streaming chat completion API myself with Ktor and kotlinx.serialization, both of which are reflection-free, functional, and DSL-styled. The Kotlin ecosystem favors functional programming over reflection, which makes it a better fit for GraalVM native images, with faster startup and lower memory usage.
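To illustrate the shape of such a client: a streaming chat completion API sends server-sent events, each a `data:` line carrying a JSON chunk with a `delta` fragment of the reply, terminated by a `[DONE]` sentinel. The sketch below is a simplified, dependency-free illustration of the parsing side; the real client would use Ktor for HTTP and kotlinx.serialization for JSON decoding, and the regex here merely stands in for that. Names like `parseSseChunk` are hypothetical, not from the project.

```kotlin
// Simplified sketch (assumption, not the project's actual code): extract the
// delta text from OpenAI-style streaming chat completion chunks. A real
// implementation would decode the JSON with kotlinx.serialization; a regex
// stands in here so the example has no dependencies.
private val deltaContent = Regex("\"content\"\\s*:\\s*\"((?:[^\"\\\\]|\\\\.)*)\"")

fun parseSseChunk(line: String): String? {
    // SSE frames of interest start with "data:"; skip blanks and keep-alives.
    if (!line.startsWith("data:")) return null
    val payload = line.removePrefix("data:").trim()
    // "[DONE]" is the stream-terminator sentinel, not a JSON chunk.
    if (payload == "[DONE]") return null
    // Pull out choices[0].delta.content and unescape the common sequences.
    return deltaContent.find(payload)?.groupValues?.get(1)
        ?.replace("\\n", "\n")
        ?.replace("\\\"", "\"")
}

fun main() {
    val stream = listOf(
        "data: {\"choices\":[{\"delta\":{\"content\":\"Hel\"}}]}",
        "data: {\"choices\":[{\"delta\":{\"content\":\"lo\"}}]}",
        "data: [DONE]"
    )
    // Concatenate the deltas to reassemble the streamed reply.
    println(stream.mapNotNull(::parseSseChunk).joinToString("")) // Hello
}
```

A reflection-free design like this is what makes ahead-of-time compilation straightforward: nothing here needs a tracing agent to discover dynamically accessed classes.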



Related news:

Small change in Wear OS shows it's ready to kick the Assistant for Gemini

JetBrains defends removal of negative reviews for unpopular AI Assistant

JetBrains releases Mellum, an ‘open’ AI coding model