Show HN: Use Third Party LLM API in JetBrains AI Assistant
Proxy remote LLM API as Ollama and LM Studio, for using them in JetBrains AI Assistant - Stream29/ProxyAsLocalModel
The official Java SDK uses too many dynamic features, making it hard to compile into a native image, even with a tracing agent. So I decided to implement a simple client for the streaming chat completion API myself with Ktor and kotlinx.serialization, both of which are reflection-free, functional, and DSL-styled. The Kotlin world favors functional programming over reflection, which makes it a better fit for GraalVM native image, with faster startup and lower memory usage.
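A minimal sketch of what such a client can look like, assuming an OpenAI-style server-sent-events endpoint; the URL path, model name, and JSON field names below are illustrative, not the project's actual code:

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import io.ktor.http.*
import io.ktor.utils.io.*
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json

// Models declared with kotlinx.serialization: serializers are generated at
// compile time, so no runtime reflection is needed for a native image.
@Serializable
data class ChatMessage(val role: String, val content: String)

@Serializable
data class ChatRequest(
    val model: String,
    val messages: List<ChatMessage>,
    val stream: Boolean = true,
)

@Serializable data class Delta(val content: String? = null)
@Serializable data class Choice(val delta: Delta)
@Serializable data class ChunkResponse(val choices: List<Choice>)

val json = Json { ignoreUnknownKeys = true }

// Stream a chat completion, invoking onToken for each content delta.
suspend fun streamChat(
    baseUrl: String,
    apiKey: String,
    request: ChatRequest,
    onToken: (String) -> Unit,
) {
    val client = HttpClient(CIO)
    client.preparePost("$baseUrl/chat/completions") {
        contentType(ContentType.Application.Json)
        header(HttpHeaders.Authorization, "Bearer $apiKey")
        setBody(json.encodeToString(ChatRequest.serializer(), request))
    }.execute { response ->
        val channel: ByteReadChannel = response.bodyAsChannel()
        while (!channel.isClosedForRead) {
            val line = channel.readUTF8Line() ?: break
            // SSE frames look like: data: {...}  /  data: [DONE]
            if (!line.startsWith("data: ")) continue
            val payload = line.removePrefix("data: ")
            if (payload == "[DONE]") break
            val chunk = json.decodeFromString(ChunkResponse.serializer(), payload)
            chunk.choices.firstOrNull()?.delta?.content?.let(onToken)
        }
    }
    client.close()
}
```

Because parsing goes through generated serializers and the HTTP layer is plain Ktor, no reflection configuration is needed when building the GraalVM native image.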