
Show HN: Prompt Engine – Auto pick LLMs based on your prompts


Applications that use LLMs typically rely on more than one model or provider, depending on the use case. This is because not all LLMs are built the same: while GPT-4o might be good at basic customer support chat, Claude 3.5 Sonnet would be good ...
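
To make the routing idea concrete, here is a minimal sketch. The keyword heuristic, model names, and fallback below are illustrative assumptions for this example only, not Prompt Engine's actual selection logic.

```python
# Illustrative only: a crude keyword-based router, not Prompt Engine's real logic.
ROUTES = {
    "support": "gpt-4o",           # assumed fit for basic customer support chat
    "code": "claude-3-5-sonnet",   # assumed fit for code-heavy prompts
}
DEFAULT_MODEL = "gpt-4o-mini"      # hypothetical fallback model

def pick_model(prompt: str) -> str:
    """Route a prompt to a model name using simple keyword matching."""
    lowered = prompt.lower()
    if any(word in lowered for word in ("refund", "order", "account", "help")):
        return ROUTES["support"]
    if any(word in lowered for word in ("function", "bug", "traceback", "refactor")):
        return ROUTES["code"]
    return DEFAULT_MODEL

print(pick_model("My order never arrived, can I get a refund?"))  # -> gpt-4o
```

A production router would classify prompts with a model or embeddings rather than keywords, but the shape of the decision (prompt in, model name out) is the same.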

More and more apps are using Mixture-of-Agents orchestration, which yields more consistent, higher-quality output with fewer failures. The engine automatically enhances the initial prompt to improve accuracy, reduce token usage, and prevent the output structure from breaking. This significantly increases the quality and consistency of prompts run across LLMs while reducing hallucinations.
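
As a rough illustration of what "preventing output structure breakage" can mean in practice, here is a minimal sketch assuming the enhancement step wraps the original prompt with explicit format instructions and validates the reply. The wrapper text and field names are hypothetical and not Prompt Engine's actual implementation.

```python
import json

def enhance_prompt(user_prompt: str) -> str:
    """Wrap a raw prompt with explicit output-format instructions (illustrative)."""
    return (
        user_prompt.strip()
        + "\n\nRespond with only a JSON object of the form "
          '{"answer": string, "confidence": number between 0 and 1}. '
          "Do not add any text outside the JSON."
    )

def parse_or_reject(model_reply: str) -> dict:
    """Validate the model's reply; callers could retry or fall back on failure."""
    data = json.loads(model_reply)  # raises ValueError if the structure broke
    if not {"answer", "confidence"} <= data.keys():
        raise ValueError("missing required fields")
    return data

print(enhance_prompt("Summarize the refund policy for a customer."))
```

Validating and retrying in this way is one common pattern for keeping structured output stable across different models.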



Related news:

AI hallucinations: Why LLMs make things up (and how to fix it)

LLMs may have a killer enterprise app: ‘digital labor’ — at least if Salesforce Agentforce is any indicator

Test Driven Development (TDD) for your LLMs? Yes please, more of that please