Show HN: Prompt Engine – Auto pick LLMs based on your prompts
Applications that use LLMs typically rely on more than one model or provider, depending on the use case. Not all LLMs are built the same: while GPT-4o might be good at basic customer support chat, Claude 3.5 Sonnet would be good ...
More and more apps use a Mixture of Agents style of orchestration, which yields better consistency and quality of output with less breakage. The engine automatically enhances the initial prompt to improve accuracy, reduce token usage, and keep the output structure from breaking. This significantly increases the quality and consistency of running prompts across LLMs while reducing hallucinations.
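To make the routing idea concrete, here is a minimal sketch in Python. It is not the actual Prompt Engine API: the model names, the keyword heuristic, and the length threshold are illustrative assumptions standing in for whatever classifier the engine really uses.

```python
# Hypothetical sketch of prompt-based model routing (not the real Prompt Engine).
# Model names and the keyword heuristic are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class Route:
    model: str
    reason: str


ROUTES = {
    "code": Route("claude-3-5-sonnet", "stronger at code generation and refactoring"),
    "chat": Route("gpt-4o", "fast and cheap enough for basic customer support chat"),
    "long_context": Route("gemini-1.5-pro", "large context window for long documents"),
}


def pick_model(prompt: str) -> Route:
    """Rough keyword/length heuristic standing in for a learned prompt classifier."""
    lowered = prompt.lower()
    if any(k in lowered for k in ("function", "bug", "refactor", "unit test")):
        return ROUTES["code"]
    if len(prompt) > 8000:  # arbitrary cutoff for "long" prompts
        return ROUTES["long_context"]
    return ROUTES["chat"]


if __name__ == "__main__":
    route = pick_model("Refactor this function to remove the nested loops.")
    print(f"Routing to {route.model}: {route.reason}")
```

In a real system the keyword check would be replaced by a small classifier or an LLM call, and the chosen route would also carry the enhanced prompt and output-schema constraints described above.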
Or read this on Hacker News