The rise of prompt ops: Tackling hidden AI costs from bad inputs and context bloat
AI models can get fatigued; prompt ops can help manage, measure, monitor and tune prompts.
Couple this with all the tinkering involved in prompting (it can take a few tries to reach the intended result, and sometimes the question at hand simply doesn't need a model that can think like a PhD) and compute spend can get out of control. Incorrect API configurations compound the problem: for example, setting a high reasoning effort on a model such as OpenAI's o3 incurs higher costs when a lower-effort, cheaper request would suffice. And while bullet points, itemized lists or bold indicators (****) may seem "a bit cluttered" to human eyes, Emerson noted, these callouts can be beneficial for an LLM.
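The cost gap the article describes can be sketched in a few lines of Python. The per-token prices, token counts and routing heuristic below are illustrative assumptions, not OpenAI's actual pricing or API behavior; the point is only that matching reasoning effort to task difficulty changes what a request costs.

```python
# Hypothetical illustration of why reasoning-effort settings matter for cost.
# All numbers below are assumptions for the sketch, not real pricing.

# Assumed: higher reasoning effort produces more billed "thinking" tokens.
REASONING_TOKENS = {"low": 500, "medium": 2_000, "high": 10_000}
PRICE_PER_1K_OUTPUT_TOKENS = 0.04  # hypothetical rate in USD


def request_cost(effort: str, answer_tokens: int = 300) -> float:
    """Estimate the cost of one request at a given reasoning effort."""
    billed = REASONING_TOKENS[effort] + answer_tokens
    return billed / 1_000 * PRICE_PER_1K_OUTPUT_TOKENS


def pick_effort(prompt: str) -> str:
    """Toy router: escalate effort only when the task looks hard."""
    hard_markers = ("prove", "derive", "multi-step", "optimize")
    return "high" if any(m in prompt.lower() for m in hard_markers) else "low"


for prompt in (
    "Summarize this paragraph in one sentence.",
    "Derive and optimize a multi-step pricing model.",
):
    effort = pick_effort(prompt)
    print(f"{effort:>4} effort -> ${request_cost(effort):.3f}  ({prompt})")
```

Under these made-up numbers, defaulting every request to high effort costs more than ten times as much as routing the easy ones to low effort, which is the kind of waste prompt ops is meant to catch.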